[Binary tar archive — gzip-compressed payload not text-recoverable. Recoverable member headers:]

var/home/core/zuul-output/                    (directory, mode 0755, owner core:core)
var/home/core/zuul-output/logs/               (directory, mode 0755, owner core:core)
var/home/core/zuul-output/logs/kubelet.log.gz (gzip-compressed kubelet log, mode 0644, owner core:core)
3AC{As5>$`00``t)!giyA 5֕mڢCLf*z+BVR"YK6?{׺Ʊ_eK?S^qTR9rJN~(*Lh,Xyx. N͋DDB oGmtRdr$Q(7?Xo9;~*͵hk+BD g!}i4J€zϟ[w9~KSOo{OH`dbt\z>ZgEX^QET|7W~N}uC3]lu~B\r"\\@ɛ%4%P9lzM^/p>9kF%A]lT\XgDžcoDAK{lvzZ EUӄ&Tz\xKd]7])·A_osMCZ)G+.Ez_A!-Ve!2׿aDNX`{GWWo,-/|`V[hz;YߌI)eZdY%Ɠqǚ[SwdL/BX]]wYj*%KgW>3e k=* K:.QrIXeb^]0eolX? 80TqaŜee3APFivf6$]nN]ӄN,D>Aq.:p䐸>r*u "s&` 2ŭ&..:}čq6}*ImKЏM,o~=|C&vzz6IWj&ZoV+`;}`X1u?_f~(nsN?bv*^Rr.l,<#$3Ld) DT//ٳQ}$qtKBʬW$ ҹ4tye~{ffH֔yÆ%]רywo>Zarv<]=`Qiwsv>.zoQ jFcqvvm*ǥT͒Ow=ukzT)U*`BVȕkUȮ 5K$mPF%cm%WQ%CƬ <g)pIXl9fH%_z&FW70Y8k (8=Ϙ Ts ZYo[rn`!U|Ujb2ˢ\7OQ5)=nܲUT&?U%cTtJXYtB˒"$0ɘ]#(Cj_\2j<ّ,E%`P@L:D6r#bqf1"DëY`&k!@FQe%ph-M܆d)2 h1/+Y5qvԳ+dW喁ZbSȳGA{t t69AmR&<4@MxHc/z~ϻyF%ŝNlhDvׁ;aerl:Ǽa>SOAꭈlm ՠ^U"*|Ҽ;Fn1[lj,Ja-li,IRazXgE;}T5./h~FtI]vD8ȭ߸'=/:}jC}=h t3΢ツ'J|bK۷J:L'Ȏx!(CDdGtQ`f+e!ȝ Q@B'-wZhrJXopԍ찈,#Ã{KH1xrY"I \r!C2x l&Tfٱtӭ}@AKk)nũM,-栜)tJe< EjKm1P$cu^܈uv>k^>RGT@Jb7Xˮ|5Z`v.9R ?qZmqHƁl J6ly4&:O"3YZw.b5䶹~s)2W^C]s^іr jyU@0V!2ȐJrMe(!rLu ,E4 I2{MFNFa&bdIU"L|ʧJR0K}H&X/yLNi7-èy}@ZvW}W5Phx;j?CJ!H8LAryd<\)+A"x sIbv&yM)`+MFy^r.^Kڹ?`sQ&I0)A͇6gt]0?Ќ<}4Mܡ }涮t+kMջov7 k,RD!>mmN\9٢ܶtH+^qg'yo.?8g.qHM?< QDJ_A$OIsd@KQ+Nz_42wv"Tt`t =?3w^9?0j%-3KèT EM")˽Q|Vk7ș[W :tE?껈GC]7c)|vN_ۨo$ϧ3>6 LUN52FzF:)Nݮ2EZ'OU-OH`)jGbhrN#; .\v "V%\>9G@Z< ԞsQDDZ uHƱ`c1(T PH-[#gְJk,-BUdNe=qgj'o^l7'/\bkMIEHpֈԤ͐ 5Q)B)HJkݮ>NCVl`;8b+pJpQF%zv2Y/ݠT}ڭm6,H`WxC>jXA|Mi jLEݻ$ށ5dDT y5ʚ 6!pDX%ey9a."ukc[hZH"Wxʪ W.;DLP kaz4IKVQXc]B$M%DiqSFR`)UTTޣ&2j*'z@AFj8\^uL6 V2n+-Oӕ\Aȉ8*/2E\L6\0(3Y)pM|uɕeKueFȾs{gbťq.+2>;Ԍ}^Ƕ}._"Xn_ j⹎Vt]}=NٞGHU6E{"`1 ~)YTyEdQ)YTJEdы>Vk,uK]R.uK]R&DQ)}6!3J=␙ZP^2SfCjJ(!Ϧ/_Iy:Im((DBER U]?qH,D&"I!gqHOȄzVX/ᨯPiWZ<~h22.\H/f#k5,'XI4&xT[yrVp,)s͞l>#JE5 {m$D)"'2Z0O + Hzf/~c4d@obSN |=3@'TJwPt8S\yie : i J(r*R*ቩt!MT/+z 9,@\`V BL%54rgy:%*E!TIn hBY"q/q-n-?!N.eaZBf~^x/Ho:m(o@'zj.[z?>$$UCy~ -JY*Al䡢X⮪J*ՕNs)l5c;J_-߭-]ZAidH*(g}p(vsc-P"aHp:^BZ =B ]ݐt*\͗+UUvy??=N%V|n]Q SALAq7:A?L&2l a2q_y3 zY˄s]1ѹ?.W9ٹy6IUn0mb`|KYzG]Uj֬f#7^ʕ=}K-N֬p45zlkE]7Z?Z̐Ӯ_;_uQu}`#_ 󄅘TLb+ WC̐DdVC#ڏT |T3a'*AmTU4A1jis!]UF>19m$|nau&'O;z}gnMou*{19k(+`=b eKD#46D"6Uy'wU}4ڙvK(zCerQPF.,D#k% RTkƤW"S%O.99^x4Lg_ż~:ESPa^QLOdzEUe]:hZA#|N_[ V +BWHl H*XdV5GY4ǽJF&<$yM:(eq95J*p32&"UܩslS1 .x뜡BD Dy &vL;[#gCqZh/[n31)y?&r%i'jh3Ztn3 )cqvooW۫4:g&ճY7oVPͮ;-_i Z^xs~fOWuǼɞGMc4<]E;v^uV,mgJ714qcgCeݟats m_7WEԂoZ5y@TQ W,(x26wc+[ҷC=)-j3Ntg<ˍK;%L @2@I !Zid$_RǕ !PѶ&MЪ(&SHB x,\F"F m8o M }7V|}`ZLIWx>99Ų ԐXͪGKFy\'>|pV0JeDb< $E-pJ0zf !$:~0/^xKf[$gZEP,l@ ހT#GDPwzC=uWnM! O%)Exv3Gj<{V0 Y?r%u[?l'㫾/M" FAB @'|8:l*8EǦT+)vɪY+#{ rYƊ$|g<;}:Wd=AsqKFazE ?ƺqwg?poNN289wo;\P3'4~.q~xuu<8VY>\3TW4|0 )weŃ'ͧw6Z57뚃ؤk樧AM߿$!b; .C[^CEc,9nc AJJpv8W;'.ƾK1<Q@'F׆}X?_NV1ց&JF딍\*:0c+M(umIªf?B~iNr\$dg(/pyySym# & 'H˓)P\BbZHk{alhAYs 5HBl!WQ{a5_auk XP#ۯXͪ^GWMSo?r(UT `⠈(J<([PV3hi?(A2dc#JG'8d$ބgZwH Mu(Q>$6;Zq Vr"&P}1rpJi' Fs妴O n |~`87]0y7,? Z+ Cw };-BXDH-G\ XiLXS;k6' {XKmRC Gz6q&ZQ0sVp@td =!"=AZRRA8)bi1g;JaظuFZwڑҒ~%6KZj&O NQB_;fP-Y+l}+C ZYDZő ka!i3a=)9h@W<(b:?iU=4x8{BL 40,}1SG E&E! Xy:<\7ntex0r8g*˙F4h&U$T :emuZj1*xÚ8伕w&{6GB_ i"][E]{vC{3 bDvd/uē]nn0߹;PD2NSZ*cb\{eИ DBzd]:`bHeZA#%Ȉ6z|J@( F,"("M: o Y`FgutL_ ꘓ׉' '32 (*ཙ PǓ RL DU%L4f\Ty.=2H(+-n-NN6g/O<Ӭ!]Uj[vfp|2=NXCyЃaxzf lɓbabbRR̳ٻljHLyQU.π]"$è60!B3xVqgdRT^WQi:Gp?4 ~L2W6}iL=%y` #7J)ʑ$70A|Z2==+ާn,O~Ւ_K`JȢqbB|8Og>|qB!NQ ;3cWfzb@zʔx~bҲI__h>Xªl||]R.}uE2.eT-_~m2!!&ϵ\RYOVOuUUS9ͳA&qBJ J ĔKZBAW/A[Gy.:EռLRPtu*lNUӓyfOLep7lfa3|Sɯ"# h\9K}:ب'vWc*וxx{vcjq5#Ow\hg#mÉTj[&hAZ4̈́n1^M2K1%j5t{H'"8]zR\vtƳIWq:"e$RjY$QH-URa(A fCLbZ32"O:9Y}:zGxK"iiL7(bA& se:k(oIG{׉KiW\ƝSڢnv\񂣔. ԋ@"a_m@O3>`a. 
`sO2 "!*oA(`@& Vgz?.TW|D#,UQac5*hjî)^J|HAgQͤG'c^uI͝v`h֘S+ʪHcL/[$ցOHGX K[w'"@[֖bI`g"|]w,|s= ǁEdg eqcQbEI-Qʅ攓CXc+&E::㐻JX:c<=ՙQ!x$/YDuJC"4 }}(.%5tMO\}1N&9;R`̓7?CUA[a^)+x:W!I$sCX $=#M=pzv]dK!sv2))K)n{j~Ӳw: "w>ꙻi& 盭`Kbl,zx Ve7Bۏa$١\Bv.6JqZiWe *aVUds@Xt@ uDQF_ F'8 -Ɣs1&éƘƘ%j߫1&*1>j!)ᨂ*=ŧ3c`x>0QYIGfY1\yaLR+>Ζ)W0is>iS}y~& ˅!g =(HuvsdupaR B+FXaf%hU@e@Z/$&0pTKf,E.3Uy*>@_Yk6^U"mGb5ôV(:/gaӲordVdog7Ue) ޼~$ja6G8wxqNpF dX 9_~Ɗt"X+/,\M'j5w5 T z54#QM}VB\ʫ9EǸO;qz ~qi`-O$YuJЃVF <)qo Qy3QswF[jͯϟꛊbҹVpb#Vur鸘͇nvu(O1t2[q.˶`^W5Qs-9˝R 3|;eޙ8{mñcw̱kk`Sj:RjsfSys~(E2mJ&uKu!o/~qiJ ? [{zmIpufSHqQkRW V$T D% c֍^r`y{rR Ð.?7smg+TT ܯ‚ mm_VTo=w>H=+v]0yCֹWX;KM,Oƽ#@jϼ1h)&+ifKB! ˏE.Dr S+"֕&W(WmCN [ !hJEƬ䊀ǀC`H#d-T[J)9tm3Zj;Zn ɾQ{+$bR?`\]~k P̯{CΗmwϨxO]Ug|>.]ZR7 ~-ZB|RtxwXA?JNo0=U#B`"RSFDD b FрG!e ޭ`5,S+{ ̘f8_Al[gpyN+wãhvDVĻVj>>cXԹwQG-;匵\ cp,̒ ,?k:Z\bjqILj2$&1Fh$^}FAL%Q!t:9$+6^uLc1>t6L'(&8񊃚 cȹmlI1ϫ{<ϾcdM .9Q2bب>歲8܃7K:qED0ʃu+$6!`JUc*5(K)µsz >z<̶+r6!D9OFê+'?O_(dpϽ4d1.g6AA0x^6'fܔْv53(E{2/Qbr"!!ګaZ٢ZrLT qVөT3.*ٿf:vu=M7W~*)k6ONÏ*pQ|sOŴM`>|\|%@9)T%C)1WĝA!x1. '&%0jF&ǘg%FuoDH z]38Z:LJm9[UZӅqƮд u  M 2np{n&70L֠r?~ېnBɢ"ȢV!\l,Wd: /!Y`9[iL$ ,{. &R) Xn3(s 9lYcFۏq<w쪵e&YFGz_1 sq9o7Je30Ͱݳ۬q@2!C$tA&C&&F) MIGLrȨNƹak<k~qYhcWh[ֈӈF@ƪ2h_U(F'rAhi1itg- mU#Zid'۬ b@g|!餹 C KZ5B)?ղFl-Y.jOzqRͅ|qɎzv"g^&G%Xp^;c| B1˒4p¤,ͲӋЋ[Sч|7}xd.=Qo8?[+' A0:q . q]V7;r]>W䡒r鵌]MwMLNy=,ډ{hM;O୍a1@ȨU* ťY1_oiAuOGhu؛RS!zwւwW?{7]?kPfr,eqKZAmUEmi)=ZdL:_M кqh㒉Q+( b1_yiȺ-7A} >y}h L'mLbl :zm0Pk5wH^vZBJ傴.E&k3p!k s! %M/F㮹Zn8&ęW[+O ųɏwvg&A'>>LFp0BGscVo[4r]%l啷Ubof:482ֳ53frIQS*DtYb"Ι+2"XƵ֢9vnvL9305רkޓPi52Sl͗mիA'zVhUNRNޕ2,rb>(*|QnrIƊ3 Gdt:EvꔝLb:]$RC|.wPU:jBvKZr4} cU)%ekGOGq‛4&UI|rp1|FTO8 6G[ϐCA̧Ű}.\3Oay{zhXoa͢*9`+d)T,`J²T ws,g>A&X\Qiō+JNv %MXk^.2(eI[QE@PcrYFc%Z8ɡL#ӁsٖgkloL>||}~5pdyRKp[;]XŬJ:c[iPԥւb]LZ$\FxQzŢZ!Yۮl:Yε$a*8t6 sr!TґPھ9KZx4"zgV:q6RT]YOdY  >/[NǨ*`*"W%ktTdRZOr$(E " sM 6['Y.͉.{9?$9> @ot=߅ﺽDn$m2 oS]rm'gD `Prj]Qzu5k{ZNm6[2߼uԲdGwiy|uyyf~(_s:sކʒ{GW/!gks_-JϦI-b|HjGI]k,hmezv{Z7,J/{_̟Rr/X,ԛ~gz=Nf {2^ Q{; 97nF%3qonꥵJ 8PCqE9$Y% l;YRCfRr8ޑ݈}:Hڽ09Ez42"[ <Jza  OV*flBMϴRPpE G;,=J:&!+GcyŶӪFΖ%St_w Y~hk#zǡ_ AדB]/3 gcF'a%2I- rpт  $"9*`@?~$TyMڭGA'wu/ @ [rN"D{2hd 633}}$Ir3 kkF$*q6xDD"(hK[cYbSɒz!6M'H .[{wr\/q>/Ͽ\0q+]GMFk_UiH.{Ҩʅ3k竇.œՕIbM3L#O2xջgLw8#ŗI]]7FKe8sI􃛳ɔYjl5KrX.2f0'%^GV7 ᴐϹ~\G.saz_,M7+W+*~i_^8^^~TI-U(*gaV4Sg̒dk+.Jղ[(UsW;s͚Sv}7Ӌ/Wb8(n}bޮ튣XmNZ ;HïvY#Z?e0yì2˯$1 G,XvppXt5*Qli~VdR]BF篣q(]_OH^9;g7cS` j:g{uA~w/߼}~}~u@+0MdZ?/`.I-~X*zfRta? \͢H}X/o+9^\ ̀rOzaEH{a!<_ЪiaM/jqIOrpk!vM~uxɪF&>-#}VT5ʹYQuE.%`.릃?fvyt*aY ̪ Ɗ?5:nrhtr:ͬ-qΣuGyTZe[jBBm 4auFݱ# MXբ=$G;#KsrC(/!,d Aף!Ll_}rX^UNfBr5I뫓pOTяdڬo\Ex@1 ޤK` ؅>MxUP)nCԛin! ++ ]5(@]JʽRñ $0RMF- 2&K S T;灚g?zs"E䞘q҃'tQq&r*ΑK=4܋Jq?LOs+e{I?ʣci5>A߹;K) Tՠ dQE}EH+kTQ|@QHӨ!]I/^?_@ ߉ϬoVA'Tȑ!p%9Xnl'sX6I*T)ybxe!' Y0caK|2ZmX39EK&-2Xτ h2!P3˱}tJFq!on6o0h_m$JAO;]RFޓw31Wj-ngMdU"EQTId$"/q2+؜ѧ~= l+r+>dσOJN{)e VbH> \ VyNA-½Xc %yJ2#$bOHsz\ls?u?F&]cs'{f3uZk}ء~~>A??}BmLF՜:6hX_>"t}'^9iN^Q,j^?'ۓRP)HbOp|.y4l`8dD?Լ|ꏪJzQx8Y~8}~=H'>./%ARPR?18՟5ϰ#0?q}89?ŋ7z4)*%գZk&H4BL({^|\MOz9c~Z u˥zի~Σ5B1U"dt89<.2ӿ?̹~׊c?0mus3 5j0zUnU$OuV4q9}[ U|b՝]uI3q -^Ueoi2&7pr qcc}̩GShS  6bduF:EɞM^ snIBʹFb-1ՙZmuʈA 9.NQ2oy MT5Ԑ%u\=6},w5kSwgM]Ypmr. a ǕRfNx$ C>jl-Ɵ`ﭔ\ɕVLKK8=mq kl[=^l%KXgHqM^QIvB NuЉYQ)5R|^ Dp7t41UΊm r'ZZn Yv&&'* u>.J!ɢ_i_mA?4 ߾,w2A>^Ĉ:xSJ懜o*,$T\&]] !  
Bvzm2hGʧv,|JDWcƶS-Ker zjC$DbIAL.$g5B6IE )/RCAL0ԑh\b(U4r!\KS]s36tNRȣ&*dx$;w蕿\]QvJډT"9@\eITr| "N]ku{=;b{D`B!RU:Fʃdy%iT$:3K:kʨ4p"G\9:n2$9 TZJug^&Quk&j>w#wO)jh8 `9.VZK.zA t_vi;Po`yClZA'\'1!\ӘtDZtB {ӳ:iNwWC D+3ih :~הJ!bCP"8#pñSh͉g,#ܴ2Fi^03a3ҡ_v+qM$ ׶vDPq,s3dW)򉎕oDzȈ847DxֲqՖwsa ER8a$%Ƅ4 P=`ŮhGg3d[k7A'ٱz{v^)"S;0=]{Rq2 H~y8ݻOo2bkNv"nREL}!RֳS_g}*bWGwU:"Dx.UP^xu'ϻ-^[dȶaN2oWx]U+5Wޜf j1.=`R>x7TALXKS8 sQGAyCcu\ei%us\{s+ g ,O\eqbv\e)؛gcC 6խe?=d;f-̕ޛ^i h]@e@-aҺ!R+(aG`Ef+uy :f"?Awbs˺g M04,)R28tpd4(,͒|o&m)뙱1$SI8ē\xJ<80Zks S%5vg} MѭMZ&'@(unD@3X<{(t}  hɹ&ї!pV!" ;ArV((7j%5D$@H`!hƢ^wrX(*.v!ĂʋFI-jA-mm"cؙ8H)r4~x1$BϽ>bmv4nU]e·xpC=o2$PJ乢`C42o;8x\i2i.|4ʙЩ{"9e'4Ƃ[V[\:E JQ*d*y!ĹYWgpί=Y-kjQf]_Y>p]zyyɄ<..ka$TV8+^.jkrCyw/(:y|)Q`zXCN01{vSAB;[Qׁ)2|J_?wqIf J*Q`W d ,bίBGh*4A+b9@#@cH!l5&"n3.s\'vL>s[ p&Q X `afp2sv;gsmjy0_6 Lbsz/P꼞٨dit} 硞-i\fao䤩 Pu7z8Vu2ͦBmS]Ud\g:rb[E:(8^QH4HǵS-sTR T+ʉ(S=x8EhScIJ%$biBQGβ$õ/ֳ*s}Cj! DCB Zk*< I eH$"rkh Wn6Y:K;=~u~[Gv)Zh.Po0Ŏq5iU|4&)>Ku.a6>~~T)~MTEif5xfďvݏ/g̓lJf+84cjl6F]̫/h98z$XkK o鼭ތf֙ bqp(fK> /zjsp>o&2E'Zm+ϚAF\HnX OH3_;˕ajKy纲~] _?}_~Uɯ^PO^| G`};q}P$Zi RU=Qr0c|~ږϧ+u˕AYF`pϗnvm`8|\C `nEgMc{]V]ڕ.j']*C W>$cu2-#·\lN J*pq϶w:𓶃8V^[uHĈ@+>oݬYR}JNc:mltl$NȅXD- ;6xH wGD}!L8$)ɀg/pz$+2.q˓:9ګLŝT|GUJ5z +bQLG>MLD?hASwRhW-C/8.n+\eXKTâtQIETfDŽ -$d$VGF3:${CQ"SDx´HZXz|CFJحbZċM>y4")vTH$g)[ami`*J2KS;N9_Q T&JOcr Os%vt2baI\Xd%pJ^kk~4tpj?'|?>=3YSگOh},>:fa\j^?*=n y< iJZXZF OL3CʜTO*w&c=t1,%(at\KM$b*|!; D锨ăCx =&c&=վUn.}iwσ|Bf~_^z/HuA=#>fՄdaw"q^XJ\ @i#%%B p՟Ueg&8b/,-Ի)-K"$>8T2꿵j% SFh}NrW./i}T{*dO-N 2ݷ?aB! u<7G1b2TiZCXI]5GJw#9"MEKZQBb(ЃO4g YyPAOcyi,YP0Sz#{,II$Njm$Ԡ}ϻHY}vz!ÁkI</mT E}ZkEKkMkDwTdVb|@dJ&K@:E#hk;6'Šz(ȑcYAzHp(c˄4;B;Ha*&* qC 40b . 5A) 'gR 8,Yw9;\rz 9l ?-8Vru9V;E9QN޴g㦃pIYS4yAMB*CHbf8FOo#PH;w$IFЫVkyp߻u*G;r# yfM[3I<'1anU6|hQƨٵj"T$t@<-br6/|5IW٤JӜ#q6R@o)8Ok5F8VUeBiƨmX_U/1?k?Ŀڡ5@eWw( {rRJc9с1kˬ{e(Wkp5xz^ِeTT),TueW$}mb%ww{E<4')S^WuOS Ih0zl,/^;nr&uظm 1IJ7-Ҝ PSf^Ɵ2,vަƬf5YUEޣ[Z 7 VU|0t_cb?@3rU톧.azM"R{%w\+My% $"*mev;9yz_tC (λr<(MHI%w8<`"Y+pDy+N^@N˽ʿo*[qY߭xt ^Ws =ƕR@!7trF}57ޖw]D۠[)M4u%(!K.%}AɞG0mV#vGQ/nEK'\ * @룒 \g]&Pw:q ::&YhH]Ou41"XXy+`$S3rvU'OPm?W򨯽޾M;۶}2{Y3h5Ae 1=^} ʅQsB䇲Nn9$,DKIDW V|#ȼHڻ`il"[soS-K! rQ ziC8$DIј\Hj$}Qw{(J胴9]QrRGΣqQ;CaLR8p٧neg܌(8& pЁ/Ѧ,˵׾cV-,^Lu6kϯnzq^+FqYNqǢDI*QT\7>l0&1 SL/IL{9;DC ZFp (+N^urEC]-(|3L*S(t nI+Q$ $]n6$%8#ˎd+lE,iDnWŪ *GiўeYl7ՋخƩ+'|DdSSoDzD~-Bfaz]2%dВMvl!_$̊}%ep?)v*ӶI5Ӧ$WGrɫ䇫LSqJ4`ȭoN'o/=mq~7OF;{(^'HKҋ~dIYZHT_o g}>c4Yr@A Kd|rգsiқ %tm&mw'YHB]J\J0@zC^JtLB$HѲCP&W]4e4P\dde!a*}aSe`]&1`[Nt% ܄BVsEtH)>p.pn)%0!= uw.["=D%Uz FW6'sG2I*{yJCde!II`$Fk6#P,5 ]kbߗXV{ M{{; LYLw4vӜ CU{'ܹvTl)j&7zR(OV>rdBFuexȁ|.%TJo_}dASW!j؊ssmanz}s:73 ]{1Zd luR9DR$%kHj36孢3).#e˗<__ùNƷͿٕ¿)qQGřFy*~d|Wuyy T?R g?NJ=`*]XK(%xC%C4Ӡ}_dp. 
3z2~߾|-ʏήe͓ބbzMo7!XxR\塞ao͗:fړD!a4zznpg`43npA,1TKdOHKy#) JB7 菩hSI ovSZ9~1䘍OVIv,zPچK6z ̀/ rqD쉊ɗ/Nؖ򰤁7M-egm-=|NmJUձO4=14SqyG[!LpСHbrQl1X(I{,sw=X{c9`g "ZÜ4I9>dGNZJ(9pj)=MJ.%@tOR%O\did\r.{g҅rDp}dQc@/G~|?&;@k5c*F]6lpAa(8:rS*%cu53V0=עXTFI tѧ<$$FHEH*zcʊZm8kԳn|~U2FISZMN6{3"W'N_1%]~}֒L^e:+L䣴/Dh=N=ʏo$@9)T%r3(#$w&%4$U-Fc3T}Rw UzAdĜuȹHնՆFIB5RVMml u['_0K_^dbGnz{$@Ϊ2h_(l(}< uVI8kI}&Ȥ}Xi'۬ ŀBIs#KEEQP!jNWjOvq͔kbդdCy]8]mv.KJVA 0;V +0!X,'hIY e۰bIǾC=܂ [g[AZar5r{ Ϭ iEF\KQb1j "#ד4[QJn䪄HcK TOI=;ꖚ.KfH(ul tETdLHSr09cYif"T:; TwK#+;|=Xs/kػБ={aPYso0osVOf54]@Pc"o!+A̅He\fvg[_=ҵ #aZNf]*+7ٽxszLx>ee]쭅J񎽮R°/O뿏..*=ghK'뇝𛐿w׃ߑ?n5r[Ոl>[˛nv=wG'3>L?祹y7&-M{EK(P.m%& Z /TIV9oH}$DoXC$.{.AwqjpŇy F搘K;&2\~0Le;cԍ3dbV o>?/_|o˗3ᬽދGk/1TΖ{Y٠3߸KGgRr܃%d-k¼xBF`1] tOVx=~J,mp^J)Je_S/F;5dkdsNᝅ;4dnx =|Xr)jh}>nzvv6-Ro+ɃRATY@T \AiWN,x ӛwQ#^Hh-53|\2N PyStWɛӋ ;ßnFfNy؟_uѮ_9r]t9 f|?x\//>ZW˯o׹WVF$ۈdl#mDHщ5"F$46F$ۈdl#mֈdg 1H6"F$ۈdl#mDH6"F$ 7R 5[_ckl}5:X7[_ckl}5f[_ckl}Y[_ckv5[_ckl}Z-5Ukp-Y޼pGSRTA#$+C 1x fœxJ,cދ(`{égYf:F>,;ܐ%;F9VwQv4yV{͍yJX\`sw6tFQ@q.A3B fn2kȐ8ͽNu 3BJ傴.E&k3p!k s! v[gՆ8Jm8V>҃MH2/.ѼʙnB08|Fݿ6#dDSLHwʥ9=yi51/=82뤔aە4c! h&zhJVD{C@&>e61x>)xkfe 2h&&Ls85W^Zt<@lw JkKr+o bdKx3m+XWg()HSFkd0:JۋgY5GgU&p$y bbJ 6/.%f:Y￙ S&bʺ6S,Ltf} ~&tWc? X,!ŨF < kr 3Rv)7&v'Yz)_Oq}:^.1{Uu^xGKodx%b\Q!JI[w<7$X&YnLQtۥig@cFƭFxI.ں>gKf']w )x l2Y r죥zUmgMQi+zY K;k;Fpq̡;P2;  Ř&U)^8 V J*`W  @}a%zG;%T$i4Ts1)C8e5&"کb1J&i01љ :g4JRІe'c33ptt5Ue=kG8]֣wO92jnhvf5!,qd`:3>yc1}U-AΣ};_!f[&n\-k#njrn]˼t5Ot,ޕ\Fs;͗iڻND8^rJ,ӎdzct={D;!zHuzʄG?hw.=܅Z|Yr1pTKU< "%9"h N'F5Qmor"gBSqƒi㹳jǘ2D*B2!wb =] +HqdKUDo.v?< پ??}ʦ:ny-Wf(|WSBLS z3b,(Q$KF'IQ v mx-4* .Ū_5_ILOT)rdFdi'mF+-BLd P\ޞLqݕ+B}DZZvOzX rƒ" ,G-5X& Ir&ڶNj?hǾMBH@5O KP F gY.Z'n ē+5.j\ܹ(R~~q!HYEC(pjW UJWqQLVq$1YDNR|ua>>~9>y{n\O*SB x"lx1_d@1Sa8ՔR1`/ȔͳO-I1+yzB>r(˥+p{oNcf:mltr.:i#R$󨥡Km7pIo=wd ˊ}'@KsqrK(xO{_>#?x}GF(7:8,Ky\\Nr뮎pcA٨qv֘]4 ~ςH: xkյMH-:`oA$PdV,ku.Amon(JiJ\ Y`uɴ$Rs@IrIQВubu/XO"2-(S&/P$әD7 @'f9GM2R\Fkja@3ÃF&\` hĒH$59.gmd zz}EWUWBqm@I)ڲ\Ȏ.x&f)ASgB&PUKj.$&  *+ϢD;\ "e.!g))9PanO *o8i:[0?3NrWvdsձCg0vZ}K<#Mf_a&bAUa~\#U>Dz|WU;`9پwɏgMV1VVX\gll84GT r"iQչP$O'[Oƹ/*2/毇jz^}:,&~2+o ?ů>tqTaN 䈂:Glpi~soEo]`RoKBG8(./xU=eJQZ}5$H!&<ѠrV!N&9wȬ[?T3NodϭPn$"Y%<-\7rY|[Wz8UY=B:p w|~ck/2Y?\_#u ͗ ʸx`ղeJ[nZu &ŋ$SfYT]͒g!&{9ZBA.!?H+HV>~RT7qj BZQcRƜ/4G߷@j|kfȐ.>. BF%Qvɋ* qhMK;hwȝiNȲ]"?! /旕N.Gid$jYe(],t˛(+t\LRdSo-luD*@+@'΋Z0)V\ YNo9eH B"m^"Oo>pI E?&VggUwyXgW?n;ckTP-r h}.jkW ;IʀΔ< #Eih ^PH{̚b : @k~y}%CV]TSPYmcE<ʯENC@RW ]-By9^w=LˌAh|t:&J%Lg7Twq |d6O-W3=&1a?mfk^IB+hG Swox3]+^4'P}<-x]8ݡ$"Hp*e<Ѥ]biE ?(]JDp2iNndzj>ӎ/:! a[SD9UI Y A83)HJ'QL1ЧWA] 7UtjË=Grz󪏸 5C)W۸Y/Jnךu^r 0d Gky6ҘJ+VS1^|m'IYBp@?.'V_a > }yώz xi*f0pv>oF/K8;a WߗsNKnov&I/&id[@k!Rѳn5ѳ?:ޛs]JЅCx}g" 4Pmf:p򰙠'pw?O\A}jyU$[!7n=yЙbҔedqF'\&q$Io 5riEF+\J#.yU,}I0ybI1ט8 ii:@(EMfBʴpI Pg>&~Iэ2h9-TɔQ*DX2%"KLRr)Tޚ\T FPhgDɱjoAΆ> MM!-yp~Ig-QXpPS1A.bYtRzw'M9 Ԅ.CPSLD浅9d9DaKd҃Rw!/{z(֫N0ڒccHdJ3GHxyHVX*uFrEE,#b,A_. 
mAӋG:l _«C@ZIfUeLX >T_ݹ؜/V|ѽdK'0P~6 !q>ϣQ̊p)iZDH堅*GuF FT#0t)(.C*T+\qɉ`JweXm8n4{('UY]Xue>ŗh ]MK^g5).r^.rvJg% }Jz_[#7ʭOaFy=nw~|$GJޏ_W/}%VDPBWW5!1ksle2ְp88BÞ;Bs=rptdP6x:#GT`&s tA1Vw!6PV`Dr %&sJ#`g*#jr|l%FNh>,UX-S_Q|8z왔jQ VŐ1; eA"F YƤAjK:ApUfkdZmg}꘳ SRztu3rஏ]-"f'o}ќW`FjJ  ΪelaFUPg]FǧTufyw6~\_vbRsd|z4oh󓜗(^GH<<8^*Ӎi˭8wĎ-vr@LCL12e']t i,SIg\jbKy%hdjR­ H; vgәuISL{gk$djYDveJ)C3)esRfYQlZo S۠zYe4^﹞ GGB2y.՗bp.Ib&>@eG-g'."wErzM!/BRבy/1+K;ZvPMs4ɝݿ\wת2?W@b !\bXj+==6Z.HHK8agHkZj\$l RdWXh4y~YPC%6tYvJ:T`$oRҭe[phK6!`aKe9D'3s@e^q*E Xɝ&,W\eP$zρ5VOSm=l⛮'욮']s3ȕC 9v+_K0:aߘ [~C[j=iWwOy9Zj&sqVf!l4;ݮOzib%.pAs-WC!]s̛Fio5O˝j9YԷC?MMbfOc{CrW1w_s;pI>+k_{_0|[;q^sQ!,P<;zAW3M]llb@J};}7-groFF8&FfoiF2upfiO0WӅBuC{}g" 4͖?IMSuwnN]g&N:OtreBq5377=DM&P;+9nHlW[fq5$.# jk!TkGBo Ӡ}Ox0<Ht0<'ockeNyyZ$Xhmϗg_ Ӧ7$κy{6, qY|lDme_MVQ@.B[RN9:"Vh оt_5GyկCpX:~X2n! l/tvWG>fueV7aa.I76Mr_A^FNA AN~hb^tRHnK 93q̾T;IuRIT rӵhbfgH}qb/@t\76g͛gXsLIśq&n2~hސ8j^QZ6Q)sz5H˨GO4/U׎n[dfEZ.mXhX,fNfo-?frn:eO|)F2lp6S=?EMkm~H;e{*E iPyh{$aҢt ۞;GW⊡d$Lʕ^'y+|wwN-ǷT}BJWI Bwn4z|,wrZSQ7"-eVe6(Xq A@Y SIŽCnlFLjZ,Iei bY,Km$Y.LHK TΆG39ߛ}Gznw,̏øgz{_ڬdɒrGMMܚ+IH+\q,KQʬ 8I9$sNfԩ㬦R&Ea k,.@f!l+28˹%@DX]nG~oP}>8MۛD(1Hl9>m(JP5(y XNU_Wu|%n~u/:w[}, $ +&tzEY7g`2A1i G=Kn 5$[rt ?!I$ɞ*>J0>Lg0߳#Rѽ-uOW׻G;ʮ9)#Xe8)HX UQD J1u=33;ܛzaY]  ɊN[lBvlrIVw aBE') B02>d9x>gcSI5K0/ߪ $:<<?ʩ DsE O/9FɺҠY|>Ip{i>94 a(2tVO.kq\KiEl3 E.\q?9-˹]8(Qx>+^|ra!Ry"%Wt inFf (QQ0iN>>=l֟BzM6gAI).: >ߍ&.U}> 9EfFe5K2DJĻ|?9~_o^|uWo W0q1 ws+FFQˡ?u)Q:j.lt }?Ii*&;O/Oˊ0z,5W?n4oihn*IӲQWA9v.च|v6v@ӹpq7yFbhcZwӼn%s*Řa@4a[/[㦋Y2m;Cz v =F(w>e KAj|sWL#,8ZI}ggvY➀|ռ'];k/:5t3=ϯ<KE+o3cHq޺ӃbAR مjYiȅ6gXsa+Dew+e? A ;f\ i#Qxa`W^7Fa%YM:liO|€ާUj<]s:/}_\^L鰣tS ֥l|t&hGC[cNliS/h@j%(Guxq3~-bK|xQ@ύ78`48 by_h><ϾCj4xlv9^kE:$2 ,'2eg4g*#![,egHeEbSڧq@mWK.,xjPۨ6)KhE-YD#[M&,Rg2D'Hkp5^JJ8('%Cg#XZYh6kֺ+YV#As̴`ɻ8P欵=JIA ja<Ïy⸚gA>Ġu[HF9fS˘򑧷D6 ;ixF٩z:yjz: (=`FQ)" CV' ܟ\ FkU3hLHK H:b 5aMrN[==GJ_(UӢAg7?01tSneioɖ6@<)pUh xOG}=q;kD)S-bgD1.=2 ?hL"!`o=.mw0~{߆uY!htB.GyBw HKBkWu?!/&U lƳӯHӱaJ-8Va:oz-Q]5,I[U,e"(2ќfaJ e#qʈ?^",;(%f<OsCMl0M33Qd#bd]и䘟|>9j2?BѫYN0HJ9^F`fkzF~Ec&&~}:<3EN{ÿ9ь߫ihrr?oq->(.DHBX迅~W?W$0Eb'x HqU=E/!yퟜ$h: =tv8ȷBnҦME%ed4 &1|<XtKDZlJG1,{:d(MJZ7,~,d_ޖ1 7~?GaP_O$)}\QEv|~ً_N,pxn0&9vu.Ko'Hn,E5<GkhV/}ߥZG A`;OA/?/ yV.\ejYU2Kha9P/w\~iv*n-;,8mvt9۞T3W`xBݕQDNGđH#`BDe#TIi/R0*LFH7nݩL+="O/?IK$q#L0*ʈujKÁi'Q -v % t$߫?Vu?h.{gᚣϓ:"{PUiR鳳Ѱ)h]A/VV.HK.mÅyrǥ.huؚ^oQy"+Ft]VÕ/I4-?Z;;_|pxBN9DLj\&.:쁢;n^I}^fspJLDkl}d7v:#O] GIK[kFe; !! QQj I!ZQkiM.eAJ'0!ZA۠FH"I`ci9`t>LӎEZwn(99G^چCcZѾaO_6^o^.Լ)`:jh~|& vuUoҦ8h 1eQH-`mB8Mk`aiuiehhM;=;9L"lEy,UB`N .4DHAV̠5DhAVLZRDF4QAJظ!b>B-YkP YAMKat; k #DJc ^Pm}uz~`DŽL V+1$*҈)0R, lWZjrA'u~\yBh8*s P_6HL1BJD4qFL8Z"~hM&"~bm?뻓 |6B߭Z*rS#jn`2/¾l[>BZ a%;8$ ?bbaw xeCS"D 6f}ި>ؒX< >Xodųmvytz\&d.g'|#;0=\CpTBxҳIM16<ܢ{Ђ’E VsyCgxvqrI"|d>|d{?h{tt $hfCo62M#6qd}7ܴ+௭,·'C O <_=__8`>^>-cW_#H6z1ei}hiJ~bo}@fݚzvq|sÇa9} Wfه+ބ+XԳW+WpHgB*kouPh?"X}ߏYgti4*mtF#"7P(xgW B\=CQ"-/ HZ9j#Za>ڭ\Y*(#FpU!կYɅǶE4o`}qgf*?`oC٪5+\q3R׭5YulCʮQ;Ϩ*a[+oed>OA@x׼^-ݳo?_kA4C +ɻ1rxJ燯,ؒ2V6FdeUY"7BȤ axz>QDxv}Xa^.ђ_뿶؜\I!FBv/rLVv&leVi#=  wA5rFEKtLEkZe}ʹRt/K?0*FZ)/k51HqSZ[KCU![%k`DiL ͨw{u6;7jgƨ}N'}inVn›m0nBIن6iRLj9 pT|6Ʈfhnf(:#M1E{) w#Lţ%D5VG,C똕`{!lSA6JÛTw̹caf]| ޅOʘUV{mv ur>:x! 
@QDK:.NRi7YRXG^%cNօM,iB<sqsIH/ZƩJj[+VaC*)IV:ѝCQ椄,Gp[c[%c2(mGZdjڤТ-)$qu$~JaE1kiu* RJTm1&!Fuu $$5(ڽXRr,a3m>%TX ]+y r0]BZx _sD[vRhň޲N+}t3G Ae#kvpqAkڏl: D ٕ P~ȃR!*q@ơ"2ΪU%o0 ʰ"  e#@ A16%VmZbAn7+itҞEw64IxTF+Ѐʬ$޲[6Vӥ[oU4ĽQEɠ i ߤn$BG lӨ{tВ0LЍ![[`M.1ڹ;6"|Ie'"I'%\IbLՃ J`Q Q%fdiOdPJ;HqUߴYaKP!>Ǯ;lE^QH"NdviP'7 h!g0"}mP:w>zߧz>xtqyv]vNк/휞4^!Z|q~/͇\#ܗg _?/* \=R+~/*>_p}8{\=A0pW \1pW \1pW \1pW \1pW \1pW \1pW \1pW \1pW \1pW \1pWE^W0ث`n{\QLu0p5WTRb+b+b+b+b+b+b+b+b+b+b+b Ϝ'~D<{ `SlWn1p5W.Pb+b+b+b+b+b+b+b+b+b+b+bGWPp Z7W2 F*T \1pW \1pW \1pW \1pW \1pW \1pW \1pW \1pW \1pW \1pW \1pW \1pWW /Y~tJ;zv5hwi.Wxì=&~(PÇ\?䂵={> Vj~ W}2MtI oںgWT[egfd!Z0efG0?o/.V Hm; RzGRNMR[B\o;[Pj?dZ% gSLP|dzޞ"%Ƶ[t/Ncvh/.p*J5f~`b.'mjBσ*qwoiа讼kBnqB9f||vZfjm1nE ˃X%yWM͍u5BicYaŃA~܎yMqmtZ0*ܙ~t,~mnj2}!!iV4iDhBNQEq鞗sTxkQGc=?a1voO2׉}?Zg^p֟*Fm+d7̵r_YqJdNp+#`p#uEod7=\As*ÕqX9YaWt۟OZ%% >޽<85Cԭ˝C,o>aZ`ٛnAd[hFWQ x->O3-O/.#>CÍ%ibq6\/#Vl  |Vh.jw"U7.wA76V~+bo9eS{֍/.zd/!umv40xȒK?wx.Im%QıEyg3Kg"|qFTՆ/>k?9 g'B\S,%sşEFe@믶t[#gd\v'q/01мL~'S6Je!Vd`⡡/b[|1{5uq:Pu ~7ↇ^}^v}o YqhC+lcFӝdc_6׳M#>͙h6٠svn#4B[;DˠxĆKsNN6.υY}z/%醠UHWy( B OPiZ.;Z(?(|Ei zWuY=+i֡n_ Ɠ kRSjW+Sآ OBqpgeC%6K}ul߭0!sZD3(o67ױXLGoY/뺸8H m?h%' 񃡣N 9?$CnZ6q;jMUCۊxQG:om9E}*|GpQ0MdmT̿͏Ʒ*,D3AÃ庹 ʥ̿OrVk0ȹHBT7,{PfNpcr$u=mO%Kg:8v6D,G5[09>5F(h#:eH$6zvgl,tdp~O #woΔP-OI D15UZ͆[S@+ h٘c- /Uc/_BA >N L)](Ͻ ׃~v$Ȉ$cLL2TD|w? ~>ˆ=G?UR\䔥PDLv `M,0Q" ҔQC(lBA٘8gF[cp +p}[ C(M#|fL=6Uܒ;g,^9!%! HDe2ƊZj/c"۶7:H nL3x#h0r생_'qXSY>gyrڃgŏ̿7 xNx#QLf ͌w8 GtL'syP2YWmxj]yyJnׇW=ݾ3>:n_e;*.}nǴ-:9T~Ut>ٶ۴Eyݯބwu]?oC"˳8I\ I=aR҅|b#pJ܇bcEeL& qKHm;x{m7>wĶ=Eǣ~/F.sfQ'(%EIi)/#[*W[iJ'a2Q5#u+¥&LI8FҎYRZʃb]X/5^#mL=wDgz~43zO$Ie*||߮&PGEk^Rf }_NajGYR,ac ȓ(+$Arcp)vDs 4Az1 43;'^;"E |,$,aQiƝVc 3"*(O3@22Qw+(MoѠ;M֥ɺW@Ř'Bwå4|d BY`:(?IDS?ҴD/i"5;'YɲqA8qqgN^h ua˜ni狭R^Og_IkpUF,IXn[ޤB* 3n^l}˥zou95s_O"IU=ٽNuNV jɺV gaӹkHj|;i!c_pK2DJ/Ee~+P7_M_o^^b.Ͽ:|sXu#0 .mCXD=g^~>c7ߝj(?ELssJ7i}W__2`òAF!Z:qycMC{UT}M=.״j`'լ]wsC5$Z} .}>}8MɃ6b;JHk(5Vi'2V@Q,# Jh=:wZ"&:9JPY,>#r69?`RE%NjRb[474N#;0奄lwlhL'6О[ 0wNl :IssAS ˕N` r4!ͬG ZGgM"sFm$FeT $ۊ^܋EcyЈ=нȃdi^$҆*N`[,8>2 OU9X'UQlI"u4Q)( l=7ah:aFhNPƆbc,t'y YA t=<Ǿoo: jfZ̜1'DxQ:x_wS/G_7CX%4"AHhD<19?&6@{"L$2mD&JKfNI[cYJi5& FC^1!vqbGkkGc*tsB*t}ҞƢ8ATh4#IbIǝBaptZH#H'gJr' !rd0|*Ñe`b:m1nڤ\B+jf"jkҨaE*L<i KI I lDK9 Q 51qϳޞ,f 9gITWf/(r^hT6S5R !ȅE5° ̙0ZƔbLx2<hrO,fyÑw B3H@س<0bNwtP`'ӈIpݸ^+O#k-?뭲iDfRE"\L 1A X+CM0F/sX4GȈpD>Тf? NiϮkw X/dCx #&pYh xPRSfs\m|$8M9@UŸ30Gc* {uiodn@{Z@*s޵\㴗G-nֽ3 1IN>v6Db L7^css+ؤ7?GX,f`V-Bʹ7iuZZ67T"1t`1,>pMapΆf a@W-3󫺑*;AzX6}McT7,ŭ\IN0.Ed=MwZӰS5J=$˯L$ƜσεnˮwG3?O`iFt⃜=! Ӓ.:R}rkW/.k0yϢP9)oquea9Kq6`ڼНqhx8{?'\oҢJzAh0A/2L:M>YtI bmw$(Fme1ChE1wn*}unhzFMry[ϞN'`tg)ngca Iw /Ii}=*u y,5~;Nڵjz. 
kAr!/;TOA{;El*;tV~rz.d/ݘd-Y]M6Qbe+Mƣ KgBqяuEmȌ~ _?xD%{Y-p0d9-Ϧ @2U53w"b!˥N=`k$PYE*TTkh>{Ue~1 BMʄHw{4'!8S\r6?۾at#o%RSGBDe#TIi/R0B65Q+#҉iWE/>sܼ9IK)3,Laq0WQFlV[*L;10"5b:;*t PϹGfq۪8⊣.N'nO?i gߞ  e'}f*c&L`3xFxdGwA|Y*-c`o1rQ%҈RX!i8D5 '> oi-5s2TʘDQW0u`Ѩ,$EZd+/Nٻk\QS)EH&#N<JF̓0h׾tgܺLc`J)Jʥ>)記9eمrܹ)_0K'zANq)CD^Xωrk-yF듎9#$=ssߝ Ӂ/ԌQ0Z=xjŗ}d` dEdpL-Y5yWsJqVymhTOVF} H?Vȃ0쩷.kǃfDp)pL3Sil,PqEQEn5U$.I;\_wO/T ]{1Zd/r4 HKא2g0%νoAw)?M]56¹Ӌ{1y?-yo_`:s-KcP?)]tλͼa8p|u0](:xMMqi(NJ.>>#y3d) }vAkجӡ]J&]^t4|KS .37,CW♃Yxt<Cbnw/RvV"Hlݢ36J×uܼj֊~w~>z{2Y|b6td:#c?5aƿk d&\v=:lHgW3J?/n?\kc>\GZgNn^VD+S0>mh!L,F9al¥g+;#}+;?eg[a pق:Gir^G.|L:Pr RӢjkRr)i2)E\X?-H\PIH\r.{gջr[ ΆkRvboѬnx~4pϽ4ZS1겖o %|b6M>" Y=BR%x-&' <)R'\{1 CeEgMCoK^P.ʦ -gP59ٶaI(51VVH&/`fz6ON&OsHYD}b`, WĝA!x1Pv2/&ʌM1{P92B<-T&fEt)bL\DR ԫ*ոdl˅2 PqHW3^J3{[%ymy.OvANO>L/q6VfQqTd{rV!OM!\Q$ 'x -Jcn,(30`EBBRke79TV23?(n+3v583?ĕ7G.ԒmY+6=zv2:'+fsd[`g@REL3[j6.xT(B&ϐ)]aJ&rc&= T'j7a>ED!ƾ0b5ؖmeF##x9Z .yo˂b,-^0\cZRxeRu{[i'۬ A1 r}@H:indieE( j8ClP|WCwxq8JOjR%/r^9yq$]`.xav/W`t9s2fqr 0)Kay)x/|XM:‡|;>| {|ٙܨr|ᬌ+ޠn,~bm DǛ__6"ԻIj H\ZF!$2r-Z0̴Nj Dc!Bry!]!${QiN4rQA237LO e{X,9oCI Ir1 tj@$2q;t.3[ٳgB B{~qsw8bt˽}wneg(v/$WshBn]>:9,;m)KhtXp3A|띧ӓ`0f:ltd(6\f&g C fn2kH^;BJ傴.E&k3p!' X[D Yxo(!ea/6 >ҽNI2/.z;|%k0t]r}4OvBCL2K9o ަEi5ĔljϬRz.涬0$*[u -&QWϛk ĽsG&'~ ڨF:Z幉I8щu7꼹4wUz/ӷ'RsԓLi8ާiM_||&%]WG1uEMsnK~67o&'g'k ?Z-v^o?{~?=\_\_lpV 4v^2JkԐi]>-ھ7&!66 \s'@]>|8!If== ;^@uf+_LN~JI]5tmR.?Pwj Э6o G~4Wso {o]vܜRlz;8+ۓRɚ.W&FH -ũ x䨓JR&Mnσrؼ>+z}qJpqYP銗1JlO7^MCɸ`\ %?=71a9V]ZQvKji\֛FZFyhj4jуq \+ZRSWltU\báYNtUJw*(tFS/!4]=0GUEh {z$J+6ٵ#]m3s-~eaLB*Ks3r-2M7߯Stk'<.3R%J^rvi<əߣZ.ZΏ{%n5*{\*MU֢i^Ph4]PʑA,iˬf&џ}Z7)jRih؜MR<}OWO V-sqz.L?OB/fo^E\Qyj8^mZ c'F-c? \C֊އ9ԣC)jHY`J9*h;]GCɀUCWLvtE(jZ ɻ"U+P誠;]+e%#]]iRJKÞa] Z}0XPڑZUCW ]$5EC]ϰu]\xص]NW#]!]YЌUڡUA+tUPJ5ҕT1 1CW.UAe骠4c| 4٪!Ua(tEh ^AGGRZq;` aCWLU J-FC޵qc"{ܴ M d|V#Ub=䱬%{liiȏwCq05*-*eUBu^+&v+up5mQW -;N]Du5תK;YzBbuwXJ,w/Z$w1ڧ(Kuw5/OInU` %j -;uwN\/tn`ўӃ lWOV*e+gד>R;:0>:: ah>: %;1uEPWSWz ;真c!)`7àOr**SQ]T;ewS %Z$5!-!=a(9x Dqe~ߔJpmJh>uuPJѩRLI]`xkUKP[UBKO]%ةʹH]%LF]%\E]%%F1L6yqnU5*UBH^Zh&v]%5`B+:c%+"E*>Ap5jR{BOL() TW6 0oJpEkZIO]]%wQWtϮ}ǿ qVK̮CˎCȎh)e F]%E]%BJ(5KTW("Ox 0F$#L1K7\ܥwPń>;N}1 hPQjLhP;v|׬ҳh_oI<`UJ[eE`@7TƼf1 Wң%.dbn*wgghxKg(ts,s?q.:lMD(R%ο.B~.u5Z:;Kͳ~yp+-G7k0}E^8UֿFR- R1o6C[Gmo /}at`=Y|<Iv(9Hg6018vY4a#An2)Oc.*5X@Uv]̛p/R&fSgjTqg0^͋b4ڹ20ɀ\.0d[ۿF. ETzoח}K>W?MK._n*B[Erq^grX^r7nwW㩬Zn*wXNj୙֙]0ef0F]0έjpy/23f :=sF[ltjΆpfZ5/ěx4-ԺSrafTRiؖΌF} v0x<ӒXx S.,{od4'3Ld\KF N4p,e4ͪ]0^Wo{WU0?-Iu.a5罂sn[eK1ڧ\A*#=3LKpZ=0*Cg vHG ug%pI8P;ȏ%=cEeL"AFZOzj)؆fϩL2cBřM*쪐x3.l^/=y!c,xe) # F `,$ hHl^lUr'a2Q5JdeUj0!S&J;f2 µDX/5^#&ޖ󓡱kz0JH.Z܌mwיNws";f^ݟuog5r+Gwe҂a H, 01IIFF(ɍUjÕs@IÃ̍* 6+ב  DJNc2Egt`) JXˆB`LoJCc!P |y}Qr=F`5Xeakb CG"sJRI!:"# xd BY0P"EE'553$ Qhɸ(aYƽ+(dVs `S2@jv mC8gaWJaW3noD tn 5 Qr־2|۾5H_aEq=|B$\ZK+"F+0 Izk`ָ/)Q q̬绻K.7SZ{s,o£}mikX+0"#oRP,\u6%x#-߭V@P&I!vLh믡A+sEbi:)7鐳/71"Yo Mt~~6UeTG+nhiY|]i}s*$O[uZ2#|-~-\x^W0)̅q4mP-7m[Y(QY wBu%.!HV[LUX@!: Z0bzг2)T[*AK]UԺBT,IsQt1{iE qo\.o豔)CɍL̆y熓_ݫϯ~|W0 Z`\^/ {V@(,[+f~z37ׯS,˧݀uvs{fW?@_eu C٣e FN 孊Slߣ2BSU)rAOu—"x[.k׶\Vm6m6qCyҭh_&Rc6z˜Z c 2&SKx%bPzvɊ):,7^ҭ2}`MPkke[N͓Y C7v,e)\t`. 
?vF{-y [474ncY/&ZC[ hϥ;CmB!2k?t{gK |s?iC%T|E쵫\ɴQǴWu>La=惌+[e6MbR&A ƙARL:Oxrf͓͞; Nǽ#4B49wk+w(1A 35ŕ>vd8;'M0w8^CbV~fzvt;ba}vn"ɭɱcGaG#Q% & \I[ZyN= \BY {9J#D=hn01Mm@iiD<9.ɋq-dFQbLx$ (Q|4'rfǩrw䚩Rx IUp;oB!O1b"i2i$Ÿ́+q_niJe]R3n?l*cޑKFz>TGe/T<.d!ڌa20҆EÓYG:05x05l' @,ͻ R٨p:M+m> /qނQXImtD[k$h얆>%_p_ d A,S{JF .]Q#QLq¸L1Y}tܛp1γ) O!H2T .CN@4ӎ~>J$^Q qF'aR/JQ,DDꥦ0+Cʘt18{D&7R:O&jwKn;|PYyn?HSvf LIqYLTF8,Î렖#oB:vNcF1ZleF͝QFG0غ 50RDGpcp[y70<_kcMQX;βn"CHAl)gy|̗ȹr<$>݋:7uG/ad]nVاR֢*龁g:9m9M۽^:H9X4n/h>ň+Hf,ugb(qYEfw )|d5~n÷ikHтDȴ_,")i@ytea}tRT<냰4pL  mn>M=vlz\o<gtjp{˟* ,3lE2\?RXv)`i~KilX`z~v^mO(ٸuEYdzpy*Mj^ :M.SHm8vb +h>,?m󋻤?~^9c2Wq?q_ߕ_؞bp4ٳ ;*(1xEãtI_NgU||Rd[~GS]7< |e^}/ k\ "a^R[n*7cV.ceZd艕- ů}>)C=S98W*cEK]ͷϮ "1% [yO?}x|w;\Gu_d¤eR3?]_P_r7 ̔{anvoIxt|{ӕXe.E+8_RT򦯮'ӏ=.iM~F4kayh(m#f$ 8Y]{J`Kyi(@jJh1Y'ius.hK"Ĺ҃A-qtkU<$ܢӄTɻJ!2=x`̕!n5qv!~ g'g+q?16wӧxZt޴2'2hu7֋cU5^]o=*y h5q|% m!Pf"9u: ]z8n?vCk1aۊmB>y +6M-]{^[WX=+}g6NwH< .q-jPZ K6>65/0ޒ_ pߟyGo85N*޹gd%l>9 OXZ l/ڄ!t(g:WKtm}fr?ڲҲ_4y Ejaҳ>h#C[ > mtf1&(%68ƜF 8MP(4{TDkoېQƭIМ[ߍ3o|Ojw 5|ˮO'*D;U66*fbB)t',%|r. MqJ#7(̅z˭f+qU͂{;;"!JmTEh2kY3Q$T1HԼyiRh8)2r a"p¢14qk!+ fXJ杬lgZي"u (̣ K;Ô4& ﳔ9:b<ԛ@Mx%wO?H5$qb֤.Q$ 2 6ya Ռ&B,XA8;>NZo@AZ1y}'E'.9͚a1cfH&;*Sj:s|dήw'|;iv\`36X1*.< C)i2= [[m>jЃvȾƧ*ߊyh8܍13f #EmRl&YqKR*T_hk0nr먦!Zlk\OtHN|~uV\ $ AnEHqčA\Ka lDI)휕yFc shUZOIBo{ C%T0p9c+4X12OE'(eoKؤoh(mhV.tB]Չ]iULGd^"G 0ʹd;n"񲈥n5nI]єⷽu<ّD@3G|x+%]aY@D0qFr `4w:dcR־wbPHˁ@*T_\y;{YD˴mlќ CЩ{fsU r^\NKwEMt(~4& l$ad)Y|֞G ̈)R2KQ3SS| .U5{8W5cP?|еN9!pYr/l4M*RkHk7[xK] Gw [x<\]ޯnpE?m-?n?,jL~*6I&p :H4~ Z)d YrѹBC,e' YFa@$Bާȭ4|4ʼn[%9뺅, G* ,]!\TT HRr.9qw;tɝ b.Cri{enԅUg=:*PXPLbS)r!˲\҉ǖ5TfQ:p-^ZI/zyDd$FHEH*c 8;_̈́`)N ڷoů'6Xip:8[tr||77$i筽3+^?:o_BAy): ƑDG "H(i:L& Y'ѤU=&C`VyljsK=l "yL6g._8W39ڞ8;>ajf Me_{_xM`;g󂌷,{.vˎ3;8;n?>;=|]Ɨ?K!X1Y渢HN 1u 4 %I0MM4@IfG9#8{0>M!fgZҎ}m{{cI,'ᯔM`yLpbfJeg0lwYicEȄ 9E|MBF &h)F ĹAEdq]c_*{D{#7UgȨC!`PLfb&B.Yy&FΑLI-R,gLH6$ )D Qs#KmE1P!kUg6kG/Nqui9z-o}R 샍䣦RȲ4 0pY~>bWa5?{paK[JOgxΑ;+kϸ_FѲw Gb?ᘨ~ج}^fѣp|!@llgg51k,؀a2F&艐v;G"BN!1UQ[UN *fc!$]x#FJ€*F$!!Ad3N mbȺ8;!'l2~%j}N6Wduw>+7{&4Je<#a!YRZMҁ`Q-0ØHGRND> Ju-tƼ{H9;9c^$ut$k5`nqK>Rjl4z6?y<6B4oZYL){,?~jмPp:x7O[d4Mip)8ܾh:lK l,eu8[BzI6Gv+ƽn.-F589BtaG& 9Lى T}XZ9`R,w?nGWqݏb 1~9FF>}8[C.zR8 K ߪ꩙9 ;~4n#LmV@;0IaHMn)g>yj*<'] GO!O'/׏&KL|opM7_?ȏL{*>R,2Q̋ 2, !Y|YvOѣEx~d>dT ߽o.}3n|a?/:y-Ox|gpC6,]'og?=l'o.p7Pwe8z!'o--o"hAf8wb<g/oVCl޺А\I׌\1.Vi\1\\Y Ќ\WKӊ\1c+44ʕSِ\1pf R4]1}tŔzQ⇜+k#+Ƶb .W$':@ *8R;:Պ\6H\1 \}6rE[= R0`a~ Zg$W(\I:Rj_X/sfҔ}9ŋWk}Ƣ׫ `F}̮.ogyK9 b܉dlg?hi. يL3ci\-64)Uޜ,a]Nj.^UxC?o/1:jQts"=zf<{4`DSׯ/2/dvWѝ].M3=s[=p-Rk T3[hɌ2 <@35f}Вc+j+"#(pI4]1c+"!ʕAXdCre =/Ubڏ0)͔ \Y BRCr7#WU+rZ)FvŔMrur %b +gԊ\1}tŔzQ!O-@8vAƕbZ.WLI\\(v`Cf֮֍)$W\- +aakWps!0ZBa4B3@$Wz-ìIKx{\}W\N㋋|__K2o^uGm |}yV3hܛDe2ah_G>A$iWrŸڴ"W]rmIGtpbȃhGw/F;)V4'+ AА\Xk׌\1.5#WLk h+J؆䊁{^kIcxbJ?!ʕ[ِ\Kt; "WLkG 2Q0Zط 2nh D.2LrurВ\xS Th O\-Ù`.j^\0҉qɕ WnZ`jTb\[+5rrŔ&:H26Gx03)N=!qHm|d@ F3۠(c[pXkf_@A}dP;o Y6IlsAN&{%4kC+hVcOҸ);DN+!nHؚfqiE6(Q8OҮ!b`o+ frŔkMru8re%b+uc0"P&:@Z+vDW뛉@]RN!ʕӚiHlkF)3<^Nrury[Hk R3w֎ S:9UކA;NE::ۊ\1m}2J/;\- _;5{~oS!0J?r'zKiw4r/*@" &iȎoew߂4~,C RA|oui{J74g>IܠȘ@ jE֌^ҚIP4yglCr`/B3rŸJ"WLKfrŔVNrurEFiifokbZ.WNrure$+%+fh+a+k !wܢn,3*P:!daMKK&NtŸdiڙLE(W*Ӕ\1ogqjAV~)n_Lru8r,Y+DWD+rZ'FvŔrZj|*l9XX {ڳ\ {~xw-=2Rl0@$Wz)\czȹ9ĸZL h74 Z–Oqz>+AǾE̞͠BCJMS Zrn7h Q1h(c`䀫n%cZiǞ12S&wr}Ң;Ռ\13v\\ii5$W :9f+1)v[o#WoeZC>hƛUV<[ %ZtMW^śry/ϟ?_枝s_д/W8ز ݻ'.: !@_ ']xYoD~u:AJ/9?z]ųeu:XK_}7muV$X^*ӛ׊_? 
{k[~3W?ɺܻ{nujg/1SW=[f)ލTb6N_ {5{?z)[}Y EBrO~@l4ì'>oޯ?{S~`0@0o{s4t]#/=-6 W u^O_^WX́˨=IFE9j*o^]E%iS E2i+5Q,&|+/OI;ʚE{U_^ù_20o ԟp)ZE%P]r&'RF⼶l22 Ib7V ׃KrAHE ItYkoSFS**:Ήbc~O0voliF#쇙tiK!W;Q`"`U8cq&؉$]B"UKF#CFZ ^ k*/|2Z0J:Jf:"cb1d]Rv#-4 5|y~~{j-u1$댐R8*3HTGTI х9 0X!b &3SJX3bd7:eu%!#I>t9(+g@EK3R{}Q]Sˆ&`:$%LE:B6L`BҰ&U`BE%S0TPM?)W 4fѱtw],onj+kὐ +G |d聢}~7bo6Yxc*vJ V!Ec}JM>!).ڰnnDim-ʦ m|W-/f$^f!) [Aioe;JR`ƨ%gaEƲV tAg QD ޹R]JvTJD-"Z %[Y]7G( BG$JR-Qld w%.!2hF'-sGKicnuB&XL>dݩp˘ ՐHuJʞe]}0a h#0Ni6k KR\9TE >{J]b-(Z*A*M]bn Vorg\,U+V#IPl0`&T8Q XAȠ(W_өr=2( _tAyX9p&a mDk(v~cyd'd!fjސwS01 !,@$rd(2E #i;,JdϝJ[SRt#,,tt0w?KQv0*;8dR0 Dg|?X% @a ~"ܭtE@8[NVjA@P{Si*3!QA.06")8iϒp+{@ʆ0BZ'oB`:M]Wh Fwu!(E]Әy_թJ!ѿ(Z%\CDA .a#xYl $8ownXs6#kE~,|ku]koG+?bV bI%`PψcԐ-gE=Hr#6nVߪu9u4a7tjy[[id6cVGr|1|,5h9b: 3L ߹a]-e޵i./Tir¨ȷZ.Dj#ja$%P;0a,vtmV28Ut)\]h)7VSc2(3Hv96V[/5~a=(9@i$@2ӲB JPaZU}4lh=DhEEYR:[Z#&@܊`mmW"Yԏ/X?ON_ŸӬ(al2JRp `|{dka؍q'SaA[w56}m6F84=].-Š<$ x—Ut FYum*Aʀv %tMކFL3yz5|< hw6Q$D#+ thAMGhͱKXd4H8KMF|d)ebzc"Õ,c<9ye>F/~ )dgZA.{mtЖEsi`J fB޼jPR\WoĽ"; oP[ 3]$yxt,2 cŐ%`&6֢1yحf0:-]9GI׿]gӲ1 1KDvEW@7]pD3 L[ M~)(;5,zhZ\@Gʪs7zS'-#|AaA]wàD,T@s1#%1BK6\D[9DKh wzTH>"Kk=xP@z=ja^#úEc4ByE r\O.DNa|;?$ ^ѭ `\ة !-JFJQ5zwֳPuHc-и `AEʪm̊SZ]FʄڍH 5h*\#|r\ΤXH#\5%i\4 Z`5#;jyѦaL śZ#@KvGV}pk m,~33#,+A-t1E zAB)#"8=ZMfPO,2 8'@󫆗TLZ6l!&XKo&b] 120+&$9&0"%N hF LBF t^\רvEC@Tv""DoaL`j?®z?Lr-n͖땶ITkW4*.tȠmkЖ˅ ޼=]pDN[#uhkXA3l__m_|1avYJgs`zv I[ēhU2]_o \݊(xlDI' ?Gl ggo9[vq|\?nor<:o/ePcYއu:]5-l -:N=X_>CUx!)`⇣Z@1RD%bP"%)H DJ R@"%)H DJ R@"%)H DJ R@"%)H DJ R@"%)H DJ R@"%)H DJ R\%NCRiI sH k{J Xi=)^ zR@"%)H DJ R@"%)H DJ R@"%)H DJ R@"%)H DJ R@"%)H DJ R@"%)^j!)`u-jԳW@/Q 䌲@"%)H DJ R@"%)H DJ R@"%)H DJ R@"%)H DJ R@"%)H DJ R@"%)H DJr ~HJ (<9%_ +Y"%Q)fvH DJ R@"%)H DJ R@"%)H DJ R@"%)H DJ R@"%)H DJ R@"%)H DJ R@z9JK뭼S mʹ-5mח7/ީXtpK`cFusL\m4\}iէYz3(^ W6Q^:b 6.k֮5|x:~ɪfP#ÅZ%v=Pp:]|ZglU#nO;N#Pt\"F7b!b9N<5ewH:]Y<:\ hWNo:rھn/Dgx?~j໋1uS&?J[.^>am맵o2v$tK!ܣv}/OFg~Hx[iv;r'텇3;6xO}aWY]ٔt^%rJO3~0@w)ʗߕ1tuv1mO O&[;瓲5fggXFekNt}min١ף_(l:W}fv:0lsڎBW~qܣwհ/^yq >r].\ܹlro{;k_'BއPѢ$@՗TBJ5<v ޥN&W[U,g룹мbp~_W~[5|ѩ}}xnQ@Cŧ `<ҺGh|ڟaySMd*X $ ] Ww4E b}2<E=A avacsm+ۧ!䐘)>m8:[29Ķ_ݲHєp3;ߊm؜kPq= #5nͽ@&/;`s|wn}2[@J@{ 9r@iq0t'H== y"c҈}I [ -秓:@C NI;os"iy/i>%wiЌ(9th 2Zar`SH;R)A'T1i R!t,fr{_u˝Q}LkQTîpU;mDt}7Lgbޕ$urS)\鄯Sʕα:\)³vG{*[%M)fmZLvwrb5>:F_}H\:|&j Z!t22`P#x0s⬯`ge /"==+Y>ZVtmnL&g lcr]v@R`R8T樉9>{(Y ]9j,(n(Jql'{M Cp-ŝVᘋR2@9=WpQYV.ku46 30̜{2UlXt9دd 鸋i7rwnWg^ԫQf[|\,˻{:Q: ߿/3nTޤk6riYz[ͺ44 V[x 3 Vwoww.(ts~&3;;٭k%_|wQ֝]\?sY\pQJ]L.B{ fϲxFt0s9#g3)ZJ%!+zVϯ/oG왡h.~[ΐ|7Eg ޵.q+Waퟓl-%4и?%9IUrJ5E2$e[INc %JdW Lwk/M-r ;iyJBd+rycuҐ5wA|~J+ uyw^ؿTMMIiQʃ%/EQFu4y^;Ck )1.mP)110V2hBDjPNI$9-r\p)oqq&sוw TMU6=:vk~@\9(;"GimRNDY^l{^FEf{M岼lu5UhFnK^%O0<]1&vld2^:DjvDqY:óٜN(g`=X.E.0m䢲xNn< EEiTPew}z(,̾X~[ܯD{{O}8./NبufIЦSZ+i I (VKcZ˩}m~j|\xy:?^ ήL~'`vnە"b&iO0^ cz|mO(od]7|}7v,HcE3Xfl9ѳ>'m3lmrz]7Vz>+B2+i)#cIxx>ʽr ;WX)2o"`^?/zy?sWyI?h)@7 sKcΕ^32!zߟ{9_faL?O&%PsX//+9^\ ݴzaqu#r(A]j]SˮM5G m~/Iso8 1! 7$ѺƐzƸFzLNS`?uCYˍ뒏Qe%!ցΘe\ V3T:Cwڥ߈ϟ۸ \dz`l1p!=0?-2}ȼ8Z4V:mwS`g槓Gll6؆[ $;+ض(4gu&zΖ;Y*u2ly\.ߏ˥JC_߬6<(" 13d}2J42i]G]\b$(%>8lT\JR"zd`H38t Lɹ']ҵEq|K2+築5J NvۓzWűCG tTEG7]pc<:"[A dyo1>$3Ch9so'HhdW<^KKE`#}9Lh+c’ӪKIEiz6^pפX4h#/~4y=h7.az`[VfyR@yZ|Iiy #fi$ 9Dk ׎lN%"*! 
FxPxyۤt>ŀ$^8$H!gah ֕ʍ`LdT2z h&UhBQ"Dά%sтZ .I8W̃ 1 xv..R |Y-S!pY\nMQ; ST,:eP7k3z/~~A|T9ϬxXL9YY*mX |({۝zt]6SOzӆ.78Ur:Ö;n>O'gϯhpG[JT5AŘ{%GMZ4 ?]㯌Cٺ2)ἷ׳VpfkL:{8-sؒ'z/c1|{75OfZZbp8f Ιt\k3G4WxuAK#q6vz{MsnZyNpF~_,~¯2,މ޵i8ݤ䭧edg<=}=ŽWq^hr5xH5.`ҧY᭍ptFդm\zn0 F3 ͑TՎޞy8h6";JnӆoʢћL%z'|4m&b($A\koKwp!ʔϨ5=+|c)`~u+zSD ϩqޖ^hn;<=ދ ;$ɾ}cm"~ýޗӆ%ni|R<\5~/G ޥx n!kRAx!vX^%va/rՃmT97H:BdBirm(uo!_o`~,0l*q=`%@sӢMeĽ0GWvHqu,O<%Zh$-J,"LFd6(Lt~3j70kdQ~Cwa-;u^VBDݎn7硻9[wo_x6/`3,z͵"p,0ڀH} }*Uڴ5NnWl˼ET(;'x;KfڋUC IqT(-4Yg>xoڠY)/=AtJ+QTekl-Zn ]1J0.z=EO``GO_9iMp?ͻSG-UDg(F&@ ~i`[8m-pA3D~ 1NK8f9HZ}tVQ"@%`#LxΫB\n F:! `UJ+g}D2r蔕D q~?<}H+㹒uq6},OӶm[ ,㻷8F6V{;h ` @ ë Z-uz$H~TU:Lh[|b,:Ʈv{÷=Of)С9X`H 7 XL+-}(i]V!d2uPFkDH!*'IB$1˹ -=A8- ں9w#ʢC)1R֗A^ԣ Y^T(xDq7K嚽ײ(ޫCN1feLd9KQb]6sLl@'T`NJIK匩&R_3@0*I"&g'MJ:e%x\fH ʙck. ԞQ&iM "4L&Q'9U0яp -eCI'it£Ww. -rC7PR#CV6$tb)ɤb0]Eb:U j:;J muI I[.MUSi,d3 :t*Wj<[VF˴)Vo A [h2i].hȦ3*f3= I[,g"|X'+ y *&H(2cY'R -1tMl5hd$TV ɻqa<9!UY:ԧ z:+ kpEQ~<#(qtHMޏTs.o}[__< Fi+xDKO'mbK9$0ʈUӬ1|\Jp&Yȣ̹*}L:oH.s*Su]uJ.%%37 )E | { xf(%#JaBe[ +ܪȋz!ʨʨZlX[xRJΏw~jvZp?sq9Mg{_0q^i8( {n{݀Wz|,Y/Ƹ<M?SOZ΃S w*?ʍEtC p~E6 Kr 6^F؄~״qfii/SfDC4fbLTՕQɐ3 'D2)Pnk2dRQZQ&z3ң̞TJy(y_Й8[mO o6WC΍{w֖YC]}rrU}s׍ٷ|/Ë娵,p4 S{sˉki8^7E'qܻo wN~=nYl89}ƨT R)}/!']cRpSbߏ$|=?چA=y% CBwEaVIٍC4l F4I5ˤeG]z]fw|$l?[I6(LMaBZ&gsҋ3(+K1ԗ-;uGQ\XKY`x &%Lyq2@2fб3qH0wφz`)W=~09&keu*83@s@Љ)f(bMDž| QiZ9dEUhyPrݬt6)K,91D:ļ3qd_ 쏦o9BpH(-9 &Tg9E +(SJ㦗/_Y|ayFB1,V,iab^[ `sT`-H{d<'D[oTz.fz=퓎y#@]Ȕ|-婾bj3u#,RE푎Mrc6'E_gmUΠJ@^m~Ё30*ChHj0{%}c@H?eSpskJ!'-E;wY&l T-5wld ª|1AakDz((YD2&NGB=5gEwQ%8߄uiwraZ*3(9@د3Yd&Jjb=VxH9S(zt|4-YTB4|\pbo;cu R:,0$GJ<UZe=9Bo19W,w0pUŕPJrJ7WfWU`gX\PJ{pU7W;Ap8WׁljkuqZ:p8)i <N=+VnJRnZ'_]맛p8]$K:fMԐwb4<əF7 0pAlen>q_^ ځv~j雍~ ^;^ 3e'`.d|`` X,/; Jϭ=|7y$9|}סrծ~OtXKm64k)R2L;2YrxYNKVC g5HH$N;[Geolk}>G^~ZGH:$(}aaY}^!`@o|B )$4݊ׯ_ϲ4omB-o[D zgKNd͊dgš*}UJ =|SjCWR[eUEIwR V X` `ઊƺ[W,Yy=\BfV l^;vq>wJ\WoAg)vB-p|WU]Uiޓ*5=\A2j+ب >?\U) u_UzQzk<>&ѠC+?h,ev |Xw*:*.⡀mv5+|7$qϖjۘ^m'%z]O.tk!-?M%Rn\P*C^D!. PTC %#(126eGiǞ'#H^WC^i m?[dfY$YjgA|9ɣq_Sݦrz74?w]<:V%YO_T/=MbttVRmCV7>Cwwai9C+/\vmi-3_6m?x6'Xj +I >+o0Qte^*y霢r&.Eh%[ LrkU xo iZB5HD:b+`1:J!#XdJk,uZٜ. S~Bk) kfw)?+:r)`kܿ >lН>i4}NH-ypV=u`,B*7Ƈ 8l|V%$ybE>1!V{T-RwT-r7+A 5N$|Lad,Pt:][X-'3p<*7ئM=yi{jOFS^{o,5&'/뙓k71_9gOxg{qݹ>/_դSF!S.ȦKhH؈H9K'Ъ7?4+`Pet5 Bgö&TC"tj~JA%QDiQ (J)tFzٓ@I("\gglY1r&s>_]>j||2m.np?n9_^ڬT>yI떮~/WfejY2-չD|Ff}!ob!RAQ.N58:%Pd`BOɻ20׍J`:YJc L| d<( .~S^3WbJΘ~?Ic4ɛt# mƧM"`+1o99=sdS2;U.d26Ɍ2eg lnz*%_\ɠevLSI9ѫB>d)9[@Q Mvgglg-|<\g Ѱ ÑTrw~9VbTu/zz?7U4lg5솮]/>}{[ hq<UWCk4}bX>[7ܺmZ?=?tC63axirˆ[6պ}=ޯzi|[Pwv܌y5Lc󢿥7t<.w>OڝQWbii.hO6k* _v'1ꞻw>C07GABu7؝(Qt+F&*X<ƽ{|cy },*]͙zښZyF"-Q*Hֆl!KQ"N$!PQRTdL0 uJ QifZR$@.b)(IYjgy; :(ɉSْZ/toWt&J2 o)mhX-g~{;xԭ6wrkV7e WQa]X1A +Y2cEV@DInV_B.;e]Q&EiMjUxa!b c!::5"{Vl`l ),GI*d|v+-e*ِ΋RL-YU# +CaUKb(0 "'*˿WKmM( EԉXKb@\*6TD1T̥0+R.CsHk6@@""Q$ l$:lwelJ"/Qˣ8,w ;ЦMI`.gibV~\ z>Ka,<Ɍ$?'xZUex\}gv5,>Lh+_gg), yCiLk1|d:eQչx@w~:͟wmKUPs:~akQ⊝=?*\-Fb[Iɶ)"(J%F"Y3tF$ t^][V`eE-6`MCR2Qx,70!}wQq^UP/gWɆ^ll CR_ KήǧTW+rqC'Hf&9-*VژY<"Lsyl#68追,۹] R0Ew.p90|뉓ڞu]7jY],A} 3f1f1>ٮ)T+A{] ׺YQi 9,oFr> >'8 ;n;n*`MRެ?,/g?7߿oN?_wox Lٗg} `fVPyT4niQz0rO>9]S" ܕ~  }__/ ]uyi"p׼ET}M=-`75~WTa~ݵ/L 8 PW ŭr#I 2X~Қ(JU c"j&P.xȂ`O-N(㝖 :? GOu,e)\<\ZIa٧39".!´+ {{i.b54&b hH);uMK9G8N˓u/ku6v.uyRw8Z#xi2ugZe LN.4#13[LU$CcG$$(uI<5;XGw~V`Ց9&z! {%,i=ヴL3f7("sI I/l0KxČhZ2h^.4 *ƨrRgG/6**b)7{줷Q5}1q ]ZA;sM3)pxo)<(x\ RJPj[:e9LB<{iwOEżws SH4k"#$ʴC2<8A y"'pܭϐq3]`sSAΕ-^isv`x6Jbxn23Ńe{*"ԔE(!aAE=( rN#DqxˠyĆǤx%1t,L &| dh5϶DMnz5#;tLHiOH?CMn +$&nD껯VJ?HJfdQ,zVwk,q? 0EV_cAA. 
[ 6 3 sqr(VP؆ݎX~T|Exʛ+ᢃŃZ\el>NKۃrGlh`2h-tjnFRdT?G%GsVXO% Ƙcl24i/9>GH(򹖜N)ޞ&Χ;ħͿ@~Y@P[<>Z\me|s۝D3̲^\qb>ŋY,`Lr8!~De8zWI$t6Nd5W/|UczaYf|% *, xscJ( DU4EZb}qW"/nE]IjGmL/_?آp U%Z6[/zzvWT1eJ11S~GrGWvwHTzwf=\os.xjU#CnEҚ Zݿ)ouǴ7Le-=07;:ZU-v 8I&]4ќj;'`O٨{vq$Rj@l*0 SC fCBЦf2jeDzEe O[E?,^5vOHZ `k2bkRp`I7rkۼUGN_?uxU4A\z[ڒ+XTONO?UQ㧧7ނƙariBDa3EQ9U )è;8}ni"Jia6TqIdqGFJڍ8X'UQl ihtS(sVA Tro0CJf/hl6&>I]\xVƳ}ryOSSYdg+xݠuVALW pG>iosP33Uu9#ou[哭{C%g9tHe -F1҈R|0VH%h -TäJ'0,pBJ!M$i34D:ٔ8#dC+၅A܏_[0{;b~)>9ǔG!yHbr46^HCEe,@HQwۘ涮fLc)5ata+γ`T 9)tX}9=8KA6kBBX1iI*D)a C䩘{ Ycg^ "`BMGȞ}<(]Ď&8SSDPTc"1`0 ^PM6u v?& ִ`a. XڃDe@1EBTA(`@& uP1fO;%MH4RE9V_Nab5Ki!"thp*5T4YT#ྲྀi9Rk 9N4V; ~ 5Nblt it';mk."'!OmQƖc7|$=C"B2YC-"gc$,(dd(BsIoP=`,wM[X kiX~͉hMJ[0|]Z{RqHHM)$Ux|\싪UDe9SJSs`K>r#$z|+IZ-J+G8,+nVhQVR=ٻڞHneWF|9'ipe9$ΗB~e l,04Є tSzɌ-R$ @SZ(YKl.Rzuee)F+]Eqc9YjݷVv{O*BtEK) B [9kYz3;Sh=]P۷w}{IMNNy,_ϣkvn7\kRW/ߟh8`wmx8*}`q9:(U7qp1:_ cg<p毞QL?v4AцؠAؐ ۳Dy&gQcޥW)J6Cذ>E e0"!| `)'-5`:$ym-~"IrNAT? flo2\J_w;LVձ9kݨ{dO:~}X-9C(NԳ|1(}T/YdؚVP3 p&U{Ĭ )]C-e/#ČJ SVPXP;#gՌjlEq;|t'3=7<_VY3wssɳ8XYo޺d+|l_.tjYx^IAv2xKRBT(.F*Pc Z|ߩf4dKJ%A&@ ͘cmcSqA&dj]Z3vFaatag.BՅ չWdfz=%=mr1gǓՋ?}ǣOGZ5HdIXHM H8lz alP4RR"E /C5|B jS f31rJ(gJd9ZXcwF;iM㮸cSMkmj- JQ3rW*6F¼Emp(b /"XI[GmIH.lYפ,0eɰ#(C*,l}5 ;#v}󸉪;M5X#Fjě8XujPcr(ڥ"GiPʚ9P2nB9!EFˤXꙐB$&VeKZVQu:mg|!'LK6ԋY^O(tk!o@14ZSYE04梤+jB/Ev}ч>| [brO=r0?svvQ5O@8ya9T%"Xtu!bl(.!I]k(ؠ*ƈ$6B>z~B>~@HE c&zM4Hcv*FD1X]1 9pB r`ū.Jf?Y=$~<{oBԺ+յy.n#lp`z.R︡R[q$TR?#uU 6̩g*]]1UW/P])FF]1V碮*@}WWJEW/Q]!a/G6ik98IeKS#>k:I8"^~O>\VB7 Xh@6X@FMp;,!Nuewht:O߿W3IcDrV"}kL-詯|<ɡQŅ` (KH+"9VV1jer_:LygKxZr?_gw΃QXiri=8M] a I|6}jqfM2{E?և0LN}_Wd鿝/] ;/ j Pב?6yJ%;JbcFK oDWCkwdO䌕n>v'=2g(D>ׂFa)_O. r/Q<T.`/VO9">WԷdJ;D{g'{˗et5f79/vcҋoW|Pjꟴi`x F?[7;oߜ}]x ξ !y'a9>:0][.vA¬6*~X<|mkGrHd0r0W,XSZ+XW|'c49+GQ\5ꪹYIMskfmlWԸ=؅h1qE/E! _j'~~pz{N?0;y7w_|H:x?޽w BnABkiŅkg|3LM=e3p\9aaGx'H8U r AC;Ǜv9ق1.b\$-ƍ'U!b3 *J~Edmlch#}&d'ɇͽs/NIRr>$MqF`|VjaGd ѐOYNzl\4GC`PX"aJG~=ce~Vf{g غ&$:JA^=\gO{S+ب\vYl}tϋCW&]UD%:bm0;YxoՍx%M ddÊ 6mϊ#e"%9:/lGr# /+=c ` 6ݞSɋZNI{'sAa Gt.i[W.ÅND"zOQDTB@N%Yr)˜@()%JGBvF]dVOaZ3Bsh Y$TE.FgNeS0Q!#XB[aya6 ߾>*)E4VD&t:1)(e ].(! %YK6[!6`Wʇ>eo;fhk:1G17)󓷽M=pstSk/qa7F*ؖοJ,\ t/ܽg}},bC$1S.EN V($[x>u c5x  x!C )T+N )|1錜AUԣBuas\[[#W_yMYǬR=NNJgUW¢/خ[7-:z"t2:%JϦP%M}fE,V*An+J@j̘KGp; {n&EU$-2m#,Q82%Gךg(u_7hvLH~hq^q).Ǣ6<zw+D0Gq@ z\=hq Zʛ-]Ґ)DH2Hg}p(vek5hœK:5aBeLXό$!\6L6W9yo{>MKݑ'Q2PHr1 $ETYIEdK r{prI0^0|+GrR+(Rb.elvF3G{GXRgҤ` !ps@1.0A)F2'R 8P3,K{_#g@.^/;GyyJ=Nors>Yv`[:5zџ:ӟL>!!h$!!$`6'qo#sD l5\ 2g<_^t^|m5zR|["YVGN 3+MW P Nƺ_ɫqQ5!+;_f}irS7; a YUΨm՝xTe~MP rl$_X2=1ռTƨffiPEб|pI8=4iF].~>g؆W6I~\~iv> o5NLp?,ClXH5?Og7kd~_-fya'&QMގe||812'T:U uf( ('hM4p4lL_7s?3>tj9ZM1Nn w~ 8gϰ6Q.{4r B Vc?q_oa?~'(LͣnLր{>Pc"h>T-͒M/ق/w /~,?^憿5c`l ê`ypJWw1#N戞?)E1[-LAt3#_z6OP/F;Ӕe`f.LBm0]/Ɂ{Z[Cldڊ&vYht ^>1{X:aVΖˤ$ʪͪO^;e56LD'%ȭRĽ7}BUBǛD NQ 'K14ȀJr3 Wh+mVQzvd:)tMU<`PD9!<%I%p;gS"7^H-N8<^WOoY/&~i>>hڸuL>Zt.z$kÉ, ^V}wj<'Ky&߹ahS2xo45jufZT@z8%9rdslN9?NXͮ68j&/pQI3wB NtЉYѲL_j\DLEָ-yVAI 97r+PדTC:}DxmGO ,wrcf;h `n@LW?^5+V"ӇFBJ$X< P=p]stmLt5cˀ?uH:2􈢿B윱a2HDL9ΉxQ, :XR Y (gQD]6 QNy ɳ1EvCwnK'B(FnDeɵ4y /;seB=e-o|5'h[L}WUc^%d./o(o =NDrT&*XD%''m*R/=1.sqc)=2=Pb"0NF!|j*#J A|4*E E̒5eT#hc+BniHBs.2b +gٓWf+- 5g=/."$x) N4EeK&R`2_PP%Y1 =g^@9im@ $.sPOb*C1`($ ^lAzU jU'zҚy$ H1M={RPa.*`TJ;%3 WEQ[2ECK{ލڦ 1J;Yei4egpu:M$&Lʝ mi3NXy(̷2"܃=dD=ō>2,z{.ZH 'dĘF0AclVCC-.fyc;V*kң7^_vڞ4@"[ zcfl"̍qt|yϝ-NgճO80ML_x|li_<~Z$!>!^Ed7]B! =Zy]eyH9EC$ )oKϚ9k5Kkgl;AdEH=,C۪ J.L, 叫xVX90K#VaUT/o?_Y6>hK^]- G&jUZMb|1͖w/^m&bNPUXxX}uMd:~ryZ(ߞ੗@$@q۸;bӎ,rd%{Cnc`5D-ڛނ5V? 
DYv#DyC2zՁHQ1QjI)I*5ZI2lŧ%NzcQU)PsH 1jMxN\ @1AZ]jumUp~TE{%Q0)Uܨa!EvJsB$(PHҤda["gBS"2p8sx1& PkuOKA"gϔbCSS5nу3?e`AJs"KX<1CD/X^.ZZ#BiA'Ttgjȑ_QuH 9}1jI-v#oh-J4I%:6,̇@~8%[EsnIATÃ4/)bLbcsvmB5HLiJfLM[}E)3vbc3 )^~؇%v~Tqfrz22K9)o|wxw(-HH{hC}%)ԥ۝{mRoyȍ葍/jU2=\ TA%Rn1{V{;OYۧxԜYR756B_.k* ck,Ԓ*)A-E "U'Aj@"fo߃;r=# `Cvz8wث?߼AD?~sϔ'W Ǧ&/L:4d(A'M "Q$86OSHg,di Y(dY8D-sŒG .qHLV Z0E(DZ,Q${&`=M,XN%Upi9Ǟ=6g{!{FڰJ4Kg:1*[bٲGI(H`L5ñ$Pjq*lMf1jxcNTd]G%y}lU%bAȡ22PG5>](-']I'6X3ipzy纭Nnws@u,dp_Z7Xݨ?M㻙]M "`R琬3QQ,(g_#` U"c$l6fC!"@u9' RkQj媬 hd&qø0,b!¸`7ݴЙmϫێGv\˓uON>?]gFl=[`j*Dej!wF&٠hFك$3%86Rߓo;{:㙝w6}#Ԟ*yCG؊i0baӺbΦE(8i0jӂ p5+J6,PYX6b #G>dLDCcn[ S2C jt *T1J)mRodMS-G&xvo";0ǡ#bZqA{f/{0~X{ƅ\\]B[c)sSTOdKoS q+-^J=i -et~0qfy':լkN{ô@\v,.^7[pqOyH{<e=rb5OJRPw炇ôc.xh'M\]o5rʳ?(Q{-я/ԡ_vMufN}> !UB2@H2&&V&@SNє n |4@8@HQ $"2)< ̶J,,[ ɗb0Ιav'c*ySƦ婀CUUL\l֥28R5J(f`f=L=!/8?#=&Y\;Z?ަUy>8=[B&-*᠋4PAc|r">/IpC6zZn۹gW)go9ke D\aI{~Ou]]Uz] ŵ9|?wWNƿvoZM}fW\sg]s =SV]&ǻʃ|ܯ[zu^a;BkR!(0bb%.{)dF>7zDT2z|7ru=3SC{Gh\tOV\緧u7r~Rz+ U#ڬx]L{ 8\7r<*`zK6:qz ,\,쉬-R2IͶUj4T}-PŚ[23{Qu+z+i}=|'G_y)#8kg9}FGѥ^חO"TYGBttt(zr_ͯO>=n6  W~?9dٕռ=Pmkŵ.L녘dLˮa)s NNûM2%V+2SjZ Sj`cBqXJs̨\/;a<5 ͩvށO+ˇɴ^ύ?w/ y!]`WKZ`fK݄z?'c+.O8wR"-pU2`{Ep;x=쪋_ \ui1[:k+Agշ  շeۤ%x6)W/pu۞^\u+k1UU~Ddvu>jCkU'=o>eha}p54UP+6G]]^]Zi\'LNß!DY_}]d=PVU>|^>֟TAY(|::;:竳h.ߺ)4Q`V m}1-R-2!@=j\iLu+ y8բ^~kеX??OK_]G5|}7иO M78B4q45ze?(Y.ec,C11X FSJ“LͱaOJ\_Mù%7jɲ;"vԜtjGA!Ge&nt6ǰ) 43ِsEP2jDBJeH$dLr@9CV,20qh:0 rT ]2p~yhk뎛|[ؿ]@YSKbY}Q^ sqE `(Pz묔`|iYEq%k--;r>|7a]EB+QB($2U$(44ZL)V+%ڡvM}=XUd:zT .QehPSbd&'?Άw`fkE={{aK{ bfSK(N3y MȞ!߄ܼobj+Ě]8b#M@N[*XJBiQ8J.?ϥA08AblPGQ^t3Ի1s*BևH2rQLWX.aq;"*ۇ zv\oj|aaQԏS4c쥨La(r&BUxaS-[cCƦ9!ZGlG k'>%{nKwX^& a)yq_ %/!XZC,!Kk5bi XZC,!Å3܅gx)/9Pɖ. SR%p D~ڨ8D(lЈ,)ClJ-3f 9]kEtj/Gc%xL ɖjM"D&آSbp„ڇ0|~vr\9wGϩ| l"lV*N*0+'jLUbP_բ&r\@Y:5T)L Ķd*>%Ξ_)b)eq!r2bnhCJF5)kjXS.RP7P}cӾ~C<~L,br@bb- ;&)Qӷzچ{KATÃ4/)bL=z9XV; ƶY{{&Hu4E%3 㭾]h؅r0߆{ű Vg3h|@gdnMʛ;ߝ>^|lJ D&)ZP_I uoxL'3xNfܝ:Ej tO 6eF-f["tc7.C4 m3AMZfT^xI Tz f"Rw-et'a 8_zo}@r?op>f2[I-0R*REpTN/֚$B1k=خ#ףY; ;R/yYZb෋B+*~4 wq^K.m s/RFCKлs·T &WL^)G>t`W?哗\&0.M&JPUA?Ki3y<3.' YoQ*N]UulsŒG 2rU$e&VN ޟ.zG9^.d#Cz$Rڃ?Ia_fؼbЛ`eښȒےI߷xtdHLrr$u(NUXKetVy]UUrugϳ* >kNsxn^Յ[ŨYMb.( /٨#7dBMjϵ(#$Q/ɉCO!$\{1 CZ;{=(fTsCVt)]{0:A6QP}POdإJj}WeLZ3=\&"^S8&ӥhw?+7jݎ('J0{eLJb&%0$U5Fc޳bc Beb^D" ΤK]LJf֝=ԫajd Me]{]xT]Q Sdw 4Fɟhh4 \cW.H*=c)W 2Yp0KHVh!XVSbD{. &R) Xn3(s 9ugq\31OEkWC6Vkn%A2:\لPc%-PB3F@LrƢ 5J<*d!2@.dH0%A#(i2qsVn}˸OE#VC5m{M⁌U%.yoˁb,,P.-MY9&Nଥ3A&U"gL 8)fm+D.zGNd%KZj8BZ[Y#Vw^'8m|դ@y]Y{Me \I-h;c| .yF1K$h 8aRfcSчդT!?LA:m>Og#9c𓽕v";)Ap}+kKY?q̹LލScF҅AﯯkB"c*G`ʡLdf! 2MBxԧo!$!-"SAsT ' * s)y/=ZWNٯ sڿ3dw^{Y=brcRJTN{Y>lyE,h23I/ُVB]dKż~Uv~|k/sD7Fi: M5\[zlW@h]ii_Q"*|lH.ҬR]YC{oYXsSvc7ZEI `6 AZ}zdzz F{$"灁hM; 2_F0VI8Ya#6N 4O_/ip-(:ާAuwi(oʑJ㒌 ]rD&Mq\PC }\ BuB4B Rp y\Bm|V1먽 ѓ-qs@9+UVA }J0l/M۴)_&\Sazd{ >zJ\-fpd{!O7LĔu#lM KaSeTh*_#3Sr 2EJ+2‰< kr-xf,rG gamYvs >5C]ظ^q-~'yԅJ`T.&- .<(bI-K* -W٣wV'˹q$8LVf8ٺD;n*hhQj1֝+IdXAWm:)<޲|R20G8䶈U1to"Vc4UQ[%P9A$a"pFr7Kf ]9qՓXT'I<&&i,yz _u,ڵ$|Lһ RGQ7ouQJ/yt;*]nɌ~lRɆJ._}T}ݕ/yvy(_w}˧۝:cGJ<xa< _=NڃQbh.,OYh6˼~ymWIZnmJ9/^7D[:՛-5":M.0~4RH+yuWn7m\ǢXђA4lDٺI ؿ$?A}zg0p9 D%@V ~adxDNp/(LE,Mr,nL  e w)q!X`h@zÒWcB>s,Ώ\;{zWd+rPWz2 m)m$Mq<˼i]<_lG3L>=`(c{f=-niqG2 z#fgFKe8sig)4'jd`=w]"G\`ma&d>G5ExVϹ~O\Lqt:?σk*,yۢV}|Fj!B)T? {&?܄RWs~=V*&̖J5įʆA^_8闿w~? 
}i,qjZðRzȤ|?4<ǟ,AY9Wgd nX8+4z8E]#|F5?~ڿiUijoٴDاiSsԋ ߣ]kݿ>l$ WU!H0 Еrn?6Y1Hh$q[tK> fk*  a}ON#O)X/dK2$mLr$3g0=ثijҦIx\Lƃp,:V0"e`&j\ƢʔlpQk28x9JdLk,1"u83־Z;_@\,J)Bhm@hRYEA?eV++.jQK|$)އ~cܤMy }}b .*(2`U$\A$$S"B:` 4>Lÿ_koۿדӯ& Jq6CKe=;^1V2~|Z4 }{fC, 5Nxɍu JFvY kR k kb k慆U@lB,cYo L@3w[.?W5Gc}5xN(OaslILœhqof/n-]W[۹9Qtvu|qxPJzT$hiQd,6(h d8Ճ瀤s-K,+1 gqw->|}r6K dWq;i46!9r_.>w m*tIQqpwƃ+Z_`mVwz?^:8܍;/sN';c]W@U=+;Y7O-mPPox-Y9ښ);2P6IoXȺsdC=Fg;.;[#P4UZ;%Z(a .x&d@HE=nG=s0Rm/M&=\cնJrEyً$ZI֎lh%9$C*#[,evC庒UAbSڧeq?.XQa 8kAQ>mVӎ_l+>TJFnĒeP,$"Fh%8:ZH#H'Vb0}]'D"$Zx×"XPIK 1Y!FqF-Il3gLpo5QX4iU08T8 'Hk05^JJ80'%Y,-,xGI4 GZ E%|稹RXz裂ɻp $;r}jhk5|װ<] >{d|A+88r!5Bx0l!8$-s&0̢1#O1FiG&3߉FIM#A))Ou Q{A?S" eTꈝUŸ3301=Fd xz/X4g?:^.z_|Tܸ3X^^V]\_ܛp9!OT,\g_MSCLEq A*ު9;֜LR)6|h(1yӲ*stC4Wi- %@%s@G$fO|Q92fRSnv1gMbx#8^MF8\뭑฼1>qRV>) f'UOO^&J5&~ջ42>??~46r8=: }u冢Qo)B\ΖJ_4Rsɫ~KX,U.c-33$E0CAT,z{=x= Pt<zv8JݤM# x2":zOA,R?E 7vCv1F{&O7+PşeQMWQ7\׳0 J2W >;>N' tg)ncc؋HJHu~դ 3.SYC ~^9Sf/Re- ӥ(DV B^prc٧9.JixZ*pK.F|XVJ:+tF|{<\dҥ~_ SZLA:{`*So9$v D~t6nf[iMݲcV$ z zm&o蘰}7 }v[$5-H셹^3Qm:WC|0X2mx:O}=%ޣO٩;tDy+:"Fj @1R0'@LFHYǣt"t1U G_dIK$cDaq0WQFlV[*L;1FXntzD.Wkq<7/w>޽2K-C|jw3LfIZ|넡x7Q'm\J!tN9\S%²Ks)=R2ޣ k k]EPIj#E8>2 OUn KPN{ 鄱0-!:(7‭iUt*B7zf!% 3Bs46\ LVՖsPRzާh2t K?>#xV^iivN_X B꼵/J@i9Ѥ)4* p/5qSプ䙙<1\&;L|p__^MnOwNnl 32(@f,lLKϊ]8s}B3;7$+ @L!mV]o*}KgD-!жH^!HzQ2tW*/nfd8-o&N^>\vV1e^}oW.f@j󖢭ށ\dsĂ/Fok'I]5ͤ؃d/ էGM]_J]Gwc!Lwb%Sb/ " @Wb$n~(d_z_2LniV)%s`HppoK)^_݋hx3&w0;IJد.G[kN_~8;#{{F=(}c0_V΃̡7E0>qZ~pӓU+c*dk'3=Kq?؃G1<*[;M^w-g@lbᗯ?_7~Zݟ$&vEh:OvW+BtEh"4?uZMSuEh"4]MW+BU= i?e|'ȃ )=.d!Or5M>HBTN{3|ϣ<>Q(OA ;f\ i#QxaĮ1Fa%YM:l۽D9DŽRn&x92J#"( kZ ?D hl"Skl ߐ'%0SchaF&[ d˼|t*Q19HU{kA$ȉ$gL\rTD8/bOÂJ.E2Uu X`DZj9yHWOSY"gm# C7ƻt>/ToB}V=yj LIc` ::˰c+j9 .hio.Io||{ : ̂KpV! TBXُW ԥJuM,-YY#43+_4sMlg=ZVܙ65WOMfj+o1m.ao rRu! t)M٤ۗd Mm6\vyx>t; iM*m1@ ($E(s-. 1,,ؾ2Ϟ h=ק L0" IXd'ƁʲnpƭR_/0п{ҿQnVg46p⯘m2w9fqwM sO41'd6ov 7k!ެJHreTu,F.j[; AvAq[pwG ݷ]wVDU$Hieu"&H&)"W~}<1ZȬ)YJ0=ƽV,8mR ٢Z YUe'яNҢč:>&cjwqZwY6M~_~?#O*H;`S$'ҵH%SЁZ% Q!r}5 ueV)qC3J2VhO EpiMC%҇*18\48I&rT@ dZ_Z9L7ǻkG~LޟV Ow#mQ]q?uNX/ hRB PcD96!ٍ~HY/A7O;:a?8GցV&ɨJ³ x-*29=P̹ƿN.>׶!1m)ui!qOrIX4ct>=\oZ7'Mv?Gf+||4r:C։;vX㍂nTQ4oIoy!I.&uߨP9^ܺMVWMezbU5QGCCgN [I/Qɥqg-y#bC0]-toK yRZcm۪uc]D\ֿ[\_z[jvcc*g'X6vRidS/GO_r8V}cv:ѧΛYG4~k{GSCu!eWg[,s#럟y?:Aa6y>F5D:̎tHq:c#&D4v9szr>4v:forQ-ߎIl5I/q5zsܼDX٤4ݸx~PJ8# dG'Or/h|VHǶkq ~:?"^*ߜ:=?ŋS~qxgjrޘ-\Rǘ&6]'E_ٻAc`׿coe~On[NV"r^t (]Fvc{rn;0)PoQ,w3A*[Uܳlwz~#ML ]skHZkwbigna0ck pRil!Z-ׅy$Lwuʛ.m-ݛ즲'/|>ݮ*BL0FG˛һK}_C{c暟M?^wt|k}3-WXS-e !AzOI=qP5O7/Ӭ佌j?q$] n\Yn1\e33znb=cz֓xBέ{ޥs#۔oYMaeգ ?`7&P٤No~]tbGJEF}pXvg̠lD/S҉8@`s/[3وST}⻖[$wwPcIASL)=Ēv;Vs7iJ`m$fOW%!|>4H2At 6ژj  RڧzW˖BC+@, OǃL¹%!Z'OBrV,iQD]XD@9V FVI9ϒ<ڙ`(UNC 8E` 2g?']UDqBw~lir<@ΣG!{ߡ7'ʝA;BJ$G$:1s IL>iTEe|(-]Gn ~!ΎO1rBBj琩*#x%T<,&GIƙ% ;kƙ4#GA9+%`vmHB TZJ9bgӅWӢQߞH)iR>Q0e_7D['28F2QZ=<kVIQ K il/2V/QCptq|;Vo'ݒћ'Cͪғo_S!9d0$n6y}|rf<]NwS %>|jm `Ŕh*} RֻTiޣTiދTi=$,E jNF 14(ׂ #n)#9VMV1HLxPcp@ :g(@,"@mrpkeBa[̜eQw}s|vUQU!H>xo^UhbքT 'lF 0;*YFE? 
?,I}AJ'S\{{)sY4BHK" Wb~r%'KxESx .?>j.νՇ35ˋkQ,oEȯ(*PX :HҌQϝAY琬(LML41TH*F90IpBQ'fȃ=12lr'!(f*É6-22y7{ #v1s#M7EhJy}Q[Fm=`\xȹC0K,UrIDbi.ThDaViAG )`ybM SdH8"FB88ʖ~\̜x %20}FD3  wVfNA9_(az&s$,JF`I]g4%D*qIR) X8*p$r* 9+2g?"6>!..ŵ4YKŮ|Ҁ.xmS9ϒa[Yxp̫$5R#2g&ż/xxOG= FGzኰmEx@ܐDُVX~줍NscP٥^}Ibg/յ!/-?/ b3أrU28_ć+WlW?0kx\k_VMìT[qqX&4]sF c)YC}{6j =qß3 ] $";jcێ{țfy}ulX+mkTl75~'u{^5@AV-oV^ϕ$*!e 6QX]u{EQ~$/_^:%5!"S| jI|`D yۏTZs(޼gfͳsHSRLF#'pvxR(Bx5)o6a-4zW][u@_PFEE'rKWo]mqq.tn b/՟_eŘ$4Pe5bV})$  #I{7Sw|^\+ˢW.zmml>q 5n7"/-ǫ/LE~ vtp(wGh6Ah#a dc57:Eb9+8MrMr`jPˠm!̎ `qZ{s*ߕ-#q8J6^BďZeG]z]rwɶ|rG{ DJ(o2PJȄg FĘ%QDKI2[6h$`!S؈ -80JRQk"DR Fv7F)j_>媇g sM)"R(CVB .s*"QBB̠DQHdÅ|t ̈́RH *iY2D&B!&)1o5m3 &)m.;kH)PhΙ wH Ml|C* `8X-XT')U1:W6 Z1`' Ž_Tn,Wc/}>ɘE9D6ٲ:wv>g5TN )MFpҭ푌  Y{1 'E_emeΠ#w4tR~[9k+5hy8)-%εb 6[nУ\Yҹ@H+rX9H$-.Q IR4yH*a(4MsV4E: ?)yPޠ_FJsTth(RP:'J<*X(bQۈs]#uo*"|텔hea%Yj^wXb߶fGF*D0OoBמq~X8s0HrDJι+YRM3ՂǀT^cS6x#ur)\ІH)W_ygLD{ӵj's?_*;"U"VXzÆS`3ZpOOckafj^l[KkGVv'pqQi{8.qXB/Λ䢄ӚE\KIOz%?˭DFLpT\qRJyB$0T:o4Ux7OZT$VR'c?sa=76p*)cP IQ9Exj9nj}EmXmex9Bμߟsr}~pÆkrs7MYnLJ `2pfr< Ze=g6 ; ukjF]ڡL-}LŠժfΠ9\\ ~n;WT:AEGŷ?ߑ^%dpN"Ƌc?]+WtTR[3! Y<y?ɛok-n4U1pG?.nuhG0us~ 钢.ǀDd%7cJp>cUEG7ᣊ[b(GL5\BSx:\3%q|Lm`vvϽ4gwHl2Wt T fz`| >ͭ L9ؕʕR)YKon]3嗍3zqɷ3lgDrCY?3ڮ/pDYla9u `U&WCQWZM]]e*M^e y@ bF]er?u*S)Zu"Օ|a78"nG-bEQ>vzr,AUO5|%.>]dLD WU4dIc7uG/lxraPU8;tT,dz}]Zs&ɨJ³$x+ũ`N$@36w£/ ]?VݸUW]ß {?#rRȨ Ƨg5}wd _oa~;vMΝ.4S.lն͂`QD,ZYn Q!RV3Z2!>x?i )k-U&.؇⪟B) >Uw1aDMR|jO 7ea]Z{}趫azT*~E"|ybYrE> d a8#_uI:gJrN(^ޗhJ{MגD YG'$:ErrN4sQOrUQa6:?l>XR$(ȾUD^/.*d͙8F];ĠdY׆+\+q RMMs@}(ܞkgʏՅ7WE1Hlqa?w=;-7]p W{'?^BV4R%[X [݌,2;JGQ+2ѼEw|W4Y*#w:VWgAHF٣2> ȫTdO1b0qyc@yerỗȎӷ'?6|?雷N)ӧ'ߟ :(dr"B]?SzU8PrM?M ..sXK^:g*l?abׯX50=0\XjN~YiX޴iaU|v%m+vQOY"L22 !*^c1H׭$;ԼtDas봱 :i#R$cJ4R" 8ir)+Nz%G ф5ʬCD6{r׵X;ߍm|龎㐼'qqz$+DN u%a`tpBay2N#:8|$}1k kzPv?C'Fhx\o(Mwwb46oP !kjPT^s9|gYՆ&t Iӄꨁ (i J-R'JT4x!nx$QixV,uT@c:IFnxMDH64Յ' += +}k53ӦNxlʁ$XHڋ5,& *Hh(0ʛIEB"eKǽ%Ɛ DitHW%-ya),B (I( |')"G h]t'Isη’kF6lB߾>4mytd*Exb|?θQ0"@j&[//)p/d7 w{u1ǔP UnƗBSɛhXKbD颶 @8emfdmemgeMl1:${CSDx;kZt!0v&혜[(osܑo-X]Nݴn.[O3>]wqlYG"yKkY C9^0LP+8 QJ5C05GҲ9,氼?e'w`I+W,I# *  H&WC) JU3mX@D&P7A/N_|UNF$”g"4ӥC6JP+H9_w&@()Q P)1ZZXLi jg53Ow>y͂dLa[kƑ2䰽b\ @>&Azɱ_g$Y8Ҭl)v5U|XpL_3PgG }Ҝ} p\!(lLHH*w B%{ܘﰗ%V\A׊FlOW;>$FSPY\*&m]KtIe.hr[>\{.HX2jt,>N-Rgfޙ+P(O9|>FBӁٶgyۤϾϑ.λ~u D!3Pd N'Aa???_谲=DZA*!cG2 .G^).M2lY"_OЦh* |.(-mZRgaQg3q2.o#*rzac9wya{nt?QB[8@ʂD4c'!fńI^%#rTV RM5|R!Dt FCI"8'A"+U=)7g?=^\\Uuk[5 (gB7l{/ϳ#mΦ^N]]`EߪhF=GT/MA/3J\5Ȝu 561܌Đc ȦOQfЫRƘR d9W@ 0(T1d]t8_x~5 /VPלtdtvHf(HX4R(|T `fDFox)}  m@Qk$mp⳽R #^xh 0}fR,uW AYR`T=-*Ԁh$ʔF3f|C隽t{[Jg^+zj!E,m]^aGn{y,{ @Ol+sOAoo׶7[u3uy$%g!/%dSEL~<]O$.猃}f/XAU)h5gHXw{LvuqyY=\Tk&1J<ϫ(B-^yS׷lئ7?-F\ϟ?>= u;7+#-~Xso&fQQ3!}wRn=r[uLY;_9M+V=#=U| Y5 cwyY#ffYN|J.t}^`}Gē*Ԃow5oPVɏ)gnѿx~sp?noxr>٥K|hIL.?YtdxX O3\yUgN{tbV%X1?(_.܅쏂RϗY.JH1 wilXEx(t0*56\4/J9 V_%5l2_303t{ap2 i[7b.&Zr*p" u6U5]θM޾gXfi20U'4lYNۀ "1hv #X9C:ZlC3n dG݈t.( !OE%E N$Buڕ࣭hcI|v`A jA`0>DuҠvq/-j-kzPtv%P6ąo~UП9JaN$jF4bU y N$ٖ,(6ޣ7' H O2RQPxuHA٪6g.yA1䛭 A?7 >dփ Uip*+}O>9j"hX|2Uwӫ~<< Y}gj9u QfPAP*L1Y*)=R,ɐoK8#\rhsZad=)E!OA KF'pNflFՆ \Z‹wHx}9)* vlǍy;Z>k㪃Ozis׋=~|(.;ڊPdr.֙#(%1PBa1@p$ѢQsi14QώTR% Y2SbA%X2%;dx2׍)dzVD'6Qta4}dE&M7+29j|;c^(c=׶sv}ʗu1ρuM%촐:y0@ѱNKHd ,Y D1Q ZfF1f,/q2ero]t*mKd"5Ez(' *@k%eR2ĒbP1dkwcL-֚BbThV]O8THcć`l^~ eD1A D"d%D[E%-\`'&Zd+l~%wnm$*qELo _]'PHŘUɲPiС㒽T-N!݌ݾMNL>^-N֞Ū:\N~^&wwZPi{9|/ttk{?*tGtwʽb܇Gf;.%b iX Nh-cD>,]ZUHQd\mÎ$ss?j֣p9csCI\q{`e6{\^F1%B{^FԦiKEYNo1!#ʡ,7Gѯ?a{.~\'B+PhYr (R,=K-uhdeC:{>\ͨ]΋9JkVBs ,8V8^Zڟt&j믊éx]6PL չ|GWM~ԔϋU31j*'w4oACV׹BQXľ"N|?o-\W7qU0/[>Ow1y_?@~] mm62V&6v7|%|[iɩ?,nt;I&D׬зx&4w2(\||oY%5b(*/FrC#FbV;n㾘v/<'׳.L/g6X!Ħ;mT X 
SxZw,oS@؈eju@onչ*G~CJIHsREcTmx#m߽RUڄHJ抠+*h&٢nc>N܅U*AM|4ʧ-ի^aeܱjgeO߲ e=o@_URMn*A:lw=T_N7ٝvgU+vhJo6HmInq9L6._cm܇)~xg yw_Rn6Rps`n6un[{'}3ov:lm:lL}WoWb+'p'; jӝQ,EscZKs ]N~j/{we\0#zbÍL-$g/fnM#Kt[0&g6ş8繈.KbጂVxy^4%uۼsPzrn/t>.}.6 Վso$;O'4Ftj<$߮9E-^ђ[Is+#~"l#z2vkj;%t  !.'mmmF N2@frR r\]ղS΀>9: LWr}Z\uS8YqM•+5jߢܩƠXcNs9TOWtn\No,,Sx5&R%/YclX^17pVB c1a>m? ocLR'>66 qMZN`>N3/:%kL oWOeؿLW?DA׭%emIm^r2wuwvXanoq=V.zֿketH)CcXrʹ)V'Rc!E_la-"9@eɑ\rH|$G*Frt5cW$+R{% U:G\!ڨp%i"W$W\pEjac?9 oW(XIW$We+RkqE*{:\)*Lf+lt6BYpEj+R c0xȬQ0|&McoH{lLqu>2iSW; 62\g \Z]J1zW+k%6#\`g,)餖ۡT6BW+ge+g/\\ԊwJ9nYYu|`\{⾫Nj5gU7b`#} Ɲ?`炿ޜdm0nS@y149dIK?AF~ V6j߀jCHp~--&#\`U6wW3;t\J`J.A6#\`P gXP6|BF9J1nW(J H\pEj:u$Wg+@ƼHf HqE*qu2LkHvZ P95+RXc0|peSe+l I&DZ Tѻ:G\9.ϩ|AHN?NNOddU700\2#-zN*#\//&\pEj:Pڶ#WFoypv)% Xƚwn1b7LE'u9㢓 ϰdNà b5v-vt1S-̾N&+M6ڛL(#6'벉Pf!tS c$wix6BX.BG_y8PSj 5<~rQ?9=Z6a=RGq$W8`}ɕTuShJK,+Ր HCtc0x2(S0H]>}W(Wl+R{4Tq9J'!'\`OuLIGN*qur p js5l4cշ+۲0듻VLz}r7tROh'B̻peG\[\p  Q/%=hHe4 7N@-4:Jw=Zi o@7p̨\T߀*fp~ g+g.\\y.Bq*Nq,J aON * H}qE*]#@Y9u"($W0 H- iLfLd+˥WVJ)Gq 8`e6"B+R{:;p`EYa>VYKyQ;Z)]3Y^rR$ZTVEvu駟VFA]tF&ƠX_EUI?[>˯jBg5Xß.˟_zZ> [nt7|1l.4Z:A7u6H fsWjyE7[).V+ *yĶʺyջ} xeXSpOh뒪_+`liU7w4~(|,GAr#[vaa?ț9q|-G!>l!n0MFޢ~|c3T穈ëiyW| Vl^(}\leK/PJ\[Q-YH;@јˉ)H2 Z !b4] P4KIID&7_wZx,},<V$jg]節,k۶l}tW.H$,:E|ԥ Fa?ɟ|v 4fèiLU7*)$('@D+ЈK*g/ͫ{}k&bTgQ T,Jv>Bn߀**ܽO̓4Ϫe7-GpNTG*Y76E5ɍK6J |1).ɇ5i7л9Fd{r5nj9o3|1 |mBiLHnIʗ VCJ 6Y ЗI buRƗ ZR|^EbHuEQCBNFG6Tق1K(# X~:`ўJb=w|֑]Zy;a:ir@,S(P!=6:KP r=JNwh A^AkaFh~5Gta0P߸+e$Xtypl5MC`1uQ6`Ņ%̆4<ƺ2?K+5Rl{ XU>Y}rIi٘ȆRjPѦ"jх6[ r  (lj= RL1, v[]Ye; R5rwU #" +ʄ$ ŨQAQi+Jj*6 FDg7 Xm#d*zSJ|Ld'_t#fH57]Ґ?;1`uP/ BHP%D&T+Di?hx"W X)[K1 z,,x/:M;*x!{jBJB*)ԙH͇*QQ r 6lC@WJo7XQ;S )tiw,BE{Ty'D)J6NBRAAuTXԠA8(Yl,@'@Hu{PH&C@U+cG=D &23.}" ^ܣC{ߋqJb6(!9Yh>3(RAUvNZ'$̿2cLb[/-/kn w*ַYu_w FfL{T^:*KP˦tdUSIWo#f BX9Y4<#yp (hQ( {By[ɐ$RQd"5B5 C0a0Ft/1 JWLd:[nmG⭐C@8_} ,PFu5wNu2<&[WB(NCkdK|`EUv!ȕ'X1Xk3tdD4Hcc.u%m6@^,Z@mDBMuj WH% ep7P@R@"pc2ݏEx@rFEK֬kdž@|G:T}f=. >]bDuN M`!G!/CE7 F-)ޡ0ˢ#ɡb$UcbY/xp`\Sclci0Ih j hN\76xk+fnQtX$_:hփ*H]6 |t3Ag2vT VPXG%}g>Tm /fxs9G],zC*mP z0XAҢ5MD9 Z&PZ #WfBjZ)AH8I/Z{m(=mGc(utqHێ4JC`Ec6\WAҘ"&b9R Y 5Oyx*0uIJK't\5wĻ+ZPgT1j\?[wGNV*uhoǼE凧 Wf>JT/?w[&ۢ ߖ2#Ž_`#6?\JvZ߾9ҮOt_mntA01?37u}'۽9;m™DL71ࠦ+,th_;]eTr)U$Tvn"`4tp4~iQ:AJN&+&+hx.x:]1Jۡ+RzRgsՑ^:nz^uuڠ³Ց(_]z<]ݾVsMXy3mPavB -0:{ǿO7׻ۗ6%=nQE,H7~Iz?/..ThVO׼+uǏovWhq /_^"{wK߽7ۖWWGٿ|Ђ` _wyI25ܼx?m弋ͪN ~Ϗ?/<#+6:[+OT*d?#Ri"<8l`Q]6Q& '( t?OOBP&ͤw5Jt}kEsfkA{&ȗ-2ԟ{}־utvxv.e&_mo6B4*?Ѱ|аI4Oq|~<{lR NRt%6Gɮ'eIA^b?crޏ~m4TUWIQ{?!֒O (y>L!ퟍj˖r۔eyj4j DNTr 17K%hZ{%(-I%wDtev,? ]1\f+FkQ:S+ ƨMCW WYtN~tE1`'+N.MCW 7YѦի+JJqBW_\B2S1i&3 ]}.q( ] ]䍚d~b>_BlGҏCiNBJtiv) k+F鄮N2dDtŀnի+Fieۡ+ȡ7gJQϿvuਟ[]<3]K?]ҭ:)2GЕܡ*$&+"0 ] qZ (wBWHWsWkNʘsGƝEo}Rك#MZ}RN} 8MxhMG70`g 6k{j膯& 7ÍzZR_aNH`Dtŀc7*7 ]1ZNWdL{^lwǡ\3y ]y" dBWv[I1xtL'+=4 ]Qft(e$*BV*ZcIMCWKzZP(*Y֮W z3M1Q /#ޞ)y'90Ͻvu\|^(22mAWVs^Vy^[`3v^[(肉kn~Q NN[#a,g'QN(OxqW2+嘌c1Ur 4Í~Jhrkڞ+שw". {1uZyλx_|AcZxȑ_m00ego`, i#4jɱ~nIl8m[Iځ娛MX$i^_Yc%s)65OeosᆟBKe44ou~ /׌U5}UwmE n޴pG8.'v=ۡ⵩掐K~{G-؛9&9P7@ :-/xk"&DHFE+=oSr67 Í}x)\12hlJQgӸW(N/r9}oFt9I0Lst+>,\4|Dx6>\kA|eY""7r~>lFnv _&quji}o{p >:kͅ+/fNj|'m*^M#0dºw'?QٸRҋd;*+',_۷`7y%b~R~&JaCnyJMl2$w|2M&~?N? jzufW A>Ђ뾱⹖u烟:hX̪2}F=Z!M(rEI]%z;|>Ğo_P82>=[9װ\=Et]}P2?O1oqߴ@HK8kWSNl9*慘rĊZ:X1c8B, <^Os ( ݂oP=,j`ޖq1E] lS$" T ~H&Q# =h Ms%aԛ1ȼ΂qt D. ZS@# '7631LZMQ[%Qt :#~MOm'÷[3,q(Ф7~\]0jo"l2}?ly-;5%MoW9b>2t DfRA$2Γ`Ix뤱f=2uPɲ^ø5U Z6- 'X)(P@߅: >UkB}%4Suv>Lzacu"PHJ"HHm4׌:fq4(%XRƣ ,3zn*t )͒P>FAD)4x!:(8Y^;FpDDԭ MuzH)PcJh4PH4! 
[unrecoverable binary data omitted]
#}X8^B8R[ {q/ PyRi0:Aᳶ8kmUGrM:ՖZYHi5_+C~ΕS0m+fgx^cyL0k<_І#{N^V, VT~íG0s͕%GOVI>xV5>8՗ajُs/9<73[>W칃*Wϵ&r5Ƒ(Uժ4)x2摖T^|<MN4DJP)q2r%NTڠ[iiq0T5O_~i77SY5 Տ+A&\*9Wu?5GEsu>B'HMu=EhNϺ(øI*0krf2S4ҙkAZ2].J[\)x4e%W''#0кwG , nbuByO,~j9qK?k9E5k5r]Շk#uV W XEZAXjy[jk6|[< װm7/j>ϝi5r"+^w[Qu/N .|B8~Ⱥ:!ĘղEĕKTfu ) y$]NuaGG5Q;jVm]U8JFPs]X+č]?.(5@dQ-HIrТױ33S?c`s1 /_ @|8xG 嘥aGzTE6(y<8n{:bk=>wF.0N=zб@eFT:׉'ցK۩b6z/.~?W{o ._LcCN|*A")B ~|! 2Ă9[jb0zsc  b1sj8e;9ws5_c XЉͰq=;/, BYi)N 6F,BsxaGIపRVW/+;mܟ^ vW+rvs""@E4}1CؼRjI⡈0T!0ԌE 3Ej 2 4ףv7u5BhC_F +9BPhc<p$%QDHa~ˈ4&{z[d. _݈AY#QF @fhfz}MoW>8e s8py)cd:UYIje'O'F/:`jdb26^~R\v}iy)brٷ[xZ&!"]~0CP#HLj "@(7C9O]ef(%h +׉]jf3NƿU{90:XEdľi.1ø=.GT.}u@ y}{Vek'\$Ro׫GJ|\%Wcib<%`@_{{, sknЋ9gj6a0?3^2b42ƶD@yy~aP4MVJs !ܑ'f@vQ.kryCѳNy+Z0"&t{V1{_bF#l|hMe=lke=LՓt]ѥ\9тmh͏C|GYX.+ T^}Z#AyN9(_ {Q,i|F>`ҌڷyQwղڮniҸj-DQʳ NURy]\ :J›C+gCKZ7a4&kd089;9j%+Q+(|aC}R"-c?4YlL>Xg9les V 6>R>T̯ g RZ/(P&StKTLRr*]˚"pf Q> cT06J0DSxIBUsx6JLgΐkH*LHꔡ0,=\e4C럽PAm80s0 E!GP3H~ČfZhߜ FXQɄcaa b}8~MLڒt,je}m' FߝNƕ;?X? x1 K:Pd䖠}y8ֳ#cga'Ct}h[[kF0wl9FƸHOQ)(Tފ$R2T˺܋Z~7{&A/G|VI4fmrhqpWr*$R~? iJNTB,W}?O*$4!S"QxJ Cf:Rc)dŘոI@ \ Wz1!?EF|USe r6yy3![Ž˟Ο 7` T_H%ic8^D 6Lm[3ru1{ٸ.P8yȮ;G^4/+h!:UXXobF1 Nz,NIqq&uDrZt hBݦc(-}灢uywvbpqQ\a=yBU+nFcHDv;$hTɥj`fAAꖼ!#閺'"}@*5'yCV1je-&x:yY6tA'5컗r8/]_8L/}"@*JIy,o(OLDqH`ضĘdrs!RvƼY;LKXuc=V1,Kl8OM xP3ؽc֎[;vcc2kf4!L*LyA!eHFGDi ֚#s.Ƥ%YqJKNَzβ&d$ mNAAX ś\%LaOKg4KW! h Ҍ|B(PW*(̷(8ӽy kIi-Wѓ+%hƜ?-$oeyҾ[䒇 5UxcmhzW52}9z|=͊_ WL׋v?Yzn|2]^Bp f BVls8PCgLFq8_woS3[8qkfklmL6rՃf#7ʯˤ%}g j͟--\q;)Jb]TJKzwQk&8h;gPXn|fZLvዔ$)P<]X`r蓟j-f4jH=UC}K3r_? bau-v'JҼ R}q_}7xʰTn-7fV*N"b?Xaָ[3K]{pՖ}{tޕkl&Y!C"=& m{i8̵ X(zks34CX~C}NFI^ѣl`>E##.>ILd"ɋQiE:>y_gIspqXUj3|WsDR6Iʇ,lg9>„bϐ6U ǀ2$+Z bv^xPF,W},֑e$, R "`FhJX %cĎ1Oj >i1_ls.8 KNظNɩZ / SX߮b:Z_FUչm@YQ|NA[+WƽVgcoxB~jA+{[[yrimo' bWR`I?WQ+,ԳV0X1l"VCćd̘ c#B EN7.YH|=]K~6]/Њ HmǫFbD*(*iS}&.< + Jh&2G#B!{Rv9) #gƜՊ⯇bc`@>NkpiٕL2[7I XBIe_<96Y*SLX'ZK <(V%U_>Efq*atv/jSh--?X)ĕWUC,^pQU˃Uz[.W'w]U@]8*R: 0R4RjcɡU84 lvذ#%_lno39Y9o|$Pg2'0 à9 }򾵴z3=*AބPP9Uo_=vef42$ү5b> =j4^ zmʡa Eލq$csz,BEح= b:чu::lPJ'KPNfbBY,1nCvFdb>K,kExW4lBqckc$Ӝ& p܃s"6b@N# X!RBMbUaK-RnԤ5R[Xc;PY1J"Ә,rʣ7V`Kj0VX] QGaD&F@V  SKimMf3[E,eZsjg1'Œr. ?L1< 3G# kmtAV`i[yf.*X+=jRی]ӎ s<9|O{`DY8k/9W)3pwq "ё58Nb$UsZNN{wSE((w{%}|BjIݥ"Xz'%cB"W&^;O+){ٓ҅m}hȊK]hiu 4O.l$fB$G#.X-=9YuuhJVx$4jzh`hJ#PjhJVD F)_E VB[>!˛߯{-Ap"̖} nibT)X%Tjw}>nzw9^~~⳿odwвHZZvf>6,dNn,Q~3Yb(H;muXÅ5WZI+0VT .hx}7mYl| #[HZѨ!ͱP64JֲNW0-bܣJYYCՎ+B|VsF褦G)?P$u`)S n4jnG(R;Py`:O3½ `JWgQVT^팺' ae7)WϣUuf0]nݲQB#FY|vF>Rͪ~.A /nh Ȯ>(c9DrGu˂Lum+/YO/+}~(T4_~ԞMUJՈZҐ/\E t Jn8x Vʃ) gL5%[1ֆ|*ZSŹn[DRNiMmH/1zCbBsѭ UD,ot |22I&tg0G/CZ,30>! 2d3N#u1sz B!z;~<yoq̎D3P=?F} :;(N4 'h0t'ΖXpUYKbٮ\}p>Y󚗲f윧klk^.5K_=$+f5]qbB@ LVhvعv5j׮paX\OតU%Ď$֮rРh>&%Z0=d'BZ+SV(tV_O/񰪑SKÌ{ӺÌŜs ן3p~Ttm&pV̜S9rO-r8)$ű؝2DA4>p $=_{=9:i~%cQ^:2S`;@ mb5ڵ;hTLc\{<;"ׯ Pu&C9oΫ'bLJ=pw;ݾgp)ruFq콙=NUQ֭g'H>sxJm `%dL%12CIA،mfԊ7ˣPZ`0nޞXkIEʀ$Bj8,0q)$ h fEAD:&DgrW6hM$ǘ`Xp'8(0+aN4K2,qf23sʙ$VTc+eJ14- kB FS{nN x̸_CQJڦT+Gwz;>-yJ{^j I׿E^Yz P& /-%#2sя?8lXهMˋ3-W_#W_gffUx iƖ#\l]]|ͤ8aI&9Eg½{{6y' uiQAK+őZU("m6*=1̅ ňi'9\eXY0(ب .S.F+C[a|Z#:~1X Xsٍvt0Xԑv H<$jf,5n3|# ;PN-.u.yЍ K()q`t+$V\#f0D{f#Nhɂa.J4GExd)]OZڅd8G2A(-tl#+`h˒r)cp,Z[Ab`Z,(8QK䑣e@2r ` SbfOrQs(̉KYI(01?qk߯/sM&᫇Bnw{\˥0Z3\iORpxYet :GO j@۫p{:7a12,]hJIâKzٵ%rFqrH\[N^fr2_>2R*IB5;bh\"-֌ bP, Փ&0-.dpkoWb= cYj, 1V:Uz&aW$>؂H3n_$\BL®HgI%b \RgI"Q 1K>/MXa |+&1 x#R dPNv)ܯo¢VϹGMC]*=R"ZBėmqE L@4ќy{tO`ll襒~/h.U9`WPSs^8DpYE\{ɰD>C`som':Ha-Zvm(\UTC 嵠87QWDRhQ[4!!F܁rNd3;R6gbVgjC՛P}?4Ȟ\(s>u:/Ncd_ 1ٛ:dE}: 3eѥnGH0%% R)#y*c}4DlrRl/ݶޙ]fh$D %DLGTb9E$*iU H7 ,F 0e5L.ߨAQ)RGEqiRԐh T.D5=F,n/qiTĸmK3+;!c:$,X.E䤲^%EBQhJI;! 
0$raOCjo+j#wWix: / `]QϿ0?sO2$it9Cw.థw bRTiОٚL¹_AiO+?߶vLy } 1-hsB DU\Z"Ǜ\C~i̛o{e4*wx- #7'OSG%K[4%& EWt"qВ,ZcN<ā 5)DErS`!TDe6*ڬLP7r0!_ns-ʿquyۧf.X:?[5_7wM KW_}~"!DiX".Na"j%q2Htvxtqw0R0B)}OPэ8tGCtȄ+"OWdkq6T6 6ʰvϿ)nd:O@o\C&4\8@۸ݔҝϗ[,NɠVu}Lp:^Vהu _Y?C?]U E8]L\H\1"d6JAtD% dj8%LͽpwРkG|t$Vuo80cӟG(t|ivɌTR8d*ph+0 Yw Q^b/M#Ǒu k&7N":*TY.d*}4Nh =M#.&ZXEE6:3,^" W51ؽf+(a3@iRjǤ2( 해J29EW3@ArCYCFD?SG|Dn9gPQxG{b}vQ6$o!tb?WUY-c!/="bX*J cP 届fx8VPa \˞;jaݷB*|QqV@1ű \FIɉ #k*o׵ 8_qB.;p!׊Q\F-ׁ%/\4Ҁ"4lkI|͓1hGc$Q/|<"ڱ`Wa}Yh&cA[q2)ѬAh`AuC<,GgЁYҎu&nV3Eьղ<1 >Mkƒ\ @LYi.7Xd f9#Tx;xM5F(D!#zT[Xד+x2jmE4EȧuT줗{[Z -20hdz_NCpXo ) w"SQQVh>Vq *Vf/r Aك h(?]|H=B<3>z v<;nO~xBfA}7I$q31w?34&ۧTOeAr%7ޞ]E7d*$D RdVςPsZ3JY 艎]p1q_R8ogXpM-2%6/xw:j%:fڔ;(_w NQ返;y"*)5aV3٨"ZwnF6)8L.Dafs zےsZ``ֱn 1Xkz* !>؏ښ"0?U>>7۰h7|ӲC!L}sގm# [ !^Ө2<0 oxo .mhڧh5;O$((ʜbq9,}d2XNiҨ9-5h2Hڬ!I7G8AS%"G]Xb%9i I~e@(%&, O-Fhz浗Y .m^7}8Dsvgvi kg|yzwG!7AawZÅ՜eq84_~8x?8?|= g7dk0\ޑ\?JjAI4;Bɺy#OmdiEה7rD!Q( ۉξ9G[?vdΌ;儖9F᎘Ç@MW=x6wF|(>v!d&ߩd.0}9MOBde0Sc hTM?ы識,IC[}[baT7d|FIu!IE&h.fIoeߪEՃ,JsWoY8sdT]V2 4GG-c u:p\IYB^Yz0Ŀ-g %^Lݞ(/?~Ygi80wL/{p:3+g v3Z{@ba֙vZZ So(.6Qh[.]ߖ#rT DE`) fE} `XbfCI6 T*@u!ZD"Vsj\ UKAwK[SZPK=snt;\ne%e.ﯓB.bjyJ5%뛦5Oc87URZUʲsClġ/ /3.xwht'ѐ@?؂ܞą X,5ՇD4&Ůy:1Tݭ#Fk񫦶]}qEƩ+TJъ'¾gI7kT5?Fx\7T]ߌǷiaG:yr߬-) 4}SrC=)л'֮G[kDOq2v.ͪK#]ݺ^'n+)1\%y8+cTb"5q꭫n:oB"\ێQC.R TVUݘɹ.sN.sMi+mwL5arjFn(C>TrFS[jNuMF|qkFj?;]ϩߒWW٢bD z}띔hIFLZ7= މ RFM#ACd'.{ivasK ɮ'ry}N˯,§W-T:(vq[l GgT?T7ųa oˋ[m~~C0S4ӀCm\T)D>yVzv׋P HB~"!S<{iޣj.^wh&ݚ)-)4hrI:>y-}m;0.YK ̾qO.rL}&l|[vflzƭͳy.- '֫l=i[,0y& Tjg@^[,h)6\Q֖Cd<Z]Sֶ@kp*ڊ ʆcZqbԋ+mUΖrK,uZjaZk V%ʜ#*9/Ԕɥ pƺF"Ab^MB<7@|!7,*U:{(ݙ;dTvY5.DUn_VuJb CK$E0*6}MY\M?>mh~e{}s}凌_bzOc[e6;9irۃkSi>]x]|9'R+8( ڨ~jGLϳN.FgbљS[NKJ$K$_X|ʄhe!n )Ts?b8aHwc;x?YR+LL']@%麚@?"e_gDK̘{?-9Z<13wfN:Y0hF-{ڷZH[Au%z*ai]hըa4$b4^U]QjZQ# 3ԀBNVh,*wuJc,MMMD8l2OQ?Nʥ[}_ږSo'K ,}hvfm{wwS*w^+nvp`sVͷ剮 V?Ntdme]dhԛy?vfӷq+AͿG^Ǿ:}U`^.!d93XKpKXV9w:8~8%82t*A0vm;"D| SuCV й&%N,t~Q}1N(Ԥ) KNypF[σ,\ tּ~ӆÁmo's^Iw]hGq¦l eY<̳JbvsS epM_a%UKtVB|5@.s!KMaU?Z@:+150Fr]PKsN{T2i)r@cW ^|ټw;7Fic$'>V/e˜p#(~-@˥wݷ٭3jĀrw B"KTE- .m09,L K+(ĎN5`-bX4 W: *I:g4\0#\ ԝN+|:}!FSxN-sq3TH U@ݨ(%&DR ҟw:JÒ G?'wr屆_N|tM(?F}7`)4w"TI#lA#XюBi@%E$i9Gqmҙ1k ^ A E _ =+ Lou% pND;x,ھF6q|E[8.90~>E~/!OkL~*BgRC%DK_Bj*-%LYSlG(\9%d~~OV|*N(rQ<#bELg".@7$vEJ. ǎ uT:KPU.Ԓ9#FJЅ %<.ASIJ,H,SP5 uN5EctuíjŽf!ta_!_&Ǿ%\gKI4h B(i}E9W.:Xo50f%Nvca~~XISGo5}fY*m1G(b<6_}L{ge`Na\mۃζ\Mk~mZD=z5/UA͎YƓFrv8%60jexD#ø(E. | Bz6@qs]Lnv?6&lcc۹qJ?o~4Y|3aj+:tPJYy,Zk[_~#:]$Y"ㆨi32%K}"Bl zczuK5r Zn^!+ Yd45i b篒>%|\#ml[v5@I¸ 62&i7BS7 Xʫ`vtwHl'lj>uv]WFF$q{ٕՄǔQ5s#V.uGWQ]|%]nrhܷ4&J& r>U*YhVOS 'e蒧 #p2|ǖ=̬^ ?כ8@cږPx(zOHOœ Y\y78)v/jK q_I !9@Wrh8u HHjFUjq-$r8]vJ,T߻gz_i%'gWrN` I6GY%A~-nKrdy,VbU]?$?Lgۊj+A-Gԇ4TE&ӭM*@+cFA١ܖ/t7V bK@/6G תe7@PI>mÒ͖E>Tyɷ8tTlq8o{|NUvotzv#8Vot@ۊVYǀg4a O%`Vig?*&v,E|9^=*K--c1ƭsJ*Evh/bIBNƊ#"⯳OWGQ7}yu1ZYYYU/nK*3 i#Ï(s*2 $+~ҼE8QU? E6~j8 v:xfk E,[CR|1-,bQ >и%v=&TƒE0j'yAʎ'+ed7r9€zwȬ:+y 6=DRJo7|bfF?o; <}{zR W7 Ry&sL+"# _(]`1\%$NO '.ou%ۻOChtu-Fsg~:ߞvN6xkNSN-ɿ'Xyhç|shI g!B]I "HL%mO\iM,1IgBb2q! *g0OFq!<8! d.\Tp: I*HN YHHc؀H Vy1D`T 1 eb4}wYyꗆarU?wJ}?}2Ev'6h):@ʢO6)t *ـ iđT8O[,RI6cȩ5yf׀gE^ 0':G NTz 0Rь |M"hdEok' {"hXX*B"$\ra8:DC<h5$:7Ζ{-EE@(vmg.l8 8.R`TIg|9kXߏ_Ak? B1-1s{t`F6=盇Qo'?MuZ9 MCK{_s)}WQ?wG}>lcA'ݬU\iE9a+h#Ni^uÀukA4mc|[֩ڝuki`+hNQ0{ XnrnsZԢN66nϖ:ںuk*hmC^9EqJg W&5Šuu{Lp);nk`+h#N MݚbP:Mĺ=ߊQi[cAZ6)ڈSz4Zj~XԡN66nϷu*ά[cAZ6)ڈSu3ŪBS PiXg):aޭu[ ym)oV`#!юQ:ݡuk,h!-0S]zMWxN߭!&Ɔml=KwXھ[ y]┬)c_A g Q;Oit4 @$tSR1CJlPs&vM d@*.hT2\=#&O11<" 9edžV{\%Z)*92&Q@HF GH(8Y(Big1i]|L9K&H4!X ]&ཏj,oaJڋ06<sx$.D(Bu懜u3%aJ3y68x#1 IzUHOhUm3iA3HXas&GQ!LWGG?rGL?d;'>~zG{9i6lwGFI%l@ -ŻO3eN:Ts;Ϻ`&.:v5DRVrνw^t8 5مj 6#1nsm.5^Hь! 
2t@OG_Rf N@XON:Rh4 aǥ~W/♶:^mfw|`q?O~=@?]0a}ow?\D]|~FN?T.}oO}t g1 [Oo:E'Ӆ|; y9৤[/ dz]/wF ~ x}=}+ n]\p3Qμy =#BK^3EOjLIt]K;=N3ZRc$8X-}J8CLΐ /NrDPY¶m@R}ľ4@jg75tGWl U{WV\.ܽ_QH_7|>]t7Q=ȱ-޿=j}yVמŽWMԯ3;rA׊Yb<iWh\H@5])]7ދq!\NM u~ozPrлSnZVUQVjŴ{aVLlxܝ9lKoi@}aL{Z?0[R ;- .{\yr졃0-TƘ$v BR]3N,sQ6IoL%ebTY+OPHaC ||{Bc̗Vs+=`m,c`7[D&+Wl&=Ujs<].8ChSBg\:\5&}s\еb0'Yb"xoqݏaZmV$'GކFSWbS9d0(O5ĉee+I*Lj= 5kq&8׻JlZA^o! }J*~Fh V69h+x@Y%hAK;Dߊo 73{u ]:t<\7/n6FA|.fyfyfyfyqԥXOy " 42ǍƵJVp1ZBK]1}\emniCŢTf"SE釸\$7d* *Yt&:f$\8sҠt29l$DCrC>iT{昈\Fpƶ ~`4Mn]17PY/;t[񚢿O>J}\ܞ{sP{6rdE0[}ϋq<1m mͶ5-$'qu.֕fGjI]dU''Hwo9G-\)4};X0.;ٱE@j{Eٱ2'-dc!fy@ 4fȸs[ZŅ uz|~'3ٲP˜d ʯ"m?]WCNA w6E( IԯkWswCHkuEWBө߁SOfvQ6Edwi&?缧 Ӎ#bx/aR"w_eDJn.Qėqb Pnx 4{8ȅ1,LFQ0cX3ԋ^T[P,ؚ.x{*xy¥|߳)I/UFWfQƧ ޛۭS ([- SL D ϛ!CP#B͂F^ rSwشNc2^F)֒o}ddE; f(A$ $`R@Rada0KeŒy_ЖR]c?o>ʂb:9+ձC.֊ `p`bcf]offg u|xKxKG~1n͜KxhFŽO:rcY"֐ͱ@aL ,4Іt\(01P0Iv3!Ǡ/jW@Fwo19uUNRFEsd|sZ7Nݮdt n `ab2в+dqm0lYDž&NgnDS-SUs*}R&XWіݾ2@3WUA~/R1$;dLzieЊ1M3cZ <@r2c&Y$Y|q}\HG!Yb-\_?u'/8,*b+d > [ci_iN;:`1b*e F$ 0,"dCK̨luEDz>+s%ӡR'}>j)^53)7ک8}QtRqX_jQFZ Ms"zDXXxd(8xW!a]1!ӜaQdl8U+X·p\9(ev<"Dn*;?@NbW1FaMq YP&}Pi̝6!&1F ɢ"+CZg-0.6k /[BC88C;GHBC%0KCʝ)0k&,98d sFm:rzUL,/c!%dRkI8CxU|pG1Zfu|rkXrXýO}Ш+D6A% 44fR%RI.s8JeA(|~eolBl^`(6`Q aУemaM/L8-5 [<]O Z n$m\1A,ט}]\d~ $%%6SUYxܲCL\at驦wj~*pɗLfY2Š-bZdlς2*;vC=7mu*"wUc֙F뇽 o#;@u^_Z[0N` Ğ]`:HWQ(_ 箥ok0TڝR!hA،  H54C1-qbW{9<Uy~ suiZF > X '"-3M:W uEgrF:FZKy6ѭMk>U2 ,%$wQɚKA xMZ(emG* Ț86 YA[mTMд!8(lU:7DBr~YoeuCt82|f0IkY\TvV%hPEe0OJ57ɞOS{hޙV'{y4s^e SJgzL%=vW흹m;ʠa8껪{ãIt]$n<@95ڷOsk[ 7ioHcceˈ["BbkŖiT! (&鷁ߚ#6>j~J6 'lwjT׿7Gveo')ôz$P_\T/קקw'}?2-ozc^v*OUXGlMUQdJ虚D~*Omݿ^Ԕ9b__^rDd7ެOk0WAԩ`߳Y'iw' om6"h[jDk;zx I0X}.ouEeb } |ϼb!rp Rfn+yXuݠJS]y5شRm[*(Jkʿ Ӽl?qTN_.Z~6)(Շ{/Py_m6_˛!IGF`5TzM޴9M=g>j~d@] Z6(oW]`J:H<>YKPb4,6n)Q<=;<)dMJש1O o%C$^Rڔg?bs8aVdáza~ A(<`@^QoT-NKV x:Pv@{nDq x$34eϑ&s!"XD$`L@!gdsDq:e?hJm-4hP_>q)* c%tm 2L]Dq Qf/"T{#Z0d4X* GC f11d )C"2m)?nt53[ۮ_(]y-Sfd9PRFL"ta-9V͇L2O-zv`iP2*K" 0r|B #aa/ad4JXGDFZd2l8n?/K /4d13ȐXc4q$LPXh`"aƊn-/HR8:NNK`8|R$\9LWYcuћI*4\D룳s\=(o1ۓ1@dGvuP۰id󫋳룫^U__.޿9=>d C:r]=:>=Ϡ]뛍͛Y0}v.Om7†ߨnv̽kz׳IG'gSh.O/,!'zfjRr~t5NW'ۓ5*weoqF/F JFTuqE/R%߯Yk'@C|@Se⩃.vqbޙ* eJvUUzV#+R:T\irXŬ@]7W$R eO{xG'w~Tӏ=*G" r}aJ>*tlLGb-k+^72HXY^Hɡ@Z`m0>YM^CrF,>AV1_!qchK=P2V-} 8+#9ջ..RQEY?eBSR RH| U4`h>\mV]aӨq B%fɶf@(YqFI"V8u xb㇨v-eo(49m*2,͜ÐX6f\lɆ dE|M l<9Ԙi_h])!kRǦɋ ֡+FAy1 Blsܦ&:bC}k4wC 0;JJ!1r/\>UBD"q_6ͭN Mn?PC/,BlH-WM4 ^E+Kbffz9m>_AF TNIka&F4HJ]A% !8CR@Ո ~;hr7h4:c8m>VH:qJDٸڲA$|]M,'DBFkK+N =xﲷZ[Rh oFu8myIdP Zjb#$$vEcK*xۚ"&i­h~ݧO",-wMٶ,k#6QP"XQmΉ˃]Z,n+` yf? r%ܔŦ_b_ݩNZ9OM죇xE,]z~)=i1`9#ohoIl2& :X۝Z7 Xp0jFEG @9􉏣OQqMOg=Q-Fcvq\А_n9ů%JDӂP|jgzf\b¦D $ VA-nGnSꚆm%ZqV"P **o^i[%>٭D,M AQSL>&oI^0M%6 JXS`{0CU$D$Ѝ!&(dklj`a1{_ڞ.[Xd(K R}ۭ߯d|+iװԢݕzŷwfxEiewe_,fb*]E5WٗonֲoxRĂ_[OF\p-ѺV>"iai9٭_DMo]}F-ۻ} c-+yߪw/CwKQԽg3A8I'upNOyp0p27V#V~ZA⪲3~p+ OXӳyz4dم¹PГdN?'HQy+4zF ғIYP3D8}Rvbpy(ޡƈpfagj"\}š^[X J.J^$!XMEB­!OEj\VI*A`Ns;^Dz}ZeEIIj CX+%bJ\uMjJX%>Tgi3f}^%0kOeM̀KA[w z 0/E&/lpKBy0BhXԮ1{m[03?^ޖb `=GKv oln*n^VQS_w|=`bigC&b< @3{rhC=Aɸ P-.vjl}ckµ1%ݟ-FB .fjcu]ף3&)j5ၷg>]fˮ8/j0o6yt#fly_0xg/3H18Czaٮ𑚣}v5C9E"C9J29@]?h7PYCeü}; KdӀǑCw ;8%qJAn1墋P#tpN5`+`Zd%M@fg7^Sh6 ùc3cO()*Ԕ!Gh$-UJHޛdxb@8U I95~V x~eYmOZZNн9;}^YmϠϭR-Z۩~Nĥq-38v2KcP@j:R16%Ϻxѐ֨lP)JSƶ:]La:RգQg R<,SFR"! 
N,^<גVɒ> ~V2Ff){oRDnwt~Êza]s8u1nj&8y, #nE($fvia>» ,z6Z(x7 O)Z6d iVv> N6n~?XDž-mX),lm=Zeq/qxO^ ^쵽j.;"gPŜiϘEj/-ua:Qk.)st)OZOyOg+x{@,(|m;嵐[*"k~LK\)LG/|*g~Ӭ@0 ߞ})÷k%W?A_z#1&7cO~n~meV6S5#wvd˘}n {`:v$ y":Ix;ݼhuF6љ!|,EtLxsWgnS1Er'n nǐ'.[29ꥶpurf=ϗ8}&=zWݎojǁ?~ooךU>Nxk@=ŵLxKkt6 (q9b9h"]VL .Y8oRIBmĤN0뼝׮<:oZke6GL[]B@SvVd a(@/)t Vl-Z3&F^ ^j'E-ɮv.-vCwa륭 Y/;=Yv0N%8~^Q8Gz  lSt~J,*To?aLe@n^i.fLUujtÀXgoٻ>ŒAܲwgX2^iw_,Y?}r._Ƒ. '87b8Œ~XTԹx`8wb4+ Gdre ln;[<>e4hMǧw^n>1{>@ -+j|4z cKq~j֝l~zn̲^GޑQd A2%vwdɹGɔ (hO&fJNY2%Gz;2%'u3v ape6Fsw/q zg{a0?=Gh2C7\v2'34Ȧ#[L4~l7g:X_/)H3eKXgG&Oj_.yj88%p!gO7z} y EtL9o(/=T zDg7hy_'l12hv;JӘtJ%a|Oi+&;ҒR:Ɣ0o# dcs #Vpb0.8EC_^n<K_}P|Yh_ˀ'Q1q[Davbf}tWC'?h_bqfV>= Toz}ϱѲR@HKPg7=5,@4?(YR*)$: ]BrԵJ)q .,R ,OX8=O2RNSƀuSd4_]!tT I,JS$v P#(%l!7N)!;nNi{<(u_v/8hWH}Ph&ʍ<1g̝:®x}}t-֨{*nU0 /*0꫋A0*ĭ5~Cq8p! ]#-Z_ͱ>ORؕV)ԔG:DWT}(;Ѥ'!0]]: .FkGʩq!}8A4-=El&tЭ,2TpŜe]d7!ԠTo_кڦDNtbOM9Zۖ)$^5{KgYcp.ϵ(K5i W3' qBq$ilV rXn^xa (ht^ZuyBTr!7p 2V6(i11OlQE]X"1 BK|Eh^(A a:\}ǞPf"PQ2VĆTM%׺r-CGg\圆`,*~G{Zp‡2gIƸJ=lDQ2V"j\ȳ%uE:26n/>tҕP7֥ 6o#IhRm_x"~ruqcۖV1“o)əPͬ,^`&A*:{n;Ot"|`:=pGnxM+.ZC=uw/)ߐ+}ro+([U:Qvn}A >#;WaYQ}!O.3gnBU:r_[lu0y<gnl,bS郟}e ;`l a2p$3z8I%W[O6ARq:'@ss>UTO)VzfJ:\`,u,an- I3gql S%w:,:!!RK1Xxy{%g19U .Q\pjZm6L :dQ\FW{1rY%1 {:_>=pl~z7t)L}}]- cfQ&٤/+T~//MŔoOfoN$SDD&vqUxr A0#\0)8AX/m|ǻ|ك.Jå'%v"7^glyZT {8 -bӥ?KHOUB3U2Yάτjꔖ*3vO+JtIJuMpB,䘌[ t .,3#Agm*p %wC׋C٫!Jkso #>%(It;h@ќJ,˜DG7-˻ޛt/Yy:6 nNSUNZH0UxFPec0csPC۪sm$;goܤ zZBc~z@@{&H x8&+J|Fdj&;zZ┫%>کRl2O\RMǒczwCˬ\|1$@4+`e*U^#BܺԄG+w]~|- Opr<- Ɉ0JkL?Vۅ^v$! ̷SbH-XAoGq^r"6р"CbqD(K_ZM(xآZ'|K+QZkVw"Vscâ5MB1? v+> MBqE`hw&S"ѦqIY).50&u?r}YoÂFt!нZ2T\{2$Lc:!U; BR~?e4׻ݲ|+o:o^RR gOf^$:{u҃r&禚Q3;wօW ]AgU3)@h2X x)qLϛ4dIqw.Y=S V s7~}t-ɀ *__yK%f9 c Kd0XTS&-}e~X|H?۔9Fw{ۦm֎ŬvWU<H_F3n"RZ8+dP$kknFŗ=]R_TckS=vId`,I!);Ti2Ë\HIt ׍FNqmr!N3qY)kc )Cꐗ%~eD;xsTųyr6mcF?ufbDJU,A>)ÔYEhnݺW_l(=6nW aizÀBG}<7 ɩ=G^D4QkM^ެ>Mٰ7Rh$I &J  tD[Xc*XDLC! }:̋@c4F}„9FOI : Llᅇd+t|xgB\q nIkvӒ&PmyzDIy޻~\v7mU2!ce#3^C 䔓 0G#LƠ.3ޝ#5'UҞXӳ~Jn V>Kp%i\oM\Ji<u!`'+\π6 rfv~NY8=,,ٌK[eq)C8 y^N3z7+1O}f1oYKI(rypqd_ H3cخzq{^m{h8#%b5ȪﴜKc𨛑\sdd~~O3OAfun'1vQ \:f-wmb>c ( "DG9'{<㈉-"fgAhDK"x~.27 UvX(UQ?+ /7Ï/d*R=g=H0%Sމ*L ԎM8V T-'#Xl#kM8Vim &a8FMySTB0bZ\Y|'_ܚ'OI@ٳ O.W"E5A(g.#4#R}I:2;IZxKtgS j,NSrng;y`-l1{yLZ~Pr?kxH_gם[Lx3iͤ7y3)_EggP ޓE:Kc0:7@޹Ձ> 1GRԼd^yjxߌaڙ̦w鿽|geX7ju) j.< < MEle [N1"ʉVXAH xuBDD~cS )T|{KE}(e oYTd90sH"%B'q+% FQq*r#l164k$Q섛yxyr% ln~|r6~pQ WT &iD˿yZTcIe@,H#U$*2ݒmnvǟr H@e.f(NFU~kZgb: LQRFMzfZ$SM*ے&EϚ-*ZR+pA'WƺߦffU)*b7ˮgW +ͶasqpUp4 J%}C\9m˻1&Da`yna_8JcZQDWMNӫ@;o ^'wjϕH3힮SʄZ* yJaY;^8HzaCaz6!"Q&B-#@Đ^K,i)@yn o'^" tϊ۬A<+@Pu>n*=--w}ћyv]m>닒l<E9>%"UnJE)Gy {9/ʰ )8/ˑ@?hn_ra i@v[ވ察]N:S*&Z9a?w|0 QтS51&$I&:f(akA)8X3ayݹ\U;,b2Ķ&za2mƕ;ߒ+n s@i!qDېnCK@{-?R^]u^Z;Cq£[gG5H 1 FQdTBF2djGjOo,,*;!+;oUP,! cڐhmhiAY_c X-q#M,cj4!§>iJHDPe$TJjP{a`"ru;7]F3woإe@1r_h Ud$Q06< :W:&N$Z:EĨUe@Bt0{m+~^ZM_-|"e]݉x]ZIwoHa$DDYvKn߮^bP7p OoC577]0W`[K6&%k-|3+ՠ/%Ŋr./V%[Gi_*LS}?׼ۙ,מ0"Pv8%ڧNfFmTwF*HB2HLaA|Yazf-bHR2")G$E<\$ Oz@'ѥErF~m$ IY. ! 
ӀQPNpHG{Ҝ*/ ثw0e^lxrf[=lS&צjם`8xJ U*b~_w G-|ا_iFz0}`N5HoI ZoBxD ʢOs]'^c?.=r=!E5An0Zd836X  aB'e-8  /Zg|m2>qeX[>.JLn4 t,TMoU@6A(~:`(Qyaf0Hs( `',fh^B8D ټDx IJЄ̆vI}`dQb{b{tǸC0Xz[  )wrT_>s,^ 鈚Oo+[A½EN0Xop{]i,7w|h= VIܤfݎb7"U%ƑƾݼRc U[֋F2Uc"W3AmW9`ʱ.^|y[H˦RD^wK A|ek$㼸w2i a`UɞuP_JW?Gߪ"Bgu%9WKJnQh Vj?شҷ.CӠEoAx"1uG jH,sTÝZiGh<݈H*%- vKvVI2*C$/n+_>+ >M.ahZ򋛹7}WwVYp`h_]ڋg&#JX30V&YQ{qt/Nu`Ta_Tedq}st =Y]= wU@+~bNri0?jd~6>@8 |&O\ϓOBGБo%NQށ>VxDa4┞  ѮZի6>M%ڌ}24 ,z>(*ce|g1A$a@r"n_3|7 -+}1F&cIcjbCg+BHJs5vġ qʻ76{n=%|qdr(-q!߫#nZk'nVJW7]@lIVEi^Ĕ4VmLp.9p3lCc5/PcJ0!_i`F%' Ƕ━1\M$ i0Ɲ%*`؂3L\Q",0âdUUDK=MZD5:%@ga0wa8C0R]<9vҘ\0%mR ]ܰmkm.0!xK,s _GPGX2Q:s8Q||sp#^)Rô;[;/ S/_x'3Tz#{3nx5(sbhJ0璸0Tyf-1Ix۹JZG֏{FR,~Tyg䏳Ybuy;Eg*sDL- DmVM/bӋ0t~쬈Rk_Z݊U%bG5}7JpF-jęIcc`*D911'С$20{}bDqsQZ3:@p!#":JVYe, )ӌN ^1iO,6PjU7 blLb,SWTР8@btc@hX6 2aV̀i 8Vq@,jP!P.$- $h"*t+Wg{uiƊz߿yU],Eh4*a-*uNK9IVhB&f) Z͗uHp&~Ͱ;P3rhhf28 l hR(Mmi)=u֢!V$ʄnzekZ`lG֙$6†DjJwjֵԨ5j] Hr{j&t1.i*eȕFR&-SwT4lie*hraaE@KC2k2Ą9t@uBRʥd|NNHp-kQ?زkh_VD22J֯=0Ȉ *#'&q (3€0:1c2y $Qύg:3aI(be)t.$Qb Q{PR(S%QaPɡ,N83vOҨDttP(p1xx2*LȲt$r!:⌫g3Iђ'bQ$SnZֵQikBtE/WGHS;L8XIP({hA亃$VnPpRt#DaJY+;i4I30 {+)Yԁq&l2K,r;40LsQIV;g16:Tzt5)ZZ9'R=ߖj6D-x?j5kb\2|irm溨yiZэPo6d!ƭU:g  Yad XdC#R1gL^]D;^|B8ZlXwy-)ej &|WZ'k+4&@BE*}P(s j'? gveW?xy}2Y]Ypcv1L~YMn!isFr;v it:>;=>gZ}:'g0ԏ{OT遫ȍCtC^zu+7c!%99i(t8 6 0Zyf}zE):9]? K"U+Zrc{00`TkdCXU]ٵE^ϩzuU< XdA2xYy,ho<$8[؅qkofJ!G]߱dqIsTH,3w7c6R0NJ]%ݭnj{}f=Yrv\#X(BNn(<V+~G0vNg]Qd9yg x`ՉD:dxa,o`J͵Rcx£A >ɫQS=&@m.~|{^;_;rMvdeg#dd)K<ՙwz:5f''_T PeA ' kVU:)~DCLv4uODmhX t@ր Wvjяex.ੲ$` *$+"1}g%d99P?[d#*k.%/hɪHV hwLػE_H ݚrwa(6zA|Amn+_hKpC%|@:rY^4Įa^4hgv}ċ=0{E{ѬG.Ղ \\RF01{( r=N| FGeNhyR똈0R5yX@snnB0@+cġn\{0E)DHib!H0B1\nsb;hfYn[&vBnNIqy"QpI%34jcz*Z+7?V&V Cb5@z GU` T:$.wH&x(I~ڡ? ux dڨWg,po oYT>TIRzew) $c;y^=oއ}a +6s{N/n=}pƎZ0Pr뉉] X}ypzUo݉A_@܀p:w-j|h!с+B7ސÅϒqmr5xMz/=7ʿE~㦑 ]mѻ  ZmxS/ӵNjՑ/ӵv;]--4jmM IU~TijN3XFuܞ= (C3WAasAȳìσ;?EM?v۝f֫y?hIy8e5쨵 dcv.{ZZbxc30Ѩ !c!|)ё`/1~kHP Uk=}kNcq?x=ޏ1MEgk7yS$̻wmm0F9еa{9T"%I$ORMcR]ӄ^gIʚ`J2RgV]~9 e&P#dR\&D&g1L ]JȆd׾`lXbgwBiq? Ǻ@cLѰ~"#` 4#*Cr$НI-84G0„QU6J'B\Ӂk - 4GASVc*9ҟ*b$=Tm.*@h9c7 .v0wvuey6" ' mCqvu]oΧEGOovfl󍴓AA+uU xn ضRUex6ǟ:AHsv 1!Yr*&]Xnb-BIJcK#if,xqe]6XV݆,~b}Ub5Χd1YF&C&nNϫe2h"`_$4`CtlEkf=7eN8Isׅd<*ULȊ32X1'Yσo3cBllIs航V=h R{7gڼ|H7g&~~+h%Bcfv)νYkwSOpvZ||y蚪F_|)frbtKf-^ X!ϼmzrs. m'q K̓MO0Y:dv A Q'+o/NEqa=T$fAހr;DZ}&сrSks ?̈a}BR0еdqΤr§ŗ{f}&qIpթ]7mPiN10X%Xs&ʴ1v@3xKf)PG˸^>gҸڐIy)$ſF6A4&s4[{ouqCdQqwvy@Vfm7TS:;%S⹚i#ÝΛm]\ԕȑAĨѧEB;]K[g Z6Bp- 3L SK;f 5Xv67hOob@/CfLtU²Ixm8yMЈmoo4K.a:SݬT?ɴX].u3Io?}8|1 ~@lgyS}Ik 6!F}[Ɖ7^:m@ïZNڪL43sm{)޽qudS2s(ise Y c멧$t̷QeZ2{w;<M"Qd+V&)bj7^6A*aCQ檞 Ths7X#XC0ʚB9d58d5=&څzkNKeXZ0ΚA,`j Romx6{_u7=?93w#3wC4\2Nz+9=v3cj8`eYk3R'OAurH*k iig]NGs\ʤ#Nst1E]`$+#xPwt-)!ZT޽H[ ҾAv2STuc#/)s$zJan_W/vQəd="'RJ8`i$ϯ!Zp'k"{fJ8:u/@RmgJ \rߙwBS\ŗszN<`&9 K':r \UY=@}x7v=At7_Gj]+a_6GR|Q|^ݟw</:G_,OzK{:7ܪ4>}~{hfh^X],6ͦig=?z9 ޴J c?dkDM.ObUcvmkǾ׃9mfv {4Lv|dL xU=GҶey]윧5V6Э{K\޼.ƄK8.'gO mdVZIts\ URJqYo266j SM*]%d \Z6MA2q=K2}K hbܕb!0 1\gƸ'eÃmVc d/AlY^xwZ3LrHUՉ҉={f#\vInt \, `KQUxr 9g ddU+|s!sAق^Mk0=4@48ye /#Y^-si/8{,<j.¦:P t Fq-% qNyw ׊oo& ' `)y6oH#tzˢMHT3l]>bU*gvcVݕ:κ1`Nxjs-^z3@3#5@QbJFGeKke܍===*+].߫Jm9o9o y5j>])(T9Z1TYɾHFR-MGnӁSBv[U͙;" h8|u^Չ-_Yڑ:XnyrPr%l ݅&u #txVbfO{ QߌI0,YGI ,k{XI8cqU[eHAF[%h{7شnGz̦+Rv44\Mxv?/<YY$Oճ}x=z[Luq M}_# 1gVк[ZPVٚznAӭ"[Wl٪T\e#*XPDO3ى?o@+4Irќ/+)8 SފM4V=W>cR3AFM{\,?`l_eDT!zx]H'Z,{f̃Ӝ2=@8gãQl2m Ӫ|mt\IGl%p4s||=BTH?{x05XLO+#Q>*&w[١PD[37;%ikh~8G6`& Z11N1'y``M"Ѡ*x7qP >耹Y`M V0|%d/{Ll &r&杗qYVe"XRohS$ӺT~vZ8-HR {_I? 
{d\#RvKIZhw ]U<_'Ia@4J)P}$7`@(LF8Ws9ۼe/6 ѳj=E'!)z|C0sKv=N~g~_>M"5[QC^|q ^'\@Y320u4\I.4{f{>ЇlY>wZb)yRvI[?Aʹ:zvZ5 ρ' )w~lD3f{E7>~"e nTjIW_7׶=:4E#J 4;Y8`= uPfdjNMTޱgx|qa9:L}~@ ץޫa`$ oدҭ? +CI@VQ*ac0RSwrq(Ģ/zHЖq" Y`J3L-ZAeofnU6>| |}t8lZe5XJkD֚㇯cB?hx||˶9>8gVXVOG?}L$-XcX_ڄB2d]M`pDYJB(CHH 3-y&EM&m:IO*ĂVIpvAꐔJjk"T!(+ 8H~Z`Uw8{@S=Hg=,-Mux9̺ZV%LP󦿊GM' T7T沪CUeR$AW:TY?$VJ֜5ӲSSI9|>'8Q{W#-O6:ZQBO@G}R <ڷ\\rq+*7>&/&+VkLܺ&ƉޤO SSZ~QU"^/s"&eˋ+3EF́G/“M R{p!Z붚'RKOdѹ+ú9:,hvWG36$,wcac #$LOWuWׯSA4hҭ8%rx@"0>X0Pa^񙇹JpI($-K",m~QسsyDZp?m7dOvCq)A )\둆"f0U"EH-)2J%A&y'PAphuf$<)=1Z^%olyUuI,3ЯwbΒ!:K,!T@FMm-_}g1^'% h,YfE hֵV! r}G@q8*.D2 ӒI\/.x'ÿ֧Ss/>7g.ZQHr9L5& L<8bAw8& FƠ6kcx+[ʲN0IJ}3zoCfaUʌ}X_R8U]л?{n^M$N:qum{gӿj{`hÝ7kihQ4sw;k.~֫7{}ho{gwk| o6~z}Ov0' |F5>zw8x>n~{ /ik#ѯ+ <Vpc9v{J b@ZAWy2ŧ0!||W3D(F9f6;}ƤEid$%E)K1=wְC3&؊2qڶ?X;8]{'}n96u @m˖'L}'f{iZ^._so.6ý_°beå/FMպ@+ko0ljIO5扒WG97R9a/{鶇m s~~|Dctxyfm/.'0Ns.~pMpu&[7/}X^?f ӭO=PK5_6=p3?o_ӓrF.џ>ou.ǔ89胬+v0!;ݮHK|KF(-1#up,BPWbqt|d4]F<_Ns|]ޑqFy ȳe9E(^T9G3l5@lif TArTVNsV—Q+lP5-DGA!+z#C4d(kV''p4V jΕKҿ]ucrqꍾ]Ŝw^a FRIB@Ls@ﻝ)31fɰM YѰZ5EA\=0|2|#dF)Cq3KYr̒fVt,5Lj:&/Ƶ~90H1d1`K3Ƹp^qK9K5z0AaB mTMȴāژDrB i/QQT`(;J掊`츈ҤT#m1:(y-$9O!2 ۩zL/xybcTО:pfsPa b4NDX°Iaa156dcK66ؘSccbY[Ke+?yi)IgrGxWCDONVȥ̐Zbp8^#=:2ڣ FZpCB#ðc=$.yty, N޷ H:RZ>b݆7lH ܯ=e@ "QB Ą 9B;e@pjPx{ j=ܖ~O)Q^pj.Zy%YJ/]~)$ E~}XL|ZJa,}Ԁl%Td0 rA4 c %fE'G`Ta&SܵLY=x$U[mgRi# 9K+ٯ@,^hedGZE԰"ibاoĂWI55 J QgRKlɑW ]cβqp28R R*L,sSF 5{,OXsZ@րJӃkl'QB\+n u<1eFTf:ED zYkc*t}+6YЉoՀzy )Tq{Gw0w:\صpzd'nT@3\y띜0V^!|&r׭^;'ދJ @M[_E+ФGR*wʻ䈪 v0c,:rc01)%H. %,Lw(ѨhshiڵZZ*r,{inw8?zz`ocsؿ7qۡClص7̏?HiZ[ ;i zZ /"os9-O¹pTL T=rw=S%$?x._Ԁ_pGKBb"A7~${eDU'l mmml;?l;ܪG~7VlKpqJB)3 qX=";*EW YP27QVOsCw^cF(J)4֘V5Ʉhq!jP? M^L'o+J[}oo?ϕh6cL2n4C1OQ#09+S۳2z{{ [jrg킶e[-4R)Y-@` ^Q;.JA.UZrÐPBE]]o\G+v1{*" a1.yZ,i+Nߗ%DZj[-ٺ .Es*xh>BnS^,(bqprQ,bQ,bQ,nNN`j 0wܨة''즜|NWWw}bc=s|= C(^Pn ~:6ՈrmbQjqy(sE!KuY'_k)0Vv+D+ aUus$p] x閯۔dEZ$ xC0C0C7' 5UCEz~,յDMr0 8{7w;Y6h}`ehs_v){ {;S\(q$eA A[ߟL@gŗ*`F2 "ol@qxK#]p~{O.s(;~jAU;wתX82G {e߽ɦK*lDTgE@Z$ Ϟ." -" - HW!چ1ǎӻO֊H t&D:EA8dx7+"mOBo7DHFw࿁&=\Ck=+dc0TN|D#6f"VrAr]7BeDw`w4g0 ^b0 c0 c0n8e-lj+Lzw^LQ&L)A{B€>粹/p^Apѷp~]s}zȫϹ|S$X\%H$ po֡P||6^,Pۅޫ[OjPۅ.v7Gm/%ROϞZ^S8io<9Gk7QnڍpHtKݛV>D u(jP,RiA"tudCwga@ O 宆- ha@ AtYKRğO2v'BZo.-ҠMdVy/=^^#G rdȐ,ά|}T+C ѢGT8.8pMnR]Kz0 ʱUanVX  \}bC>}0 .Lpa 9&@?ʆJMqz~;L +M'&&n$ymG]~=;^z"tY 5͋qZ &`Vj;O߉MͽXfa O,}.,va ]XͱKdvY9 W/:jA4tL=pFp27v+fz<|_h[ WW ^6f`Sbp 0ΈN9ۛjj>dBqެڅ'Zok^T\hBkZW(8ɛ_R[XH$ 4 KP)PHm7VKUT[ NP.Rhg5!n7wӓ> 5;bw9G}i˴VgN#ibHQ$Syv v2 o?~-}5}g+/FlMs̔#wwV߾{衲)%L>k0epmȃ@2a4{)WKg^B]0igFS.^\JAm5!@J|,y ᒅ꿛M?<u7oe9%//bWJ|KMpq"iHZDs':A ΃%yNP@u x)8۲>H}ESHqSy]/W6d''p%lL}aX^TO⳺b\=^VN%F#8Ǹ$*qq6Hffd }FB-Ega d'Z"MOqx6ꤥe`t9%%8N5$ ܔ|(D~Zmu,R9vF?wnNlez:+_Zغ݊ -ajՖbEG`+ÞըH<ꇮ_c(Knje%1\-H8 AS&*@4a]a!+%n\>wݠTas >(Y79Ȧe!މN2bB(1IGFB zgm9:F#%]7&:A`Ŏ!Aioal' ({Δ`]W(G [nדgq,ATczQƸOؾVRfƥ4~Js,t b;$W gڃϮ䆶%r` B-cCPwD+s))^aݧ/s>_Dc8Q}.ZWZTk޸[@ d_2Y}E+lE.;昫\ r"xǐ v-l ":zlkŲvJlD_ -FE(4#XL^ BXn;yHD;IA"9X~UX]K4.-Sdk ROa0Uݱ%Vcj Nxi [o?'Ŵ>ׯ5n%W[?ϫj@Sf 1de)t$4mVȤ*8fC}uq(f,@9ը&)&I靘w [/mc'eX$FN6qB2Pͱ4C5-)>DT%;!(ղS!o뢒d$ /5PRGn cdz4*$PSTSd:%j%G0BHOs% HY!4_ CT LI68qQ'U;ZO} ӹ*QyRRpz N2ʃa'6Vdu/h"PTR\qA+qa֨|-)IK$ MqCUe-zKJZ ؒ ID?lY-`H՜J.EҚHˑ1KNgdX9WacUKr&KԔXeeܠ|Q\ =; k!@h?y;5ve1 ”1H~&2bVrj0Bm-90̜-j9J̉랬aj1%u+athKy1͵aKpQJʺo@J ^ei.%؞Uyr޲E H)4 MOgg;ڎz8|t4s. 
#agӣMA|{4Z><-a9r$7h|TBM_>z ag!x8bq27߬|?{vb:mV?YÑhH|~If&ppBQL睴bf&xЮ"3Gf䖄S.DQn.V(KY.2|J/vLi$ ^<)p$4 qe2TfM$zc_6`upR:όMS,ybā*MxיRx Kt``܀kiW/'z: 6%(-p-j+pLG ;T"y~LQLâf߱%r 砙br2ùaNiWzœS 9Aȃ˸OZ*ot$&kKW=]ezҵr4ͻWHw{Z *l*"IOazj{ĠËH/D1x=#"InhѰ_v,0b,lVа04hְܳ6&`HJ-obUٙvָVƴB뭟rRas޲7q*3fJ5Ta_BJ#/yמ ;3ؾ(֨n@[8K O]v{JV3t>K/bOFzn E m">+uh"?v2{|f![n?=x:^fq #Fea>`ڜa=$ Gs, vp?QKVfJڴ/:D;v^J9 7,M:H*1τ༦⠴ sG}(υ|<2eqVtUk!ZĤyá0/x&j*LT7#Xfڔ,4cL*j̧<0\&3@sj&<ÊFL=tn^.Χ v갥=' xbr)L Deyhw6u`pغŖF;O4J5-@"ѢM؆s#i]/k`d5UyVRKq7tʋ}qxXTҺ6HAI5NGanq R\fEa*4ڇ8 --[QWZYOQc8btg8d/5gmcIULnp]9g Lpa\clz_^%HeܠlQj4vę,3 >r:zDsB 4)+83Bn #&(TKFR L)Q| 4H`Ɣl۲E9:T)TJKK0kl81!= Ϲq 8G((|,r wF0J0^L1 ]%]#ˁޏD7!Dc':-sݧ!5\%xf Dk(LPGv 7 گw-[Kp$-G^Fr%}t^nAs me ǓpH>C @ *ͪss6*ƻ/=> JX)a ձP`DkrnAh%bh2$A2K=,؍F0jrCo٢8@YF&q2NJj4~o(ڭy; [Sܨ5GJe0XZQ42+PGUݤm% "L M,6?XT `tc 0Zd+0.6pgS5AaaOۉk;i9 x̧z/CO33KXU&ȍfy*a :ǎRkE'!+G݇G M?__ћ?QO p8 InŲ M#pd1/SP]̟B0swI>7Wv 6 m0MkwOϯ$@"1@ͧls+ !{B-,?ne蟖KĶ|4 +G8v,ɝ~0ϳק׏-fy V'mt2i&[Yx ˇs, OB̫pѾ@MZe]ꩩ{f& 8Ů,KsSr.[㷬t^V)\] ;REf/wa.cu `*.~=>ep -%fge<-E*%€:p,t]zd1k_Ekj H߲9Eszt&".'qdRR(KzRMbiGɠ;]7DEp}eca#-c{)-4VbpRR.tl8ZpŒ [˴턐ydJ'7VW<2AS-yH90~P\\SYT_Gp*;o̘HFŗ"7q&ix\iy/ _P~Ai}i^![牗p\R rE8 ]5(w9uuPN5B2q??Hh?<Bl 5FM_qS2D|Aq ׿,wàolUQ9.ܺqfCZȺ%ekZY>!ƴ@2*FKȁZ"rlv e)Km*kZxq YyGJB0 s~Os&C0D TPNSFCӼ=6hJAkk1 I#2NK5ȃL$&u6Q.0j Y@{1Q8{wUԾ^dN/77Q9&``7yGJ=VI4-=E9Max:ONL=I|!eGyI i Ƒmo8!E/H.E~*@ɍV6D-&0 %g#냮İ7|?qɓ3I%GgqYs}E")y#18k|]1{{ vB iN EQRU5P%*7sk~tqfnޑ Ǹm_dI|3N :zаq ˯5ņtpi:^N"! a(]9\^ɻ觴҇=M@PT[X&OwcT`ݟAk7oYX,G]M*iI5R_o@Vtԥk!dOduݰ;84UBMw3gn<]⢍jo#1Ը`4 ɹO\n0i\ӢPfr.Y]W~Fz̈܂<Q5@gj/V& h]WDWk'qE$}FK4 Ӈ/*\QtDw[V/"Hڣx?hz* _5yfޮvk"Dri,_"?Ϟ£Kە+UŃi Q\w8 tW2X %֦#lf%[ӵl#UѠ_"Zw՟]"v8p-; ۦ+8DpM 휳K_ ? om۳Y> $F9IqRA'ϙ3̎cO19H0 !\ RmM@HXULJF;>.%RIIGU;@aN& t=A++UVB*7I@Nwٳ, ƕ0|PI$4d)i9^Je+zLUv&U@Q&VX‚qirS"^Pjk*4~K*O`ŋ:PDWDQlk"J uvuH;(LR"yߛ_EތX-寊S` r򘠏ئb8^y(}ZmUoiǡRMK(÷❥.Ӌ-2 9 (G33 93ѕTHﬔx/`<(BUwރ=yLۉo^g5ネM)QU$DTJ*h>KXgDw4'm;0[JL?l]D.u0V}B.SHMQAwuqJLeﭺ@ [o*Uٽre@f45cN*}dqECyi9h4O I]&r8qT ÕΑ\j,w9,2E3ԸErHWrCx"-W;nRQq)0~$D "BjACJVF{*en6cm,~]J<3V3DmfS𿮲gM 7we.*ogeq+`ST 9f ͂j+& ($3Y(=]H7빼.U.SRƻ9c)磻TH!|ōmmɡˆح22ҋM3R; +դ߄Öw 1A*i !R(`D[1[n T66fv a_4nP!D֛ۖ"B1YmJZ Dayԧ^'Q-Mw FuRiqq2r M劚]!9gPk>EF3:c|{ ٿz'Sh}~>RY *iSdCpݧ}.q'ٻ'Ve}Ne b؛-'5HՇ.qRFU@kUC=kz:wUJ֋N0ĴVp8Ⱦ:8iًj)jөSGQ Z ŠEmmQ>ZwWכMU"ZcQgQ k Y䳎LKѲKLVWP^vn>\@tjkgVe]e$F@DR.>ajKl'{[Ye9FRO6E~UUCfSiK1L`EXTSL{%*7jՀi˵ttZJ >BΆ‰4L@YȜH5!a w PaDm Zs^k0Bںq&;ZΘU!b~V3KZ&qr嬂(U*G !@+ :^Kcl(>)hZz:8uf:͹@Ȟ y}J9?t8FN`O!WUO&*^bz֙B%]FG@濧Qְ#w| BfvP2A &w {ِP8N~-0b{tGݯ@HusW-Ec/E"T|/BvːQ9ǑЏ"i 2%g)5y:9-8nr`/FY cgl/936eNi.,s\ pNjY\IȅB5@ί%r?Ο@lh M~#omn`!1k~HNg~}O*^oùho ly$h)5KX/TɆ[Tp~.$!K@i](SXGY'MV8!xiwITGwB  xUA1~D Dߦjl1ԖݼxX^nd|eZm~f2 4^hUn^t~i-[9CY7t'GKvuǛ/h/~)nc"fF`k$h>Z7A ڪYNku,PP"Z JeH`e9J{-٭-͞z% =}i["?r7??%tB<65~Mk<.[BkY+yGbmb5ț6\A&T%2&%hUXQ…7:+Tj% !O[p3 B˸zp_ӧp<-!hz,DN^.oV"]HjPn}E '͡4Y"[ȶPcBƺCF P\9?qkkä́(DNЗoɖ @v@#P~BHWuɑE?t8:+NJK`X5̞nЪԾ>xZ^*y4ݣ\OAKֱ]Gz z6'#QG`EU<*c*)@s\JV20U9ηu~35B]C4rt4,Ҝ5\i 'M\J0f?\t7] U+!R Mv w1ɮs7y;;\Eήo>(yb%,˱ǿ=~]zWyF4_^/SH,i:]^}>&:kr]/`+wY_m;E^dl5],k&0C Tն\KNw,ɎGXHE()<^g`xT/l4JÝT8 Q4@ Z; r},Voaq Ay89AwtӶcDZqUk|H Jo2Fog=z|^oaCiS.ovqo`7SW^c2pUI_5/%DX])G$y & w^o ~pVnFPwq/ΪkqQG&[?k&i2W{q'{1 3[#Uf o1#v駼zq2~JK9P3N,ڛ@rO@e;35\|u9L޵BCx|פּ76od 2!k.P7whZFĚՏ<]fuVXf`g#r@)u 2e{V 6pxHQi;D͆{u@ wCN!lT%L:CRkWO8 /M;r 0\>\d`u3x?#.\x"cr`7WCK~~p?ԛI;.6vA,>]eݮ *̔>0kn;!,zߩehЎ V8ŏjV`j=t [摯SP8s @1)&xłl6fh5N%·]xR O2O(sa@r+~fp,ʫUFjF)'m!*O( a܏H ?}<\I.3271H9$Ju<.oa/}#KER5|жO}byO8+(\cVh 0,M:-f3dn-6fz0xffImљm 2 Dy̼ZW3FdU/m+JZiD@G헳ӣ./.uGs :TLHΞܿٛO^]᝗/_Yϫ?Wƣ  ,Zβvb8}X$PjX瑓5xYj<S/K%j)k nHΨiغڡnD=S`E^Vޡ6,R]ءjߡjߡ%Q{4POx]ri{:6uBCISrx2mGMLx:m_\2YE3byKk;DSPM-Bg&S݇>NVWЌOW 
c?]a짫z짢F`z<3iix$m[E%q3˄!)cN ) ܹ &wlbkKGIE`Me޼}jcUہH?rÈ1<vBypȊ8!s71Bφú?vel7*$VDȵ))/l ژ.ڙ>gkwJlAI6o5A_֦Ӽ-E-Nwd1㽌?wq &mroAMjvWj19=Kmr"&עKa_ޝ'ie0ϧv@ ˑK8eC]2{9=Fl^ <{[͗ f>Or/G~!ru96D^SwLh*\`{<&}Ŝ,r\qU:\ mC$"'%?cBTO[* 7=o!Oơg?/nAd{{CB?OI4 P_ Q{LGb|B>POw8[?_+Bcad(Ѕ19H"WG4!=Tܭ/"Bq/N 12T sRvK„CE_ϧr1gT yV^W: (NOoפ=&: K%mFOߩYz\l>Er o.*^i_] o)1nF6,mJs(:e@I w+~>z ,YPh$;_(0p8a`s.Gx)\\0jUeCN QpD~p|ءr(ɮ{;C;vYqtv@qtݶ[!laL`~0ڔobU8pR竕B @#M312<SlUQ*4*~܊x w +꫷w`zw`z]`ڡ~'=̀g0<0 StK>^iR3w`z&ͩj`1.aP\c j= sՎEHH@>?]N;1@O+A`u}uYP)tqp&J> ϩQ?mc5h&#|܄ asK¤|g k-01_)iЧBRe P.483O ڃR6(K /44'\Rqj3:7fˠ-(\ ڣW z̥*ŸP.(*[2% bl609ò|d(s#sqߡX!ߟ?~Sq_U 78/62E=:x*5@X)-/fXyJg* cNhOim}Qw>CN$cE+rlz "u7eM=3C憞tؐ5`f2hȵlc}կN~&RMZIf2ת(hI45fh)O>HCq򬥽i}QKrW=[vGjעn qH b},ꥠ.&͡~Nz~GŇ`%ow{y3voŻQEzuc/2MRXi%mFe.λ븟'Ωށ*j匶,ζg%vC#5ڬ?6ROHvms0Rѫϵ4~atqʮ-ԇh< '}>/LC|LΌ YYUf ilK:p>F46Mf:-Ƣz;4%fǻɍ-4[[LrE鐍u`~lXR9VCX-ˤEZ^kbQr㸾!MP9v膦i\ZIS%f۞JḾ OZ4% b؛ػ޸n$W,vK2 2l&O%DI^Lf1}W9XN<ůu!UVXdEb}2ћ^iy7JMXEz/M6cJ rBa=Cw2t>%zޣYv䁎^CY.Ӳ]^~߿ۧ{>#~ٟJ4Rrʿx&*Au0 nFi|Ʈ fp4*7U ;L+mbӭ/4ZҎE  d, Q4[P,$Z! -rB==Đe{nڬc,RYKR2#l1lkPzeOG0 N'(,{h5E"hAbPqbϦ8U$ A82"n#˘|NFImT L1 Mͦ9*&DN1 f/7evX;安''ˌ,əp[=5;q2 <wW_3c}x1\x,]z]y5/6puYVmJhfG"ȓoK{l-#1yw@_zJzJUア+/~c t| yRq/!=1q{ #18Gxfa;2/px)#7>˻u1_]E׍\T9`!RbHQHRho|Lg}!dc(Bb6,)xd" 9X祉.υ2O7t=G ۗ"eq8ǮM3__*syy݇߈!Hnza\v،rnxSX+(sۀ#H<.]o<44#2s z8}'`k;(*%xɎA(҆%uSOI)fiygMx,~;-VX|g`w[8(~Zbb~|)Ys{qB8`)ڇ]Wv/-c,x 7x6anˏTxc6#N(_|Bc9B<#ɍ.}dxL W;0r^=S\6>/%u=MC*P6=|սD8!..#[=na]/|NLL@k:`7+>ϟ>n'c=8ZKTg [` _r=K rN÷i=I_6rgB8xAVDй੣Ӑr峕7Vjqx0;~g޽an[0#V;`c7 c~[>TًONK#`THtzoX?}pn'cRܧBCfڿh]FOB# =3Dd=]I@:|D)n`#ԃtnWɋ>Bkd (+?n$ʢ*;~{-B5/S^ | .tzsޜ=7gϗo.F7ALPK>&鵵9fa>9ſ_7)שrs^8&g5wnj%< yvw^:s|o1BD7Qu1^kpP6.g4Gwo"[rM^S!kte,t6ظz#s2+Qy^kܹq^:ޱ7^.;M|! .ia\m(1"2 "+zCD5q.*`A?|yfZ/]X.b1;*;ԯ'z{RSmJSΓ9@@ 1Lv(%YatGwHډTk*Fʪ)PWWYD%WgSdV&M R*q0L%{ M]m<2֢TѾm9#Iw51m$*F]µm*-#bgRl֧:G'0II{CJ4E*}1qy煗7:)}qN> vDGeQ)#}A`)ww0:eQ%`Mz$i^Ȑ%TX6Sj0Nڈ@v9 %CqGɤgCïn) 1@f̈́xUɹ!z5^JI`7Ck!bGw uҪ%kM ^B+a$J@&>P{[{)!g:Yr2iKHɟ;1 P9+7Lh, BNRG9N܁z S׀4I&O}ȟnۻ7]bm %ǎ&X;ς R$NE{g#GlfAoH(T Fh,BTyaf{{֖Oo -Z'hvr&AףqY}/6 4~ E#6C{&Tگ[e_W(rCG;7GNN,9JEfb\vb`C8eEwl̦X]ة`|0b,{cU%g-^XvxX{.(rC]AΒֽOr/ӃXS뱚KDh Y))ڰh2zļfx%Vp1 `0J)̤6wFKY#6w8U$-zig8QLцՄ%KV8@}d n X-zl5+XF[rs7fR!ԮnBnT@UbcRm{.mMrnKNex^g 6tf/ ަdrh%IAT^&I!ڔ*%gl |am1n,ΎҚ3kg)vXP.Â^q'$6w"9-)70*"M5 ZmIlr9[1sbeɎ4RKšp_D`+McOsw^SPWy,fj轍7nЩޘo !%]BٟLH "wȦT k˛~o0a#!()[H2FKŠZZXnŐkְ hG rb~؇emxv,[1G >&涻Z| ;_q`oz~A$l>`?z`0ױEPWj~;  8YEI?N釬9T7uzBGkqv<&k4{-'k `NE'>s?.F\21_/ږ4./VmdPKŚ:X-5%JCwz ?@gLh'kv%#jF!32@B*9vݕǐy 9 cXuݟu^_.P`^Pe(D C6%/e H\<WUnk64ޑݷȧ.~o?vMVlfDJY~\Ӵ{T4CYzfpEuU8'-iCoG9LVGYokkrV5/{Rjpm>l줲ge'9/vMh9&vN*}RDJxe1IŞ4n4n",9ط[}c[L)cD%#挔8 PBJ̙h̐H3SIm=D F&]ɴ*"Kx^| .)(3zRjjdDAC˾W#-0!)FKz'#Fsfmb,1E]Ц>&K!e-cnVrS[@ :;6]v'lf.WThH[8ٺD(1TZadP`, eB:e $VHHdҸz!E 9ՂD -eI̱Q\ï&ZV-rTm:.۸KYS}]O*͉=3͸rղuw[2߿}|^֮_mx7XBEvv7 I IjOdr{W~=`ƸdǻQY\(\'@ Nb{jCl.X qk.U3A!`vBF3~hMkY,Y筵9(ϩ[ɥJC-MW z5ݸtEdPkX~7T7wvGas饎*ytj3|)oZcvk{LܻDw¸d>C o MFy3ja5Xi i'G~nS4=O:} /<-<2*V+9q43)q{tKG#~<>Sͷ^6`dQ D[0jkOff|fB%DVQ[>!cnkz4U8 @qԙr1-y+KU~o~iLFLT[K2w,R_h}7с`Sw{:jۣ78]M}kW$toN^]E޺q'W!̱YC#HzCuJ .C w]?D]5rF0Fm(##0L:j61$z P6F8먱N 8rUAB {N*4~,Y_gOB *A:NsG.siP1]1X=zPLd.Y|D`aCH'^,3h=5vc_/=nĚ9.Q eIv+7XkEĝG".8θſӊȐ(K`FcҦ섰"9֌*q;s eR"$PH#1ߎO;Vh5v0U^/LkWyn ]"V_T9Ȁ+߿g >iXKv0کְrφ ͏͇CMfC`KPt<3mȼjbvjP$#Tį\Bm]rR(@ݓV/|+[lA[6}uUpmyǁeksl\k[^?vf ѵl ʗBɲrۦza5f|0mf?"܌Z8\YMfyi|lz0^Zmɗ1ɌW$UBֽ[wf?ȇ'|%8+IQrnKS!UsV͵NHRBy^f~(KgN|R&@W@kA"!*A!78,"1 8"12ajKPr7 O#Aarƭ혶J[-菻M&U(19d@0v-*H0 Qƛx|/4JAmQ뼉sxS3]Xx)ug(S&7^ } 1;]VX~Q%)P?w4HzC05j'A3޻4ik͸|T44Kh11 h69#MO|<-xgwxkcD*:`, "f_b$dFc!'qHp v=S_^LdF^݉^xM!f'Kb`fk.9'xǵ'?]UiV^Hʐ2w -Cz/CikJ!qĤBcDG"qV(L@X*;t݉jtOi{}7/һ=OOb(q 
C͇.0}@r'py@ ji~KIT#5¬¡%TL@+\q ^;[!BS !1!bL@H)*KXkb seBmҸ0%Û(_9ė/%yGR8 e[ykNpcҴ5~w0b-Wy#!5k'KqɭůR= 窜j8]XHhPVښsU\y7qԁi&xkHwsZC2yV֠=c>B =)I/ae]պ!@۝я3G?`jyZixD<-W)vc;w6NWMJXt|,&;74QEe6~v:ޔ|MWu%*(c}#]{tGϦi. eWs6X~2PCR&hmJk#p 9"b6&][BǢhz gz㸑_%؇y))q}v$晑E)FRϥ5ˌ5bK,~U*u!3{9 FL̬LL\͙ 0J$޳ZL[yCx#8b+WzɲVŵ^, WYPZSruJ,^t\ךﶬdzOl$Nh™= ڥ9 mb,Q$̕T4:A8E#B ASzip it5?iĐӯv #T:@@ZVF}!l뜑6hD@_/Nϲx0Ʒ02 *GyR?ݎ*~ !a<?!Rëk]# fiN_,}j+9pqЌ7B0e0Lz]iG}C3҈b!CkWI%oҠ K~,\;ü~L=O )a<%̞fo@t[HVrtinH%= 'Ɍf*0]'a4c&|}U|@g g QPRüs2A_L.΃,)lkj"F{շ U kK HUtUI<&ӻI},cɼx;;x{櫝toCIϺ=z{w>1% e"wgfK6]p&^A,0B[ ֢|ڬ]AOfjq2~{ uI24ޞòZhRX9wThrƟ*/z6Vj  (-7O{Ḿ( }G,Ym;gFWhͻS^VdN 4!btQA DօRFJ 3JoqDcV !HM9/K!ˢ벮ug$Ð7i8h)c6XT/*U"H":O)%j*;/{A{Oޝl cϟ|Y8)Е6:ʱTHa<3,B0ƣkfrhaPGe{,8FJ#Evne.Zp%Du|W3jGq.>/4碯<+usp`|/hr[̤D4_* [X3d i;d7OwI AI\E @^u>=f_Z,,<"X wu6jx,Xg⊂,g-kcyk*5gܗ/4 Zװ^FƷ8&D*fuJpWHrhV߅&h%-j1,+uG]יO)!)WWWd$AY4Ij Mzu*yJ^穒yWh2 Ѥ͚#+ %Ӡ)\g"7F[RZD?.hRׁ:M:x|sY> zDLY'UN iOK4z(ɭʱ\$z{!9E^Br^3lyn4kNuCv bA4m*]@0#9AxGaۆs/$ fur@Ep:n"9>"c>>k Պ@k5Y0CfbCאnji7RP6^s]OK6Ws`Qsjl=Gw`9e3{Rhˍhft+xr2Qw_wZAn@ m#?riBM&zkev{QsGePpbdHH8%{dC!!Gw}Gǧo/i^)&&h :ML=K 頂 Rgb*J.(#Jҗ+nK5.HVN.oҗ~=)V w]ndݟoܳ,9 :w 4"<.C;c637-&/P4!Ļ:MCGʟD5aCV:HSnjZPJ DjbɥBIon eUfR4:'H: +4B Jp^-RH~E/gթԲtFŔ/I}ȔtiZGz˪ jp|ZҢҘtbHaLI + G/wA9[}{R{G槗ٛ7D*)qRxĨ 吺ܜ nmjH#,5txXčΦۯҕ>{f"MO)ޞ֭{rBs-d89oO>%KX9%Ksxy!Yu~>,)_1PfnR:d-ٯv7Ǯ^{7vuB\fn:1^-pul`Q],"S5؍E2&: \zfR5[C7RBԊuABԂ`;P֘=t벍$S™6Gro0S1SfR]䁀|8˫d>>Tq)3-jVq(Rhof dX;hclγ}r QW׻?HFHʂ9 hU&Kah1ݦ_14HGl/HvMlcC\Yfa5L՗2ȜQ@EYX7qk" RCv1t"5$/Z}pvy銌RCZpibК e/!:nBnߥnξ[~l52~_w?0=<ȴǣ3k1LwLZllzx|2QL  ;o^i| mK8 8%D97{v ns Z0OҘCi~wzM>g?#0lO7t~Q' v ͗ݜF9u#wFJ%!fs ˃ah邒1iNi#ipׂv[@ g3C+Hqat2j+#IiczJ1Q$tAАzΊjF]A Pՙ4U͗zOug )$@%狒1 H|JkQH#KCUK5QAQpqa*%AKvr`AjW:jԅixV@GA4`<"%KSpR⡜^^Z`>l_roOSZT&U߹ϋD߹] OnZ8zcvJC~L_䆗JPvZ{:E6 ;E-RK ;,QFUvֿOAqɀ ?:DZg'?`aG&⢍)7)A ѩk-snz)E?v9I%p1IƳ7piVI8E-476l.Re?"S%9q~[!L^>+g6 @P+ujL~zj4WcNpDZZX<+};ail7@,CW6K(KQWo ʗҖue knv (T:Zn|\UV.A4 |Z iJド uĕI`TQj%(UGl9=NRFDXX p=Yѩu\-!VKY %gRc dݐZk@9A+(cq;EhDwe=rIe :ȣ?x%1 < IT]O$hvٕI2YF G_ddddJ*@d_2U$܈akN: /yi!Eƕhj1xYA46(sG/ 92KV~KUQ2#krxDQ4;y@jy>$RVK>>I;6*|Q+V7|XEu ݻazq%1#{ 7B1Kw]e5zt0?.bUadm : 55%Z,YNJB"-eYd)YfiA q?Q?džщB [ڦuhi֞\sTC1?tjs]csyFt7|mrCN?vM #;/?>x ?!ix ?=:d G(3zxK.9p8%[4 y6R#[ݩGE+XFu&^Q)]Y5Oe7,Wbzqҝ #YĿ`K/p{my dz5- ]f2Ʒt52k|rC^~>i?}2Z*C#f5=̈́? 8yu;&,ۆwN"}ظ=|Oxgwr={0~ kOB f4~-f[1ݽSꈮVR4WPcF6vgɣ:o2#Kg7KDpiIU){5ztê#t oDWG8\HMN$.iyN@\?YZd4T}Zmly!HMr0 M{;H}}шrCZ19|R9pSRV2v".dA\LQ9+irRnzI7RGWjC3+b cd͝C]ɩZ"ʩkF,3xA,]њ4(ى dT\LNƨZcNM[aE6oc"Taꊋnk5ح̉ ·csxnj9awWDIEsE-lM?sᡂ1"=aGO1j<@442G28sF3d %׵s".MUMIPS9SSlSQJAkʰ]@3Hz0)Q M&Q4P=eoCQGIbKwq=͍3EUu\I4S6)judg#N8MrOO5?s7z]);\dYT% M%Cn?.j-K0]rtwi\~glpPrH(Kr.nROYd׺,4ni*(fu9Iԃ}zjbƅ!pR;jo{^ x=Yau]'|LpJ> j"}G}*hST;d2>!K=$MX M/x+` j~a{ 7? BElbpLI7HXA eء oXἛ^Cޟw t͢Tϻ/4 ]"]{ɔTlEQZO{1ni5TZ݇i}#>XLQddfuI 3E)yicr2 rffT.d-5N X%[k H%)8fҌ4<3 XʓP)rSZky"8z0)VQFm||XCiIwFUI_\"r/}Qҹ\EbSC%C$Wx f8 85I *D ʕ(n T2D.>LuNpT*I5k\eFnXn,jr }nǧۻŎb"K}YWwq&jfGR;wDv8ANn+M̙d~Y#/>B'p- _) ]9HX-xj.S"hQF\E+vp6I"O4IfܾaS'-qvny]9ު@1Φ{t/ڛ6LHq5`7EN;,:rۻ3j(> EE fb$'1+U']'jn$,d>%&Op%^r\q',:OR("IB8{3j2*J9QJ#(, '1TFE`ҖYJ̞zփtb塗lFck\{RTe;% ϟˈ2_-C,w?fsZ%)|H=phskԾMO4#5Q;bd$+Sd!, ?}򔕱d:6acWp'V u>B u ėҟ^r K RǶb'PMQ9{w7~[;ǬE[{ n\k]>831Uu4StpZ|^j3(xm^~LmY_a+r#E1܆> (,5tTt YT0̀ã23_*;b2B1#Fqc';zZ}9bj@Bh'Ahl^"$b2)SIdRf7VrIb%PZxnYaO^ 0:_WCUzxxEWLڙ@U +Js1oꝳ'[{7(7EȇgP1ʲt%ɿn??}IʹQiIeR%rdUZ jrⓊ[Z#RQui hVL(d0*2[(T58ZqU,IA\@Z*ڂ1E. 
Eb5qs+pL8[Q/Wɿ{==pgG* Zjsf/'^o蒲aZ]}gcr`@}p{)>P|v>S]f2yJS@*T-ǝAȜ CN}>D3Yk*ݞllH ?s M6 U{%z)J$轊z8k!|:$h<3['Z64J*6>kqG\)joة2dbWP[܁2 k ϖM1]ЦqUl:2 FE >VI%bzǯzg!~#T.ǯ5=|n?K{G JE/J|;w,8>< ]3{}gV {jw?;>A>"Nj*M~iCy  y4ֈ&rLMV*;2lWkd?uK8mZn *K Y;<7LE2$2K qR"\Ųz)7w2)#KP;S` a K ƿKc:OߗT@<P-x;Q:Wמ'cd\/e<fj!g]t$|Ί2  Ƃjst>~zc2.烥7Bׁ75DyOlBD(Bhfl^Kc<8q1К^m3ωxQ򱒦w769\UԺPݗg廇~G@wHGA2cJ*O@(slBzrТG<8)5nh$8HҞB I{m[ƔTUoWS< g[3 J #蝾#MB/A?sS&;#m%˞P#@]of (#\v #apV ~ $ x>* X =H CʇpuS^jJ#eW#Ã57agLݬFoQ3B4 yp7軾0>I45;@mžW>m 6+ltl"(QyB ?ٰsjՏkXs|*/s!sj;R/‘b$]c ^_>RQ{ λw~uyM )-lA첎9-gz٢dl΅@mp<_ ^`R2qP{exq%ic3|: h&;xo9?oZ Yß4TLprvDn) 8<LJݮʥy0Y2j2rwqs'|77&$Go;u(HL:t$Dz"XJ)ތcem" 1-E IYcR|Vx&a'N lT,*1j?L?7BٱŝV9f7 (06~6v7Dϳ;kQ{Wޗ?j8[.Td7Wez"T1n~`sQW+\l.lP> )R+͈R\ePTbƊI*`$lTbBs[_K/^VֵFЉ^.BpԺUl]yv{~<'Z3ͷo&-mlS5B)X(cJ'v?4K΄=`Y{CuzXY{d)+Ilgl/zmP-TK]R^>pCFٙOvhR<-i58ȍ|}Mxc&PmW]dZks5&*5O)2]_΂t=h&IVMf9)A!GQ,Y]@(ʥ*," ;:dI92itu΢V1Oeqc3كBv7=VZ1vHz[=9{ YK`bozBu.Mq _~SɌ0b aN<)]0RaQQLThlBEUz0TD9OLw vM9S> 7zGx8Gà^1 Bނ[LGB#7Y6oB_ H 1ux $g SZ),R_ ,NIc!4&$9-d@mK~ƲӅ++e૟'=Z9w="V )eј"P" IJ26s<qLz}0s3-O>63BN(IGYZQQ6tu3ѝ tL<%ΙmP㐆YiۛI3n_kds{YdG"yB2mϋNCc,Y\U@VfVbsΘKkʮWY_>pAB -Ai"j5KypRMii`9`Oi";uœker(HiOe|S?WWeY$kP)ޮw%L^$<'2}g>`dN]iY ّZ-0)Jb` `RX@Af OIK^j!1EHQA J)Q+&2jR*+w@`]c%OqoeUJIYOk)ˀIj:D#&v n"Ӽ-9UUG>Q%NT).ee6 RV!ȁBG.bQ} jTyr.Bik ^ w YUpOX[?ΚY:U5J+Y81 AHriڣGJGj"(<2as]zWh__vs{jiFR^/{XZ  yLig8Oex{+s*Fdr%J%Kl A*2 M{&" 1"UUTvFK5S7))-P*IE$\`p"&,Z$Ǩ ( An [m&H RCQd ;"AjɳjJD R[רbtFyuExQ^47'H.t$RPCt(FDX{&1Te߈xD|oBC Zo/7E}S+?IE#q%Ad؋CD,9>ոΫg-C'$C'ߔftH+"- `-ֆ!(r,`ϭlK@&֜XG#Pp胜Cjo2D2'g%I F)J^ #g#(dӾbOľF&!%h= I~90C $ø&n9 *<QlH,l|AǼIxkU::՛.\cD U|e?ؿ;GvhP #z(#Aڱe$:liTbe%C1ho#A 6d'YY +dا5/=r%R ]`VɘHIt jnf`.>M-n/a7'W=<׿WȰVZßh YGzǾ ŧ07$W7Llڿz*gB\͵˽͇{{_BZSʱw$yg~qlG~ tbԷ|YW "i=!V2I&D$dI>-VaS!*{Cp0| Gf&ύ"-󈛵}bV-V/^gje^_t}~:_ X<8wgڴYSX߶[eڕq+7lWlKw.d=lu?%,QVB*7@))l?k m'lqBx[Pj%b 6&Q)tK|7D=VIRI!4}%O?KulpҶ$LIRBZh 08 nay?m|<=P3a43yHNw)9n@ق`hXg *7 Y-n3mkSꅣxA-ǎ@vWm Ʊ//a]!r^$ٴז45kВez{יd} sU} s'T!ƿVk֐~yz} |UwgW)dTGig:.&]ʺfU}RVS37lLӮt9W9?oqyÇq f6"c8{lo[<5x5'VٙNVV˗g7_!O+v>7[?pcɐ>4q\Cbf5o&*%=Qbz(W7dqQ.v-|^'vo{; ˽5x0Ngoodwggf;I( }bo=A31CW7dqsu;]gH:Pk#}T8v~)I8@Qɜ"Yn,qDOpL&8ԭ6gV=[Z ZP3ɱ˂5JL"n!I7{媺mt)HbƘ) (]* > pYj@PyK Z[,٬BTNS `J4uՂ +i " HjyCL2i K0)WR•)#TgR*:#LX滚~n.{(cHy7O# wb@Yhw-h+5u/z8Yg`p=;3+&Q|~.άd.]⽏ qh49#v;3_ z yJ[.ЌIY{ƌyÌԌB5uzh,;X2iGlt@7SƧS/Yhw̱`URtrh/u++poHg +Bt4 :YWYBԂslWeَٙNXmW6GFϮaɘ C=~4z;{YIqABEYCъi[^ݐ9)_emFn JaQ0.֔CB3נM̩N-x i}ҔuD(/x <0G!7 `9lH#k5*Ph (j%1 O_,LdچKߧHOKkfK)szy-vZprW<3Z]MM} )f}c]sfm_.i:v1p7;M"}H/T@2 "Hѡ!Ow^40[ڀ[AF,@F[!8 Ad P3f8 ,UP=hKbӌ׀[ I9fY"9F!B ~Jj&m$0!,3ZY.M.W?OѽD[0wϏ=$ߟ_?$@'gJtB ,_'po_ SK7zח^b{,_~7W?>ӗ~|׏9tt8\|~='[~SmL繳B/Eng'EOY4x82-D)K@C3k7 4}[$6\,S܃KdkC#ado{~2K> 5x$q++ʵ%6I[ܘ05ӵ%9|`(Ogq8נ{cǷ } C&|h{w`9j؏?E ATX*|}>ԌI =o!X@ރ6=60z9yLeFx/pr}JHv t4~y$09~F7A8({9]og_O~ ~p͛ߧj/ujF -c)8m.sOhm$pZt-gz]t!6kqe?}8&73/!"l 9|iXWbwѩ"Ȁ6 .sgDUh[ ՎjagtN8漓|G#x7 M%xw ޝ-c _b {w(^M<|. 
҂f̭\Y`MFHo^/e,yp rG].HcHsICGW{3rh`rÅ*[4֜hHjX]ح[)msUNG2=SucV pL1j&x\-FEgva)_>rƉ\!CӝRrT=]Z4-P`ǵL?eǷPKd4f4%a Ɔ!3$ːP̐omm$InaBk^Z]=R.h,l _О^FdKoE],[|]5F ͕0|n6f:v;d#&U_9ZQ3{P+*$of# ˨t6&~ٳm[Ark?=q ?W~W2JK5v[p)}g' ۋz bSrO؛[{;_5x8k 1_nC[<&PU%!ZpGy\w=M9!EI#ui?)G+KFsү WJDrKܚqlU9kNϴcEe%UVP;C\E*|5A%)jIJ)*Ei^+#J%UEs+XXUVE+ފSo\]*\@T]6h~D*<1Y`iNR!jT:g:r^akls(EX%N'P󙥕91/ju(D%r(Eh5Ki)^!*?5ճD]Sȅbx_W;x؇\N)e wZK9B#LF[T{6ɈB?pۄ]ng[g;ۚiA@ ժ9j}P1'=X^#_;$-D͚g=Q5HT*47=~*hKePooFEz V+yɑlDaR'Qkb"6α ?؉:4̃Bme,L9ܻvvzooCQZ1I^1' oTi7-AeC+,0pTLt)[9&2xڏ`}Zl[r{@cRQmexXtN:G]RghW$M߀U"VgOu q7U ;5iwFE ]Ss]V朗dO4Mb'7=I\ ~-1߼~{uJ8Gb?{{ 8*R"p:6K4mLݰ=K͔0 3!vv)$cQxJ2zIFbKuoOE ] +,ZY=f`0f3PC)H(mzx^PL Ƹ6J>[+3͢Q]YZm쨅Ǝ|M)Vu̡˽ \p!]5eMU[$ҧIl ].m16=n-kOC$Ս3Zƨ4*էaiJrW4Y}6N|HTctTLJq+^J^{mGJ K2M͔0}PVzg3SmԼEUT r~I~q^¨ MzWIw.AO @Ue{V- Za@;3fLk kE൶ \+x{9+ |bePfȝ/&ZP%F!';t2wx.}E:=߁hr:̎ϸ$7Iف3M~Z nL9P߷P/7 |Uw#*0}f4 &ArU(g>]r4_H)~%gyٟ˟,@3r9gHN~HAbjHK*J*AZ)I" ?+%un{t.6"nV4,ٲ}L% x'hZ˚qz*]E:*6ixĩRpE IE8BEg! ѡ2XsMrG+ lM͓3eHHUJEL|2q !ֻtUΕl u+N>x'^(}o77*nBi(BF>).Y`Xa# ]MiLskh;I͍!\-xM( 30Љ/t/$B|, ef"6Ϯ_mlR3ċR.x~}mII>ܥQ Fz@A9 hW)B%\.  k"@4t.+]W" Ne 7]%m+โ7UkX)%+F4Xl% "r"bΗ`L!n5xL W>R JԵWa-+S)5M%Ipu5LRBj:\|D>on'Y`lVt3,;>3o.Yo]賸k>OӾŒQ)V4ɽ.r{ vifp/Δ4=ޱiίbj rH^<歄Bޜ0hpkmO{EInERt::$X,U>ԕ v`+ \ҴppوE۞<áit >u D-rbGM9_<Zdl,zł m>xND5Gz{LCz~׬vˉ6xE&WjaLGS7'QȓSJIj\V]յw4q-~~E\>El%_˃2/<7 DKX"(,F.3L2:\bJ2.6B\^߯>܅4suNuV a4_Cr:p}n93aK,i!6i'-+ʌ"PGjD'({A>*=Nq& zWY_e]uUUW=$!\FƵ$^@cш dZבP+qZJ2M-ԴFu6&PE p~XTq\|e"~(-*tѕaQeq#fE>;JPξ6Đww8\,p ZYa~L46{_?qaY> Ws \0ynd$("JW&3TkA[ jr8 [0UZ^)όq/3x8Hs:P⌥@&tJnHb Jz^VkB ?7n^Stts\|7?2j2'{I+@ew#;|`(k;ܶL[ +ѓ}0׬CN wg[ܙlEDG:: &Fdt6ڻjW9)iehjg imk/ie˓"w3Dw{3mm9RRj6AH=RρP!Z1,Lp$8/ N颅t1B9\* 1h'p1`U2,WYd_eɾJvy (QR*dBP7|ȒFXx2iW lP#O`qL)ў_kOSP*1"b~$#QL"PԱ% J0ew*@ 8$&ibYlj0sN'聡J/1}I]t r!.;g:šQv7w~tzgYQPzۼYW<~z||pЫ,N˧/š);؜w˱"@$r'_>}Bb?ˮml%#Lx q??*||A;fJS^Q0JV3\^/cvoa ZIǒo}yqߕ}ȝ$QwnΌ03|FwKQ/JRz]X1Q[l*grBWy\+s) Y@fr;岃2c2 {qc@.;|kZYukmbGk#hd{  IA:K!u!bډmj/^1j,v,jj7'ƸUQJ4 D|<(L4BƔp0 0\ 0E5L2g hջ 6 /lW9nL; ONnE *2pK"uctu3dFSp3;C€Q@؍X_x맶5ՄV.-dO݋?*d)Ւ3zW7l=F5CPDbA(V@ ^DP8%܈8 ȢI4`sVˈ ĬeVrm`uJ[g'OA*To6oBhA+U;pto@ͷ\;۷l[sKׂS!BS#*퍘 >&D6UT1 rQ$'/Hp\-6clƍ奊dϣ`YS8L#=fa"ϯX3e!h#СE!nko2 .}w/[\Oh T~w?p)J|'4%*(ݤ{s~L"w]uAOWkwO+?eD$`G$ʢ?E2*)F/ArQ֬] z1?'2'/o={2$=_H\m?[*7$>pm?Q}ٲ0v[8{. )mk/M.XWI+{Zt`Ӵ-%wEXOY+\v>#JBR C߃_>RBlkXb%`bؓmG\Ơ*=*dCsB eUȬyLδu ` StFjvJ]g`]G׎F*9AǗ|˗QްJTѝeoށ3PkKx 3JTg ׶ݰNV^dwҥNN\v]~=r;m.JMz8ŗCO//ˡE^]tˡJ¤2 Q;9|w֞n+@ Ӳ۩ giqHyb$ڢ;Z{w?yBq-elVwCĴm{wj:HKjXom'?BQ\;8evi^!R׮TW^{,p ,^,L. 
Xt`QR>G{'rKZI^NF;(|QfJ̟9i͖[wt\PVNQ7kbb 11?TRZ]WFː{ YLZrkCp1W**K]D+] QlɪqqJ':(K> 8GT"2>94&+t0~rń3?h 6ZE{ÒktS3*h}6{Y#JoFJ'1,z)q"%}hѸKMV345N*X3z$G@kL)j/>[F81pkϗٰ[db(+ϖE%St)7Ώ.eS YHȖ@3cPN[ƑZpM/W`FOQ%p}?UÒn.+0?$V+Sa뾟 I*1Ktа) eLi; }pk K`]D+MFu ̲[MdI pVj)1- 9!ҙ-'&PY>eŘ$aj<ɟ : ti졡KH%mbBN'43a$#9Exm-VҭZ)Sh>>FS/?;*ǩ_'ɮiĘG$# ABƙ()*gnU))hcoQGHEc.L&!Oԛi*7;1T嵣3a>C#]g7uiROhד=i{o,~K sdWs"٭LܭL@}9c=BZhnEjR_!,EWiCWak_'Ce)vv8{){ba82+4ȈԝA:̈́m'v Glx0ƿy!Z2loqK/YDݼ_ "6&K)]vU`RʖmeAwu<7Y{qto1hћ7xK[Ut5J]=[JhNs Ռ׈xPM/Z]2LE_D"*Y0'agQ;}$ 27,>jO}rs@..[BdL`uU ؇l:6^IR Dv:T NSh+'iI} ";x.(s(x fh{Kl (+HBN(()zC6GJxrl>wAk\2y}ot/ػČfadή V=c+#ct H(bK \tFcAX;<{JsC0غ"x%Qw(mW <~}e~M1TDG4[~sedW {VԳ!c GfwԨSE2yVC!4Y'(Vo',@2 VQm8eSҏ:\ۇ}H hKC)141D"EM~)v3$.Vwz_o3rb2c; i;ΗH7k$$P(!!4"sUF; 7Q$(c-xfEP*IL!;}낖 WМNӧ WuNd^d"IO߹zo@G| Hrr+"U(#`beTVT"" m]RZ8$OH(kp kcl:`ERHR OW I`Ű&[1}GḎ;.3y[Lhy0v-"b e.~^>cmxF30cƗ +=5Bưp+6YIvSWwH ۼwK rAm=)˫+>z$8VbEJR)kMעJ oEr:"ֹ"GF9iS tW(ŅgEQ$V#ђ;ʡ?yJ@ xa5ǯ.ʁ-UA9%h]{JQT9\^B9^"|M.-;h/>R{]7իe5ᄑ,f#ĭ{U{e֕{"[RP#tEWهd7lO*bYwiv4LKRRpDsΆKý;%,Wp|cy6q1$ %`_a{kň|I!?O;/4bleMg̿O랍Qh rxz ^g]e};xu(@ϦrYBff|"$ۢ,!dB[7}E)_GAqzPIMՈyuɛ-T#|%"P l'Б*Ħ"[ JY `MEb إH]AhI9nGRguL4["r8Րr>Om!/< BSP*2XZ&B@Kp" LYj0`Rrm9!l9uCMfrsrM%" l9nq?bsjXT9:R'xD qxt`?264s|X 85O: (Za,\X R%mMRY?B6OFΒYa> 74&28:\'װNlPmUoو`!'"rD  7]fۈzRz3.#:>]6b貅Akf]"c묶;xSJmYLp X &+3s]+ԞKWUE]cN=qQ)[/9xO\w H}k0׮z|Hrg% tYqLJUE56ph~.ED J8!%i*d 6|~@ PdX)c0,SP;qPj8 S6u!h@p-@qȰ>bS!/6'# W_ o vϯ tFojC7Aw8-\6o_&z:]~ٝ2 % "a@ʋ<)eTh%̾,QǠsag28k`)d&08?F&<%6!D+J55BJX6% b2 RA(+]PnYW5Em;ܙAlܕ0d!*ڶ +ʊ#S3!i<>)B`5UFM'~TkO 9TE[RrLmUGu1 wM+'2WV&pCbCDg a7{'p1 ®ϱ9R7sJ GjGngjp(]8Ǽ6^rI|}$Rre] +QרjZb@dΒ5\[ EDB`D'iRP[v|n<~slN{aQ#AVVl,ID sd4ZtvW~ SBH; ƅn_%ڙs`Đ/!;RJ&sYwjK!d>Ezv}G濻x GoQj?&W\M&̉;BP(~Wܐ973q}hNUJ (@Rwp`"UPbj1bGa' ~mw}3Xutu:>t3y LL"֏ o1[iLϞ &$x?l>lngSdۯK͵_ֿ;_L :a2@* /U9}X=+` %#0mx RR;[4 @U@ 2[[`,%&BP}y5F@]]WOIHY۝yfی2&2->\oiiR쎭@?ަ+ 6}oo?N&hMK?Ubg댭 S_6Nrcl_fzDel=q[Ѥܹ"ji ܸ{`^_Ra\XK!Ywz_c&JYûbGl⽏^4ޓ-.T_b)<:#Ρl0?3<'_84)%%4!)YLDI7-dODe/PHr D}TP0 FGWW% sۮo(ᤡ>4 Z] VZ`7n+ gaEz(eZ4r-.Q /c!2$ڗNYL-^ucҝ͋BTƤ O1-cxm:n%+* FQ ^}\+crUG0AXq E^O 693w#Q2Iܓi7>*5ʺn>~TF.v7%X e4oh{e4>)IS揻~+.5Y/reĖz;sTVJ.`^CD)!E_Ϸv\F/y'e,W,,.'N6xZư ,M7$Rԟt&IJ~ b1|/Wf o4ڑQ~EUt) I U|޿ʋ Ac7:/6hKFN֚ \i$J6cLV?hb<\K쇋$݈JJԙUk"b~8帳=AtN0 džd9P5 *jC@M!UTֈ%jʦ{A-LK*e+4sM#JAI͈H?hcUC%zIV^7&Yy$_^7i@"Z蹝=z81h!m݇o / W+WJ/ǴftOm3(b%O#x֞1 p{[U1*-WA݅߂>0F!(vi(ysէq!mr>\2 p;{s o~7 :M1$IfFj);TПɈ"OY^Ȧ"v1O= ?n>wwC G 2ٹ@sLA BqC?@h):ŌJ\3^HP)rW؇lܔrY).`mjXJv*jOQ׳Im4zg nPQ ~o9JPb?~ aF?&n^0dEXx?5.Fi|?yo?}~x{j{D w;/cw>Q-^\#[6pL(ptݼ3E} ZKm/f]۾.3Bq5(a^C)[式¡wKJP"K"\*j r.I~I#51-]v\q Lyw4_9=Ynp ?hn@2 ~v2709 _VV/S<Sq?Yh%;oVx_̯Vyja-K|dYo,Ol>8}ݷad'N.8 (6<]b>b)g3N_UP6lEY:a]GtĞ$"S`hr__LG ?GzфEp|NJ |8u3!:xoW[ DuqG܈s鯙* A7v,]|'hUf[f˙W7W5_ߋ)]1N;\^`\"hUPIQ_1̠ $%Dx}UB03:H8₯d&2SCiRzKGBu GJa@l39oؿ22<=ɑJ|o!&O*SÈwF\;`b2,fK,Q:L,7"s6>Ϸ!=4 hGmf>NFGaOg>CbZ?|o3URi~)7Q%;_Hݱ4 4Nس!R;ii LɚZO|0?g?x|hi63/jװGyt @Fa sb ˩W8 B:q[? SpȼeMQ$j:)P*'m۠[Xl']c8ڐH*2 ׎M/XI:A9 )"wTe p:a /qKg3:q*BGGEErcYPRtzN0 O s&7pƍJۋl{< #rd2ʩ8h S.BfQDWxa΅ ׬ Fڌs&S|ɔܡ d\r,1101vǴr򊪘dɅJܬn,$EQ34OJ"pIwJhf6;6."'9Aa2nTxPsO,ںtZA*Ar̓ebt]I9#OMv$5NteJ ( Xɤv#XɗkEW p]"a,Bxkk!mNr 2 1kr`ӏwc~"7&3y2s'3y2s],B%Jz%STDLJ6P@BBki24xG嬐-Ԏ-P+sF+IťaIV$rcRS F!9P`PN-$ka!ph `:go"L'qTHRJB) "*h%d58T@a x٤l{&9k<\N0#*g由#i&br4d#O5ןYE߭03'Ŷ]k? :FXZ-([v̍Z"XoKEJKk;h 8d'Rɒ^KO2og)<C"6>Hv@Q§/3;0,r"x?<Q/囈I^[Z1B+|Xh, gމw"^Pi^Hy_I (*'%J2J|B%UB11aY _)RSf}&~G,w|LIJZmxy=jKer4\.nϾ8E`o3@%!}5V񚭢CĖ'~Xgub-f]=/ŕՅ祸KYfbtb@\)hKؼڃr9[-IyfK52X8ĹOtrsʛ:|F+&39J~*y wDZƯ#-. wJ2`0Ј HRr! 
u .sg9Uʃ Α..5P~s$+qXF=d(b BB^;p`"EKrX>?Ӗ)yu=\#|?qaV o0y1Y'ڼm9WIHx |2Zr` iH dj1M\0so/Lh2K%h{ߵ Ӫ[-a].=J ﱶW\*B|4r,uI J(25-ԈS^'d$cAh$Ng$wI%I`V8$ԔdnڐXaY )9¹Jj3oᡖUlD*B3k)Ab9Y%C :)Z{tP@KgKya%u_AҲ,~j\]e5ֱtU6b@` 2WӧSf8|s Jr,[/i=J18zs"8}s>}ni%O`WZ5}<2%;wZ"4 Z e= ȝ@GkBӬpC*+1>v1LJ|d~\}~su}3}@` <֘n@ΐJonר5԰Sb504/nN`"c蘠"LŭPRnp`2w5-|Z/>&Ab{+M6g}u;/+5UEoMkH3by@ ׳:9\] pat}2W\qo(oJwejͳHp[. [bfN_"Q;65/&Zvݗ\Xi~8*=%}uSW%h!078x>F?{Wƍ/ܹv臁 w,/4X,[Io8{XRK;Kd)Vx}\C/4U`мxoUrۡcI{6q}NwPFt<|)VJ*2'ナǝ V}`E|}QIV \0RN_Je{TUj@y)(/S&) kV~VpCW$W)̥_}h~8zJ^)\͠Eb.%Ĩn}W9 i*dO1x:jg PBS:uk'm{(w s~4Iv3q{k#Q9+-I!R oR!sT;N2=lD{ern$b:=d !(4+}CU6|cDz1B1"8N#"* g "# T[pTZƄSholaoZ>Kv̋opHg&wb{L~_"]Qd{H_X Abr;Nf8ncV^~ w1#d.yEˆ6 RDΘ#(7yoBaOTOlT^4͡<[=@K/T z%g Ђr} @qGb@ !+7 rɋR7wKhhg'Zĩ,'LHsZo%A49ޑX-dR4H ::zfs~ `AO ~{.pg7޾4|~I-LʧD,Ce/57+A*M hω*XL|Rk[i)BLrQ&E?lu%u)7h ǏG]bǍP+eYz<ˇ#:[ 9CK4hF}# Gcx(n;yL+m*ƣ3y"g䎲yu4d˦cXX=ɒ,BzZJ.]_xb- (XzV_w/Ix2x  2j@-Q$Sԩ$LsQ7@ W|&CJΩ puOg[.|J$a}I91 'hH✔9.v !.H1B@O\A9lP28;KeE ^A#3}~ʯɠBAI#5͟$sP:*˵gopCR寨 y.Sjsp3Ix{pUϡ? _?_ލFqlX݆^>Li<@lpeR9cìvKCD]q(ޥ-o.ϫQ0WO7NnLuvs4tWTaHH>DZ"{7"sʘ= r%,FJT,W|vݻ-ux+KY /  mbqƋVh-cW{ pΛnUZ+Fڳuz .P]8^D@΀{IbuRF2%Bm.Y&SԨR ǑL9K*n+[k5]u:1q\# {-LɩieuTk]8ba5LȆ/-i^?/mÛo'MIp:4.4)o(nI3p$f)X)'㰶 .dZ0b(Eouem|q CV<&BO=&Hmo=PPsB"bDjmL)N#]ƻ cj%xvƼ2{EȕA^-bj w|- m>"ml İ*dSNj օdtPzoݕ/Zکְ K_f3 Q[\X^e%Z\x4rg!QM/`/v1* 6ѻ)3 73T͸~Yn>ŋLa}'kC m=t<9C o#iѦS@sc#M/OtNK'%q_amKTi:*6rl&h&|(9\t甮csTeTKv 0!12?=ܥp? ;7v;SrA1-iSY6֘e8_ڃ6'68-:v[(I_5]7nsk|[#CڿV>QXrlL礕-ӡךn\s-;ڔQEN0 QsDtO;_ T\3^ ? }Ikr$<>CWzxWkQBk3z!$ , ו$>bJINh9drB#ׁ9Crs54쀸3:*eUBv@w* foEgLb4(#LKxhS35B;.7&!'-ALNrj0A=T݁)Uj4ACF>1Hī;ڊxUb,ZY(' 8Y{/98?ծ un KHpL55n Ew mlܭV|Z㝊"lإzp>`Bu6VZqisLYgV*U@NyV,(Y@%w>#>C IԖ/5Phq%|xv۾o]ݿt89egႜ{3!- ed{|.,n?ʜ?8taV_ͲŮP_b0pyݏ<nG~ &w֣RQUU2T) 2*^0t`?2 7d]8 5@F_\pkxo/v A4GGjOnQV+]]9@bY\p֒3%'|\$nR2x̵1Jtk3Pb|p7xHC]"ūgdCHȀ( E)7?rdn>X Ey7DK1Hl&va("2`%7xkvsxj#7WnLHE)4PTH:Xr7]`ska1iv X /8zům26{~SJs:IUN+3@[GgVL kmk3ꎓDQb3Tvw0kL<=/CRLl= zgQ$F͍Lt^q**5eF#hwTKۏbcrt(P1\>&k )T`XڷI-fF* gyJO$؄4T}^tU‰xui-DhKsz-]欫E|׍[5?+QW%Zh ?ہY-vόߨ-wsu£~9!B{1'AAKkٴ e!U+&dtLsJk_)UDĈ"gKtbН&\Tc+ m| TT9 qކ+.&-Ms2tpZ[ΙUS%Tl+vdlڕqz  Ew]Reb>ɏ)9{7>FiTq0 F qϺQu@4L9 pk|?;+%p]$(t9Rb ryZY&8Zx6MkIiz'fV9ǡ"NR ψ1N/Jn2B {O6tpY9]ы P-g\ \ ݁Hi{j)W=XQ x_bd TEm&@4ZxgZ ӯ"WD,AxJ\]q=\}etͭYǛCgf…g;zu;L_͚~jD]Jjv/LJZo|<り>Z]9$3!BkD(^sZg5MRH*ύ4]KU2\ \(pao㋢Fbx{ Rl]Oסr=PyZ?T4l6ΞdO}oiA҇T `Lػϟ_x~lػǫ FLK^zGE\ h< no.Sy][έO7y=>\'21/]B-kŠ}K8yi 5F2nr.[.Ύ?%3Q2T$q`+,4ZNѯ+*bHlff*|?KeWD%&W[ӔGeW/e0yx Er!~3 \QdH P[\qO>M}%KHH%5 G'?I<됧zTCu9OuBB) yʉwF: 2 Nn x)vsχbھ&y-uz=P(nA`'O`54S2)LnB2j7?>4>%&hKh[_d4=*`s-[C AJNXɡ1벹<9KI@(u˟c<1Y%$Xj$wi*rtJ+NpmM3C 8!<'Rh$|ⓨ(29sFtKBwOm#_Qa}ȖW lBm( PNW$;Pl _X2f:F_h~Y X~WjuJF>=G@uj~3Oť,T$jCsdImNj 680`4|0s?V~gr^ۀKhٰxAp(!QwT߽dFSxM1tBjRx7 WA5S]y H )`:6-OVhQ}lH!l5l[(ewksg}ϫ "V.p%&O94%6ύ?A=ybkC/Zbx $O])LX6Aj+ 9a}B@GBpc#4cހԧ c }Or76?ʆ4P$&8kc @u7I4 %I+q;AZe2wԊRvFcZQ+eTьi:m-s*;xcW\K)ʮUeתkUuԞWss_mWŇ^?!24~͔̀h51n~[nt$?oG3$N/NIQtԎ[g2WՅ{*ˉ?a}{p_Ng(HN;3z?Çg\w1Ik|t|cLrm޶E*+Î Z{DnM^.m.pOkI7Gx_\]0{&?=>-T"3G R:IǴӜJyt?FKmYM kPn%&̼HȓAm[peҘ;%`6E~pE4gc RO[eN_⦗޶:ew=];(zJ|k {uAG=ԩā[_f.:mw@ՁSݟO5gqڛO18~Ӂ'I[7$oʗ^xo!cԥ? ~Nj9N`B_Z{4k߿M@sjۏݿGyD@_^p K١Zu4{:rmwcڙ!ftC2a:S=-v z2;/+BM$4>61)[Wi1n- q z0AO*l sIiO{;|̧9PᎶ>S R@%, p0sEC9``;q juh뙲ӧ0; d[ZԲ)nckxD-oAB+3F XDZE? 
B=VƺLo gRc3MśL $" ED$CHЈ!܊<D($#cxZB#.g)xnD=$2>_o0ZX0NG N^ʚ&,) 0N@f`qݴ{EK^}`m9Ar+\KR..uF.&c;'Qn!vi,氥L̗:o{q {K95u.+6$R6Uk2_ro["o n+6Wx7GÖYI~0ty,PjkrW(QPa%}[QH[nɳZu*coaj hTk)oc((Dq̋ }qGXzhEˍwȝzhc cL({oJ\0r(+#J\/Ygi J Z|qkJ:ST0⏤;~㋓.3Jco.{zϼ;x`gN/:Y.|qBw|xvpwǫrflQ<]ِ$gV^zu>H$=9 P ,(31^e(WSB3Ca3Smct-"M_BlzS|,$'Tf(N%/t'Ws.H:_.UrAP;$ZAH-͗"S7UG"K_3j9C}(GJ :c +U$|%OrR]t'r|-H쒪S\\TS\kds.jK&Զj;#ȯkIe$_K$@ԓ($NNg ;IALB0 Wr#I^ͬkg:_.[R7!D |E7(2*[ L C,!<8TX}d| 65ylBR'ک%3峿ђUBt?Y. JH7TT|֒Ӊ K)$k߇'Ik/3$" }$8Ѕ v-cq Ӹuh~|; A"7nm,:n!c0o$yEI~S=_' S8Nؖ'K7Q s9_y jqZ7jq6 ZH\7 Z"\;^UjYA%"%xq ? py^]O&9Kҗ-JkYdk\`DM;01N =}$\THbB] !h+:mhw&v`48Kt]8$i72浃Ak19hG~%K j_M( aj}tN2}KLSaU Dڍ+pYWw碯p}U,jv1'<{fN.áIxh.q}~ BcYE6*?iWsBbNgZa#VbWVnv^"lr4AؘשH #*UQ5AGp=Xg7JGBFv&!Q%ʑFQ9B/wDD*2, u^(Зn-yj~> }OP!\b~~ENr302 g m DpΠɬq8/eC 91CCH㗎hy(uX_@ve%V 6dpg_ 6ܫގL8 YC a]Z2ȕ(ܡ'o]XYWo{?W7pvxdBpw¿6^oed4ݛY Hk^}{O(b6v雭ouZgkV}#bi?)>S=Ԟ-ą1ŗ2-ufe>led1&eR.oM㊟OIvx& {<e&dgʚƕ_Q!S}QU|':vfjj&FJdI!$ P;)|FE_hu|\eZ pTT .R6_RC * QlFY;g]Klf\zaSf[?TPK4 B$x |/dDq=!BQ1I s ")5ʔAh084* A4TUgRR%y2K}xJ2W]$#Q_FIwrf4\z.9Ğ̩>짜3mJ /y8261r^e%o]gMXM'Eܗ?IRTOI@y&S<,P,}ZAGMGBKҦ)ܭ%i-[i}Gc>C5w+ZnݗJQ6SQ  r}-' qcQϳhZ!}p#Umq%/}h.abP IpJb~zΘy-βǁ<dc {>$'"C Q7tQ @28er=!W 5K ڥ/̭g)sAn @vr;;s jq'ORj+q:Nr]} [SxrOTs8E5-~6b9*',pj]K^%8DzJ1!yB&++Ljmk mzR^jBcj<ܰ3Wd>fw x)90\R>DK&$)D E>@\J$dM,r,oX(ESiڇr2zrs4-=h#FU8/GS2(QrYT-0"{D{gװjض@C,J6x/nM"E qěbₚځD (*IdVO;8ou@"QXV&iNBdċW6t7leŘbi=|kux5Ol@0{Vܘ7s;p;c^_dpRK[ #$NrG,#G^8f9Ӻ<%?aү,@_^5 k, ɊԧmSk;Cd6lǎ%}tn|`hbgu& P܋SΖ??iMey٧2L&v9Zu=,5}Lk ^o%m[FvH\dzZj{akݻ41Kn|O21%7>uSC)Z0DUpb9_-Hd${y,v+ğ؅> ƀX{ko$e|_M^~r1Hvτ0(B€";,.&.҅d`vh݊`ڴLVDD̊w<F1UzRz%ʟ=k4aM\%k Yc%dRgF53{;81T^`9(3aTo:FI)nUF0=4zUΔړDglLiB&FK$sLhQ.U:H=5ON+Hfz6Q$~; +8glnuX-u?r4so̸-Ws!>l:J/!)0`\ sḡ0cIL%uC:+1f*aR,eR`aFh\5벂>;Pk`Gr#hU|8:gNL X#L+73Ď(2BVKqp+4ʳ_ V:Nf]R[fg}c@1yHf٭|/ qP/ccr?z ďAANP/Gg5ޜSK8ɰ%7E\њ~W9 qunxf9k<Ʃm>FoS-bĉ n"ZD;+`R0Y?{6p\6IUpo973D:O (v:?=Lƶ|\/'Q߷_zQ/n7P؇k%Ld ER$ס^ ](@tXOqv70MEl@̜7MݧFݖkMи1ox1 ţ} ͒|z >PTƭѧ7·qtn'yܧ/^(cz`v<oq8{ӫ7 _߿>M7n>\__Cjv۝Ӥ]Oo/>/vBޘtu1_|ӻC{0}[m/޿Ɠ}O}` d.ߓ֘<~ s w;A44I9y3n^rT$OfJ;zz|.̤RF GW0&y2yW6?)|`O&M n8atY"%j)J%ߙY%2eĽTH<8f[>+yoa=/s( 9M쳾y8O q-gpg҆'%`Dp=F}|>o':(}nv2gH{ֻvhG^ݙM3՘o3?՜@ucjbc`j5拹Onڝ ¾:jb$mջvw}߾/-q3^PzK{ &mͷ7^^L.:IW1)IEoް?"''W}T@C;iEЋ>&ol(m6oږ"ڶnS^?ٗ0gYאrICFVqjmrn6f(cC|?y*8.u\jpjkZmҋ2LX`ZQ)H&gQ0Z(@w4,8(sڈ}#$apO,p{{КC?Oܚe̤V̞v"oi-&UK-.XK-?l 8kieZ>bn'[%?_G1zRMJ[[ G Stkzn&k}[7l7Q>W(z )zl2X _YfT+hc=M0^wh﾿t]]|"fh-6o~gh|f ~v9hzюWm0 ۰G!ܑSl2.NQ[McJ F"=i& %U.)\+hxLHt*߂XXfb͇gcf"RŇl%\ ڷܒZ乽QxW\@:?&Z+jQOϾ]LTK-bmcEH;sgt!_D"JN{0׋=6_ Ȫkq~ 8cGiyyц|MJc[7}7s9 栶yy'"}/rrK+% BBaiڼ{10) .QF'"~k~Ygw]ml&@gr6C ds_Ne, :?rZJb5+Wd.`#xqVO=3m-ܱ~Ss//L< eVze e'T.k^%LR*[2((E %9ݵQy_]8ΝpK J \2a(3FSL3=HCpIAy-bZ uz[!&;Tu!%'*KkIaKRQUterw/$2 腶4HF#dE GzNLy΀V#ɭỼAK $B(͞gD " ")}tu$rCs ‘: Z4G{)DGw蓄2#ep@?!*qgJ.Uhm:z+E^i/J{WڋJ{-rmz =S=|y9Yuϋܻòs2߹."do`9ofdN![E_7wq~%w]>[,+I΄pw?-SUZ 9+N_'sD5ޟ ܢmE1Fշ_B_3\T%ۏĘOf-FIκSAY,l;h t8&frV`aZphaT R8 \Gh%P}V73p<ӃUX*=tєY+5M,EY_yR׶#0a\!ՂmjU-v>ePa|Vwu_gʳal+LK7.[Zzzuy!D7|}gYLQT1Iih77!&- 0Ξ'}?^OZBz4N6 K$bZ)zYyVxFT1%((Ȉ]@R D"vP>NoZ,幢IpV`  ˸e 7"#QGLeZ&XTSERp. ǢP͌mCRM9{{ jj ٔ 'y* O9p%#GtIPg"JN`41$˸ bv o $l#ql쒅aK@4'ghs%3VzPņ])vz7$ 83 -*8P(Iy4['!;|ch wg J)-8 `*.XME'LqarF%;:/c; u Y:,?l VDv 6KяeZJ<o,W5jvUe$z( JVO-bvde$ju7+3QeTeߔU> lJ~>"\JNUʽn!] 
-8wWIO8f^#qN)"EΕF8jǗX'񹶄OK.@s4L |+!"!tXoXJ3m 4FqKxK܀wgy&b=!a PNB+`yȩqv҈}8CuH K^Z0LK3 ӥ|?pqy$I{({_נ rMѯ P+Ś^})̿" Dq;mˢIk2]8;1^!Փ;YymY,,'_2h]9%Pi8C>,rwVm~1׀ҭ~r85tu3B[޼-L[f2]A1 ,)hKhD%룰%Z Kn}j1EK4D 9C9[;hҺTSakbX@-g-ۭ_u-̖i pVx )8(%J_l#6FJ  _ű}u;^\(&9L;m;w'>B*&ްGӯ;;a1-B11P|n3/'0Io+]!N%W;S֎+6#F¨[X` 3τE8i $0d+uy-FZizρ]]-va]?3=j&x (*l-~fqW5y3n-;FOċ6*oG QOK :(ᨗDh-%KLv䤳DxY t#e)b|M ,lі!-Yl(6\X9U ( V6α)-J$T.SXv,vߺ0MͿkPr4|p(|p+ˤV3%p :XҘ\:df8><܊k(ѫoin `3ۯYz>tUf%0y" #-]l4%HF+s2Bάb1CLHPO)Rz uW-;Eq%] ] 02V{{Nx;iǫRaKMu69;i<'b܁>\ӟ^}mD 6ҫ1&{`>괳yRs`=vs|-_yl Φisg }ںKM[=Z'qk-<^zV$+Qg_.T?k@ I= "u,+#+te+RMЂMּsqs%[ҍ{ӑq+eQH8;ιqQnZ)!h7jgd}BsT z/M׋_6EMmFnڬ Z,'P::T"TI~p\ ť7[h|m*ǃ)\2V^]bW^]\ HNIe7) n U{ıv~#DGޱ$ \})Dlݣp`σ_ "Pl6 nΰl]P<;ڭ]10ڭ7eop 5_퇞g~H1+>̓h  sy~Pcѻ RۆĽ~TW$r@}*<PJ= +Pvlv|[=W)f#r̊Y12obI+ۋT#ܣ'#|Izxݫ TĢvg+_-hSz'Wr ~Ey5 +pTQ$JC -rsq%4R;ZSW@LxmlH  R{ TWk;u%u| 0ɺ ;ԀZaDwXTySԌ;'<S}Gʐ`@)һ+h[u8.TR(0slJdٍre *JV@5LDjd!O*&h~yͨ׭!, #L BѽdD~ n ,|هozȭ{E;y Eݗ-em˕fKH8H* ȡx!y g?q.n8Ǩ @uqc폏!٭4Ɍ 2mtC$9FXuCǴ3)̄$e>HsШ|HUOc[u} QrO5\U8Nl߸XxGW{ vvtQL'&~dh-gė)J<#s:idx&6L0˦!ܝK8r`x ,1{q|5".:N+ko3D MEҡ+'W!64F "99e*c+sZ2 {aa}T3\ *)S)gIwv =AҒKhG S H;+\ mi~Q_dD&FͶSFkWxC?#SQk}QBC>ܸvy/ Cg9C|l9>`?Qj[Wl{\_rG"8VǮ<޼D+ңKrw7+KVlF_VL*3~U& EA'O >X;ŽHp!)}'޹N*έw6u0_V!|髧["=>Zq|0<X&y H[),1*BF",9fII!p(s&5TaxFr[b(~fS%tS< >$cpm كexF{v/\<<6w~qKM umL2/QUЗ?ϙ~}\>\_c"7J-}wuㅓOq,qg??ri8yrd$$ju3 ce 9!#tIMSLɂix GR!7"Ū$7? 4)ո >ό[̴{ZKj2Y xWU f1gXPw)ZhS- kh-4p'߆|Sb~[,%6mfMy9a"\!5_-6'nf{7x(;991(@NQ!P-~=~+ȸc) 2E~wJ̅7 G>|d%BT }ޮƨU˳sm#3c$Lŝ1r'BN!XNvƉy ݡ[h C:1ܢ!+ BL,yln,ESkkiUAS]}"adS?D-qQLòy0͕TBZ8R S M$DI &NQL)WPK]8Q;F OlRQ$)U)6f#j+KuHp%\[c3PGJ: #=`dC2vɊN`T{c1ēaw*%OITnCڮONP ]$$8Q$<߮fW ҉k+R!' R*$͋S1KoW۫p?h+UDs}9燊%"OD{EA)ưr&RNB囑4#*<6|w6|7c6`ln3e҈cHү79eGTnE7E/m&=^~~g,eEHK#"%K5n0Ο6HҚk 2U$j췵׆T5' X ְZoLSfV#kϳr/BOff;Ogb%RC=QV t,88:}Nn#TqB򐋠ꅨ<2o]hf0fCeX\  rt:8Tf)[Kɔ3_JKnmgau_2 ~\|ʣbiLr<3oZl9JD ̑3/'RcsBVș ,)R *btG\!u iK~7 qwݯp@F|s h!?N ! N?VW7%VE*Ri%Bzć9]%d "ǫ|tRݍDAD:H2$>Z6h]?#Pxr.KS7w'pQ?;Aae=` pdn@{~] [8t7){20LP#5JS'ook`Wzy;o]nۈwO_9#{-Rg໭^Z,7"gX?MVpaC#nZ<*IشoeP람lmk'd8lmAxѺaA|)зMPG~K<ejAF@**eB3MsL[9TS w˚(g/4WtƧ9וz˺y'Nhb:}bN ͺ{DZ+W^:Eٶn ֭'QӳNtY X@C~*)S|bb }WӟJ;':A{>?N心uUp:7{̗n/gzup)DWm_ƛCvar/home/core/zuul-output/logs/kubelet.log0000644000000000000000005644331115140400111017667 0ustar rootrootFeb 03 12:04:41 crc systemd[1]: Starting Kubernetes Kubelet... 
Feb 03 12:04:41 crc restorecon[4682]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963 Feb 03 12:04:41 crc restorecon[4682]: 
/var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726 Feb 03 12:04:41 crc restorecon[4682]: 
/var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c574,c582 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 03 
12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c440,c975 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 03 12:04:41 crc restorecon[4682]: 
/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c4,c22 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 03 12:04:41 crc 
restorecon[4682]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 03 12:04:41 crc restorecon[4682]: 
/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c968,c969 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Feb 03 12:04:41 crc restorecon[4682]: 
/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 
03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:41 crc restorecon[4682]:
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:41 crc restorecon[4682]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:41 crc restorecon[4682]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:41 crc restorecon[4682]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:41 crc restorecon[4682]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c377,c642 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 03 12:04:41 crc restorecon[4682]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 03 12:04:41 crc restorecon[4682]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c0,c25
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:04:41 crc restorecon[4682]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:04:41 crc restorecon[4682]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:04:41 crc restorecon[4682]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:04:41 crc restorecon[4682]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:04:41 crc restorecon[4682]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:04:41 crc restorecon[4682]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:04:41 crc restorecon[4682]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:04:41 crc restorecon[4682]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:04:41 crc restorecon[4682]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:04:41 crc restorecon[4682]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:04:41 crc restorecon[4682]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:04:41 crc restorecon[4682]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:04:41 crc restorecon[4682]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Feb 03 12:04:41 crc restorecon[4682]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by
admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 
12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc 
restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c133,c223
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 03 12:04:42 crc restorecon[4682]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0
Feb 03 12:04:42 crc kubenswrapper[4820]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 03 12:04:42 crc kubenswrapper[4820]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version.
Feb 03 12:04:42 crc kubenswrapper[4820]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 03 12:04:42 crc kubenswrapper[4820]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
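The restorecon entries above record files whose SELinux labels restorecon left alone because they were customized by an admin (here, the container runtime); each line carries the path and the default context (user:role:type:level, plus optional MCS categories such as c682,c947) the file would otherwise have been reset to. A minimal Python sketch for pulling those fields out of lines in this format; the regex and field names are illustrative, not part of the log:

```python
import re

# Illustrative sketch: extract the path and the SELinux context from
# restorecon "not reset as customized by admin" lines like the ones above.
LINE = re.compile(
    r"restorecon\[\d+\]:\s+(?P<path>\S+)\s+"
    r"not reset as customized by admin to\s+(?P<context>\S+)"
)

def parse(line):
    m = LINE.search(line)
    if not m:
        return None
    user, role, type_, *level = m.group("context").split(":")
    # level is ["s0"] or ["s0", "c682,c947"] when MCS categories are present
    return {"path": m.group("path"), "user": user, "role": role,
            "type": type_, "level": ":".join(level)}

example = ("Feb 03 12:04:42 crc restorecon[4682]: /var/lib/kubelet/plugins "
           "not reset as customized by admin to "
           "system_u:object_r:container_file_t:s0")
print(parse(example))
```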
Feb 03 12:04:42 crc kubenswrapper[4820]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Feb 03 12:04:42 crc kubenswrapper[4820]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 03 12:04:42 crc kubenswrapper[4820]: I0203 12:04:42.901847 4820 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.913513 4820 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.913562 4820 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.913572 4820 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.913581 4820 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.913594 4820 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.913605 4820 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.913616 4820 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.913626 4820 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.913636 4820 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.913650 4820 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
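The "Flag ... has been deprecated" entries above ask for these switches to move into the file named by --config (per the FLAG dump later in this log, /etc/kubernetes/kubelet.conf). A sketch of the mapping, assuming the KubeletConfiguration v1beta1 field names as commonly documented; verify them against the kubelet version in use (v1.31.5 here):

```python
# Sketch only: where the deprecated flags warned about above are expressed in
# a KubeletConfiguration file (the file given by --config). Field names follow
# the v1beta1 schema as commonly documented; verify for your kubelet version.
DEPRECATED_FLAG_TO_CONFIG_FIELD = {
    "--container-runtime-endpoint": "containerRuntimeEndpoint",
    "--volume-plugin-dir": "volumePluginDir",
    "--register-with-taints": "registerWithTaints",
    "--system-reserved": "systemReserved",
    # --minimum-container-ttl-duration has no config field; the warning says
    # to express the policy via evictionHard / evictionSoft instead.
    # --pod-infra-container-image is being handed over to the runtime (CRI).
}

for flag, field in DEPRECATED_FLAG_TO_CONFIG_FIELD.items():
    print(f"{flag} -> {field}")
```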
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.913662 4820 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.913675 4820 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.913685 4820 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.913697 4820 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.913707 4820 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.913717 4820 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.913728 4820 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.913737 4820 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.913746 4820 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.913756 4820 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.913765 4820 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.913775 4820 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.913784 4820 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.913794 4820 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.913804 4820 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.913813 4820 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.913822 4820 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.913843 4820 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.913853 4820 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.913863 4820 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.913872 4820 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.913882 4820 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.913931 4820 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.913945 4820 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.913957 4820 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.913968 4820 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.913977 4820 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.913989 4820 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.913998 4820 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.914009 4820 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.914019 4820 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.914028 4820 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.914037 4820 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.914046 4820 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.914056 4820 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.914065 4820 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.914075 4820 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.914088 4820 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.914098 4820 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.914108 4820 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.914118 4820 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.914129 4820 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.914140 4820 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.914149 4820 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.914162 4820 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
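The feature_gate.go:330 warnings above come from the generic gate parser rejecting names it does not know (OpenShift-specific gates handed to an upstream Kubernetes component), while feature_gate.go:351/353 flag gates that are already deprecated or GA. Roughly, and only as an illustration; the stage table below is a stand-in, not the kubelet's real registry:

```python
# Illustration only: a stand-in gate table reproducing the three kinds of
# warnings seen in this log.
KNOWN_GATES = {
    "CloudDualStackNodeIPs": "GA",
    "DisableKubeletCloudCredentialProviders": "GA",
    "ValidatingAdmissionPolicy": "GA",
    "KMSv1": "Deprecated",
}

def apply_gate(name, value):
    stage = KNOWN_GATES.get(name)
    if stage is None:                       # cf. feature_gate.go:330
        print(f"unrecognized feature gate: {name}")
    elif stage == "GA":                     # cf. feature_gate.go:353
        print(f"Setting GA feature gate {name}={str(value).lower()}. "
              "It will be removed in a future release.")
    elif stage == "Deprecated":             # cf. feature_gate.go:351
        print(f"Setting deprecated feature gate {name}={str(value).lower()}. "
              "It will be removed in a future release.")

apply_gate("GatewayAPI", True)   # -> unrecognized feature gate: GatewayAPI
apply_gate("KMSv1", True)        # -> Setting deprecated feature gate KMSv1=true. ...
```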
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.914174 4820 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.914185 4820 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.914195 4820 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.914205 4820 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.914215 4820 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.914226 4820 feature_gate.go:330] unrecognized feature gate: Example
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.914239 4820 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.914249 4820 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.914259 4820 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.914269 4820 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.914278 4820 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.914288 4820 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.914298 4820 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.914308 4820 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.914318 4820 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.914328 4820 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 03 12:04:42 crc kubenswrapper[4820]: I0203 12:04:42.915329 4820 flags.go:64] FLAG: --address="0.0.0.0"
Feb 03 12:04:42 crc kubenswrapper[4820]: I0203 12:04:42.915365 4820 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Feb 03 12:04:42 crc kubenswrapper[4820]: I0203 12:04:42.915389 4820 flags.go:64] FLAG: --anonymous-auth="true"
Feb 03 12:04:42 crc kubenswrapper[4820]: I0203 12:04:42.915403 4820 flags.go:64] FLAG: --application-metrics-count-limit="100"
Feb 03 12:04:42 crc kubenswrapper[4820]: I0203 12:04:42.915418 4820 flags.go:64] FLAG: --authentication-token-webhook="false"
Feb 03 12:04:42 crc kubenswrapper[4820]: I0203 12:04:42.915430 4820 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Feb 03 12:04:42 crc kubenswrapper[4820]: I0203 12:04:42.915446 4820 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Feb 03 12:04:42 crc kubenswrapper[4820]: I0203 12:04:42.915468 4820 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Feb 03 12:04:42 crc kubenswrapper[4820]: I0203 12:04:42.915480 4820 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Feb 03 12:04:42 crc kubenswrapper[4820]: I0203 12:04:42.915492 4820 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Feb 03 12:04:42 crc kubenswrapper[4820]: I0203 12:04:42.915504 4820 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Feb 03 12:04:42 crc kubenswrapper[4820]: I0203 12:04:42.915518 4820 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Feb 03 12:04:42 crc kubenswrapper[4820]: I0203 12:04:42.915570 4820 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Feb 03 12:04:42 crc kubenswrapper[4820]: I0203 12:04:42.915583 4820 flags.go:64] FLAG: --cgroup-root=""
Feb 03 12:04:42 crc kubenswrapper[4820]: I0203 12:04:42.915593 4820 flags.go:64] FLAG: --cgroups-per-qos="true"
Feb 03 12:04:42 crc kubenswrapper[4820]: I0203 12:04:42.915603 4820 flags.go:64] FLAG: --client-ca-file=""
Feb 03 12:04:42 crc kubenswrapper[4820]: I0203 12:04:42.915614 4820 flags.go:64] FLAG: --cloud-config=""
Feb 03 12:04:42 crc kubenswrapper[4820]: I0203 12:04:42.915626 4820 flags.go:64] FLAG: --cloud-provider=""
Feb 03 12:04:42 crc kubenswrapper[4820]: I0203 12:04:42.915637 4820 flags.go:64] FLAG: --cluster-dns="[]"
Feb 03 12:04:42 crc kubenswrapper[4820]: I0203 12:04:42.915651 4820 flags.go:64] FLAG: --cluster-domain=""
Feb 03 12:04:42 crc kubenswrapper[4820]: I0203 12:04:42.915662 4820 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Feb 03 12:04:42 crc kubenswrapper[4820]: I0203 12:04:42.915675 4820 flags.go:64] FLAG: --config-dir=""
Feb 03 12:04:42 crc kubenswrapper[4820]: I0203 12:04:42.915687 4820 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Feb 03 12:04:42 crc kubenswrapper[4820]: I0203 12:04:42.915699 4820 flags.go:64] FLAG: --container-log-max-files="5"
Feb 03 12:04:42 crc kubenswrapper[4820]: I0203 12:04:42.915714 4820 flags.go:64] FLAG: --container-log-max-size="10Mi"
Feb 03 12:04:42 crc kubenswrapper[4820]: I0203 12:04:42.915725 4820 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Feb 03 12:04:42 crc kubenswrapper[4820]: I0203 12:04:42.915737 4820 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Feb 03 12:04:42 crc kubenswrapper[4820]: I0203 12:04:42.915751 4820 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Feb 03 12:04:42 crc kubenswrapper[4820]: I0203 12:04:42.915764 4820 flags.go:64] FLAG: --contention-profiling="false"
Feb 03 12:04:42 crc kubenswrapper[4820]: I0203 12:04:42.915775 4820 flags.go:64] FLAG: --cpu-cfs-quota="true"
Feb 03 12:04:42 crc kubenswrapper[4820]: I0203 12:04:42.915787 4820 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Feb 03 12:04:42 crc kubenswrapper[4820]: I0203 12:04:42.915799 4820 flags.go:64] FLAG: --cpu-manager-policy="none"
Feb 03 12:04:42 crc kubenswrapper[4820]: I0203 12:04:42.915812 4820 flags.go:64] FLAG: --cpu-manager-policy-options=""
Feb 03 12:04:42 crc kubenswrapper[4820]: I0203 12:04:42.915840 4820 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Feb 03 12:04:42 crc kubenswrapper[4820]: I0203 12:04:42.915851 4820 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Feb 03 12:04:42 crc kubenswrapper[4820]: I0203 12:04:42.915863 4820 flags.go:64] FLAG: --enable-debugging-handlers="true"
Feb 03 12:04:42 crc kubenswrapper[4820]: I0203 12:04:42.915874 4820 flags.go:64] FLAG: --enable-load-reader="false"
Feb 03 12:04:42 crc kubenswrapper[4820]: I0203 12:04:42.915920 4820 flags.go:64] FLAG: --enable-server="true"
Feb 03 12:04:42 crc kubenswrapper[4820]: I0203 12:04:42.915934 4820 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Feb 03 12:04:42 crc kubenswrapper[4820]: I0203 12:04:42.915952 4820 flags.go:64] FLAG: --event-burst="100"
Feb 03 12:04:42 crc kubenswrapper[4820]: I0203 12:04:42.915965 4820 flags.go:64] FLAG: --event-qps="50"
Feb 03 12:04:42 crc kubenswrapper[4820]: I0203 12:04:42.915976 4820 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Feb 03 12:04:42 crc kubenswrapper[4820]: I0203 12:04:42.915988 4820 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Feb 03 12:04:42 crc kubenswrapper[4820]: I0203 12:04:42.915999 4820 flags.go:64] FLAG: --eviction-hard=""
Feb 03 12:04:42 crc kubenswrapper[4820]: I0203 12:04:42.916013 4820 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Feb 03 12:04:42 crc kubenswrapper[4820]: I0203 12:04:42.916025 4820 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Feb 03 12:04:42 crc kubenswrapper[4820]: I0203 12:04:42.916036 4820 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Feb 03 12:04:42 crc kubenswrapper[4820]: I0203 12:04:42.916048 4820 flags.go:64] FLAG: --eviction-soft=""
Feb 03 12:04:42 crc kubenswrapper[4820]: I0203 12:04:42.916060 4820 flags.go:64] FLAG: --eviction-soft-grace-period=""
Feb 03 12:04:42 crc kubenswrapper[4820]: I0203 12:04:42.916071 4820 flags.go:64] FLAG: --exit-on-lock-contention="false"
Feb 03 12:04:42 crc kubenswrapper[4820]: I0203 12:04:42.916082 4820 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Feb 03 12:04:42 crc kubenswrapper[4820]: I0203 12:04:42.916095 4820 flags.go:64] FLAG: --experimental-mounter-path=""
Feb 03 12:04:42 crc kubenswrapper[4820]: I0203 12:04:42.916106 4820 flags.go:64] FLAG: --fail-cgroupv1="false"
Feb 03 12:04:42 crc kubenswrapper[4820]: I0203 12:04:42.916117 4820 flags.go:64] FLAG: --fail-swap-on="true"
Feb 03 12:04:42 crc kubenswrapper[4820]: I0203 12:04:42.916128 4820 flags.go:64] FLAG: --feature-gates=""
Feb 03 12:04:42 crc kubenswrapper[4820]: I0203 12:04:42.916141 4820 flags.go:64] FLAG: --file-check-frequency="20s"
Feb 03 12:04:42 crc kubenswrapper[4820]: I0203 12:04:42.916153 4820 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Feb 03 12:04:42 crc kubenswrapper[4820]: I0203 12:04:42.916165 4820 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Feb 03 12:04:42 crc kubenswrapper[4820]: I0203 12:04:42.916177 4820 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Feb 03 12:04:42 crc kubenswrapper[4820]: I0203 12:04:42.916190 4820 flags.go:64] FLAG: --healthz-port="10248"
Feb 03 12:04:42 crc kubenswrapper[4820]: I0203 12:04:42.916201 4820 flags.go:64] FLAG: --help="false"
Feb 03 12:04:42 crc kubenswrapper[4820]: I0203 12:04:42.916212 4820 flags.go:64] FLAG: --hostname-override=""
Feb 03 12:04:42 crc kubenswrapper[4820]: I0203 12:04:42.916223 4820 flags.go:64] FLAG: --housekeeping-interval="10s"
Feb 03 12:04:42 crc kubenswrapper[4820]: I0203 12:04:42.916234 4820 flags.go:64] FLAG: --http-check-frequency="20s"
Feb 03 12:04:42 crc kubenswrapper[4820]: I0203 12:04:42.916246 4820 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Feb 03 12:04:42 crc kubenswrapper[4820]: I0203 12:04:42.916256 4820 flags.go:64] FLAG: --image-credential-provider-config=""
Feb 03 12:04:42 crc kubenswrapper[4820]: I0203 12:04:42.916266 4820 flags.go:64] FLAG: --image-gc-high-threshold="85"
Feb 03 12:04:42 crc kubenswrapper[4820]: I0203 12:04:42.916277 4820 flags.go:64] FLAG: --image-gc-low-threshold="80"
Feb 03 12:04:42 crc kubenswrapper[4820]: I0203 12:04:42.916288 4820 flags.go:64] FLAG: --image-service-endpoint=""
Feb 03 12:04:42 crc kubenswrapper[4820]: I0203 12:04:42.916299 4820 flags.go:64] FLAG: --kernel-memcg-notification="false"
Feb 03 12:04:42 crc kubenswrapper[4820]: I0203 12:04:42.916309 4820 flags.go:64] FLAG: --kube-api-burst="100"
Feb 03 12:04:42 crc kubenswrapper[4820]: I0203 12:04:42.916321 4820 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Feb 03 12:04:42 crc kubenswrapper[4820]: I0203 12:04:42.916333 4820 flags.go:64] FLAG: --kube-api-qps="50"
Feb 03 12:04:42 crc kubenswrapper[4820]: I0203 12:04:42.916344 4820 flags.go:64] FLAG: --kube-reserved=""
Feb 03 12:04:42 crc kubenswrapper[4820]: I0203 12:04:42.916355 4820 flags.go:64] FLAG: --kube-reserved-cgroup=""
Feb 03 12:04:42 crc kubenswrapper[4820]: I0203 12:04:42.916366 4820 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Feb 03 12:04:42 crc kubenswrapper[4820]: I0203 12:04:42.916378 4820 flags.go:64] FLAG: --kubelet-cgroups=""
Feb 03 12:04:42 crc kubenswrapper[4820]: I0203 12:04:42.916389 4820 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Feb 03 12:04:42 crc kubenswrapper[4820]: I0203 12:04:42.916401 4820 flags.go:64] FLAG: --lock-file=""
Feb 03 12:04:42 crc kubenswrapper[4820]: I0203 12:04:42.916413 4820 flags.go:64] FLAG: --log-cadvisor-usage="false"
Feb 03 12:04:42 crc kubenswrapper[4820]: I0203 12:04:42.916425 4820 flags.go:64] FLAG: --log-flush-frequency="5s"
Feb 03 12:04:42 crc kubenswrapper[4820]: I0203 12:04:42.916436 4820 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Feb 03 12:04:42 crc kubenswrapper[4820]: I0203 12:04:42.916453 4820 flags.go:64] FLAG: --log-json-split-stream="false"
Feb 03 12:04:42 crc kubenswrapper[4820]: I0203 12:04:42.916466 4820 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Feb 03 12:04:42 crc kubenswrapper[4820]: I0203 12:04:42.916477 4820 flags.go:64] FLAG: --log-text-split-stream="false"
Feb 03 12:04:42 crc kubenswrapper[4820]: I0203 12:04:42.916488 4820 flags.go:64] FLAG: --logging-format="text"
Feb 03 12:04:42 crc kubenswrapper[4820]: I0203 12:04:42.916499 4820 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Feb 03 12:04:42 crc kubenswrapper[4820]: I0203 12:04:42.916511 4820 flags.go:64] FLAG: --make-iptables-util-chains="true"
Feb 03 12:04:42 crc kubenswrapper[4820]: I0203 12:04:42.916522 4820 flags.go:64] FLAG: --manifest-url=""
Feb 03 12:04:42 crc kubenswrapper[4820]: I0203 12:04:42.916533 4820 flags.go:64] FLAG: --manifest-url-header=""
Feb 03 12:04:42 crc kubenswrapper[4820]: I0203 12:04:42.916547 4820 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Feb 03 12:04:42 crc kubenswrapper[4820]: I0203 12:04:42.916558 4820 flags.go:64] FLAG: --max-open-files="1000000"
Feb 03 12:04:42 crc kubenswrapper[4820]: I0203 12:04:42.916571 4820 flags.go:64] FLAG: --max-pods="110"
Feb 03 12:04:42 crc kubenswrapper[4820]: I0203 12:04:42.916582 4820 flags.go:64] FLAG: --maximum-dead-containers="-1"
Feb 03 12:04:42 crc kubenswrapper[4820]: I0203 12:04:42.916594 4820 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Feb 03 12:04:42 crc kubenswrapper[4820]: I0203 12:04:42.916605 4820 flags.go:64] FLAG: --memory-manager-policy="None"
Feb 03 12:04:42 crc kubenswrapper[4820]: I0203 12:04:42.916616 4820 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Feb 03 12:04:42 crc kubenswrapper[4820]: I0203 12:04:42.916628 4820 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Feb 03 12:04:42 crc kubenswrapper[4820]: I0203 12:04:42.916639 4820 flags.go:64] FLAG: --node-ip="192.168.126.11"
Feb 03 12:04:42 crc kubenswrapper[4820]: I0203 12:04:42.916651 4820 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Feb 03 12:04:42 crc kubenswrapper[4820]: I0203 12:04:42.916676 4820 flags.go:64] FLAG: --node-status-max-images="50"
Feb 03 12:04:42 crc kubenswrapper[4820]: I0203 12:04:42.916694 4820 flags.go:64] FLAG: --node-status-update-frequency="10s"
Feb 03 12:04:42 crc kubenswrapper[4820]: I0203 12:04:42.916715 4820 flags.go:64] FLAG: --oom-score-adj="-999"
Feb 03 12:04:42 crc kubenswrapper[4820]: I0203 12:04:42.916726 4820 flags.go:64] FLAG: --pod-cidr=""
Feb 03 12:04:42 crc kubenswrapper[4820]: I0203 12:04:42.916737 4820 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d"
Feb 03 12:04:42 crc kubenswrapper[4820]: I0203 12:04:42.916758 4820 flags.go:64] FLAG: --pod-manifest-path=""
Feb 03 12:04:42 crc kubenswrapper[4820]: I0203 12:04:42.916767 4820 flags.go:64] FLAG: --pod-max-pids="-1"
Feb 03 12:04:42 crc kubenswrapper[4820]: I0203 12:04:42.916777 4820 flags.go:64] FLAG: --pods-per-core="0"
Feb 03 12:04:42 crc kubenswrapper[4820]: I0203 12:04:42.916788 4820 flags.go:64] FLAG: --port="10250"
Feb 03 12:04:42 crc kubenswrapper[4820]: I0203 12:04:42.916797 4820 flags.go:64] FLAG: --protect-kernel-defaults="false"
Feb 03 12:04:42 crc kubenswrapper[4820]: I0203 12:04:42.916807 4820 flags.go:64] FLAG: --provider-id=""
Feb 03 12:04:42 crc kubenswrapper[4820]: I0203 12:04:42.916817 4820 flags.go:64] FLAG: --qos-reserved=""
Feb 03 12:04:42 crc kubenswrapper[4820]: I0203 12:04:42.916826 4820 flags.go:64] FLAG: --read-only-port="10255"
Feb 03 12:04:42 crc kubenswrapper[4820]: I0203 12:04:42.916835 4820 flags.go:64] FLAG: --register-node="true"
Feb 03 12:04:42 crc kubenswrapper[4820]: I0203 12:04:42.916845 4820 flags.go:64] FLAG: --register-schedulable="true"
Feb 03 12:04:42 crc kubenswrapper[4820]: I0203 12:04:42.916854 4820 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Feb 03 12:04:42 crc kubenswrapper[4820]: I0203 12:04:42.916872 4820 flags.go:64] FLAG: --registry-burst="10"
Feb 03 12:04:42 crc kubenswrapper[4820]: I0203 12:04:42.916883 4820 flags.go:64] FLAG: --registry-qps="5"
Feb 03 12:04:42 crc kubenswrapper[4820]: I0203 12:04:42.916942 4820 flags.go:64] FLAG: --reserved-cpus=""
Feb 03 12:04:42 crc kubenswrapper[4820]: I0203 12:04:42.916963 4820 flags.go:64] FLAG: --reserved-memory=""
Feb 03 12:04:42 crc kubenswrapper[4820]: I0203 12:04:42.916980 4820 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Feb 03 12:04:42 crc kubenswrapper[4820]: I0203 12:04:42.916992 4820 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Feb 03 12:04:42 crc kubenswrapper[4820]: I0203 12:04:42.917003 4820 flags.go:64] FLAG: --rotate-certificates="false"
Feb 03 12:04:42 crc kubenswrapper[4820]: I0203 12:04:42.917014 4820 flags.go:64] FLAG: --rotate-server-certificates="false"
Feb 03 12:04:42 crc kubenswrapper[4820]: I0203 12:04:42.917026 4820 flags.go:64] FLAG: --runonce="false"
Feb 03 12:04:42 crc kubenswrapper[4820]: I0203 12:04:42.917037 4820 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Feb 03 12:04:42 crc kubenswrapper[4820]: I0203 12:04:42.917049 4820 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Feb 03 12:04:42 crc kubenswrapper[4820]: I0203 12:04:42.917062 4820 flags.go:64] FLAG: --seccomp-default="false"
Feb 03 12:04:42 crc kubenswrapper[4820]: I0203 12:04:42.917072 4820 flags.go:64] FLAG: --serialize-image-pulls="true"
Feb 03 12:04:42 crc kubenswrapper[4820]: I0203 12:04:42.917084 4820 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Feb 03 12:04:42 crc kubenswrapper[4820]: I0203 12:04:42.917096 4820 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Feb 03 12:04:42 crc kubenswrapper[4820]: I0203 12:04:42.917109 4820 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Feb 03 12:04:42 crc kubenswrapper[4820]: I0203 12:04:42.917120 4820 flags.go:64] FLAG: --storage-driver-password="root"
Feb 03 12:04:42 crc kubenswrapper[4820]: I0203 12:04:42.917132 4820 flags.go:64] FLAG: --storage-driver-secure="false"
Feb 03 12:04:42 crc kubenswrapper[4820]: I0203 12:04:42.917143 4820 flags.go:64] FLAG: --storage-driver-table="stats"
Feb 03 12:04:42 crc kubenswrapper[4820]: I0203 12:04:42.917153 4820 flags.go:64] FLAG: --storage-driver-user="root"
Feb 03 12:04:42 crc kubenswrapper[4820]: I0203 12:04:42.917166 4820 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Feb 03 12:04:42 crc kubenswrapper[4820]: I0203 12:04:42.917178 4820 flags.go:64] FLAG: --sync-frequency="1m0s"
Feb 03 12:04:42 crc kubenswrapper[4820]: I0203 12:04:42.917190 4820 flags.go:64] FLAG: --system-cgroups=""
Feb 03 12:04:42 crc kubenswrapper[4820]: I0203 12:04:42.917200 4820 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi"
Feb 03 12:04:42 crc kubenswrapper[4820]: I0203 12:04:42.917222 4820 flags.go:64] FLAG: --system-reserved-cgroup=""
Feb 03 12:04:42 crc kubenswrapper[4820]: I0203 12:04:42.917233 4820 flags.go:64] FLAG: --tls-cert-file=""
Feb 03 12:04:42 crc kubenswrapper[4820]: I0203 12:04:42.917244 4820 flags.go:64] FLAG: --tls-cipher-suites="[]"
Feb 03 12:04:42 crc kubenswrapper[4820]: I0203 12:04:42.917259 4820 flags.go:64] FLAG: --tls-min-version=""
Feb 03 12:04:42 crc kubenswrapper[4820]: I0203 12:04:42.917271 4820 flags.go:64] FLAG: --tls-private-key-file=""
Feb 03 12:04:42 crc kubenswrapper[4820]: I0203 12:04:42.917282 4820 flags.go:64] FLAG: --topology-manager-policy="none"
Feb 03 12:04:42 crc kubenswrapper[4820]: I0203 12:04:42.917293 4820 flags.go:64] FLAG: --topology-manager-policy-options=""
Feb 03 12:04:42 crc kubenswrapper[4820]: I0203 12:04:42.917304 4820 flags.go:64] FLAG: --topology-manager-scope="container"
Feb 03 12:04:42 crc kubenswrapper[4820]: I0203 12:04:42.917316 4820 flags.go:64] FLAG: --v="2"
Feb 03 12:04:42 crc kubenswrapper[4820]: I0203 12:04:42.917332 4820 flags.go:64] FLAG: --version="false"
Feb 03 12:04:42 crc kubenswrapper[4820]: I0203 12:04:42.917347 4820 flags.go:64] FLAG: --vmodule=""
Feb 03 12:04:42 crc kubenswrapper[4820]: I0203 12:04:42.917359 4820 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Feb 03 12:04:42 crc kubenswrapper[4820]: I0203 12:04:42.917372 4820 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.917612 4820 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.917629 4820 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.917644 4820 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.917654 4820 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.917666 4820 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.917678 4820 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.917687 4820 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.917698 4820 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.917707 4820 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.917717 4820 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.917727 4820 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.917736 4820 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.917745 4820 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.917758 4820 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.917772 4820 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.917782 4820 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.917792 4820 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.917804 4820 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.917815 4820 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.917826 4820 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.917838 4820 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.917848 4820 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.917860 4820 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.917870 4820 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.917881 4820 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.917926 4820 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.917938 4820 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.917947 4820 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.917958 4820 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.917968 4820 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.917978 4820 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.917988 4820 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.917998 4820 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.918007 4820 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.918017 4820 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.918028 4820 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.918038 4820 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.918047 4820 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.918058 4820 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.918068 4820 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.918079 4820 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.918091 4820 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.918100 4820 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.918111 4820 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.918121 4820 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.918132 4820 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.918142 4820 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.918151 4820 feature_gate.go:330] unrecognized feature gate: Example
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.918163 4820 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.918175 4820 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.918186 4820 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.918196 4820 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.918207 4820 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.918220 4820 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
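The flags.go:64 entries above dump every command-line flag with its effective value. A small sketch (names illustrative) that collects them into a dict, e.g. to diff the command line against the --config file:

```python
import re

# Sketch: gather the 'FLAG: --name="value"' dump above into a dict.
FLAG = re.compile(r'flags\.go:\d+\] FLAG: (--[\w-]+)="(.*)"')

def collect_flags(lines):
    flags = {}
    for line in lines:
        m = FLAG.search(line)
        if m:
            flags[m.group(1)] = m.group(2)
    return flags

sample = ['... flags.go:64] FLAG: --node-ip="192.168.126.11"',
          '... flags.go:64] FLAG: --max-pods="110"']
print(collect_flags(sample))  # {'--node-ip': '192.168.126.11', '--max-pods': '110'}
```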
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.918234 4820 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.918246 4820 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.918259 4820 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.918271 4820 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.918283 4820 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.918293 4820 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.918304 4820 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.918316 4820 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.918327 4820 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.918339 4820 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.918350 4820 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.918361 4820 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.918372 4820 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.918386 4820 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.918400 4820 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.918411 4820 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.918423 4820 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 03 12:04:42 crc kubenswrapper[4820]: I0203 12:04:42.918457 4820 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Feb 03 12:04:42 crc kubenswrapper[4820]: I0203 12:04:42.930612 4820 server.go:491] "Kubelet version" kubeletVersion="v1.31.5"
Feb 03 12:04:42 crc kubenswrapper[4820]: I0203 12:04:42.930671 4820 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.930808 4820 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.930829 4820 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.930839 4820 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.930848 4820 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.930857 4820 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.930865 4820 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.930874 4820 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.930882 4820 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.930912 4820 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.930921 4820 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.930929 4820 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.930938 4820 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.930946 4820 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.930954 4820 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.930962 4820 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.930970 4820 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.930978 4820 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.930986 4820 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.930994 4820 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.931002 4820 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.931011 4820 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.931019 4820 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.931028 4820 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.931039 4820 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.931053 4820 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.931063 4820 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.931074 4820 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.931083 4820 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.931092 4820 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.931102 4820 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.931110 4820 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.931120 4820 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.931128 4820 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.931137 4820 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.931147 4820 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.931155 4820 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.931164 4820 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.931172 4820 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.931181 4820 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.931190 4820 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.931200 4820 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.931210 4820 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
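The feature_gate.go:386 summary above shows the resolved gate map once all overrides are applied, printed in Go's map[...] notation. A sketch for turning that line into a Python dict (assuming gate names never contain colons, which holds for the names in this log):

```python
import re

# Sketch: parse the Go-formatted 'feature gates: {map[Name:bool ...]}'
# summary line above into a Python dict of gate name -> enabled.
def parse_gate_map(line):
    body = re.search(r"map\[(.*?)\]", line).group(1)
    return {name: val == "true"
            for name, val in (pair.split(":") for pair in body.split())}

line = ("feature gates: "
        "{map[CloudDualStackNodeIPs:true KMSv1:true NodeSwap:false]}")
print(parse_gate_map(line))
```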
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.931221 4820 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.931230 4820 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.931238 4820 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.931246 4820 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.931254 4820 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.931262 4820 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.931269 4820 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.931277 4820 feature_gate.go:330] unrecognized feature gate: Example
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.931285 4820 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.931293 4820 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.931301 4820 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.931311 4820 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.931321 4820 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.931331 4820 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.931339 4820 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.931347 4820 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.931355 4820 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.931362 4820 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.931370 4820 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.931378 4820 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.931386 4820 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.931394 4820 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.931402 4820 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.931409 4820 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.931417 4820 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.931425 4820 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.931433 4820 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.931440 4820 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.931449 4820 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 03 12:04:42 crc kubenswrapper[4820]: I0203 12:04:42.931462 4820 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.931727 4820 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.931742 4820 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.931754 4820 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.931764 4820 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.931773 4820 feature_gate.go:330] unrecognized feature gate: Example
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.931784 4820 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.931794 4820 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.931803 4820 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.931812 4820 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.931820 4820 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.931829 4820 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.931838 4820 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.931846 4820 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.931854 4820 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.931862 4820 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.931871 4820 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.931879 4820 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.931923 4820 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.931931 4820 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.931939 4820 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.931947 4820 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.931956 4820 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.931963 4820 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.931973 4820 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.931983 4820 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.931992 4820 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.932001 4820 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.932009 4820 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.932017 4820 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.932026 4820 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.932035 4820 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.932043 4820 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.932051 4820 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.932059 4820 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.932068 4820 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.932076 4820 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.932084 4820 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.932092 4820 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.932099 4820 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.932107 4820 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.932115 4820 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.932122 4820 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.932131 4820 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.932138 4820 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.932146 4820 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.932154 4820 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.932161 4820 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.932169 4820 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.932177 4820 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.932184 4820 feature_gate.go:330] unrecognized feature gate: 
InsightsRuntimeExtractor Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.932192 4820 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.932200 4820 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.932207 4820 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.932215 4820 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.932222 4820 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.932230 4820 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.932238 4820 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.932245 4820 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.932253 4820 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.932260 4820 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.932268 4820 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.932276 4820 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.932283 4820 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.932290 4820 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.932298 4820 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.932306 4820 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.932316 4820 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.932325 4820 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.932334 4820 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.932342 4820 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 03 12:04:42 crc kubenswrapper[4820]: W0203 12:04:42.932354 4820 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 03 12:04:42 crc kubenswrapper[4820]: I0203 12:04:42.932367 4820 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 03 12:04:42 crc kubenswrapper[4820]: I0203 12:04:42.932674 4820 server.go:940] "Client rotation is on, will bootstrap in background" Feb 03 12:04:42 crc kubenswrapper[4820]: I0203 12:04:42.940142 4820 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Feb 03 12:04:42 crc kubenswrapper[4820]: I0203 12:04:42.940323 4820 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 03 12:04:42 crc kubenswrapper[4820]: I0203 12:04:42.944706 4820 server.go:997] "Starting client certificate rotation" Feb 03 12:04:42 crc kubenswrapper[4820]: I0203 12:04:42.944772 4820 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Feb 03 12:04:42 crc kubenswrapper[4820]: I0203 12:04:42.945166 4820 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-12-12 03:59:15.633681078 +0000 UTC Feb 03 12:04:42 crc kubenswrapper[4820]: I0203 12:04:42.945307 4820 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 03 12:04:42 crc kubenswrapper[4820]: I0203 12:04:42.977789 4820 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 03 12:04:42 crc kubenswrapper[4820]: E0203 12:04:42.980298 4820 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Feb 03 12:04:42 crc kubenswrapper[4820]: I0203 12:04:42.980883 4820 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 03 12:04:42 crc kubenswrapper[4820]: I0203 12:04:42.996918 4820 log.go:25] "Validated CRI v1 runtime API" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.031545 4820 log.go:25] "Validated CRI v1 image API" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.033875 4820 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 
12:04:43.038930 4820 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-02-03-11-59-30-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.038969 4820 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}] Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.054554 4820 manager.go:217] Machine: {Timestamp:2026-02-03 12:04:43.052750664 +0000 UTC m=+0.575826548 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2799998 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:a4221fcb-5776-4539-8cb5-9da3bff4d7a8 BootID:83c4bcff-fd36-4e8a-96f0-3320ea01106a Filesystems:[{Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:3f:f9:00 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:3f:f9:00 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:49:24:aa Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:c1:1a:b7 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:fd:8e:a7 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:31:47:77 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:8a:a5:68:82:58:7b Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:ca:9b:cd:cf:20:a4 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction 
Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.054921 4820 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. 
Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.055153 4820 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.056010 4820 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.056267 4820 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.056325 4820 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.056601 4820 topology_manager.go:138] "Creating topology manager with none policy"
Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.056616 4820 container_manager_linux.go:303] "Creating device plugin manager"
Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.057091 4820 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.057136 4820 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.057348 4820 state_mem.go:36] "Initialized new in-memory state store"
Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.057463 4820 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.061156 4820 kubelet.go:418] "Attempting to sync node with API server"
Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.061196 4820 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.061228 4820 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.061249 4820 kubelet.go:324] "Adding apiserver pod source"
Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.061266 4820 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.065130 4820 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1"
Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.066322 4820 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.068782 4820 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Feb 03 12:04:43 crc kubenswrapper[4820]: W0203 12:04:43.068909 4820 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.147:6443: connect: connection refused
Feb 03 12:04:43 crc kubenswrapper[4820]: W0203 12:04:43.068924 4820 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.147:6443: connect: connection refused
Feb 03 12:04:43 crc kubenswrapper[4820]: E0203 12:04:43.069016 4820 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError"
Feb 03 12:04:43 crc kubenswrapper[4820]: E0203 12:04:43.069058 4820 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError"
Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.070433 4820 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.070526 4820 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.070593 4820 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.070651 4820 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.070712 4820 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.070762 4820 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.070811 4820 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.070905 4820 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.070966 4820 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.071019 4820 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.071089 4820 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.071147 4820 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.073112 4820 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.073940 4820 server.go:1280] "Started kubelet"
Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.074592 4820 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.074838 4820 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.075330 4820 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Feb 03 12:04:43 crc systemd[1]: Started Kubernetes Kubelet.
Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.077062 4820 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.147:6443: connect: connection refused
Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.081962 4820 server.go:460] "Adding debug handlers to kubelet server"
Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.082538 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.082722 4820 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.082874 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 12:41:35.263734357 +0000 UTC
Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.085084 4820 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Feb 03 12:04:43 crc kubenswrapper[4820]: E0203 12:04:43.087534 4820 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.090002 4820 volume_manager.go:287] "The desired_state_of_world populator starts"
Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.090065 4820 volume_manager.go:289] "Starting Kubelet Volume Manager"
Feb 03 12:04:43 crc kubenswrapper[4820]: W0203 12:04:43.090156 4820 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.147:6443: connect: connection refused
Feb 03 12:04:43 crc kubenswrapper[4820]: E0203 12:04:43.090258 4820 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError"
Feb 03 12:04:43 crc kubenswrapper[4820]: E0203 12:04:43.090390 4820 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.147:6443: connect: connection refused" interval="200ms"
Feb 03 12:04:43 crc kubenswrapper[4820]: E0203 12:04:43.090133 4820 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.147:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.1890bb04da5573be default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-03 12:04:43.073901502 +0000 UTC m=+0.596977366,LastTimestamp:2026-02-03 12:04:43.073901502 +0000 UTC m=+0.596977366,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.091735 4820 factory.go:153] Registering CRI-O factory
Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.091766 4820 factory.go:221] Registration of the crio container factory successfully
Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.091827 4820 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.091835 4820 factory.go:55] Registering systemd factory
Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.091842 4820 factory.go:221] Registration of the systemd container factory successfully
Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.091862 4820 factory.go:103] Registering Raw factory
Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.091874 4820 manager.go:1196] Started watching for new ooms in manager
Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.092493 4820 manager.go:319] Starting recovery of all containers
Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.098210 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext=""
Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.098280 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext=""
Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.098299 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext=""
Feb 03 12:04:43 crc kubenswrapper[4820]:
I0203 12:04:43.098315 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.098326 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.098339 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.098352 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.098364 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.098378 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.098390 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.098401 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.098414 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.098426 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.098440 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.098451 4820 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.098462 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.098476 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.098490 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.098505 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.098518 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.098534 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.098550 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.098567 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.098582 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.098596 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.098618 4820 reconstruct.go:130] 
"Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.098639 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.098655 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.098689 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.098709 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.098724 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.098738 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.098753 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.098769 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.098784 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.098800 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.098816 4820 reconstruct.go:130] "Volume is 
marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.098834 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.098855 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.098870 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.098908 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.098922 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.098936 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.098950 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.098963 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.098978 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.098991 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.099006 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.099025 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.099039 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.099053 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.099066 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.099084 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.099101 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.099115 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.099148 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.099162 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.099175 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.099190 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" 
volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.099203 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.099215 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.099229 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.099243 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.099257 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.099272 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.099289 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.099304 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.099320 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.099334 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.099347 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" 
volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.099360 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.099374 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.099388 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.099401 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.099413 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.099427 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.099439 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.099451 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.099463 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.099475 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.099488 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" 
volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.099501 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.099514 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.100190 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.100203 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.100217 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.100229 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.100241 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.100253 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.100265 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.100276 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.100296 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" 
volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.100314 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.100326 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.100339 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.100351 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.100364 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.100375 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.100394 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.100406 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.100419 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.100434 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.100445 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" 
volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.100457 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.100473 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.100486 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.100500 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.100513 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.100526 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.100541 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.100553 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.100573 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.100586 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.100599 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" 
volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.100615 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.100632 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.100644 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.100656 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.100667 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.100679 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.100691 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.100702 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.100714 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.100726 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.100745 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" 
volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.100761 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.100773 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.100790 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.100804 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.100820 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.100832 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.100850 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.100864 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.100876 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.100914 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.100927 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" 
volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.100939 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.100952 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.100970 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.100984 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.100996 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.101013 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.101025 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.101042 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.101054 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.101067 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.101079 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" 
volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.101090 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.101103 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.101138 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.101152 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.101163 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.101175 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.101216 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.101228 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.103283 4820 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.103342 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.103378 4820 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.103396 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.103410 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.103430 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.103444 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.103454 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.103465 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.103489 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.103500 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.103511 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.103522 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.103539 4820 reconstruct.go:130] "Volume is marked as uncertain and added 
into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.103550 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.103561 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.103572 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.103582 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.103593 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.103605 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.103643 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.103658 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.103669 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.103679 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.103689 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" 
pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.103702 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.103716 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.103734 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.103748 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.103761 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.103773 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.103809 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.103829 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.103846 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.103861 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.103878 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.103913 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.103931 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.103950 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.103968 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.103980 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.104003 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.104019 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.104040 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.104058 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.104075 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.104087 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" 
volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.104096 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.104107 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.104116 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.104131 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.104146 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.104156 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.104167 4820 reconstruct.go:97] "Volume reconstruction finished" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.104176 4820 reconciler.go:26] "Reconciler: start to sync state" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.113320 4820 manager.go:324] Recovery completed Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.126265 4820 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.128090 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.128130 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.128141 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.129073 4820 cpu_manager.go:225] "Starting CPU manager" policy="none" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.129108 4820 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.129200 4820 state_mem.go:36] "Initialized new in-memory state store" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.137877 4820 kubelet_network_linux.go:50] 
"Initialized iptables rules." protocol="IPv4" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.139171 4820 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.140664 4820 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 03 12:04:43 crc kubenswrapper[4820]: W0203 12:04:43.141242 4820 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.147:6443: connect: connection refused Feb 03 12:04:43 crc kubenswrapper[4820]: E0203 12:04:43.141292 4820 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.141334 4820 kubelet.go:2335] "Starting kubelet main sync loop" Feb 03 12:04:43 crc kubenswrapper[4820]: E0203 12:04:43.141376 4820 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.149767 4820 policy_none.go:49] "None policy: Start" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.150685 4820 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.150722 4820 state_mem.go:35] "Initializing new in-memory state store" Feb 03 12:04:43 crc kubenswrapper[4820]: E0203 12:04:43.188694 4820 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.193770 4820 manager.go:334] "Starting Device Plugin manager" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.193832 4820 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.193843 4820 server.go:79] "Starting device plugin registration server" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.194251 4820 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.194266 4820 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.194713 4820 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.194869 4820 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.194943 4820 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 03 12:04:43 crc kubenswrapper[4820]: E0203 12:04:43.203570 4820 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.241671 4820 kubelet.go:2421] "SyncLoop ADD" source="file" 
pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"] Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.241743 4820 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.242644 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.242679 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.242692 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.242817 4820 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.243037 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.243078 4820 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.243508 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.243614 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.243677 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.243738 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.243747 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.243727 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.244126 4820 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.244244 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.244275 4820 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.245219 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.245355 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.245257 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.245491 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.245504 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.245461 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.246116 4820 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.246227 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.246260 4820 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.246810 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.246836 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.246848 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.246989 4820 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.247100 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.247132 4820 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.247437 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.247464 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.247477 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.247837 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.247862 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.247873 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.247968 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.247983 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.247990 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.247996 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.248016 4820 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.248590 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.248617 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.248627 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:04:43 crc kubenswrapper[4820]: E0203 12:04:43.292019 4820 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.147:6443: connect: connection refused" interval="400ms" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.296178 4820 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.297325 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.297368 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.297379 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.297406 4820 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 03 12:04:43 crc kubenswrapper[4820]: E0203 12:04:43.298273 4820 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.147:6443: connect: connection refused" node="crc" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.306905 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.306967 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.306995 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.307019 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: 
\"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.307041 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.307061 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.307083 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.307155 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.307208 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.307282 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.307332 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.307354 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.307371 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: 
\"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.307399 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.307426 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.408938 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.408997 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.409023 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.409046 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.409066 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.409083 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.409087 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 03 
12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.409129 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.409130 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.409100 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.409168 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.409174 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.409186 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.409204 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.409206 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.409218 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.409240 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 03 12:04:43 
crc kubenswrapper[4820]: I0203 12:04:43.409248 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.409256 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.409274 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.409278 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.409290 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.409303 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.409311 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.409325 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.409336 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.409348 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 03 12:04:43 crc 
kubenswrapper[4820]: I0203 12:04:43.409359 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.409373 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.409521 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.499131 4820 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.500546 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.500597 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.500606 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.500628 4820 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 03 12:04:43 crc kubenswrapper[4820]: E0203 12:04:43.501424 4820 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.147:6443: connect: connection refused" node="crc" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.567873 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.574774 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.594881 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.617845 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 03 12:04:43 crc kubenswrapper[4820]: W0203 12:04:43.618358 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-eae79a4a9f57d36b6ecdf173cbe1e998870c898917e41bf15f14c50a200cf293 WatchSource:0}: Error finding container eae79a4a9f57d36b6ecdf173cbe1e998870c898917e41bf15f14c50a200cf293: Status 404 returned error can't find the container with id eae79a4a9f57d36b6ecdf173cbe1e998870c898917e41bf15f14c50a200cf293 Feb 03 12:04:43 crc kubenswrapper[4820]: W0203 12:04:43.618939 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-9fdf95b3ff9d4a5d2a21ff7e26ac63b339f1fac1b0a6262ab0b78889df8d71d6 WatchSource:0}: Error finding container 9fdf95b3ff9d4a5d2a21ff7e26ac63b339f1fac1b0a6262ab0b78889df8d71d6: Status 404 returned error can't find the container with id 9fdf95b3ff9d4a5d2a21ff7e26ac63b339f1fac1b0a6262ab0b78889df8d71d6 Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.623067 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 03 12:04:43 crc kubenswrapper[4820]: W0203 12:04:43.632208 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-07961f0afb9a750888aae2b96ae5063df47a4ab0677ad41d7ddc658132472c06 WatchSource:0}: Error finding container 07961f0afb9a750888aae2b96ae5063df47a4ab0677ad41d7ddc658132472c06: Status 404 returned error can't find the container with id 07961f0afb9a750888aae2b96ae5063df47a4ab0677ad41d7ddc658132472c06 Feb 03 12:04:43 crc kubenswrapper[4820]: W0203 12:04:43.635382 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-663c73372a17e8fab38c772cf551de54c09bad8b5f5d5ae28a72ad7f22ffe688 WatchSource:0}: Error finding container 663c73372a17e8fab38c772cf551de54c09bad8b5f5d5ae28a72ad7f22ffe688: Status 404 returned error can't find the container with id 663c73372a17e8fab38c772cf551de54c09bad8b5f5d5ae28a72ad7f22ffe688 Feb 03 12:04:43 crc kubenswrapper[4820]: E0203 12:04:43.693568 4820 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.147:6443: connect: connection refused" interval="800ms" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.902123 4820 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.903764 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.903825 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.903840 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:04:43 crc kubenswrapper[4820]: I0203 12:04:43.903874 4820 kubelet_node_status.go:76] "Attempting to 
register node" node="crc" Feb 03 12:04:43 crc kubenswrapper[4820]: E0203 12:04:43.904502 4820 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.147:6443: connect: connection refused" node="crc" Feb 03 12:04:43 crc kubenswrapper[4820]: W0203 12:04:43.925037 4820 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.147:6443: connect: connection refused Feb 03 12:04:43 crc kubenswrapper[4820]: E0203 12:04:43.925125 4820 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Feb 03 12:04:44 crc kubenswrapper[4820]: I0203 12:04:44.078696 4820 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.147:6443: connect: connection refused Feb 03 12:04:44 crc kubenswrapper[4820]: I0203 12:04:44.083125 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 16:32:15.208184622 +0000 UTC Feb 03 12:04:44 crc kubenswrapper[4820]: E0203 12:04:44.083164 4820 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.147:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.1890bb04da5573be default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-03 12:04:43.073901502 +0000 UTC m=+0.596977366,LastTimestamp:2026-02-03 12:04:43.073901502 +0000 UTC m=+0.596977366,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 03 12:04:44 crc kubenswrapper[4820]: I0203 12:04:44.145209 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"4c9bf707bfb6f9cf1112fdb72c1503f884c0c1ab42a19bf6d5722f3a3eac5769"} Feb 03 12:04:44 crc kubenswrapper[4820]: I0203 12:04:44.146247 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"9fdf95b3ff9d4a5d2a21ff7e26ac63b339f1fac1b0a6262ab0b78889df8d71d6"} Feb 03 12:04:44 crc kubenswrapper[4820]: I0203 12:04:44.146943 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"eae79a4a9f57d36b6ecdf173cbe1e998870c898917e41bf15f14c50a200cf293"} Feb 03 12:04:44 crc kubenswrapper[4820]: I0203 12:04:44.147654 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" 
event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"663c73372a17e8fab38c772cf551de54c09bad8b5f5d5ae28a72ad7f22ffe688"} Feb 03 12:04:44 crc kubenswrapper[4820]: I0203 12:04:44.148351 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"07961f0afb9a750888aae2b96ae5063df47a4ab0677ad41d7ddc658132472c06"} Feb 03 12:04:44 crc kubenswrapper[4820]: W0203 12:04:44.272705 4820 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.147:6443: connect: connection refused Feb 03 12:04:44 crc kubenswrapper[4820]: E0203 12:04:44.272786 4820 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Feb 03 12:04:44 crc kubenswrapper[4820]: W0203 12:04:44.372915 4820 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.147:6443: connect: connection refused Feb 03 12:04:44 crc kubenswrapper[4820]: E0203 12:04:44.373006 4820 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Feb 03 12:04:44 crc kubenswrapper[4820]: W0203 12:04:44.377748 4820 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.147:6443: connect: connection refused Feb 03 12:04:44 crc kubenswrapper[4820]: E0203 12:04:44.377825 4820 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Feb 03 12:04:44 crc kubenswrapper[4820]: E0203 12:04:44.494773 4820 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.147:6443: connect: connection refused" interval="1.6s" Feb 03 12:04:44 crc kubenswrapper[4820]: I0203 12:04:44.704581 4820 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 03 12:04:44 crc kubenswrapper[4820]: I0203 12:04:44.706267 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:04:44 crc kubenswrapper[4820]: I0203 12:04:44.706312 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:04:44 crc 
kubenswrapper[4820]: I0203 12:04:44.706323 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:04:44 crc kubenswrapper[4820]: I0203 12:04:44.706351 4820 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 03 12:04:44 crc kubenswrapper[4820]: E0203 12:04:44.706809 4820 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.147:6443: connect: connection refused" node="crc" Feb 03 12:04:45 crc kubenswrapper[4820]: I0203 12:04:45.078705 4820 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.147:6443: connect: connection refused Feb 03 12:04:45 crc kubenswrapper[4820]: I0203 12:04:45.084690 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 08:59:18.670269004 +0000 UTC Feb 03 12:04:45 crc kubenswrapper[4820]: I0203 12:04:45.131045 4820 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 03 12:04:45 crc kubenswrapper[4820]: E0203 12:04:45.132234 4820 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Feb 03 12:04:45 crc kubenswrapper[4820]: I0203 12:04:45.152647 4820 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="1fce1459bb4834de28fc3f237906647ea2cbfd0f5dfa72fcdbe5eaadf8d8260a" exitCode=0 Feb 03 12:04:45 crc kubenswrapper[4820]: I0203 12:04:45.152729 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"1fce1459bb4834de28fc3f237906647ea2cbfd0f5dfa72fcdbe5eaadf8d8260a"} Feb 03 12:04:45 crc kubenswrapper[4820]: I0203 12:04:45.152802 4820 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 03 12:04:45 crc kubenswrapper[4820]: I0203 12:04:45.154265 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:04:45 crc kubenswrapper[4820]: I0203 12:04:45.154309 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:04:45 crc kubenswrapper[4820]: I0203 12:04:45.154322 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:04:45 crc kubenswrapper[4820]: I0203 12:04:45.158499 4820 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="8f84ddcad54789a48e27d495da8bf422e2349c09b477421d0323aae520546826" exitCode=0 Feb 03 12:04:45 crc kubenswrapper[4820]: I0203 12:04:45.158700 4820 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 03 12:04:45 crc kubenswrapper[4820]: I0203 12:04:45.159254 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"8f84ddcad54789a48e27d495da8bf422e2349c09b477421d0323aae520546826"} Feb 03 12:04:45 crc kubenswrapper[4820]: I0203 12:04:45.159815 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:04:45 crc kubenswrapper[4820]: I0203 12:04:45.159849 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:04:45 crc kubenswrapper[4820]: I0203 12:04:45.159861 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:04:45 crc kubenswrapper[4820]: I0203 12:04:45.167478 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"a2783e3045fb1f1e8da75e1a83f88bc0f24ce47eab79fb2b31b794c10f201588"} Feb 03 12:04:45 crc kubenswrapper[4820]: I0203 12:04:45.167519 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"bb5d088ef2ec1b90ef2d7af53d61c2789516bf4ef02e9ef06e1dc0c93326dcc9"} Feb 03 12:04:45 crc kubenswrapper[4820]: I0203 12:04:45.167532 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"d070e438f7f207c929ca6d110046fa8d320d1a7d97ef22a0e5e1372ff626729d"} Feb 03 12:04:45 crc kubenswrapper[4820]: I0203 12:04:45.167542 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"06e5365791987f1586b93dfe4db8bfe19415d8c400d6c0d75f312cc0f62f7661"} Feb 03 12:04:45 crc kubenswrapper[4820]: I0203 12:04:45.167546 4820 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 03 12:04:45 crc kubenswrapper[4820]: I0203 12:04:45.168998 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:04:45 crc kubenswrapper[4820]: I0203 12:04:45.169045 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:04:45 crc kubenswrapper[4820]: I0203 12:04:45.169061 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:04:45 crc kubenswrapper[4820]: I0203 12:04:45.169963 4820 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="4b34fc810cf94c235cea364cd2441c73f56597b91b0a19b8e02498db5895a3f9" exitCode=0 Feb 03 12:04:45 crc kubenswrapper[4820]: I0203 12:04:45.170070 4820 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 03 12:04:45 crc kubenswrapper[4820]: I0203 12:04:45.170060 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"4b34fc810cf94c235cea364cd2441c73f56597b91b0a19b8e02498db5895a3f9"} Feb 03 12:04:45 crc kubenswrapper[4820]: I0203 12:04:45.170744 4820 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:04:45 crc kubenswrapper[4820]: I0203 12:04:45.170770 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:04:45 crc kubenswrapper[4820]: I0203 12:04:45.170777 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:04:45 crc kubenswrapper[4820]: I0203 12:04:45.171948 4820 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="9bb0a6b382c36acc46be9258e93be71317c26777d770cde9098402054303947c" exitCode=0 Feb 03 12:04:45 crc kubenswrapper[4820]: I0203 12:04:45.172003 4820 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 03 12:04:45 crc kubenswrapper[4820]: I0203 12:04:45.172012 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"9bb0a6b382c36acc46be9258e93be71317c26777d770cde9098402054303947c"} Feb 03 12:04:45 crc kubenswrapper[4820]: I0203 12:04:45.172734 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:04:45 crc kubenswrapper[4820]: I0203 12:04:45.172771 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:04:45 crc kubenswrapper[4820]: I0203 12:04:45.172786 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:04:45 crc kubenswrapper[4820]: I0203 12:04:45.173324 4820 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 03 12:04:45 crc kubenswrapper[4820]: I0203 12:04:45.174113 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:04:45 crc kubenswrapper[4820]: I0203 12:04:45.174177 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:04:45 crc kubenswrapper[4820]: I0203 12:04:45.174190 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:04:45 crc kubenswrapper[4820]: I0203 12:04:45.847327 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 03 12:04:46 crc kubenswrapper[4820]: W0203 12:04:46.036553 4820 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.147:6443: connect: connection refused Feb 03 12:04:46 crc kubenswrapper[4820]: E0203 12:04:46.036633 4820 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Feb 03 12:04:46 crc kubenswrapper[4820]: I0203 12:04:46.078224 4820 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.147:6443: 
connect: connection refused Feb 03 12:04:46 crc kubenswrapper[4820]: I0203 12:04:46.085350 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 14:53:57.946125141 +0000 UTC Feb 03 12:04:46 crc kubenswrapper[4820]: E0203 12:04:46.095998 4820 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.147:6443: connect: connection refused" interval="3.2s" Feb 03 12:04:46 crc kubenswrapper[4820]: I0203 12:04:46.103045 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 03 12:04:46 crc kubenswrapper[4820]: I0203 12:04:46.181210 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"555f489476174f52e5f5d252e3f3f86f65e1593e641204d863bebdaa80b49bbc"} Feb 03 12:04:46 crc kubenswrapper[4820]: I0203 12:04:46.181282 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"ade7b4944a0da0f0d18c5dd5a92ef407d6beac17985f1429d57672ff7e9beade"} Feb 03 12:04:46 crc kubenswrapper[4820]: I0203 12:04:46.181299 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"47438e7dc8d1eec9a8a061c1b6141e39f4c805dfafd1175c07235f7b9425719b"} Feb 03 12:04:46 crc kubenswrapper[4820]: I0203 12:04:46.181302 4820 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 03 12:04:46 crc kubenswrapper[4820]: I0203 12:04:46.182795 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:04:46 crc kubenswrapper[4820]: I0203 12:04:46.182822 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:04:46 crc kubenswrapper[4820]: I0203 12:04:46.182832 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:04:46 crc kubenswrapper[4820]: I0203 12:04:46.183620 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"530367677cc447e7f7895ffa6509c296dfaac7c630a2e8471b8cce3e6b0baee8"} Feb 03 12:04:46 crc kubenswrapper[4820]: I0203 12:04:46.183647 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"60992833ab89aebd2446f960ab6ae3565f9a17f4e86921d3b2bc68adc9704640"} Feb 03 12:04:46 crc kubenswrapper[4820]: I0203 12:04:46.183656 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"09641bd2ac341aeecdba4211412070d2ee33f42eb670fe75ab88b9a58931c289"} Feb 03 12:04:46 crc kubenswrapper[4820]: I0203 12:04:46.183665 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"74f9a4c19a0ffe2d3fc0a77b57873721a29f0a1f70d578ba2f68bcceb68ccce2"} Feb 03 12:04:46 crc kubenswrapper[4820]: I0203 12:04:46.183673 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"48852428653b274f524077be71e20c4271b365d821b8f3235df2d2b41a9e8af8"} Feb 03 12:04:46 crc kubenswrapper[4820]: I0203 12:04:46.183738 4820 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 03 12:04:46 crc kubenswrapper[4820]: I0203 12:04:46.184340 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:04:46 crc kubenswrapper[4820]: I0203 12:04:46.184360 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:04:46 crc kubenswrapper[4820]: I0203 12:04:46.184367 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:04:46 crc kubenswrapper[4820]: I0203 12:04:46.185463 4820 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="19153b79978b8e8cc53a7ab9cb95a70c9870c6b0babac711bce13809609f6ac5" exitCode=0 Feb 03 12:04:46 crc kubenswrapper[4820]: I0203 12:04:46.185496 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"19153b79978b8e8cc53a7ab9cb95a70c9870c6b0babac711bce13809609f6ac5"} Feb 03 12:04:46 crc kubenswrapper[4820]: I0203 12:04:46.185562 4820 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 03 12:04:46 crc kubenswrapper[4820]: I0203 12:04:46.186087 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:04:46 crc kubenswrapper[4820]: I0203 12:04:46.186103 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:04:46 crc kubenswrapper[4820]: I0203 12:04:46.186111 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:04:46 crc kubenswrapper[4820]: I0203 12:04:46.188154 4820 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 03 12:04:46 crc kubenswrapper[4820]: I0203 12:04:46.188398 4820 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 03 12:04:46 crc kubenswrapper[4820]: I0203 12:04:46.188451 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"6033384e8e6a693d5a19d999075b790954463fc62cc0367026e5d2d9f6eb0919"} Feb 03 12:04:46 crc kubenswrapper[4820]: I0203 12:04:46.188859 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:04:46 crc kubenswrapper[4820]: I0203 12:04:46.188878 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:04:46 crc kubenswrapper[4820]: I0203 12:04:46.188901 4820 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasSufficientPID" Feb 03 12:04:46 crc kubenswrapper[4820]: I0203 12:04:46.189086 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:04:46 crc kubenswrapper[4820]: I0203 12:04:46.189112 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:04:46 crc kubenswrapper[4820]: I0203 12:04:46.189123 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:04:46 crc kubenswrapper[4820]: I0203 12:04:46.307092 4820 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 03 12:04:46 crc kubenswrapper[4820]: I0203 12:04:46.308134 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:04:46 crc kubenswrapper[4820]: I0203 12:04:46.308172 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:04:46 crc kubenswrapper[4820]: I0203 12:04:46.308180 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:04:46 crc kubenswrapper[4820]: I0203 12:04:46.308203 4820 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 03 12:04:46 crc kubenswrapper[4820]: E0203 12:04:46.308594 4820 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.147:6443: connect: connection refused" node="crc" Feb 03 12:04:46 crc kubenswrapper[4820]: W0203 12:04:46.391324 4820 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.147:6443: connect: connection refused Feb 03 12:04:46 crc kubenswrapper[4820]: E0203 12:04:46.391390 4820 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Feb 03 12:04:47 crc kubenswrapper[4820]: I0203 12:04:47.085701 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 14:58:46.602200156 +0000 UTC Feb 03 12:04:47 crc kubenswrapper[4820]: I0203 12:04:47.193527 4820 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="6e75ec4662cf007f64da3ecaaff0cb4c45aa88746ff717403092583356953526" exitCode=0 Feb 03 12:04:47 crc kubenswrapper[4820]: I0203 12:04:47.193631 4820 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 03 12:04:47 crc kubenswrapper[4820]: I0203 12:04:47.193667 4820 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 03 12:04:47 crc kubenswrapper[4820]: I0203 12:04:47.193689 4820 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 03 12:04:47 crc kubenswrapper[4820]: I0203 12:04:47.193739 4820 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 03 12:04:47 crc kubenswrapper[4820]: I0203 12:04:47.193761 4820 
kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 03 12:04:47 crc kubenswrapper[4820]: I0203 12:04:47.193786 4820 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 03 12:04:47 crc kubenswrapper[4820]: I0203 12:04:47.193687 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"6e75ec4662cf007f64da3ecaaff0cb4c45aa88746ff717403092583356953526"} Feb 03 12:04:47 crc kubenswrapper[4820]: I0203 12:04:47.193744 4820 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 03 12:04:47 crc kubenswrapper[4820]: I0203 12:04:47.194993 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:04:47 crc kubenswrapper[4820]: I0203 12:04:47.195029 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:04:47 crc kubenswrapper[4820]: I0203 12:04:47.195039 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:04:47 crc kubenswrapper[4820]: I0203 12:04:47.195087 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:04:47 crc kubenswrapper[4820]: I0203 12:04:47.195110 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:04:47 crc kubenswrapper[4820]: I0203 12:04:47.195123 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:04:47 crc kubenswrapper[4820]: I0203 12:04:47.195146 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:04:47 crc kubenswrapper[4820]: I0203 12:04:47.195157 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:04:47 crc kubenswrapper[4820]: I0203 12:04:47.195195 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:04:47 crc kubenswrapper[4820]: I0203 12:04:47.195267 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:04:47 crc kubenswrapper[4820]: I0203 12:04:47.195307 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:04:47 crc kubenswrapper[4820]: I0203 12:04:47.195328 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:04:47 crc kubenswrapper[4820]: I0203 12:04:47.195337 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:04:47 crc kubenswrapper[4820]: I0203 12:04:47.195362 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:04:47 crc kubenswrapper[4820]: I0203 12:04:47.195373 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:04:48 crc kubenswrapper[4820]: I0203 12:04:48.086953 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 09:04:49.94787966 +0000 UTC Feb 03 
12:04:48 crc kubenswrapper[4820]: I0203 12:04:48.201447 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"75e1f7cf1c2120c222793819705a3b465a387c2aaf33fe146fd46d9c76520aea"} Feb 03 12:04:48 crc kubenswrapper[4820]: I0203 12:04:48.201522 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"3a4c1209e1301b6e55cd4ea8bb06678ee4c7657c163a610f0cc2dd25f2cb3e61"} Feb 03 12:04:48 crc kubenswrapper[4820]: I0203 12:04:48.201551 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"fec48b5ddf675fb07150ebb88962534caead99f26403cf8ab51c17e2c6c09bf9"} Feb 03 12:04:48 crc kubenswrapper[4820]: I0203 12:04:48.201570 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"0d6f3a37b440a1db10c04386289033a0deab673f71efbdfba81597baf6ce5e86"} Feb 03 12:04:48 crc kubenswrapper[4820]: I0203 12:04:48.201587 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"12446f64d4189185cf6fe19d38637e090482ef8da5fb5e68600cc00ba37bc2a0"} Feb 03 12:04:48 crc kubenswrapper[4820]: I0203 12:04:48.202114 4820 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 03 12:04:48 crc kubenswrapper[4820]: I0203 12:04:48.203410 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:04:48 crc kubenswrapper[4820]: I0203 12:04:48.203509 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:04:48 crc kubenswrapper[4820]: I0203 12:04:48.203527 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:04:48 crc kubenswrapper[4820]: I0203 12:04:48.311254 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Feb 03 12:04:49 crc kubenswrapper[4820]: I0203 12:04:49.059052 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 03 12:04:49 crc kubenswrapper[4820]: I0203 12:04:49.059377 4820 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 03 12:04:49 crc kubenswrapper[4820]: I0203 12:04:49.059455 4820 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 03 12:04:49 crc kubenswrapper[4820]: I0203 12:04:49.060830 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:04:49 crc kubenswrapper[4820]: I0203 12:04:49.060954 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:04:49 crc kubenswrapper[4820]: I0203 12:04:49.060981 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:04:49 crc kubenswrapper[4820]: I0203 12:04:49.087753 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 
04:02:37.845206563 +0000 UTC Feb 03 12:04:49 crc kubenswrapper[4820]: I0203 12:04:49.103203 4820 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 03 12:04:49 crc kubenswrapper[4820]: I0203 12:04:49.103283 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 03 12:04:49 crc kubenswrapper[4820]: I0203 12:04:49.203553 4820 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 03 12:04:49 crc kubenswrapper[4820]: I0203 12:04:49.204520 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:04:49 crc kubenswrapper[4820]: I0203 12:04:49.204555 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:04:49 crc kubenswrapper[4820]: I0203 12:04:49.204566 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:04:49 crc kubenswrapper[4820]: I0203 12:04:49.421355 4820 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 03 12:04:49 crc kubenswrapper[4820]: I0203 12:04:49.508773 4820 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 03 12:04:49 crc kubenswrapper[4820]: I0203 12:04:49.510321 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:04:49 crc kubenswrapper[4820]: I0203 12:04:49.510359 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:04:49 crc kubenswrapper[4820]: I0203 12:04:49.510367 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:04:49 crc kubenswrapper[4820]: I0203 12:04:49.510389 4820 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 03 12:04:49 crc kubenswrapper[4820]: I0203 12:04:49.697270 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 03 12:04:49 crc kubenswrapper[4820]: I0203 12:04:49.697501 4820 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 03 12:04:49 crc kubenswrapper[4820]: I0203 12:04:49.698752 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:04:49 crc kubenswrapper[4820]: I0203 12:04:49.698796 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:04:49 crc kubenswrapper[4820]: I0203 12:04:49.698808 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:04:49 crc kubenswrapper[4820]: I0203 12:04:49.704786 4820 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 03 12:04:49 crc kubenswrapper[4820]: I0203 12:04:49.772231 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 03 12:04:49 crc kubenswrapper[4820]: I0203 12:04:49.772393 4820 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 03 12:04:49 crc kubenswrapper[4820]: I0203 12:04:49.772440 4820 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 03 12:04:49 crc kubenswrapper[4820]: I0203 12:04:49.773663 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:04:49 crc kubenswrapper[4820]: I0203 12:04:49.773747 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:04:49 crc kubenswrapper[4820]: I0203 12:04:49.773764 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:04:50 crc kubenswrapper[4820]: I0203 12:04:50.088074 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 16:46:12.226936694 +0000 UTC Feb 03 12:04:50 crc kubenswrapper[4820]: I0203 12:04:50.205851 4820 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 03 12:04:50 crc kubenswrapper[4820]: I0203 12:04:50.205851 4820 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 03 12:04:50 crc kubenswrapper[4820]: I0203 12:04:50.207422 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:04:50 crc kubenswrapper[4820]: I0203 12:04:50.207438 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:04:50 crc kubenswrapper[4820]: I0203 12:04:50.207484 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:04:50 crc kubenswrapper[4820]: I0203 12:04:50.207501 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:04:50 crc kubenswrapper[4820]: I0203 12:04:50.207516 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:04:50 crc kubenswrapper[4820]: I0203 12:04:50.207520 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:04:50 crc kubenswrapper[4820]: I0203 12:04:50.439498 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 03 12:04:50 crc kubenswrapper[4820]: I0203 12:04:50.439702 4820 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 03 12:04:50 crc kubenswrapper[4820]: I0203 12:04:50.440694 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:04:50 crc kubenswrapper[4820]: I0203 12:04:50.440723 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:04:50 crc kubenswrapper[4820]: I0203 12:04:50.440739 4820 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Feb 03 12:04:51 crc kubenswrapper[4820]: I0203 12:04:51.089095 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 09:49:45.701223146 +0000 UTC Feb 03 12:04:51 crc kubenswrapper[4820]: I0203 12:04:51.757841 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 03 12:04:51 crc kubenswrapper[4820]: I0203 12:04:51.758457 4820 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 03 12:04:51 crc kubenswrapper[4820]: I0203 12:04:51.760723 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:04:51 crc kubenswrapper[4820]: I0203 12:04:51.760814 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:04:51 crc kubenswrapper[4820]: I0203 12:04:51.760840 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:04:52 crc kubenswrapper[4820]: I0203 12:04:52.090212 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 10:29:12.985427259 +0000 UTC Feb 03 12:04:52 crc kubenswrapper[4820]: I0203 12:04:52.249984 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 03 12:04:52 crc kubenswrapper[4820]: I0203 12:04:52.250155 4820 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 03 12:04:52 crc kubenswrapper[4820]: I0203 12:04:52.251426 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:04:52 crc kubenswrapper[4820]: I0203 12:04:52.251472 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:04:52 crc kubenswrapper[4820]: I0203 12:04:52.251490 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:04:52 crc kubenswrapper[4820]: I0203 12:04:52.320570 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Feb 03 12:04:52 crc kubenswrapper[4820]: I0203 12:04:52.320747 4820 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 03 12:04:52 crc kubenswrapper[4820]: I0203 12:04:52.321879 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:04:52 crc kubenswrapper[4820]: I0203 12:04:52.321945 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:04:52 crc kubenswrapper[4820]: I0203 12:04:52.321958 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:04:53 crc kubenswrapper[4820]: I0203 12:04:53.091365 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 18:53:53.158184888 +0000 UTC Feb 03 12:04:53 crc kubenswrapper[4820]: E0203 12:04:53.203714 4820 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" 
not found" Feb 03 12:04:54 crc kubenswrapper[4820]: I0203 12:04:54.091931 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 23:32:55.59450903 +0000 UTC Feb 03 12:04:55 crc kubenswrapper[4820]: I0203 12:04:55.092875 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 17:59:31.897873475 +0000 UTC Feb 03 12:04:55 crc kubenswrapper[4820]: I0203 12:04:55.855278 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 03 12:04:55 crc kubenswrapper[4820]: I0203 12:04:55.855389 4820 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 03 12:04:55 crc kubenswrapper[4820]: I0203 12:04:55.856423 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:04:55 crc kubenswrapper[4820]: I0203 12:04:55.856459 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:04:55 crc kubenswrapper[4820]: I0203 12:04:55.856470 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:04:56 crc kubenswrapper[4820]: I0203 12:04:56.093614 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 12:50:26.840357554 +0000 UTC Feb 03 12:04:56 crc kubenswrapper[4820]: W0203 12:04:56.899307 4820 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout Feb 03 12:04:56 crc kubenswrapper[4820]: I0203 12:04:56.899409 4820 trace.go:236] Trace[916117771]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (03-Feb-2026 12:04:46.898) (total time: 10001ms): Feb 03 12:04:56 crc kubenswrapper[4820]: Trace[916117771]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (12:04:56.899) Feb 03 12:04:56 crc kubenswrapper[4820]: Trace[916117771]: [10.001190067s] [10.001190067s] END Feb 03 12:04:56 crc kubenswrapper[4820]: E0203 12:04:56.899440 4820 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Feb 03 12:04:56 crc kubenswrapper[4820]: W0203 12:04:56.936051 4820 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout Feb 03 12:04:56 crc kubenswrapper[4820]: I0203 12:04:56.936159 4820 trace.go:236] Trace[720533519]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (03-Feb-2026 12:04:46.934) (total time: 10001ms): Feb 03 12:04:56 crc kubenswrapper[4820]: Trace[720533519]: ---"Objects listed" error:Get 
"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (12:04:56.936) Feb 03 12:04:56 crc kubenswrapper[4820]: Trace[720533519]: [10.001325154s] [10.001325154s] END Feb 03 12:04:56 crc kubenswrapper[4820]: E0203 12:04:56.936186 4820 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Feb 03 12:04:57 crc kubenswrapper[4820]: I0203 12:04:57.078958 4820 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Feb 03 12:04:57 crc kubenswrapper[4820]: I0203 12:04:57.094709 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 19:39:53.958266159 +0000 UTC Feb 03 12:04:57 crc kubenswrapper[4820]: I0203 12:04:57.191419 4820 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Feb 03 12:04:57 crc kubenswrapper[4820]: I0203 12:04:57.191764 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Feb 03 12:04:57 crc kubenswrapper[4820]: I0203 12:04:57.197865 4820 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Feb 03 12:04:57 crc kubenswrapper[4820]: I0203 12:04:57.197962 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Feb 03 12:04:57 crc kubenswrapper[4820]: I0203 12:04:57.225080 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 03 12:04:57 crc kubenswrapper[4820]: I0203 12:04:57.228134 4820 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="530367677cc447e7f7895ffa6509c296dfaac7c630a2e8471b8cce3e6b0baee8" exitCode=255 Feb 03 12:04:57 crc kubenswrapper[4820]: I0203 12:04:57.228226 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"530367677cc447e7f7895ffa6509c296dfaac7c630a2e8471b8cce3e6b0baee8"} Feb 03 
12:04:57 crc kubenswrapper[4820]: I0203 12:04:57.228419 4820 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 03 12:04:57 crc kubenswrapper[4820]: I0203 12:04:57.229201 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:04:57 crc kubenswrapper[4820]: I0203 12:04:57.229310 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:04:57 crc kubenswrapper[4820]: I0203 12:04:57.229376 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:04:57 crc kubenswrapper[4820]: I0203 12:04:57.229881 4820 scope.go:117] "RemoveContainer" containerID="530367677cc447e7f7895ffa6509c296dfaac7c630a2e8471b8cce3e6b0baee8" Feb 03 12:04:58 crc kubenswrapper[4820]: I0203 12:04:58.095812 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 22:03:13.468826333 +0000 UTC Feb 03 12:04:58 crc kubenswrapper[4820]: I0203 12:04:58.232429 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 03 12:04:58 crc kubenswrapper[4820]: I0203 12:04:58.234246 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"38554e01896341101d88815c0d8cd45a0114da19790e02a53b5ba3a91ee70e37"} Feb 03 12:04:58 crc kubenswrapper[4820]: I0203 12:04:58.234517 4820 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 03 12:04:58 crc kubenswrapper[4820]: I0203 12:04:58.235689 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:04:58 crc kubenswrapper[4820]: I0203 12:04:58.235750 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:04:58 crc kubenswrapper[4820]: I0203 12:04:58.235773 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:04:59 crc kubenswrapper[4820]: I0203 12:04:59.096180 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 14:17:56.165076987 +0000 UTC Feb 03 12:04:59 crc kubenswrapper[4820]: I0203 12:04:59.104424 4820 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 03 12:04:59 crc kubenswrapper[4820]: I0203 12:04:59.104481 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 03 12:04:59 crc kubenswrapper[4820]: I0203 12:04:59.778569 4820 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 03 12:04:59 crc kubenswrapper[4820]: I0203 12:04:59.778771 4820 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 03 12:04:59 crc kubenswrapper[4820]: I0203 12:04:59.779053 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 03 12:04:59 crc kubenswrapper[4820]: I0203 12:04:59.780028 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:04:59 crc kubenswrapper[4820]: I0203 12:04:59.780181 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:04:59 crc kubenswrapper[4820]: I0203 12:04:59.780308 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:04:59 crc kubenswrapper[4820]: I0203 12:04:59.784013 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 03 12:05:00 crc kubenswrapper[4820]: I0203 12:05:00.097111 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 08:53:16.649964856 +0000 UTC Feb 03 12:05:00 crc kubenswrapper[4820]: I0203 12:05:00.238641 4820 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 03 12:05:00 crc kubenswrapper[4820]: I0203 12:05:00.239853 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:00 crc kubenswrapper[4820]: I0203 12:05:00.239940 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:00 crc kubenswrapper[4820]: I0203 12:05:00.239953 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:00 crc kubenswrapper[4820]: I0203 12:05:00.827401 4820 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 03 12:05:01 crc kubenswrapper[4820]: I0203 12:05:01.025169 4820 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 03 12:05:01 crc kubenswrapper[4820]: I0203 12:05:01.098136 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 23:20:38.120166695 +0000 UTC Feb 03 12:05:01 crc kubenswrapper[4820]: I0203 12:05:01.241324 4820 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 03 12:05:01 crc kubenswrapper[4820]: I0203 12:05:01.242269 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:01 crc kubenswrapper[4820]: I0203 12:05:01.242331 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:01 crc kubenswrapper[4820]: I0203 12:05:01.242341 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:02 crc kubenswrapper[4820]: I0203 12:05:02.099082 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 
20:58:51.032129678 +0000 UTC Feb 03 12:05:02 crc kubenswrapper[4820]: E0203 12:05:02.195119 4820 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s" Feb 03 12:05:02 crc kubenswrapper[4820]: I0203 12:05:02.196658 4820 trace.go:236] Trace[1720029157]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (03-Feb-2026 12:04:50.094) (total time: 12102ms): Feb 03 12:05:02 crc kubenswrapper[4820]: Trace[1720029157]: ---"Objects listed" error: 12102ms (12:05:02.196) Feb 03 12:05:02 crc kubenswrapper[4820]: Trace[1720029157]: [12.102099789s] [12.102099789s] END Feb 03 12:05:02 crc kubenswrapper[4820]: I0203 12:05:02.196688 4820 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 03 12:05:02 crc kubenswrapper[4820]: I0203 12:05:02.197335 4820 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 03 12:05:02 crc kubenswrapper[4820]: I0203 12:05:02.197824 4820 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Feb 03 12:05:02 crc kubenswrapper[4820]: E0203 12:05:02.199625 4820 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Feb 03 12:05:02 crc kubenswrapper[4820]: I0203 12:05:02.212105 4820 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Feb 03 12:05:02 crc kubenswrapper[4820]: I0203 12:05:02.347707 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Feb 03 12:05:02 crc kubenswrapper[4820]: I0203 12:05:02.367597 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.074363 4820 apiserver.go:52] "Watching apiserver" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.077098 4820 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.077344 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-etcd/etcd-crc","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf"] Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.077668 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.077798 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.077807 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.077681 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.077822 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.077906 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 03 12:05:03 crc kubenswrapper[4820]: E0203 12:05:03.077990 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 12:05:03 crc kubenswrapper[4820]: E0203 12:05:03.078214 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 12:05:03 crc kubenswrapper[4820]: E0203 12:05:03.078241 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.080284 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.080389 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.080410 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.080544 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.080558 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.080612 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.080838 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.081113 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.082865 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.088150 4820 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.099337 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 21:47:58.545099094 +0000 UTC Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.103929 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.103961 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.103983 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.104001 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.104017 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.104033 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.104049 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.104065 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.104083 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.104100 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.104119 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.104136 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.104154 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.104169 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.104184 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.104199 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.104214 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.104232 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.104247 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.104263 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.104278 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.104301 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.104318 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.104336 4820 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.104352 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.104370 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.104385 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.104400 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.104417 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.104436 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.104452 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.104468 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.104485 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: 
\"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.104501 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.104519 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.104537 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.104552 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.104567 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.104582 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.104598 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.104613 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.104627 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.104642 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod 
\"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.104657 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.104672 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.104709 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.104729 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.104749 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.104771 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.104795 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.104819 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.104841 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.104862 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.104939 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.104963 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.104986 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.105010 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.105033 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.105056 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.105077 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.104462 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.105102 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.104643 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.105102 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.104661 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.104712 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.104810 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.104793 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.104880 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.104916 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.105068 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.105083 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.105126 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.105290 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.105309 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.105321 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.105350 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.105389 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.105423 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.105464 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.105499 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.105531 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.105568 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.105598 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.105633 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.105665 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" 
(UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.105699 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.105735 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.105768 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.105803 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.105910 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.105956 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.105994 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.106028 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.106067 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.106102 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: 
\"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.106135 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.106204 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.106238 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.106269 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.106301 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.106335 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.106368 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.106406 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.106438 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.106473 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.106505 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.106545 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.106579 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.106608 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.106644 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.106678 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.106713 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.106748 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.106783 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.106816 4820 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.106869 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.106925 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.106959 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.106991 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.107027 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.107077 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.107111 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.107144 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.107178 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.107209 4820 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.107244 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.107276 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.107308 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.107351 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.107384 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.107418 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.107450 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.107485 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.107524 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.107563 4820 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.107597 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.107630 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.107665 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.107694 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.107728 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.107761 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.107822 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.107856 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.107946 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.107986 4820 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.108357 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.108399 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.108434 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.108469 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.108503 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.108540 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.108587 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.108623 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.108659 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.108713 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.108753 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.108791 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.108828 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.108865 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.108924 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.109043 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.109091 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.109126 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.109163 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.109198 4820 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.109233 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.109273 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.109309 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.109343 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.109377 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.109412 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.109447 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.109482 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.109518 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 03 12:05:03 
crc kubenswrapper[4820]: I0203 12:05:03.109551 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.109586 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.109620 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.109655 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.109689 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.109722 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.109757 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.109800 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.109834 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.109865 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") 
" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.109923 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.109956 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.109988 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.110018 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.110057 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.110090 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.110129 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.110167 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.110203 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.110237 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: 
\"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.110274 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.110311 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.110353 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.110389 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.105426 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.105512 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.110713 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.105638 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.105696 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.105746 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.105757 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.105916 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.105917 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.106333 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.106432 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.106448 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). 
InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.106482 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.106682 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.106699 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.106695 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.106706 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.106784 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.106831 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.107097 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.107340 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.107363 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.107368 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.107388 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.107401 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.107725 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.107794 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.107986 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.107991 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.108631 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.109107 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.109323 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.109410 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.109843 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.110076 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.110094 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.110097 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.110229 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.110997 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.110417 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.110591 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.110648 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.110667 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.110945 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.110959 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.111006 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.111192 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.111241 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.111277 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.111311 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.111346 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.111381 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.111418 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.111452 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.111489 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.111573 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: 
\"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.111619 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.111659 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.111699 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.111735 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.111775 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.111857 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.111916 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.111955 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.111996 4820 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.112033 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.112067 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.112103 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.112136 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.112221 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.112244 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.112266 4820 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.112285 4820 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.112305 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.112324 4820 reconciler_common.go:293] "Volume detached for volume \"service-ca\" 
(UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.112344 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.112364 4820 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.112384 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.112405 4820 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.112423 4820 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.112442 4820 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.112462 4820 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.112482 4820 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.112502 4820 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.112528 4820 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.112547 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.112565 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.112580 4820 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.112593 4820 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.112607 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.112623 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.112641 4820 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.112659 4820 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.112676 4820 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.112717 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.112737 4820 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.112755 4820 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.112771 4820 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.112783 4820 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.112797 4820 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 
12:05:03.112810 4820 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.112823 4820 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.112837 4820 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.112851 4820 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.112865 4820 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.112878 4820 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.112925 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.112945 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.112963 4820 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.112981 4820 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.112999 4820 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.113016 4820 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.113032 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:03 crc kubenswrapper[4820]: 
I0203 12:05:03.113051 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.113068 4820 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.113086 4820 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.113104 4820 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.113123 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.113143 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.113161 4820 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.113178 4820 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.113198 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.113216 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.118697 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a4e0669-c148-4af8-99eb-1de50da6d574\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d6f3a37b440a1db10c04386289033a0deab673f71efbdfba81597baf6ce5e86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fec48b5ddf675fb07150ebb88962534caead99f26403cf8ab51c17e2c6c09bf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a4c1209e1301b6e55cd4ea8bb06678ee4c7657c163a610f0cc2dd25f2cb3e61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75e1f7cf1c2120c222793819705a3b465a387c2
aaf33fe146fd46d9c76520aea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12446f64d4189185cf6fe19d38637e090482ef8da5fb5e68600cc00ba37bc2a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bb0a6b382c36acc46be9258e93be71317c26777d770cde9098402054303947c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bb0a6b382c36acc46be9258e93be71317c26777d770cde9098402054303947c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:04:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19153b79978b8e8cc53a7ab9cb95a70c9870c6b0babac711bce13809609f6ac5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19153b79978b8e8cc53a7ab9cb95a70c9870c6b0babac711bce13809609f6ac5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e75ec4662cf007f64da3ecaaff0cb4c45aa88746ff717403092583356953526\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e75ec4662cf007f64da3ecaaff0cb4c45aa88746ff717403092583356953526\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:04:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:04:43Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.111240 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.111290 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.111445 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.111481 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.111776 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.111986 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.112259 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.112339 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.112405 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.112526 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.112796 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.112839 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.113097 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.113171 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.113433 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.113471 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.113536 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.113605 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.113843 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.125599 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.113881 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.113919 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.114074 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.114341 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.114376 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.114484 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.114594 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:05:03 crc kubenswrapper[4820]: E0203 12:05:03.114649 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 12:05:03.614629118 +0000 UTC m=+21.137704982 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.114845 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.115339 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.115963 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.115966 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.116117 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). 
InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.116132 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.116587 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.116782 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.117054 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.117087 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.117102 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.117486 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.117537 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.117977 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.118324 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.118792 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.119161 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.119369 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.119407 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.119469 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.119479 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). 
InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.119960 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.120057 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.120239 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.120330 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.119869 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.120644 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.120775 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.120866 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). 
InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.120867 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.120911 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.121028 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.121198 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.121224 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.121506 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.121621 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.121976 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.122428 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.122700 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.122928 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.123087 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.123277 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.123295 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.123466 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.123483 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). 
InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.123506 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.123780 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.123950 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.123991 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.124271 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.124311 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.124410 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.124438 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). 
InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.123501 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.124521 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.124645 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.124649 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.124670 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.124685 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.124792 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.124826 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.124909 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.124930 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.124933 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.125063 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.125144 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.125279 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.125407 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.126176 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.126338 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.126397 4820 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.126535 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:05:03 crc kubenswrapper[4820]: E0203 12:05:03.126593 4820 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.126606 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:05:03 crc kubenswrapper[4820]: E0203 12:05:03.126651 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-03 12:05:03.626634453 +0000 UTC m=+21.149710317 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 03 12:05:03 crc kubenswrapper[4820]: E0203 12:05:03.126922 4820 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.126885 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 03 12:05:03 crc kubenswrapper[4820]: E0203 12:05:03.127247 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-03 12:05:03.627227618 +0000 UTC m=+21.150303552 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.127988 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.128651 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.128827 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.128832 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.129377 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.130111 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.130146 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.130334 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.130437 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.132906 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.133171 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.134951 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.135362 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.135609 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.135612 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.136140 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.136344 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.137343 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.138696 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.138777 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.139264 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.139281 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.139663 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:05:03 crc kubenswrapper[4820]: E0203 12:05:03.140513 4820 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 03 12:05:03 crc kubenswrapper[4820]: E0203 12:05:03.140534 4820 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.140525 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:05:03 crc kubenswrapper[4820]: E0203 12:05:03.140546 4820 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 03 12:05:03 crc kubenswrapper[4820]: E0203 12:05:03.140632 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-03 12:05:03.640615584 +0000 UTC m=+21.163691448 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.140760 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.141104 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.142875 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.143735 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 03 12:05:03 crc kubenswrapper[4820]: E0203 12:05:03.147836 4820 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 03 12:05:03 crc kubenswrapper[4820]: E0203 12:05:03.147864 4820 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 03 12:05:03 crc kubenswrapper[4820]: E0203 12:05:03.147876 4820 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 03 12:05:03 crc kubenswrapper[4820]: E0203 12:05:03.147948 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-03 12:05:03.647930472 +0000 UTC m=+21.171006326 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.149344 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.150217 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.151027 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.151654 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.153043 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.153162 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.153217 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.153522 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.153523 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.153581 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.154221 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.154335 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.154453 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.154921 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.154840 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a4e0669-c148-4af8-99eb-1de50da6d574\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d6f3a37b440a1db10c04386289033a0deab673f71efbdfba81597baf6ce5e86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fec48b5ddf675fb07150ebb88962534caead99f26403cf8ab51c17e2c6c09bf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a4c1209e1301b6e55cd4ea8bb06678ee4c7657c163a610f0cc2dd25f2cb3e61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779
036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75e1f7cf1c2120c222793819705a3b465a387c2aaf33fe146fd46d9c76520aea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12446f64d4189185cf6fe19d38637e090482ef8da5fb5e68600cc00ba37bc2a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bb0a6b382c36acc46be9258e93be71317c26777d770cde9098402054303947c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bb0a6b382c36acc46be9258e93be71317c26777d770cde9098402054303947c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:04:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19153b79978b8e8cc53a7ab9cb95a70c9870c6b0babac711bce13809609f6ac5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"
name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19153b79978b8e8cc53a7ab9cb95a70c9870c6b0babac711bce13809609f6ac5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e75ec4662cf007f64da3ecaaff0cb4c45aa88746ff717403092583356953526\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e75ec4662cf007f64da3ecaaff0cb4c45aa88746ff717403092583356953526\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:04:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:04:43Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.156200 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.158232 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.159947 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.162790 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.163002 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.163981 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.166074 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.167002 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.168922 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.169509 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.170781 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.172096 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.172509 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.172674 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.173322 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.173946 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.175076 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.175657 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.176763 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.177216 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.177779 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.179257 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.179907 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.179998 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.181253 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.181937 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.182456 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.183726 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.184329 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.184503 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.185416 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.185978 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.186993 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.187512 4820 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.187844 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.190045 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.190544 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.191066 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.191807 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.192841 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.194086 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.194842 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.195944 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.196832 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.197984 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.198720 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.199986 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Feb 03 12:05:03 crc 
kubenswrapper[4820]: I0203 12:05:03.200783 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.202061 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.202626 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.203673 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.203691 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.204594 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.205692 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.206338 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.207327 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.207941 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.208570 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.209558 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.213771 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.213883 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.213957 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.214082 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.214125 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.214149 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.214292 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" 
(UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.214349 4820 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.214409 4820 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.214480 4820 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.214503 4820 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.214524 4820 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.214539 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.214555 4820 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.214566 4820 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.214577 4820 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.214587 4820 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.214599 4820 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.214643 4820 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.214664 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: 
\"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.214676 4820 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.214685 4820 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.214710 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.214721 4820 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.214731 4820 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.214739 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.214749 4820 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.214757 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.214788 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.214798 4820 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.214826 4820 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.214872 4820 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.214883 4820 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.214912 4820 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.214921 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.214931 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.214940 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.214949 4820 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.214959 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.214984 4820 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.214993 4820 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.215002 4820 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.215010 4820 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.215019 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.215028 4820 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.215037 4820 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.215061 4820 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.215072 4820 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.215081 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.215090 4820 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.215098 4820 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.215106 4820 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.215114 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.215139 4820 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.215148 4820 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.215157 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.215168 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.215177 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.215186 4820 reconciler_common.go:293] "Volume detached for volume 
\"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.215194 4820 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.215219 4820 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.215229 4820 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.215243 4820 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.215255 4820 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.215266 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.215311 4820 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.215323 4820 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.215332 4820 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.215341 4820 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.215349 4820 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.215357 4820 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.215383 4820 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.215391 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.215400 4820 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.215409 4820 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.215423 4820 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.215463 4820 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.215473 4820 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.215484 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.215493 4820 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.215500 4820 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.215508 4820 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.215518 4820 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.215544 4820 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.215553 4820 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: 
\"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.215562 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.215570 4820 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.215578 4820 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.215585 4820 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.215594 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.215619 4820 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.215628 4820 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.215635 4820 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.215644 4820 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.215652 4820 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.215660 4820 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.215667 4820 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.215676 4820 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: 
\"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.215700 4820 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.215709 4820 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.215717 4820 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.215724 4820 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.215732 4820 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.215740 4820 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.215748 4820 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.215758 4820 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.215782 4820 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.215790 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.215799 4820 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.215811 4820 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.215824 4820 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:03 crc 
kubenswrapper[4820]: I0203 12:05:03.215835 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.215865 4820 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.215876 4820 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.215912 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.215928 4820 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.215944 4820 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.215955 4820 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.215967 4820 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.216005 4820 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.216016 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.216024 4820 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.216033 4820 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.216041 4820 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:03 crc 
kubenswrapper[4820]: I0203 12:05:03.216049 4820 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.216057 4820 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.216083 4820 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.216092 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.216101 4820 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.216117 4820 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.216125 4820 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.216133 4820 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.216158 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.216167 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.216176 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.216185 4820 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.216194 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node 
\"crc\" DevicePath \"\"" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.216202 4820 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.216211 4820 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.216239 4820 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.216248 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.216257 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.216266 4820 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.216274 4820 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.216282 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.222139 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.235133 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a4e0669-c148-4af8-99eb-1de50da6d574\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d6f3a37b440a1db10c04386289033a0deab673f71efbdfba81597baf6ce5e86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fec48b5ddf675fb07150ebb88962
534caead99f26403cf8ab51c17e2c6c09bf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a4c1209e1301b6e55cd4ea8bb06678ee4c7657c163a610f0cc2dd25f2cb3e61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75e1f7cf1c2120c222793819705a3b465a387c2aaf33fe146fd46d9c76520aea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12446f64d4189185cf6fe19d38637e090482ef8da5fb5e68600cc00ba37bc2a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bb0a6b382c36acc46be9258e93be71317c26777d770cde9098402054303947c\\\",\\\"image\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bb0a6b382c36acc46be9258e93be71317c26777d770cde9098402054303947c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:04:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19153b79978b8e8cc53a7ab9cb95a70c9870c6b0babac711bce13809609f6ac5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19153b79978b8e8cc53a7ab9cb95a70c9870c6b0babac711bce13809609f6ac5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e75ec4662cf007f64da3ecaaff0cb4c45aa88746ff717403092583356953526\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e75ec4662cf007f64da3ecaaff0cb4c45aa88746ff717403092583356953526\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:04:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:04:43Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.242282 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.246427 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.246847 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.249262 4820 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="38554e01896341101d88815c0d8cd45a0114da19790e02a53b5ba3a91ee70e37" exitCode=255 Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.249300 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"38554e01896341101d88815c0d8cd45a0114da19790e02a53b5ba3a91ee70e37"} Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.249371 4820 scope.go:117] "RemoveContainer" containerID="530367677cc447e7f7895ffa6509c296dfaac7c630a2e8471b8cce3e6b0baee8" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.249734 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 03 12:05:03 crc kubenswrapper[4820]: E0203 12:05:03.260522 4820 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"etcd-crc\" already exists" pod="openshift-etcd/etcd-crc" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.263186 4820 scope.go:117] "RemoveContainer" containerID="38554e01896341101d88815c0d8cd45a0114da19790e02a53b5ba3a91ee70e37" Feb 03 12:05:03 crc kubenswrapper[4820]: E0203 12:05:03.263351 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.263732 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.264484 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.276181 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.292417 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.303158 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.311427 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.341091 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a4e0669-c148-4af8-99eb-1de50da6d574\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d6f3a37b440a1db10c04386289033a0deab673f71efbdfba81597baf6ce5e86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fec48b5ddf675fb07150ebb88962534caead99f26403cf8ab51c17e2c6c09bf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runn
ing\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a4c1209e1301b6e55cd4ea8bb06678ee4c7657c163a610f0cc2dd25f2cb3e61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75e1f7cf1c2120c222793819705a3b465a387c2aaf33fe146fd46d9c76520aea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12446f64d4189185cf6fe19d38637e090482ef8da5fb5e68600cc00ba37bc2a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bb0a6b382c36acc46be9258e93be71317c26777d770cde9098402054303947c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bb0a6b382c36acc46be9258e
93be71317c26777d770cde9098402054303947c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:04:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19153b79978b8e8cc53a7ab9cb95a70c9870c6b0babac711bce13809609f6ac5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19153b79978b8e8cc53a7ab9cb95a70c9870c6b0babac711bce13809609f6ac5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e75ec4662cf007f64da3ecaaff0cb4c45aa88746ff717403092583356953526\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e75ec4662cf007f64da3ecaaff0cb4c45aa88746ff717403092583356953526\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:04:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:04:43Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.354756 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"message\\\":\\\"containers with 
unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.365500 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.376657 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.385567 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.394803 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.398347 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.407768 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d140e30-6304-49be-a1a3-2d6b23f9aef3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48852428653b274f524077be71e20c4271b365d821b8f3235df2d2b41a9e8af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09641bd2ac341aeecdba4211412070d2ee33f42eb670fe75ab88b9a58931c289\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74f9a4c19a0ffe2d3fc0a77b57873721a29f0a1f70d578ba2f68bcceb68ccce2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-ap
iserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38554e01896341101d88815c0d8cd45a0114da19790e02a53b5ba3a91ee70e37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://530367677cc447e7f7895ffa6509c296dfaac7c630a2e8471b8cce3e6b0baee8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T12:04:57Z\\\",\\\"message\\\":\\\"W0203 12:04:46.135542 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0203 12:04:46.135981 1 crypto.go:601] Generating new CA for check-endpoints-signer@1770120286 cert, and key in /tmp/serving-cert-2841571088/serving-signer.crt, /tmp/serving-cert-2841571088/serving-signer.key\\\\nI0203 12:04:46.578551 1 observer_polling.go:159] Starting file observer\\\\nW0203 12:04:46.581182 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0203 12:04:46.583483 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 12:04:46.586106 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2841571088/tls.crt::/tmp/serving-cert-2841571088/tls.key\\\\\\\"\\\\nF0203 12:04:57.040532 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38554e01896341101d88815c0d8cd45a0114da19790e02a53b5ba3a91ee70e37\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"message\\\":\\\"le observer\\\\nW0203 12:05:02.811560 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0203 12:05:02.811703 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 12:05:02.814694 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3622958325/tls.crt::/tmp/serving-cert-3622958325/tls.key\\\\\\\"\\\\nI0203 12:05:03.097815 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0203 12:05:03.106823 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" 
len=400\\\\nI0203 12:05:03.106852 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0203 12:05:03.106874 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0203 12:05:03.106880 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0203 12:05:03.113994 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0203 12:05:03.114027 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0203 12:05:03.114019 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0203 12:05:03.114034 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 12:05:03.114043 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0203 12:05:03.114053 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0203 12:05:03.114057 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0203 12:05:03.114061 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0203 12:05:03.116858 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60992833ab89aebd2446f960ab6ae3565f9a17f4e86921d3b2bc68adc9704640\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b34fc810cf94c235cea364cd2441c73f56597b91b0a19b8e02498db5895a3f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b34fc810cf94c235cea364cd2441c73f56597b91b0a19b8e02498db5895a3f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:04:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:04:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 03 12:05:03 crc kubenswrapper[4820]: W0203 12:05:03.408821 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a5e44f_9a88_4405_be8a_b645485e7312.slice/crio-519264de9b4c2adf323a918a6c014cfbaae088f34e8996f5b7b352613cf550b9 WatchSource:0}: Error finding container 519264de9b4c2adf323a918a6c014cfbaae088f34e8996f5b7b352613cf550b9: Status 404 returned error can't find the container with id 519264de9b4c2adf323a918a6c014cfbaae088f34e8996f5b7b352613cf550b9 Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.411221 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 03 12:05:03 crc kubenswrapper[4820]: W0203 12:05:03.425172 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-7e9ba22aab290e3583c30dc57a25362a667812695978c18cff8b0dc4d9603daa WatchSource:0}: Error finding container 7e9ba22aab290e3583c30dc57a25362a667812695978c18cff8b0dc4d9603daa: Status 404 returned error can't find the container with id 7e9ba22aab290e3583c30dc57a25362a667812695978c18cff8b0dc4d9603daa Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.428559 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 03 12:05:03 crc kubenswrapper[4820]: W0203 12:05:03.441316 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-3c831fdd53641fd248f91333b638946d67f888dd899457b32eea03d245a035e4 WatchSource:0}: Error finding container 3c831fdd53641fd248f91333b638946d67f888dd899457b32eea03d245a035e4: Status 404 returned error can't find the container with id 3c831fdd53641fd248f91333b638946d67f888dd899457b32eea03d245a035e4 Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.588761 4820 csr.go:261] certificate signing request csr-nq5kt is approved, waiting to be issued Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.607025 4820 csr.go:257] certificate signing request csr-nq5kt is issued Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.620307 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 12:05:03 crc kubenswrapper[4820]: E0203 12:05:03.620527 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 12:05:04.620497111 +0000 UTC m=+22.143572985 (durationBeforeRetry 1s). 
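
The "Failed to process watch event ... Status 404" warnings above come from cAdvisor racing CRI-O at container creation: the cgroup path carries both the pod UID (with dashes mapped to underscores) and the container ID, and the container may not be queryable yet when the event is handled. A short sketch pulling both identifiers out of one of the paths from the log; the regexp is illustrative, not the kubelet's own parser:

package main

import (
	"fmt"
	"regexp"
)

// Matches the tail of a kubepods cgroup path as seen in the warnings above:
// the pod UID and the CRI-O container ID.
var cgroupRe = regexp.MustCompile(`kubepods-burstable-pod([0-9a-f_]+)\.slice/crio-([0-9a-f]+)$`)

func main() {
	path := "/kubepods.slice/kubepods-burstable.slice/" +
		"kubepods-burstable-pod37a5e44f_9a88_4405_be8a_b645485e7312.slice/" +
		"crio-519264de9b4c2adf323a918a6c014cfbaae088f34e8996f5b7b352613cf550b9"
	m := cgroupRe.FindStringSubmatch(path)
	if m == nil {
		fmt.Println("no match")
		return
	}
	// This UID belongs to network-operator-58b4c7f79c-55gtf per the PLEG
	// events later in the log.
	fmt.Println("pod UID (cgroup form):", m[1])
	fmt.Println("container ID:", m[2])
}
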
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.721032 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.721082 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.721103 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 12:05:03 crc kubenswrapper[4820]: I0203 12:05:03.721124 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 12:05:03 crc kubenswrapper[4820]: E0203 12:05:03.721214 4820 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 03 12:05:03 crc kubenswrapper[4820]: E0203 12:05:03.721271 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-03 12:05:04.721254254 +0000 UTC m=+22.244330118 (durationBeforeRetry 1s). 
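
The TearDown failure above is a registration problem rather than a mount problem: the kubelet has only just restarted (m=+22s), and the kubevirt.io.hostpath-provisioner CSI driver has not yet re-registered over the kubelet's plugin-registration socket, so no CSI client exists for the volume. Which drivers have registered can be checked on the node by listing the registration directory; the path below is the kubelet default and an assumption about this host:

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	// Default directory where CSI drivers drop their registration sockets
	// for the kubelet's plugin watcher; assumed unchanged on this host.
	sockets, err := filepath.Glob("/var/lib/kubelet/plugins_registry/*.sock")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if len(sockets) == 0 {
		// Matches the "not found in the list of registered CSI drivers" state.
		fmt.Println("no CSI drivers registered yet")
	}
	for _, s := range sockets {
		fmt.Println("registered:", filepath.Base(s))
	}
}

Once the provisioner pod is running again, a socket named after the driver should appear here and the retry scheduled above (durationBeforeRetry 1s) can succeed.
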
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 03 12:05:03 crc kubenswrapper[4820]: E0203 12:05:03.721644 4820 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 03 12:05:03 crc kubenswrapper[4820]: E0203 12:05:03.721668 4820 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 03 12:05:03 crc kubenswrapper[4820]: E0203 12:05:03.721663 4820 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 03 12:05:03 crc kubenswrapper[4820]: E0203 12:05:03.721761 4820 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 03 12:05:03 crc kubenswrapper[4820]: E0203 12:05:03.721805 4820 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 03 12:05:03 crc kubenswrapper[4820]: E0203 12:05:03.721818 4820 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 03 12:05:03 crc kubenswrapper[4820]: E0203 12:05:03.721774 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-03 12:05:04.721750215 +0000 UTC m=+22.244826119 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 03 12:05:03 crc kubenswrapper[4820]: E0203 12:05:03.721871 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-03 12:05:04.721859838 +0000 UTC m=+22.244935772 (durationBeforeRetry 1s). 
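
The kube-api-access-* mounts failing above are projected volumes the kubelet assembles from the namespace's kube-root-ca.crt configmap, a bound service-account token, the namespace file and, on OpenShift, the openshift-service-ca.crt configmap; "object ... not registered" only means the kubelet's object cache has not re-synced those configmaps since the restart (the later "Caches populated for *v1.ConfigMap" lines mark exactly that happening for openshift-dns). What the projection materializes can be read back from inside any running container; the mount path matches the volumeMounts in the patches above, and the file list is the usual layout rather than something verified on this node:

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	// Standard mount point of the kube-api-access projection, as in the
	// volumeMounts above. service-ca.crt is the OpenShift-specific entry.
	dir := "/var/run/secrets/kubernetes.io/serviceaccount"
	for _, name := range []string{"ca.crt", "namespace", "token", "service-ca.crt"} {
		data, err := os.ReadFile(filepath.Join(dir, name))
		if err != nil {
			fmt.Printf("%-15s missing (%v)\n", name, err)
			continue
		}
		fmt.Printf("%-15s %d bytes\n", name, len(data))
	}
}
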
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 03 12:05:03 crc kubenswrapper[4820]: E0203 12:05:03.721682 4820 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 03 12:05:03 crc kubenswrapper[4820]: E0203 12:05:03.721916 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-03 12:05:04.721908369 +0000 UTC m=+22.244984333 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 03 12:05:04 crc kubenswrapper[4820]: I0203 12:05:04.099580 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 11:06:16.054957125 +0000 UTC Feb 03 12:05:04 crc kubenswrapper[4820]: I0203 12:05:04.252815 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"7e9ba22aab290e3583c30dc57a25362a667812695978c18cff8b0dc4d9603daa"} Feb 03 12:05:04 crc kubenswrapper[4820]: I0203 12:05:04.254343 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"e0a8e1f86d24e8f09d6250cc9003db189b705c0285fbaab40a8f1a4ac4793137"} Feb 03 12:05:04 crc kubenswrapper[4820]: I0203 12:05:04.254397 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"519264de9b4c2adf323a918a6c014cfbaae088f34e8996f5b7b352613cf550b9"} Feb 03 12:05:04 crc kubenswrapper[4820]: I0203 12:05:04.256409 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 03 12:05:04 crc kubenswrapper[4820]: I0203 12:05:04.259004 4820 scope.go:117] "RemoveContainer" containerID="38554e01896341101d88815c0d8cd45a0114da19790e02a53b5ba3a91ee70e37" Feb 03 12:05:04 crc kubenswrapper[4820]: E0203 12:05:04.259168 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 03 12:05:04 crc kubenswrapper[4820]: I0203 12:05:04.263306 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"6bbd1daf12ccc77f2658cac173f36c9af6a1c64072fc6bf13302a460f71f05e3"} Feb 03 12:05:04 crc kubenswrapper[4820]: I0203 12:05:04.263346 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"5236ae101bb18d9bd1b8d8ac09705f96e5fd78d742e5174e88072570a7448ee8"} Feb 03 12:05:04 crc kubenswrapper[4820]: I0203 12:05:04.263358 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"3c831fdd53641fd248f91333b638946d67f888dd899457b32eea03d245a035e4"} Feb 03 12:05:04 crc kubenswrapper[4820]: I0203 12:05:04.311753 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
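
The "back-off 10s restarting failed container" error above is the kubelet's crash-loop backoff for the kube-apiserver-check-endpoints container: the upstream kubelet starts at 10s, doubles the delay on every failed restart, and caps it at 5m (the delay resets once a container has run cleanly for a while). A sketch of that schedule, with the constants taken as the upstream defaults rather than read from this node's configuration:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Upstream kubelet defaults for container crash-loop backoff, assumed
	// unchanged here: start at 10s, double per failed restart, cap at 5m.
	const (
		initial  = 10 * time.Second
		maxDelay = 5 * time.Minute
	)
	delay := initial
	for restart := 1; restart <= 8; restart++ {
		fmt.Printf("restart %d: back-off %v\n", restart, delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}
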
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:04Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:04 crc kubenswrapper[4820]: I0203 12:05:04.335707 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
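
From 12:05:04 the failure mode changes: the webhook endpoint now accepts connections, but its serving certificate expired on 2025-08-24T17:21:41Z, long before the node's clock, so every call now dies in the TLS handshake instead. This is the usual pattern when a cluster VM resumes after its certificates have lapsed; client-go's certificate manager (see the rotation-deadline line above) normally rotates at a jittered 70-90% of a certificate's validity, but only while the cluster is actually running. The verification the client performs reduces to comparing the current time against the certificate's NotBefore/NotAfter window; a stand-alone check against the same endpoint:

package main

import (
	"crypto/tls"
	"fmt"
	"time"
)

func main() {
	// Complete the handshake despite the expired certificate so the
	// validity window itself can be read out.
	conn, err := tls.Dial("tcp", "127.0.0.1:9743",
		&tls.Config{InsecureSkipVerify: true})
	if err != nil {
		fmt.Println("dial failed:", err)
		return
	}
	defer conn.Close()
	cert := conn.ConnectionState().PeerCertificates[0]
	fmt.Println("subject:  ", cert.Subject)
	fmt.Println("notBefore:", cert.NotBefore.Format(time.RFC3339))
	fmt.Println("notAfter: ", cert.NotAfter.Format(time.RFC3339)) // 2025-08-24T17:21:41Z per the log
	fmt.Println("expired:  ", time.Now().After(cert.NotAfter))
}
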
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:04Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:04 crc kubenswrapper[4820]: I0203 12:05:04.355044 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d140e30-6304-49be-a1a3-2d6b23f9aef3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48852428653b274f524077be71e20c4271b365d821b8f3235df2d2b41a9e8af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09641bd2ac341aeecdba4211412070d2ee33f42eb670fe75ab88b9a58931c289\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74f9a4c19a0ffe2d3fc0a77b57873721a29f0a1f70d578ba2f68bcceb68ccce2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38554e01896341101d88815c0d8cd45a0114da19790e02a53b5ba3a91ee70e37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://530367677cc447e7f7895ffa6509c296dfaac7c630a2e8471b8cce3e6b0baee8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T12:04:57Z\\\",\\\"message\\\":\\\"W0203 12:04:46.135542 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0203 
12:04:46.135981 1 crypto.go:601] Generating new CA for check-endpoints-signer@1770120286 cert, and key in /tmp/serving-cert-2841571088/serving-signer.crt, /tmp/serving-cert-2841571088/serving-signer.key\\\\nI0203 12:04:46.578551 1 observer_polling.go:159] Starting file observer\\\\nW0203 12:04:46.581182 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0203 12:04:46.583483 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 12:04:46.586106 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2841571088/tls.crt::/tmp/serving-cert-2841571088/tls.key\\\\\\\"\\\\nF0203 12:04:57.040532 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38554e01896341101d88815c0d8cd45a0114da19790e02a53b5ba3a91ee70e37\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"message\\\":\\\"le observer\\\\nW0203 12:05:02.811560 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0203 12:05:02.811703 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 12:05:02.814694 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3622958325/tls.crt::/tmp/serving-cert-3622958325/tls.key\\\\\\\"\\\\nI0203 12:05:03.097815 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0203 12:05:03.106823 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0203 12:05:03.106852 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0203 12:05:03.106874 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0203 12:05:03.106880 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0203 12:05:03.113994 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0203 12:05:03.114027 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0203 12:05:03.114019 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0203 12:05:03.114034 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 12:05:03.114043 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0203 12:05:03.114053 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0203 12:05:03.114057 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0203 12:05:03.114061 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0203 12:05:03.116858 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60992833ab89aebd2446f960ab6ae3565f9a17f4e86921d3b2bc68adc9704640\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b34fc810cf94c235cea364cd2441c73f56597b91b0a19b8e02498db5895a3f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b34fc810cf94c235cea364cd2441c73f56597b91b0a19b8e02498db5895a3f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:04:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:04:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:04Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:04 crc kubenswrapper[4820]: I0203 12:05:04.370137 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0a8e1f86d24e8f09d6250cc9003db189b705c0285fbaab40a8f1a4ac4793137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:04Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:04 crc kubenswrapper[4820]: I0203 12:05:04.388338 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:04Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:04 crc kubenswrapper[4820]: I0203 12:05:04.412129 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a4e0669-c148-4af8-99eb-1de50da6d574\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d6f3a37b440a1db10c04386289033a0deab673f71efbdfba81597baf6ce5e86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fec48b5ddf675fb07150ebb88962534caead99f26403cf8ab51c17e2c6c09bf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a4c1209e1301b6e55cd4ea8bb06678ee4c7657c163a610f0cc2dd25f2cb3e61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75e1f7cf1c2120c222793819705a3b465a387c2
aaf33fe146fd46d9c76520aea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12446f64d4189185cf6fe19d38637e090482ef8da5fb5e68600cc00ba37bc2a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bb0a6b382c36acc46be9258e93be71317c26777d770cde9098402054303947c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bb0a6b382c36acc46be9258e93be71317c26777d770cde9098402054303947c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:04:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19153b79978b8e8cc53a7ab9cb95a70c9870c6b0babac711bce13809609f6ac5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19153b79978b8e8cc53a7ab9cb95a70c9870c6b0babac711bce13809609f6ac5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e75ec4662cf007f64da3ecaaff0cb4c45aa88746ff717403092583356953526\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e75ec4662cf007f64da3ecaaff0cb4c45aa88746ff717403092583356953526\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:04:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:04:43Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:04Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:04 crc kubenswrapper[4820]: I0203 12:05:04.432622 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:04Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:04 crc kubenswrapper[4820]: I0203 12:05:04.469216 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:04Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:04 crc kubenswrapper[4820]: I0203 12:05:04.478349 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-p5mx8"] Feb 03 12:05:04 crc kubenswrapper[4820]: I0203 12:05:04.478981 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-p5mx8" Feb 03 12:05:04 crc kubenswrapper[4820]: I0203 12:05:04.480585 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Feb 03 12:05:04 crc kubenswrapper[4820]: I0203 12:05:04.481736 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 03 12:05:04 crc kubenswrapper[4820]: I0203 12:05:04.483769 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 03 12:05:04 crc kubenswrapper[4820]: I0203 12:05:04.496060 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a4e0669-c148-4af8-99eb-1de50da6d574\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d6f3a37b440a1db10c04386289033a0deab673f71efbdfba81597baf6ce5e86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fec48b5ddf675fb07150ebb88962534caead99f26403cf8ab51c17e2c6c09bf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a4c1209e1301b6e55cd4ea8bb06678ee4c7657c163a610f0cc2dd25f2cb3e61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75e1f7cf1c2120c222793819705a3b465a387c2
aaf33fe146fd46d9c76520aea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12446f64d4189185cf6fe19d38637e090482ef8da5fb5e68600cc00ba37bc2a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bb0a6b382c36acc46be9258e93be71317c26777d770cde9098402054303947c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bb0a6b382c36acc46be9258e93be71317c26777d770cde9098402054303947c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:04:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19153b79978b8e8cc53a7ab9cb95a70c9870c6b0babac711bce13809609f6ac5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19153b79978b8e8cc53a7ab9cb95a70c9870c6b0babac711bce13809609f6ac5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e75ec4662cf007f64da3ecaaff0cb4c45aa88746ff717403092583356953526\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e75ec4662cf007f64da3ecaaff0cb4c45aa88746ff717403092583356953526\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:04:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:04:43Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:04Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:04 crc kubenswrapper[4820]: I0203 12:05:04.513662 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:04Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:04 crc kubenswrapper[4820]: I0203 12:05:04.529002 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:04Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:04 crc kubenswrapper[4820]: I0203 12:05:04.529422 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/fe0bc53e-6abb-4194-ae3d-109a4fd80372-hosts-file\") pod \"node-resolver-p5mx8\" (UID: \"fe0bc53e-6abb-4194-ae3d-109a4fd80372\") " pod="openshift-dns/node-resolver-p5mx8" Feb 03 12:05:04 crc kubenswrapper[4820]: I0203 12:05:04.529508 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xrchn\" (UniqueName: \"kubernetes.io/projected/fe0bc53e-6abb-4194-ae3d-109a4fd80372-kube-api-access-xrchn\") pod \"node-resolver-p5mx8\" (UID: \"fe0bc53e-6abb-4194-ae3d-109a4fd80372\") " pod="openshift-dns/node-resolver-p5mx8" Feb 03 12:05:04 crc kubenswrapper[4820]: I0203 12:05:04.543138 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:04Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:04 crc kubenswrapper[4820]: I0203 12:05:04.554730 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:04Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:04 crc kubenswrapper[4820]: I0203 12:05:04.567862 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d140e30-6304-49be-a1a3-2d6b23f9aef3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48852428653b274f524077be71e20c4271b365d821b8f3235df2d2b41a9e8af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09641bd2ac341aeecdba4211412070d2ee33f42eb670fe75ab88b9a58931c289\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74f9a4c19a0ffe2d3fc0a77b57873721a29f0a1f70d578ba2f68bcceb68ccce2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38554e01896341101d88815c0d8cd45a0114da19790e02a53b5ba3a91ee70e37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38554e01896341101d88815c0d8cd45a0114da19790e02a53b5ba3a91ee70e37\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"message\\\":\\\"le observer\\\\nW0203 12:05:02.811560 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0203 12:05:02.811703 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 12:05:02.814694 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3622958325/tls.crt::/tmp/serving-cert-3622958325/tls.key\\\\\\\"\\\\nI0203 12:05:03.097815 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0203 12:05:03.106823 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0203 12:05:03.106852 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0203 12:05:03.106874 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0203 12:05:03.106880 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0203 12:05:03.113994 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0203 12:05:03.114027 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0203 12:05:03.114019 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0203 12:05:03.114034 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 12:05:03.114043 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0203 12:05:03.114053 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0203 12:05:03.114057 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0203 12:05:03.114061 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0203 12:05:03.116858 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60992833ab89aebd2446f960ab6ae3565f9a17f4e86921d3b2bc68adc9704640\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b34fc810cf94c235cea364cd2441c73f56597b91b0a19b8e02498db5895a3f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b34fc810cf94c235cea364cd2441c73f56597b91b0a19b8e02498db5895a3f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:04:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:04:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:04Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:04 crc kubenswrapper[4820]: I0203 12:05:04.583515 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0a8e1f86d24e8f09d6250cc9003db189b705c0285fbaab40a8f1a4ac4793137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:04Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:04 crc kubenswrapper[4820]: I0203 12:05:04.597667 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bbd1daf12ccc77f2658cac173f36c9af6a1c64072fc6bf13302a460f71f05e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5236ae101bb18d9bd1b8d8ac09705f96e5fd78d742e5174e88072570a7448ee8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:04Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:04 crc kubenswrapper[4820]: I0203 12:05:04.608225 4820 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-02-03 12:00:03 +0000 UTC, rotation deadline is 2026-11-05 14:54:48.320913299 +0000 UTC Feb 03 12:05:04 crc kubenswrapper[4820]: I0203 12:05:04.608267 4820 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 6602h49m43.712649992s for next certificate rotation Feb 03 12:05:04 crc 
kubenswrapper[4820]: I0203 12:05:04.619017 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a4e0669-c148-4af8-99eb-1de50da6d574\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d6f3a37b440a1db10c04386289033a0deab673f71efbdfba81597baf6ce5e86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fec48b5ddf675fb07150ebb88962534caead99f26403cf8ab51c17e2c6c09bf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a4c1209e1301b6e55cd4ea8bb06678ee4c7657c163a610f0cc2dd25f2cb3e61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\
\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75e1f7cf1c2120c222793819705a3b465a387c2aaf33fe146fd46d9c76520aea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12446f64d4189185cf6fe19d38637e090482ef8da5fb5e68600cc00ba37bc2a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bb0a6b382c36acc46be9258e93be71317c26777d770cde9098402054303947c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bb0a6b382c36acc46be9258e93be71317c26777d770cde9098402054303947c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:04:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19153b79978b8e8cc53a7ab9cb95a70c9870c6b0babac711bce13809609f6ac5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19153b79978b8e8cc53a7ab9cb95a70c9870c6b0babac711bce13809609f6ac5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e75ec4662cf007f64da3ecaaff0cb4c45aa88746ff717403092583356953526\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e75ec4662cf007f64da3ecaaff0cb4c45aa88746ff717403092583356953526\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:04:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:04:43Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:04Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:04 crc kubenswrapper[4820]: I0203 12:05:04.630111 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 12:05:04 crc kubenswrapper[4820]: I0203 12:05:04.630214 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/fe0bc53e-6abb-4194-ae3d-109a4fd80372-hosts-file\") pod \"node-resolver-p5mx8\" (UID: \"fe0bc53e-6abb-4194-ae3d-109a4fd80372\") " pod="openshift-dns/node-resolver-p5mx8" Feb 03 12:05:04 crc kubenswrapper[4820]: I0203 12:05:04.630300 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/fe0bc53e-6abb-4194-ae3d-109a4fd80372-hosts-file\") pod \"node-resolver-p5mx8\" (UID: \"fe0bc53e-6abb-4194-ae3d-109a4fd80372\") " pod="openshift-dns/node-resolver-p5mx8" Feb 03 12:05:04 crc kubenswrapper[4820]: E0203 12:05:04.630304 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 12:05:06.630277343 +0000 UTC m=+24.153353207 (durationBeforeRetry 2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:05:04 crc kubenswrapper[4820]: I0203 12:05:04.630376 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xrchn\" (UniqueName: \"kubernetes.io/projected/fe0bc53e-6abb-4194-ae3d-109a4fd80372-kube-api-access-xrchn\") pod \"node-resolver-p5mx8\" (UID: \"fe0bc53e-6abb-4194-ae3d-109a4fd80372\") " pod="openshift-dns/node-resolver-p5mx8" Feb 03 12:05:04 crc kubenswrapper[4820]: I0203 12:05:04.630826 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:04Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:04 crc kubenswrapper[4820]: I0203 12:05:04.643270 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:04Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:04 crc kubenswrapper[4820]: I0203 12:05:04.648193 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xrchn\" (UniqueName: \"kubernetes.io/projected/fe0bc53e-6abb-4194-ae3d-109a4fd80372-kube-api-access-xrchn\") pod \"node-resolver-p5mx8\" (UID: \"fe0bc53e-6abb-4194-ae3d-109a4fd80372\") " pod="openshift-dns/node-resolver-p5mx8" Feb 03 12:05:04 crc kubenswrapper[4820]: I0203 12:05:04.657359 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bbd1daf12ccc77f2658cac173f36c9af6a1c64072fc6bf13302a460f71f05e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5236ae101bb18d9bd1b8d8ac09705f96e5fd78d742e5174e88072570a7448ee8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265
a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:04Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:04 crc kubenswrapper[4820]: I0203 12:05:04.671441 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:04Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:04 crc kubenswrapper[4820]: I0203 12:05:04.689833 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:04Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:04 crc kubenswrapper[4820]: I0203 12:05:04.702153 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d140e30-6304-49be-a1a3-2d6b23f9aef3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48852428653b274f524077be71e20c4271b365d821b8f3235df2d2b41a9e8af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09641bd2ac341aeecdba4211412070d2ee33f42eb670fe75ab88b9a58931c289\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74f9a4c19a0ffe2d3fc0a77b57873721a29f0a1f70d578ba2f68bcceb68ccce2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38554e01896341101d88815c0d8cd45a0114da19790e02a53b5ba3a91ee70e37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38554e01896341101d88815c0d8cd45a0114da19790e02a53b5ba3a91ee70e37\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"message\\\":\\\"le observer\\\\nW0203 12:05:02.811560 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0203 12:05:02.811703 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 12:05:02.814694 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3622958325/tls.crt::/tmp/serving-cert-3622958325/tls.key\\\\\\\"\\\\nI0203 12:05:03.097815 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0203 12:05:03.106823 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0203 12:05:03.106852 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0203 12:05:03.106874 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0203 12:05:03.106880 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0203 12:05:03.113994 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0203 12:05:03.114027 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0203 12:05:03.114019 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0203 12:05:03.114034 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 12:05:03.114043 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0203 12:05:03.114053 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0203 12:05:03.114057 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0203 12:05:03.114061 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0203 12:05:03.116858 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60992833ab89aebd2446f960ab6ae3565f9a17f4e86921d3b2bc68adc9704640\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b34fc810cf94c235cea364cd2441c73f56597b91b0a19b8e02498db5895a3f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b34fc810cf94c235cea364cd2441c73f56597b91b0a19b8e02498db5895a3f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:04:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:04:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:04Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:04 crc kubenswrapper[4820]: I0203 12:05:04.714930 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0a8e1f86d24e8f09d6250cc9003db189b705c0285fbaab40a8f1a4ac4793137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:04Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:04 crc kubenswrapper[4820]: I0203 12:05:04.725001 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p5mx8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe0bc53e-6abb-4194-ae3d-109a4fd80372\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrchn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p5mx8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:04Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:04 crc kubenswrapper[4820]: I0203 12:05:04.731245 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 12:05:04 crc kubenswrapper[4820]: I0203 12:05:04.731301 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 12:05:04 crc kubenswrapper[4820]: I0203 12:05:04.731326 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 12:05:04 crc kubenswrapper[4820]: I0203 12:05:04.731344 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 12:05:04 crc kubenswrapper[4820]: E0203 12:05:04.731410 4820 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 03 12:05:04 crc kubenswrapper[4820]: E0203 12:05:04.731428 4820 configmap.go:193] Couldn't get configMap 
openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 03 12:05:04 crc kubenswrapper[4820]: E0203 12:05:04.731465 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-03 12:05:06.731451455 +0000 UTC m=+24.254527319 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 03 12:05:04 crc kubenswrapper[4820]: E0203 12:05:04.731484 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-03 12:05:06.731477906 +0000 UTC m=+24.254553770 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 03 12:05:04 crc kubenswrapper[4820]: E0203 12:05:04.731483 4820 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 03 12:05:04 crc kubenswrapper[4820]: E0203 12:05:04.731511 4820 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 03 12:05:04 crc kubenswrapper[4820]: E0203 12:05:04.731506 4820 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 03 12:05:04 crc kubenswrapper[4820]: E0203 12:05:04.731543 4820 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 03 12:05:04 crc kubenswrapper[4820]: E0203 12:05:04.731554 4820 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 03 12:05:04 crc kubenswrapper[4820]: E0203 12:05:04.731603 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-03 12:05:06.731588338 +0000 UTC m=+24.254664202 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 03 12:05:04 crc kubenswrapper[4820]: E0203 12:05:04.731523 4820 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 03 12:05:04 crc kubenswrapper[4820]: E0203 12:05:04.731640 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-03 12:05:06.73163499 +0000 UTC m=+24.254710844 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 03 12:05:04 crc kubenswrapper[4820]: I0203 12:05:04.789660 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-p5mx8" Feb 03 12:05:04 crc kubenswrapper[4820]: W0203 12:05:04.807590 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfe0bc53e_6abb_4194_ae3d_109a4fd80372.slice/crio-163a1f7f7cbd9ac79315c68a21caa22ad3d7d6d4b9aa68b704811f7cf8fd6c87 WatchSource:0}: Error finding container 163a1f7f7cbd9ac79315c68a21caa22ad3d7d6d4b9aa68b704811f7cf8fd6c87: Status 404 returned error can't find the container with id 163a1f7f7cbd9ac79315c68a21caa22ad3d7d6d4b9aa68b704811f7cf8fd6c87 Feb 03 12:05:04 crc kubenswrapper[4820]: I0203 12:05:04.870448 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-qj7xr"] Feb 03 12:05:04 crc kubenswrapper[4820]: I0203 12:05:04.870752 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" Feb 03 12:05:04 crc kubenswrapper[4820]: I0203 12:05:04.878272 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 03 12:05:04 crc kubenswrapper[4820]: I0203 12:05:04.878321 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 03 12:05:04 crc kubenswrapper[4820]: I0203 12:05:04.878535 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 03 12:05:04 crc kubenswrapper[4820]: I0203 12:05:04.878738 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Feb 03 12:05:04 crc kubenswrapper[4820]: I0203 12:05:04.878881 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 03 12:05:04 crc kubenswrapper[4820]: I0203 12:05:04.899266 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a4e0669-c148-4af8-99eb-1de50da6d574\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d6f3a37b440a1db10c04386289033a0deab673f71efbdfba81597baf6ce5e86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fec48b5ddf675fb07150ebb88962534caead99f26403cf8ab51c17e2c6c09bf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\
\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a4c1209e1301b6e55cd4ea8bb06678ee4c7657c163a610f0cc2dd25f2cb3e61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75e1f7cf1c2120c222793819705a3b465a387c2aaf33fe146fd46d9c76520aea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12446f64d4189185cf6fe19d38637e090482ef8da5fb5e68600cc00ba37bc2a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bb0a6b382c36acc46be9258e93be71317c26777d770cde9098402054303947c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerI
D\\\":\\\"cri-o://9bb0a6b382c36acc46be9258e93be71317c26777d770cde9098402054303947c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:04:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19153b79978b8e8cc53a7ab9cb95a70c9870c6b0babac711bce13809609f6ac5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19153b79978b8e8cc53a7ab9cb95a70c9870c6b0babac711bce13809609f6ac5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e75ec4662cf007f64da3ecaaff0cb4c45aa88746ff717403092583356953526\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e75ec4662cf007f64da3ecaaff0cb4c45aa88746ff717403092583356953526\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:04:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:04:43Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:04Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:04 crc kubenswrapper[4820]: I0203 12:05:04.914012 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:04Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:04 crc kubenswrapper[4820]: I0203 12:05:04.925729 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:04Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:04 crc kubenswrapper[4820]: I0203 12:05:04.932359 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/2c02def6-29f2-448e-80ec-0c8ee290f053-proxy-tls\") pod \"machine-config-daemon-qj7xr\" (UID: \"2c02def6-29f2-448e-80ec-0c8ee290f053\") " pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" Feb 03 12:05:04 crc kubenswrapper[4820]: I0203 12:05:04.932400 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8xjx2\" (UniqueName: \"kubernetes.io/projected/2c02def6-29f2-448e-80ec-0c8ee290f053-kube-api-access-8xjx2\") pod \"machine-config-daemon-qj7xr\" (UID: \"2c02def6-29f2-448e-80ec-0c8ee290f053\") " pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" Feb 03 12:05:04 crc kubenswrapper[4820]: I0203 12:05:04.932426 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2c02def6-29f2-448e-80ec-0c8ee290f053-mcd-auth-proxy-config\") pod \"machine-config-daemon-qj7xr\" (UID: \"2c02def6-29f2-448e-80ec-0c8ee290f053\") " pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" Feb 03 12:05:04 crc kubenswrapper[4820]: I0203 12:05:04.932456 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/2c02def6-29f2-448e-80ec-0c8ee290f053-rootfs\") pod \"machine-config-daemon-qj7xr\" (UID: \"2c02def6-29f2-448e-80ec-0c8ee290f053\") " pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" Feb 03 12:05:04 crc kubenswrapper[4820]: I0203 12:05:04.942024 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d140e30-6304-49be-a1a3-2d6b23f9aef3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48852428653b274f524077be71e20c4271b365d821b8f3235df2d2b41a9e8af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09641bd2ac341aeecdba4211412070d2ee33f42eb670fe75ab88b9a58931c289\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74f9a4c19a0ffe2d3fc0a77b57873721a29f0a1f70d578ba2f68bcceb68ccce2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38554e01896341101d88815c0d8cd45a0114da19790e02a53b5ba3a91ee70e37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38554e01896341101d88815c0d8cd45a0114da19790e02a53b5ba3a91ee70e37\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"message\\\":\\\"le observer\\\\nW0203 12:05:02.811560 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0203 12:05:02.811703 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 12:05:02.814694 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3622958325/tls.crt::/tmp/serving-cert-3622958325/tls.key\\\\\\\"\\\\nI0203 12:05:03.097815 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0203 12:05:03.106823 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0203 12:05:03.106852 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0203 12:05:03.106874 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0203 12:05:03.106880 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0203 12:05:03.113994 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0203 12:05:03.114027 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0203 12:05:03.114019 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0203 12:05:03.114034 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 12:05:03.114043 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0203 12:05:03.114053 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0203 12:05:03.114057 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0203 12:05:03.114061 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0203 12:05:03.116858 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60992833ab89aebd2446f960ab6ae3565f9a17f4e86921d3b2bc68adc9704640\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b34fc810cf94c235cea364cd2441c73f56597b91b0a19b8e02498db5895a3f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b34fc810cf94c235cea364cd2441c73f56597b91b0a19b8e02498db5895a3f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:04:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:04:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:04Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:04 crc kubenswrapper[4820]: I0203 12:05:04.954971 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0a8e1f86d24e8f09d6250cc9003db189b705c0285fbaab40a8f1a4ac4793137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:04Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:04 crc kubenswrapper[4820]: I0203 12:05:04.968062 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bbd1daf12ccc77f2658cac173f36c9af6a1c64072fc6bf13302a460f71f05e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5236ae101bb18d9bd1b8d8ac09705f96e5fd78d742e5174e88072570a7448ee8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:04Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:04 crc kubenswrapper[4820]: I0203 12:05:04.982366 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:04Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:04 crc kubenswrapper[4820]: I0203 12:05:04.996765 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:04Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:05 crc kubenswrapper[4820]: I0203 12:05:05.016782 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p5mx8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe0bc53e-6abb-4194-ae3d-109a4fd80372\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrchn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p5mx8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:05Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:05 crc kubenswrapper[4820]: I0203 12:05:05.028255 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c02def6-29f2-448e-80ec-0c8ee290f053\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xjx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xjx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qj7xr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:05Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:05 crc kubenswrapper[4820]: I0203 12:05:05.033509 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/2c02def6-29f2-448e-80ec-0c8ee290f053-rootfs\") pod \"machine-config-daemon-qj7xr\" (UID: \"2c02def6-29f2-448e-80ec-0c8ee290f053\") " pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" Feb 03 12:05:05 crc kubenswrapper[4820]: I0203 12:05:05.033558 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/2c02def6-29f2-448e-80ec-0c8ee290f053-proxy-tls\") pod \"machine-config-daemon-qj7xr\" (UID: \"2c02def6-29f2-448e-80ec-0c8ee290f053\") " pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" Feb 03 12:05:05 crc kubenswrapper[4820]: I0203 12:05:05.033627 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8xjx2\" (UniqueName: \"kubernetes.io/projected/2c02def6-29f2-448e-80ec-0c8ee290f053-kube-api-access-8xjx2\") pod \"machine-config-daemon-qj7xr\" (UID: \"2c02def6-29f2-448e-80ec-0c8ee290f053\") " pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" Feb 03 12:05:05 crc kubenswrapper[4820]: I0203 12:05:05.033669 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" 
(UniqueName: \"kubernetes.io/configmap/2c02def6-29f2-448e-80ec-0c8ee290f053-mcd-auth-proxy-config\") pod \"machine-config-daemon-qj7xr\" (UID: \"2c02def6-29f2-448e-80ec-0c8ee290f053\") " pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" Feb 03 12:05:05 crc kubenswrapper[4820]: I0203 12:05:05.033700 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/2c02def6-29f2-448e-80ec-0c8ee290f053-rootfs\") pod \"machine-config-daemon-qj7xr\" (UID: \"2c02def6-29f2-448e-80ec-0c8ee290f053\") " pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" Feb 03 12:05:05 crc kubenswrapper[4820]: I0203 12:05:05.034363 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2c02def6-29f2-448e-80ec-0c8ee290f053-mcd-auth-proxy-config\") pod \"machine-config-daemon-qj7xr\" (UID: \"2c02def6-29f2-448e-80ec-0c8ee290f053\") " pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" Feb 03 12:05:05 crc kubenswrapper[4820]: I0203 12:05:05.037484 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/2c02def6-29f2-448e-80ec-0c8ee290f053-proxy-tls\") pod \"machine-config-daemon-qj7xr\" (UID: \"2c02def6-29f2-448e-80ec-0c8ee290f053\") " pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" Feb 03 12:05:05 crc kubenswrapper[4820]: I0203 12:05:05.053912 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8xjx2\" (UniqueName: \"kubernetes.io/projected/2c02def6-29f2-448e-80ec-0c8ee290f053-kube-api-access-8xjx2\") pod \"machine-config-daemon-qj7xr\" (UID: \"2c02def6-29f2-448e-80ec-0c8ee290f053\") " pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" Feb 03 12:05:05 crc kubenswrapper[4820]: I0203 12:05:05.099870 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 11:12:50.730569644 +0000 UTC Feb 03 12:05:05 crc kubenswrapper[4820]: I0203 12:05:05.141948 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 12:05:05 crc kubenswrapper[4820]: I0203 12:05:05.141982 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 12:05:05 crc kubenswrapper[4820]: I0203 12:05:05.141948 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 12:05:05 crc kubenswrapper[4820]: E0203 12:05:05.142067 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 12:05:05 crc kubenswrapper[4820]: E0203 12:05:05.142135 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 12:05:05 crc kubenswrapper[4820]: E0203 12:05:05.142207 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 12:05:05 crc kubenswrapper[4820]: I0203 12:05:05.152238 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Feb 03 12:05:05 crc kubenswrapper[4820]: I0203 12:05:05.184996 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" Feb 03 12:05:05 crc kubenswrapper[4820]: W0203 12:05:05.195534 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2c02def6_29f2_448e_80ec_0c8ee290f053.slice/crio-95752c7e22543054feaa909e25738409895c7347c38db63cb025a1e4a7bdea35 WatchSource:0}: Error finding container 95752c7e22543054feaa909e25738409895c7347c38db63cb025a1e4a7bdea35: Status 404 returned error can't find the container with id 95752c7e22543054feaa909e25738409895c7347c38db63cb025a1e4a7bdea35 Feb 03 12:05:05 crc kubenswrapper[4820]: I0203 12:05:05.234706 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-dkfwm"] Feb 03 12:05:05 crc kubenswrapper[4820]: I0203 12:05:05.235063 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-b5qz9"] Feb 03 12:05:05 crc kubenswrapper[4820]: I0203 12:05:05.235424 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-dkfwm" Feb 03 12:05:05 crc kubenswrapper[4820]: I0203 12:05:05.235818 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-75mwm"] Feb 03 12:05:05 crc kubenswrapper[4820]: I0203 12:05:05.236224 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-b5qz9" Feb 03 12:05:05 crc kubenswrapper[4820]: I0203 12:05:05.236677 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-75mwm" Feb 03 12:05:05 crc kubenswrapper[4820]: I0203 12:05:05.239062 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 03 12:05:05 crc kubenswrapper[4820]: I0203 12:05:05.239137 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Feb 03 12:05:05 crc kubenswrapper[4820]: I0203 12:05:05.239259 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 03 12:05:05 crc kubenswrapper[4820]: I0203 12:05:05.239272 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 03 12:05:05 crc kubenswrapper[4820]: I0203 12:05:05.239296 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 03 12:05:05 crc kubenswrapper[4820]: I0203 12:05:05.239272 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 03 12:05:05 crc kubenswrapper[4820]: I0203 12:05:05.239604 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 03 12:05:05 crc kubenswrapper[4820]: I0203 12:05:05.239653 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Feb 03 12:05:05 crc kubenswrapper[4820]: I0203 12:05:05.239682 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 03 12:05:05 crc kubenswrapper[4820]: I0203 12:05:05.239696 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 03 12:05:05 crc kubenswrapper[4820]: I0203 12:05:05.239917 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Feb 03 12:05:05 crc kubenswrapper[4820]: I0203 12:05:05.239965 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 03 12:05:05 crc kubenswrapper[4820]: I0203 12:05:05.240063 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 03 12:05:05 crc kubenswrapper[4820]: I0203 12:05:05.242934 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 03 12:05:05 crc kubenswrapper[4820]: I0203 12:05:05.261033 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a4e0669-c148-4af8-99eb-1de50da6d574\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d6f3a37b440a1db10c04386289033a0deab673f71efbdfba81597baf6ce5e86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fec48b5ddf675fb07150ebb88962534caead99f26403cf8ab51c17e2c6c09bf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a4c1209e1301b6e55cd4ea8bb06678ee4c7657c163a610f0cc2dd25f2cb3e61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75e1f7cf1c2120c222793819705a3b465a387c2
aaf33fe146fd46d9c76520aea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12446f64d4189185cf6fe19d38637e090482ef8da5fb5e68600cc00ba37bc2a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bb0a6b382c36acc46be9258e93be71317c26777d770cde9098402054303947c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bb0a6b382c36acc46be9258e93be71317c26777d770cde9098402054303947c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:04:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19153b79978b8e8cc53a7ab9cb95a70c9870c6b0babac711bce13809609f6ac5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19153b79978b8e8cc53a7ab9cb95a70c9870c6b0babac711bce13809609f6ac5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e75ec4662cf007f64da3ecaaff0cb4c45aa88746ff717403092583356953526\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e75ec4662cf007f64da3ecaaff0cb4c45aa88746ff717403092583356953526\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:04:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:04:43Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:05Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:05 crc kubenswrapper[4820]: I0203 12:05:05.266910 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" event={"ID":"2c02def6-29f2-448e-80ec-0c8ee290f053","Type":"ContainerStarted","Data":"95752c7e22543054feaa909e25738409895c7347c38db63cb025a1e4a7bdea35"} Feb 03 12:05:05 crc kubenswrapper[4820]: I0203 12:05:05.267852 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-p5mx8" event={"ID":"fe0bc53e-6abb-4194-ae3d-109a4fd80372","Type":"ContainerStarted","Data":"9eca081cbf8145867bf5df6d6bacdb9746e546e54aa81874fbfc2927958bc1b3"} Feb 03 12:05:05 crc kubenswrapper[4820]: I0203 12:05:05.267871 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-p5mx8" event={"ID":"fe0bc53e-6abb-4194-ae3d-109a4fd80372","Type":"ContainerStarted","Data":"163a1f7f7cbd9ac79315c68a21caa22ad3d7d6d4b9aa68b704811f7cf8fd6c87"} Feb 03 12:05:05 crc kubenswrapper[4820]: I0203 12:05:05.275381 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bbd1daf12ccc77f2658cac173f36c9af6a1c64072fc6bf13302a460f71f05e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5236ae101bb18d9bd1b8d8ac09705f96e5fd78d742e5174e88072570a7448ee8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:05Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:05 crc kubenswrapper[4820]: I0203 12:05:05.293236 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:05Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:05 crc kubenswrapper[4820]: I0203 12:05:05.307670 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:05Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:05 crc kubenswrapper[4820]: I0203 12:05:05.319280 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c02def6-29f2-448e-80ec-0c8ee290f053\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xjx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xjx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qj7xr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:05Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:05 crc kubenswrapper[4820]: I0203 12:05:05.336210 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/cf99e305-aa5b-4171-94f6-1e64f20414dd-host-slash\") pod \"ovnkube-node-75mwm\" (UID: \"cf99e305-aa5b-4171-94f6-1e64f20414dd\") " pod="openshift-ovn-kubernetes/ovnkube-node-75mwm" Feb 03 12:05:05 crc kubenswrapper[4820]: I0203 12:05:05.336255 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c6da6dd5-2847-482b-adc1-d82ead0af3e9-host-var-lib-cni-bin\") pod \"multus-dkfwm\" (UID: \"c6da6dd5-2847-482b-adc1-d82ead0af3e9\") " pod="openshift-multus/multus-dkfwm" Feb 03 12:05:05 crc kubenswrapper[4820]: I0203 12:05:05.336277 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/c6da6dd5-2847-482b-adc1-d82ead0af3e9-multus-conf-dir\") pod \"multus-dkfwm\" (UID: \"c6da6dd5-2847-482b-adc1-d82ead0af3e9\") " pod="openshift-multus/multus-dkfwm" Feb 03 12:05:05 crc kubenswrapper[4820]: I0203 12:05:05.336395 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/cf99e305-aa5b-4171-94f6-1e64f20414dd-host-kubelet\") pod \"ovnkube-node-75mwm\" (UID: \"cf99e305-aa5b-4171-94f6-1e64f20414dd\") " pod="openshift-ovn-kubernetes/ovnkube-node-75mwm" Feb 03 12:05:05 crc kubenswrapper[4820]: I0203 12:05:05.336436 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/cf99e305-aa5b-4171-94f6-1e64f20414dd-ovnkube-config\") pod \"ovnkube-node-75mwm\" (UID: \"cf99e305-aa5b-4171-94f6-1e64f20414dd\") " pod="openshift-ovn-kubernetes/ovnkube-node-75mwm" Feb 03 12:05:05 crc kubenswrapper[4820]: I0203 12:05:05.336468 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cf99e305-aa5b-4171-94f6-1e64f20414dd-run-openvswitch\") pod \"ovnkube-node-75mwm\" (UID: \"cf99e305-aa5b-4171-94f6-1e64f20414dd\") " pod="openshift-ovn-kubernetes/ovnkube-node-75mwm" Feb 03 12:05:05 crc kubenswrapper[4820]: I0203 12:05:05.336510 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/c6da6dd5-2847-482b-adc1-d82ead0af3e9-host-run-multus-certs\") pod \"multus-dkfwm\" (UID: \"c6da6dd5-2847-482b-adc1-d82ead0af3e9\") " pod="openshift-multus/multus-dkfwm" Feb 03 12:05:05 crc kubenswrapper[4820]: I0203 12:05:05.336536 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/c6da6dd5-2847-482b-adc1-d82ead0af3e9-os-release\") pod \"multus-dkfwm\" (UID: \"c6da6dd5-2847-482b-adc1-d82ead0af3e9\") " pod="openshift-multus/multus-dkfwm" Feb 03 12:05:05 crc kubenswrapper[4820]: I0203 12:05:05.336551 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/c6da6dd5-2847-482b-adc1-d82ead0af3e9-host-run-k8s-cni-cncf-io\") pod \"multus-dkfwm\" (UID: \"c6da6dd5-2847-482b-adc1-d82ead0af3e9\") " pod="openshift-multus/multus-dkfwm" Feb 03 12:05:05 crc kubenswrapper[4820]: I0203 12:05:05.336568 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cf99e305-aa5b-4171-94f6-1e64f20414dd-var-lib-openvswitch\") pod \"ovnkube-node-75mwm\" (UID: \"cf99e305-aa5b-4171-94f6-1e64f20414dd\") " pod="openshift-ovn-kubernetes/ovnkube-node-75mwm" Feb 03 12:05:05 crc kubenswrapper[4820]: I0203 12:05:05.336584 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nk788\" (UniqueName: \"kubernetes.io/projected/cf99e305-aa5b-4171-94f6-1e64f20414dd-kube-api-access-nk788\") pod \"ovnkube-node-75mwm\" (UID: \"cf99e305-aa5b-4171-94f6-1e64f20414dd\") " pod="openshift-ovn-kubernetes/ovnkube-node-75mwm" Feb 03 12:05:05 crc kubenswrapper[4820]: I0203 12:05:05.336600 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/c6da6dd5-2847-482b-adc1-d82ead0af3e9-multus-daemon-config\") pod \"multus-dkfwm\" (UID: \"c6da6dd5-2847-482b-adc1-d82ead0af3e9\") " pod="openshift-multus/multus-dkfwm" Feb 03 12:05:05 crc kubenswrapper[4820]: I0203 12:05:05.336617 4820 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/d93ec7bc-4029-44a4-894d-03eff1388683-tuning-conf-dir\") pod \"multus-additional-cni-plugins-b5qz9\" (UID: \"d93ec7bc-4029-44a4-894d-03eff1388683\") " pod="openshift-multus/multus-additional-cni-plugins-b5qz9" Feb 03 12:05:05 crc kubenswrapper[4820]: I0203 12:05:05.336633 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/cf99e305-aa5b-4171-94f6-1e64f20414dd-systemd-units\") pod \"ovnkube-node-75mwm\" (UID: \"cf99e305-aa5b-4171-94f6-1e64f20414dd\") " pod="openshift-ovn-kubernetes/ovnkube-node-75mwm" Feb 03 12:05:05 crc kubenswrapper[4820]: I0203 12:05:05.336646 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/cf99e305-aa5b-4171-94f6-1e64f20414dd-env-overrides\") pod \"ovnkube-node-75mwm\" (UID: \"cf99e305-aa5b-4171-94f6-1e64f20414dd\") " pod="openshift-ovn-kubernetes/ovnkube-node-75mwm" Feb 03 12:05:05 crc kubenswrapper[4820]: I0203 12:05:05.336661 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c6da6dd5-2847-482b-adc1-d82ead0af3e9-host-var-lib-kubelet\") pod \"multus-dkfwm\" (UID: \"c6da6dd5-2847-482b-adc1-d82ead0af3e9\") " pod="openshift-multus/multus-dkfwm" Feb 03 12:05:05 crc kubenswrapper[4820]: I0203 12:05:05.336701 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/d93ec7bc-4029-44a4-894d-03eff1388683-os-release\") pod \"multus-additional-cni-plugins-b5qz9\" (UID: \"d93ec7bc-4029-44a4-894d-03eff1388683\") " pod="openshift-multus/multus-additional-cni-plugins-b5qz9" Feb 03 12:05:05 crc kubenswrapper[4820]: I0203 12:05:05.336719 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/d93ec7bc-4029-44a4-894d-03eff1388683-cni-binary-copy\") pod \"multus-additional-cni-plugins-b5qz9\" (UID: \"d93ec7bc-4029-44a4-894d-03eff1388683\") " pod="openshift-multus/multus-additional-cni-plugins-b5qz9" Feb 03 12:05:05 crc kubenswrapper[4820]: I0203 12:05:05.336740 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/c6da6dd5-2847-482b-adc1-d82ead0af3e9-cnibin\") pod \"multus-dkfwm\" (UID: \"c6da6dd5-2847-482b-adc1-d82ead0af3e9\") " pod="openshift-multus/multus-dkfwm" Feb 03 12:05:05 crc kubenswrapper[4820]: I0203 12:05:05.336754 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/c6da6dd5-2847-482b-adc1-d82ead0af3e9-multus-socket-dir-parent\") pod \"multus-dkfwm\" (UID: \"c6da6dd5-2847-482b-adc1-d82ead0af3e9\") " pod="openshift-multus/multus-dkfwm" Feb 03 12:05:05 crc kubenswrapper[4820]: I0203 12:05:05.336769 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c6da6dd5-2847-482b-adc1-d82ead0af3e9-etc-kubernetes\") pod \"multus-dkfwm\" (UID: \"c6da6dd5-2847-482b-adc1-d82ead0af3e9\") " pod="openshift-multus/multus-dkfwm" Feb 03 12:05:05 
crc kubenswrapper[4820]: I0203 12:05:05.336787 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/d93ec7bc-4029-44a4-894d-03eff1388683-system-cni-dir\") pod \"multus-additional-cni-plugins-b5qz9\" (UID: \"d93ec7bc-4029-44a4-894d-03eff1388683\") " pod="openshift-multus/multus-additional-cni-plugins-b5qz9" Feb 03 12:05:05 crc kubenswrapper[4820]: I0203 12:05:05.336815 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cf99e305-aa5b-4171-94f6-1e64f20414dd-etc-openvswitch\") pod \"ovnkube-node-75mwm\" (UID: \"cf99e305-aa5b-4171-94f6-1e64f20414dd\") " pod="openshift-ovn-kubernetes/ovnkube-node-75mwm" Feb 03 12:05:05 crc kubenswrapper[4820]: I0203 12:05:05.336835 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/cf99e305-aa5b-4171-94f6-1e64f20414dd-log-socket\") pod \"ovnkube-node-75mwm\" (UID: \"cf99e305-aa5b-4171-94f6-1e64f20414dd\") " pod="openshift-ovn-kubernetes/ovnkube-node-75mwm" Feb 03 12:05:05 crc kubenswrapper[4820]: I0203 12:05:05.336854 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c6da6dd5-2847-482b-adc1-d82ead0af3e9-host-run-netns\") pod \"multus-dkfwm\" (UID: \"c6da6dd5-2847-482b-adc1-d82ead0af3e9\") " pod="openshift-multus/multus-dkfwm" Feb 03 12:05:05 crc kubenswrapper[4820]: I0203 12:05:05.336879 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p6z8l\" (UniqueName: \"kubernetes.io/projected/d93ec7bc-4029-44a4-894d-03eff1388683-kube-api-access-p6z8l\") pod \"multus-additional-cni-plugins-b5qz9\" (UID: \"d93ec7bc-4029-44a4-894d-03eff1388683\") " pod="openshift-multus/multus-additional-cni-plugins-b5qz9" Feb 03 12:05:05 crc kubenswrapper[4820]: I0203 12:05:05.336920 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/cf99e305-aa5b-4171-94f6-1e64f20414dd-host-cni-bin\") pod \"ovnkube-node-75mwm\" (UID: \"cf99e305-aa5b-4171-94f6-1e64f20414dd\") " pod="openshift-ovn-kubernetes/ovnkube-node-75mwm" Feb 03 12:05:05 crc kubenswrapper[4820]: I0203 12:05:05.336935 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/cf99e305-aa5b-4171-94f6-1e64f20414dd-run-systemd\") pod \"ovnkube-node-75mwm\" (UID: \"cf99e305-aa5b-4171-94f6-1e64f20414dd\") " pod="openshift-ovn-kubernetes/ovnkube-node-75mwm" Feb 03 12:05:05 crc kubenswrapper[4820]: I0203 12:05:05.336953 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cf99e305-aa5b-4171-94f6-1e64f20414dd-host-run-ovn-kubernetes\") pod \"ovnkube-node-75mwm\" (UID: \"cf99e305-aa5b-4171-94f6-1e64f20414dd\") " pod="openshift-ovn-kubernetes/ovnkube-node-75mwm" Feb 03 12:05:05 crc kubenswrapper[4820]: I0203 12:05:05.336969 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c6da6dd5-2847-482b-adc1-d82ead0af3e9-multus-cni-dir\") pod \"multus-dkfwm\" 
(UID: \"c6da6dd5-2847-482b-adc1-d82ead0af3e9\") " pod="openshift-multus/multus-dkfwm" Feb 03 12:05:05 crc kubenswrapper[4820]: I0203 12:05:05.336982 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/d93ec7bc-4029-44a4-894d-03eff1388683-cnibin\") pod \"multus-additional-cni-plugins-b5qz9\" (UID: \"d93ec7bc-4029-44a4-894d-03eff1388683\") " pod="openshift-multus/multus-additional-cni-plugins-b5qz9" Feb 03 12:05:05 crc kubenswrapper[4820]: I0203 12:05:05.336998 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/d93ec7bc-4029-44a4-894d-03eff1388683-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-b5qz9\" (UID: \"d93ec7bc-4029-44a4-894d-03eff1388683\") " pod="openshift-multus/multus-additional-cni-plugins-b5qz9" Feb 03 12:05:05 crc kubenswrapper[4820]: I0203 12:05:05.337015 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/cf99e305-aa5b-4171-94f6-1e64f20414dd-ovnkube-script-lib\") pod \"ovnkube-node-75mwm\" (UID: \"cf99e305-aa5b-4171-94f6-1e64f20414dd\") " pod="openshift-ovn-kubernetes/ovnkube-node-75mwm" Feb 03 12:05:05 crc kubenswrapper[4820]: I0203 12:05:05.337031 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cf99e305-aa5b-4171-94f6-1e64f20414dd-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-75mwm\" (UID: \"cf99e305-aa5b-4171-94f6-1e64f20414dd\") " pod="openshift-ovn-kubernetes/ovnkube-node-75mwm" Feb 03 12:05:05 crc kubenswrapper[4820]: I0203 12:05:05.337048 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/cf99e305-aa5b-4171-94f6-1e64f20414dd-host-run-netns\") pod \"ovnkube-node-75mwm\" (UID: \"cf99e305-aa5b-4171-94f6-1e64f20414dd\") " pod="openshift-ovn-kubernetes/ovnkube-node-75mwm" Feb 03 12:05:05 crc kubenswrapper[4820]: I0203 12:05:05.337063 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/cf99e305-aa5b-4171-94f6-1e64f20414dd-node-log\") pod \"ovnkube-node-75mwm\" (UID: \"cf99e305-aa5b-4171-94f6-1e64f20414dd\") " pod="openshift-ovn-kubernetes/ovnkube-node-75mwm" Feb 03 12:05:05 crc kubenswrapper[4820]: I0203 12:05:05.337078 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/cf99e305-aa5b-4171-94f6-1e64f20414dd-ovn-node-metrics-cert\") pod \"ovnkube-node-75mwm\" (UID: \"cf99e305-aa5b-4171-94f6-1e64f20414dd\") " pod="openshift-ovn-kubernetes/ovnkube-node-75mwm" Feb 03 12:05:05 crc kubenswrapper[4820]: I0203 12:05:05.337091 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/c6da6dd5-2847-482b-adc1-d82ead0af3e9-cni-binary-copy\") pod \"multus-dkfwm\" (UID: \"c6da6dd5-2847-482b-adc1-d82ead0af3e9\") " pod="openshift-multus/multus-dkfwm" Feb 03 12:05:05 crc kubenswrapper[4820]: I0203 12:05:05.337115 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cf99e305-aa5b-4171-94f6-1e64f20414dd-host-cni-netd\") pod \"ovnkube-node-75mwm\" (UID: \"cf99e305-aa5b-4171-94f6-1e64f20414dd\") " pod="openshift-ovn-kubernetes/ovnkube-node-75mwm" Feb 03 12:05:05 crc kubenswrapper[4820]: I0203 12:05:05.337130 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hpcc5\" (UniqueName: \"kubernetes.io/projected/c6da6dd5-2847-482b-adc1-d82ead0af3e9-kube-api-access-hpcc5\") pod \"multus-dkfwm\" (UID: \"c6da6dd5-2847-482b-adc1-d82ead0af3e9\") " pod="openshift-multus/multus-dkfwm" Feb 03 12:05:05 crc kubenswrapper[4820]: I0203 12:05:05.337149 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/cf99e305-aa5b-4171-94f6-1e64f20414dd-run-ovn\") pod \"ovnkube-node-75mwm\" (UID: \"cf99e305-aa5b-4171-94f6-1e64f20414dd\") " pod="openshift-ovn-kubernetes/ovnkube-node-75mwm" Feb 03 12:05:05 crc kubenswrapper[4820]: I0203 12:05:05.337163 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c6da6dd5-2847-482b-adc1-d82ead0af3e9-system-cni-dir\") pod \"multus-dkfwm\" (UID: \"c6da6dd5-2847-482b-adc1-d82ead0af3e9\") " pod="openshift-multus/multus-dkfwm" Feb 03 12:05:05 crc kubenswrapper[4820]: I0203 12:05:05.337177 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/c6da6dd5-2847-482b-adc1-d82ead0af3e9-host-var-lib-cni-multus\") pod \"multus-dkfwm\" (UID: \"c6da6dd5-2847-482b-adc1-d82ead0af3e9\") " pod="openshift-multus/multus-dkfwm" Feb 03 12:05:05 crc kubenswrapper[4820]: I0203 12:05:05.337192 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/c6da6dd5-2847-482b-adc1-d82ead0af3e9-hostroot\") pod \"multus-dkfwm\" (UID: \"c6da6dd5-2847-482b-adc1-d82ead0af3e9\") " pod="openshift-multus/multus-dkfwm" Feb 03 12:05:05 crc kubenswrapper[4820]: I0203 12:05:05.343232 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dkfwm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6da6dd5-2847-482b-adc1-d82ead0af3e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpcc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dkfwm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:05Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:05 crc kubenswrapper[4820]: I0203 12:05:05.358273 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d140e30-6304-49be-a1a3-2d6b23f9aef3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48852428653b274f524077be71e20c4271b365d821b8f3235df2d2b41a9e8af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09641bd2ac341aeecdba4211412070d2ee33f42eb670fe75ab88b9a58931c289\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74f9a4c19a0ffe2d3fc0a77b57873721a29f0a1f70d578ba2f68bcceb68ccce2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38554e01896341101d88815c0d8cd45a0114da19790e02a53b5ba3a91ee70e37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://38554e01896341101d88815c0d8cd45a0114da19790e02a53b5ba3a91ee70e37\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"message\\\":\\\"le observer\\\\nW0203 12:05:02.811560 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0203 12:05:02.811703 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 12:05:02.814694 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3622958325/tls.crt::/tmp/serving-cert-3622958325/tls.key\\\\\\\"\\\\nI0203 12:05:03.097815 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0203 12:05:03.106823 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0203 12:05:03.106852 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0203 12:05:03.106874 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0203 12:05:03.106880 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0203 12:05:03.113994 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0203 12:05:03.114027 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0203 12:05:03.114019 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0203 12:05:03.114034 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 12:05:03.114043 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0203 12:05:03.114053 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0203 12:05:03.114057 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0203 12:05:03.114061 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0203 12:05:03.116858 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60992833ab89aebd2446f960ab6ae3565f9a17f4e86921d3b2bc68adc9704640\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b34fc810cf94c235cea364cd2441c73f56597b91b0a19b8e02498db5895a3f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b34fc810cf94c235cea364cd2441c73f56597b91b0a19b8e02498db5895a3f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:04:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:04:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:05Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:05 crc kubenswrapper[4820]: I0203 12:05:05.371076 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0a8e1f86d24e8f09d6250cc9003db189b705c0285fbaab40a8f1a4ac4793137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:05Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:05 crc kubenswrapper[4820]: I0203 12:05:05.383764 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:05Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:05 crc kubenswrapper[4820]: I0203 12:05:05.395855 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:05Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:05 crc kubenswrapper[4820]: I0203 12:05:05.412371 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p5mx8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe0bc53e-6abb-4194-ae3d-109a4fd80372\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrchn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p5mx8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:05Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:05 crc kubenswrapper[4820]: I0203 12:05:05.437336 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a4e0669-c148-4af8-99eb-1de50da6d574\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d6f3a37b440a1db10c04386289033a0deab673f71efbdfba81597baf6ce5e86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fec48b5ddf675fb07150ebb88962534caead99f26403cf8ab51c17e2c6c09bf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a4c1209e1301b6e55cd4ea8bb06678ee4c7657c163a610f0cc2dd25f2cb3e61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"re
startCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75e1f7cf1c2120c222793819705a3b465a387c2aaf33fe146fd46d9c76520aea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12446f64d4189185cf6fe19d38637e090482ef8da5fb5e68600cc00ba37bc2a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bb0a6b382c36acc46be9258e93be71317c26777d770cde9098402054303947c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bb0a6b382c36acc46be9258e93be71317c26777d770cde9098402054303947c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:04:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19153b79978b8e8cc53a7ab9cb95a70c9870c6b0babac711bce13809609f6ac5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state
\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19153b79978b8e8cc53a7ab9cb95a70c9870c6b0babac711bce13809609f6ac5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e75ec4662cf007f64da3ecaaff0cb4c45aa88746ff717403092583356953526\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e75ec4662cf007f64da3ecaaff0cb4c45aa88746ff717403092583356953526\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:04:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:04:43Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:05Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:05 crc kubenswrapper[4820]: I0203 12:05:05.437943 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cf99e305-aa5b-4171-94f6-1e64f20414dd-host-run-ovn-kubernetes\") pod \"ovnkube-node-75mwm\" (UID: \"cf99e305-aa5b-4171-94f6-1e64f20414dd\") " pod="openshift-ovn-kubernetes/ovnkube-node-75mwm" Feb 03 12:05:05 crc kubenswrapper[4820]: I0203 12:05:05.437979 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c6da6dd5-2847-482b-adc1-d82ead0af3e9-multus-cni-dir\") pod \"multus-dkfwm\" (UID: \"c6da6dd5-2847-482b-adc1-d82ead0af3e9\") " pod="openshift-multus/multus-dkfwm" Feb 03 12:05:05 crc kubenswrapper[4820]: I0203 12:05:05.437995 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/d93ec7bc-4029-44a4-894d-03eff1388683-cnibin\") pod \"multus-additional-cni-plugins-b5qz9\" (UID: \"d93ec7bc-4029-44a4-894d-03eff1388683\") " pod="openshift-multus/multus-additional-cni-plugins-b5qz9" Feb 03 12:05:05 crc kubenswrapper[4820]: I0203 12:05:05.438011 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/d93ec7bc-4029-44a4-894d-03eff1388683-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-b5qz9\" (UID: \"d93ec7bc-4029-44a4-894d-03eff1388683\") " pod="openshift-multus/multus-additional-cni-plugins-b5qz9" Feb 03 12:05:05 crc kubenswrapper[4820]: I0203 12:05:05.438033 4820 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cf99e305-aa5b-4171-94f6-1e64f20414dd-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-75mwm\" (UID: \"cf99e305-aa5b-4171-94f6-1e64f20414dd\") " pod="openshift-ovn-kubernetes/ovnkube-node-75mwm" Feb 03 12:05:05 crc kubenswrapper[4820]: I0203 12:05:05.438048 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/cf99e305-aa5b-4171-94f6-1e64f20414dd-ovnkube-script-lib\") pod \"ovnkube-node-75mwm\" (UID: \"cf99e305-aa5b-4171-94f6-1e64f20414dd\") " pod="openshift-ovn-kubernetes/ovnkube-node-75mwm" Feb 03 12:05:05 crc kubenswrapper[4820]: I0203 12:05:05.438062 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/cf99e305-aa5b-4171-94f6-1e64f20414dd-node-log\") pod \"ovnkube-node-75mwm\" (UID: \"cf99e305-aa5b-4171-94f6-1e64f20414dd\") " pod="openshift-ovn-kubernetes/ovnkube-node-75mwm" Feb 03 12:05:05 crc kubenswrapper[4820]: I0203 12:05:05.438078 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/cf99e305-aa5b-4171-94f6-1e64f20414dd-ovn-node-metrics-cert\") pod \"ovnkube-node-75mwm\" (UID: \"cf99e305-aa5b-4171-94f6-1e64f20414dd\") " pod="openshift-ovn-kubernetes/ovnkube-node-75mwm" Feb 03 12:05:05 crc kubenswrapper[4820]: I0203 12:05:05.438095 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/c6da6dd5-2847-482b-adc1-d82ead0af3e9-cni-binary-copy\") pod \"multus-dkfwm\" (UID: \"c6da6dd5-2847-482b-adc1-d82ead0af3e9\") " pod="openshift-multus/multus-dkfwm" Feb 03 12:05:05 crc kubenswrapper[4820]: I0203 12:05:05.438112 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/cf99e305-aa5b-4171-94f6-1e64f20414dd-host-run-netns\") pod \"ovnkube-node-75mwm\" (UID: \"cf99e305-aa5b-4171-94f6-1e64f20414dd\") " pod="openshift-ovn-kubernetes/ovnkube-node-75mwm" Feb 03 12:05:05 crc kubenswrapper[4820]: I0203 12:05:05.438127 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cf99e305-aa5b-4171-94f6-1e64f20414dd-host-cni-netd\") pod \"ovnkube-node-75mwm\" (UID: \"cf99e305-aa5b-4171-94f6-1e64f20414dd\") " pod="openshift-ovn-kubernetes/ovnkube-node-75mwm" Feb 03 12:05:05 crc kubenswrapper[4820]: I0203 12:05:05.438154 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hpcc5\" (UniqueName: \"kubernetes.io/projected/c6da6dd5-2847-482b-adc1-d82ead0af3e9-kube-api-access-hpcc5\") pod \"multus-dkfwm\" (UID: \"c6da6dd5-2847-482b-adc1-d82ead0af3e9\") " pod="openshift-multus/multus-dkfwm" Feb 03 12:05:05 crc kubenswrapper[4820]: I0203 12:05:05.438172 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/cf99e305-aa5b-4171-94f6-1e64f20414dd-run-ovn\") pod \"ovnkube-node-75mwm\" (UID: \"cf99e305-aa5b-4171-94f6-1e64f20414dd\") " pod="openshift-ovn-kubernetes/ovnkube-node-75mwm" Feb 03 12:05:05 crc kubenswrapper[4820]: I0203 12:05:05.438192 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c6da6dd5-2847-482b-adc1-d82ead0af3e9-system-cni-dir\") pod \"multus-dkfwm\" (UID: \"c6da6dd5-2847-482b-adc1-d82ead0af3e9\") " pod="openshift-multus/multus-dkfwm" Feb 03 12:05:05 crc kubenswrapper[4820]: I0203 12:05:05.438209 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/c6da6dd5-2847-482b-adc1-d82ead0af3e9-host-var-lib-cni-multus\") pod \"multus-dkfwm\" (UID: \"c6da6dd5-2847-482b-adc1-d82ead0af3e9\") " pod="openshift-multus/multus-dkfwm" Feb 03 12:05:05 crc kubenswrapper[4820]: I0203 12:05:05.438226 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/c6da6dd5-2847-482b-adc1-d82ead0af3e9-hostroot\") pod \"multus-dkfwm\" (UID: \"c6da6dd5-2847-482b-adc1-d82ead0af3e9\") " pod="openshift-multus/multus-dkfwm" Feb 03 12:05:05 crc kubenswrapper[4820]: I0203 12:05:05.438243 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c6da6dd5-2847-482b-adc1-d82ead0af3e9-host-var-lib-cni-bin\") pod \"multus-dkfwm\" (UID: \"c6da6dd5-2847-482b-adc1-d82ead0af3e9\") " pod="openshift-multus/multus-dkfwm" Feb 03 12:05:05 crc kubenswrapper[4820]: I0203 12:05:05.438261 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/c6da6dd5-2847-482b-adc1-d82ead0af3e9-multus-conf-dir\") pod \"multus-dkfwm\" (UID: \"c6da6dd5-2847-482b-adc1-d82ead0af3e9\") " pod="openshift-multus/multus-dkfwm" Feb 03 12:05:05 crc kubenswrapper[4820]: I0203 12:05:05.438286 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/cf99e305-aa5b-4171-94f6-1e64f20414dd-host-slash\") pod \"ovnkube-node-75mwm\" (UID: \"cf99e305-aa5b-4171-94f6-1e64f20414dd\") " pod="openshift-ovn-kubernetes/ovnkube-node-75mwm" Feb 03 12:05:05 crc kubenswrapper[4820]: I0203 12:05:05.438301 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/cf99e305-aa5b-4171-94f6-1e64f20414dd-ovnkube-config\") pod \"ovnkube-node-75mwm\" (UID: \"cf99e305-aa5b-4171-94f6-1e64f20414dd\") " pod="openshift-ovn-kubernetes/ovnkube-node-75mwm" Feb 03 12:05:05 crc kubenswrapper[4820]: I0203 12:05:05.438324 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/cf99e305-aa5b-4171-94f6-1e64f20414dd-host-kubelet\") pod \"ovnkube-node-75mwm\" (UID: \"cf99e305-aa5b-4171-94f6-1e64f20414dd\") " pod="openshift-ovn-kubernetes/ovnkube-node-75mwm" Feb 03 12:05:05 crc kubenswrapper[4820]: I0203 12:05:05.438340 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/c6da6dd5-2847-482b-adc1-d82ead0af3e9-host-run-multus-certs\") pod \"multus-dkfwm\" (UID: \"c6da6dd5-2847-482b-adc1-d82ead0af3e9\") " pod="openshift-multus/multus-dkfwm" Feb 03 12:05:05 crc kubenswrapper[4820]: I0203 12:05:05.438359 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cf99e305-aa5b-4171-94f6-1e64f20414dd-run-openvswitch\") pod \"ovnkube-node-75mwm\" (UID: \"cf99e305-aa5b-4171-94f6-1e64f20414dd\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-75mwm" Feb 03 12:05:05 crc kubenswrapper[4820]: I0203 12:05:05.438380 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/c6da6dd5-2847-482b-adc1-d82ead0af3e9-host-run-k8s-cni-cncf-io\") pod \"multus-dkfwm\" (UID: \"c6da6dd5-2847-482b-adc1-d82ead0af3e9\") " pod="openshift-multus/multus-dkfwm" Feb 03 12:05:05 crc kubenswrapper[4820]: I0203 12:05:05.438400 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/c6da6dd5-2847-482b-adc1-d82ead0af3e9-os-release\") pod \"multus-dkfwm\" (UID: \"c6da6dd5-2847-482b-adc1-d82ead0af3e9\") " pod="openshift-multus/multus-dkfwm" Feb 03 12:05:05 crc kubenswrapper[4820]: I0203 12:05:05.438416 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/c6da6dd5-2847-482b-adc1-d82ead0af3e9-multus-daemon-config\") pod \"multus-dkfwm\" (UID: \"c6da6dd5-2847-482b-adc1-d82ead0af3e9\") " pod="openshift-multus/multus-dkfwm" Feb 03 12:05:05 crc kubenswrapper[4820]: I0203 12:05:05.438431 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/d93ec7bc-4029-44a4-894d-03eff1388683-tuning-conf-dir\") pod \"multus-additional-cni-plugins-b5qz9\" (UID: \"d93ec7bc-4029-44a4-894d-03eff1388683\") " pod="openshift-multus/multus-additional-cni-plugins-b5qz9" Feb 03 12:05:05 crc kubenswrapper[4820]: I0203 12:05:05.438447 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/cf99e305-aa5b-4171-94f6-1e64f20414dd-systemd-units\") pod \"ovnkube-node-75mwm\" (UID: \"cf99e305-aa5b-4171-94f6-1e64f20414dd\") " pod="openshift-ovn-kubernetes/ovnkube-node-75mwm" Feb 03 12:05:05 crc kubenswrapper[4820]: I0203 12:05:05.438463 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cf99e305-aa5b-4171-94f6-1e64f20414dd-var-lib-openvswitch\") pod \"ovnkube-node-75mwm\" (UID: \"cf99e305-aa5b-4171-94f6-1e64f20414dd\") " pod="openshift-ovn-kubernetes/ovnkube-node-75mwm" Feb 03 12:05:05 crc kubenswrapper[4820]: I0203 12:05:05.438485 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nk788\" (UniqueName: \"kubernetes.io/projected/cf99e305-aa5b-4171-94f6-1e64f20414dd-kube-api-access-nk788\") pod \"ovnkube-node-75mwm\" (UID: \"cf99e305-aa5b-4171-94f6-1e64f20414dd\") " pod="openshift-ovn-kubernetes/ovnkube-node-75mwm" Feb 03 12:05:05 crc kubenswrapper[4820]: I0203 12:05:05.438516 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/d93ec7bc-4029-44a4-894d-03eff1388683-os-release\") pod \"multus-additional-cni-plugins-b5qz9\" (UID: \"d93ec7bc-4029-44a4-894d-03eff1388683\") " pod="openshift-multus/multus-additional-cni-plugins-b5qz9" Feb 03 12:05:05 crc kubenswrapper[4820]: I0203 12:05:05.438538 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/d93ec7bc-4029-44a4-894d-03eff1388683-cni-binary-copy\") pod \"multus-additional-cni-plugins-b5qz9\" (UID: \"d93ec7bc-4029-44a4-894d-03eff1388683\") " 
pod="openshift-multus/multus-additional-cni-plugins-b5qz9" Feb 03 12:05:05 crc kubenswrapper[4820]: I0203 12:05:05.438558 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/cf99e305-aa5b-4171-94f6-1e64f20414dd-env-overrides\") pod \"ovnkube-node-75mwm\" (UID: \"cf99e305-aa5b-4171-94f6-1e64f20414dd\") " pod="openshift-ovn-kubernetes/ovnkube-node-75mwm" Feb 03 12:05:05 crc kubenswrapper[4820]: I0203 12:05:05.438570 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cf99e305-aa5b-4171-94f6-1e64f20414dd-host-run-ovn-kubernetes\") pod \"ovnkube-node-75mwm\" (UID: \"cf99e305-aa5b-4171-94f6-1e64f20414dd\") " pod="openshift-ovn-kubernetes/ovnkube-node-75mwm" Feb 03 12:05:05 crc kubenswrapper[4820]: I0203 12:05:05.438578 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c6da6dd5-2847-482b-adc1-d82ead0af3e9-host-var-lib-kubelet\") pod \"multus-dkfwm\" (UID: \"c6da6dd5-2847-482b-adc1-d82ead0af3e9\") " pod="openshift-multus/multus-dkfwm" Feb 03 12:05:05 crc kubenswrapper[4820]: I0203 12:05:05.438603 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/c6da6dd5-2847-482b-adc1-d82ead0af3e9-cnibin\") pod \"multus-dkfwm\" (UID: \"c6da6dd5-2847-482b-adc1-d82ead0af3e9\") " pod="openshift-multus/multus-dkfwm" Feb 03 12:05:05 crc kubenswrapper[4820]: I0203 12:05:05.438625 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/c6da6dd5-2847-482b-adc1-d82ead0af3e9-multus-socket-dir-parent\") pod \"multus-dkfwm\" (UID: \"c6da6dd5-2847-482b-adc1-d82ead0af3e9\") " pod="openshift-multus/multus-dkfwm" Feb 03 12:05:05 crc kubenswrapper[4820]: I0203 12:05:05.438644 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c6da6dd5-2847-482b-adc1-d82ead0af3e9-etc-kubernetes\") pod \"multus-dkfwm\" (UID: \"c6da6dd5-2847-482b-adc1-d82ead0af3e9\") " pod="openshift-multus/multus-dkfwm" Feb 03 12:05:05 crc kubenswrapper[4820]: I0203 12:05:05.438664 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/d93ec7bc-4029-44a4-894d-03eff1388683-system-cni-dir\") pod \"multus-additional-cni-plugins-b5qz9\" (UID: \"d93ec7bc-4029-44a4-894d-03eff1388683\") " pod="openshift-multus/multus-additional-cni-plugins-b5qz9" Feb 03 12:05:05 crc kubenswrapper[4820]: I0203 12:05:05.438696 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cf99e305-aa5b-4171-94f6-1e64f20414dd-etc-openvswitch\") pod \"ovnkube-node-75mwm\" (UID: \"cf99e305-aa5b-4171-94f6-1e64f20414dd\") " pod="openshift-ovn-kubernetes/ovnkube-node-75mwm" Feb 03 12:05:05 crc kubenswrapper[4820]: I0203 12:05:05.438708 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c6da6dd5-2847-482b-adc1-d82ead0af3e9-multus-cni-dir\") pod \"multus-dkfwm\" (UID: \"c6da6dd5-2847-482b-adc1-d82ead0af3e9\") " pod="openshift-multus/multus-dkfwm" Feb 03 12:05:05 crc kubenswrapper[4820]: I0203 12:05:05.438721 4820 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/cf99e305-aa5b-4171-94f6-1e64f20414dd-log-socket\") pod \"ovnkube-node-75mwm\" (UID: \"cf99e305-aa5b-4171-94f6-1e64f20414dd\") " pod="openshift-ovn-kubernetes/ovnkube-node-75mwm" Feb 03 12:05:05 crc kubenswrapper[4820]: I0203 12:05:05.438736 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/d93ec7bc-4029-44a4-894d-03eff1388683-cnibin\") pod \"multus-additional-cni-plugins-b5qz9\" (UID: \"d93ec7bc-4029-44a4-894d-03eff1388683\") " pod="openshift-multus/multus-additional-cni-plugins-b5qz9" Feb 03 12:05:05 crc kubenswrapper[4820]: I0203 12:05:05.438747 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/cf99e305-aa5b-4171-94f6-1e64f20414dd-host-cni-bin\") pod \"ovnkube-node-75mwm\" (UID: \"cf99e305-aa5b-4171-94f6-1e64f20414dd\") " pod="openshift-ovn-kubernetes/ovnkube-node-75mwm" Feb 03 12:05:05 crc kubenswrapper[4820]: I0203 12:05:05.438763 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c6da6dd5-2847-482b-adc1-d82ead0af3e9-host-run-netns\") pod \"multus-dkfwm\" (UID: \"c6da6dd5-2847-482b-adc1-d82ead0af3e9\") " pod="openshift-multus/multus-dkfwm" Feb 03 12:05:05 crc kubenswrapper[4820]: I0203 12:05:05.438779 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p6z8l\" (UniqueName: \"kubernetes.io/projected/d93ec7bc-4029-44a4-894d-03eff1388683-kube-api-access-p6z8l\") pod \"multus-additional-cni-plugins-b5qz9\" (UID: \"d93ec7bc-4029-44a4-894d-03eff1388683\") " pod="openshift-multus/multus-additional-cni-plugins-b5qz9" Feb 03 12:05:05 crc kubenswrapper[4820]: I0203 12:05:05.438797 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/cf99e305-aa5b-4171-94f6-1e64f20414dd-run-systemd\") pod \"ovnkube-node-75mwm\" (UID: \"cf99e305-aa5b-4171-94f6-1e64f20414dd\") " pod="openshift-ovn-kubernetes/ovnkube-node-75mwm" Feb 03 12:05:05 crc kubenswrapper[4820]: I0203 12:05:05.438857 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/cf99e305-aa5b-4171-94f6-1e64f20414dd-run-systemd\") pod \"ovnkube-node-75mwm\" (UID: \"cf99e305-aa5b-4171-94f6-1e64f20414dd\") " pod="openshift-ovn-kubernetes/ovnkube-node-75mwm" Feb 03 12:05:05 crc kubenswrapper[4820]: I0203 12:05:05.439342 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/d93ec7bc-4029-44a4-894d-03eff1388683-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-b5qz9\" (UID: \"d93ec7bc-4029-44a4-894d-03eff1388683\") " pod="openshift-multus/multus-additional-cni-plugins-b5qz9" Feb 03 12:05:05 crc kubenswrapper[4820]: I0203 12:05:05.439384 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cf99e305-aa5b-4171-94f6-1e64f20414dd-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-75mwm\" (UID: \"cf99e305-aa5b-4171-94f6-1e64f20414dd\") " pod="openshift-ovn-kubernetes/ovnkube-node-75mwm" Feb 03 12:05:05 crc kubenswrapper[4820]: I0203 12:05:05.439477 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/cf99e305-aa5b-4171-94f6-1e64f20414dd-systemd-units\") pod \"ovnkube-node-75mwm\" (UID: \"cf99e305-aa5b-4171-94f6-1e64f20414dd\") " pod="openshift-ovn-kubernetes/ovnkube-node-75mwm" Feb 03 12:05:05 crc kubenswrapper[4820]: I0203 12:05:05.439564 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cf99e305-aa5b-4171-94f6-1e64f20414dd-var-lib-openvswitch\") pod \"ovnkube-node-75mwm\" (UID: \"cf99e305-aa5b-4171-94f6-1e64f20414dd\") " pod="openshift-ovn-kubernetes/ovnkube-node-75mwm" Feb 03 12:05:05 crc kubenswrapper[4820]: I0203 12:05:05.439591 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/c6da6dd5-2847-482b-adc1-d82ead0af3e9-cni-binary-copy\") pod \"multus-dkfwm\" (UID: \"c6da6dd5-2847-482b-adc1-d82ead0af3e9\") " pod="openshift-multus/multus-dkfwm" Feb 03 12:05:05 crc kubenswrapper[4820]: I0203 12:05:05.439655 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/c6da6dd5-2847-482b-adc1-d82ead0af3e9-multus-socket-dir-parent\") pod \"multus-dkfwm\" (UID: \"c6da6dd5-2847-482b-adc1-d82ead0af3e9\") " pod="openshift-multus/multus-dkfwm" Feb 03 12:05:05 crc kubenswrapper[4820]: I0203 12:05:05.439666 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/c6da6dd5-2847-482b-adc1-d82ead0af3e9-host-var-lib-cni-multus\") pod \"multus-dkfwm\" (UID: \"c6da6dd5-2847-482b-adc1-d82ead0af3e9\") " pod="openshift-multus/multus-dkfwm" Feb 03 12:05:05 crc kubenswrapper[4820]: I0203 12:05:05.439697 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c6da6dd5-2847-482b-adc1-d82ead0af3e9-etc-kubernetes\") pod \"multus-dkfwm\" (UID: \"c6da6dd5-2847-482b-adc1-d82ead0af3e9\") " pod="openshift-multus/multus-dkfwm" Feb 03 12:05:05 crc kubenswrapper[4820]: I0203 12:05:05.439705 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c6da6dd5-2847-482b-adc1-d82ead0af3e9-host-var-lib-cni-bin\") pod \"multus-dkfwm\" (UID: \"c6da6dd5-2847-482b-adc1-d82ead0af3e9\") " pod="openshift-multus/multus-dkfwm" Feb 03 12:05:05 crc kubenswrapper[4820]: I0203 12:05:05.439669 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/c6da6dd5-2847-482b-adc1-d82ead0af3e9-multus-conf-dir\") pod \"multus-dkfwm\" (UID: \"c6da6dd5-2847-482b-adc1-d82ead0af3e9\") " pod="openshift-multus/multus-dkfwm" Feb 03 12:05:05 crc kubenswrapper[4820]: I0203 12:05:05.439718 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/c6da6dd5-2847-482b-adc1-d82ead0af3e9-hostroot\") pod \"multus-dkfwm\" (UID: \"c6da6dd5-2847-482b-adc1-d82ead0af3e9\") " pod="openshift-multus/multus-dkfwm" Feb 03 12:05:05 crc kubenswrapper[4820]: I0203 12:05:05.439730 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/d93ec7bc-4029-44a4-894d-03eff1388683-system-cni-dir\") pod \"multus-additional-cni-plugins-b5qz9\" (UID: \"d93ec7bc-4029-44a4-894d-03eff1388683\") " 
pod="openshift-multus/multus-additional-cni-plugins-b5qz9" Feb 03 12:05:05 crc kubenswrapper[4820]: I0203 12:05:05.439744 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/c6da6dd5-2847-482b-adc1-d82ead0af3e9-host-run-multus-certs\") pod \"multus-dkfwm\" (UID: \"c6da6dd5-2847-482b-adc1-d82ead0af3e9\") " pod="openshift-multus/multus-dkfwm" Feb 03 12:05:05 crc kubenswrapper[4820]: I0203 12:05:05.439732 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/cf99e305-aa5b-4171-94f6-1e64f20414dd-node-log\") pod \"ovnkube-node-75mwm\" (UID: \"cf99e305-aa5b-4171-94f6-1e64f20414dd\") " pod="openshift-ovn-kubernetes/ovnkube-node-75mwm" Feb 03 12:05:05 crc kubenswrapper[4820]: I0203 12:05:05.439768 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cf99e305-aa5b-4171-94f6-1e64f20414dd-host-cni-netd\") pod \"ovnkube-node-75mwm\" (UID: \"cf99e305-aa5b-4171-94f6-1e64f20414dd\") " pod="openshift-ovn-kubernetes/ovnkube-node-75mwm" Feb 03 12:05:05 crc kubenswrapper[4820]: I0203 12:05:05.439777 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/cf99e305-aa5b-4171-94f6-1e64f20414dd-host-slash\") pod \"ovnkube-node-75mwm\" (UID: \"cf99e305-aa5b-4171-94f6-1e64f20414dd\") " pod="openshift-ovn-kubernetes/ovnkube-node-75mwm" Feb 03 12:05:05 crc kubenswrapper[4820]: I0203 12:05:05.439809 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/cf99e305-aa5b-4171-94f6-1e64f20414dd-host-run-netns\") pod \"ovnkube-node-75mwm\" (UID: \"cf99e305-aa5b-4171-94f6-1e64f20414dd\") " pod="openshift-ovn-kubernetes/ovnkube-node-75mwm" Feb 03 12:05:05 crc kubenswrapper[4820]: I0203 12:05:05.439813 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c6da6dd5-2847-482b-adc1-d82ead0af3e9-system-cni-dir\") pod \"multus-dkfwm\" (UID: \"c6da6dd5-2847-482b-adc1-d82ead0af3e9\") " pod="openshift-multus/multus-dkfwm" Feb 03 12:05:05 crc kubenswrapper[4820]: I0203 12:05:05.439844 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/cf99e305-aa5b-4171-94f6-1e64f20414dd-log-socket\") pod \"ovnkube-node-75mwm\" (UID: \"cf99e305-aa5b-4171-94f6-1e64f20414dd\") " pod="openshift-ovn-kubernetes/ovnkube-node-75mwm" Feb 03 12:05:05 crc kubenswrapper[4820]: I0203 12:05:05.439856 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/cf99e305-aa5b-4171-94f6-1e64f20414dd-host-cni-bin\") pod \"ovnkube-node-75mwm\" (UID: \"cf99e305-aa5b-4171-94f6-1e64f20414dd\") " pod="openshift-ovn-kubernetes/ovnkube-node-75mwm" Feb 03 12:05:05 crc kubenswrapper[4820]: I0203 12:05:05.439812 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cf99e305-aa5b-4171-94f6-1e64f20414dd-etc-openvswitch\") pod \"ovnkube-node-75mwm\" (UID: \"cf99e305-aa5b-4171-94f6-1e64f20414dd\") " pod="openshift-ovn-kubernetes/ovnkube-node-75mwm" Feb 03 12:05:05 crc kubenswrapper[4820]: I0203 12:05:05.439851 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: 
\"kubernetes.io/host-path/c6da6dd5-2847-482b-adc1-d82ead0af3e9-host-run-netns\") pod \"multus-dkfwm\" (UID: \"c6da6dd5-2847-482b-adc1-d82ead0af3e9\") " pod="openshift-multus/multus-dkfwm" Feb 03 12:05:05 crc kubenswrapper[4820]: I0203 12:05:05.439951 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/c6da6dd5-2847-482b-adc1-d82ead0af3e9-host-run-k8s-cni-cncf-io\") pod \"multus-dkfwm\" (UID: \"c6da6dd5-2847-482b-adc1-d82ead0af3e9\") " pod="openshift-multus/multus-dkfwm" Feb 03 12:05:05 crc kubenswrapper[4820]: I0203 12:05:05.439994 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cf99e305-aa5b-4171-94f6-1e64f20414dd-run-openvswitch\") pod \"ovnkube-node-75mwm\" (UID: \"cf99e305-aa5b-4171-94f6-1e64f20414dd\") " pod="openshift-ovn-kubernetes/ovnkube-node-75mwm" Feb 03 12:05:05 crc kubenswrapper[4820]: I0203 12:05:05.440108 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c6da6dd5-2847-482b-adc1-d82ead0af3e9-host-var-lib-kubelet\") pod \"multus-dkfwm\" (UID: \"c6da6dd5-2847-482b-adc1-d82ead0af3e9\") " pod="openshift-multus/multus-dkfwm" Feb 03 12:05:05 crc kubenswrapper[4820]: I0203 12:05:05.440162 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/d93ec7bc-4029-44a4-894d-03eff1388683-os-release\") pod \"multus-additional-cni-plugins-b5qz9\" (UID: \"d93ec7bc-4029-44a4-894d-03eff1388683\") " pod="openshift-multus/multus-additional-cni-plugins-b5qz9" Feb 03 12:05:05 crc kubenswrapper[4820]: I0203 12:05:05.440169 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/cf99e305-aa5b-4171-94f6-1e64f20414dd-ovnkube-config\") pod \"ovnkube-node-75mwm\" (UID: \"cf99e305-aa5b-4171-94f6-1e64f20414dd\") " pod="openshift-ovn-kubernetes/ovnkube-node-75mwm" Feb 03 12:05:05 crc kubenswrapper[4820]: I0203 12:05:05.440190 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/cf99e305-aa5b-4171-94f6-1e64f20414dd-host-kubelet\") pod \"ovnkube-node-75mwm\" (UID: \"cf99e305-aa5b-4171-94f6-1e64f20414dd\") " pod="openshift-ovn-kubernetes/ovnkube-node-75mwm" Feb 03 12:05:05 crc kubenswrapper[4820]: I0203 12:05:05.440214 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/c6da6dd5-2847-482b-adc1-d82ead0af3e9-cnibin\") pod \"multus-dkfwm\" (UID: \"c6da6dd5-2847-482b-adc1-d82ead0af3e9\") " pod="openshift-multus/multus-dkfwm" Feb 03 12:05:05 crc kubenswrapper[4820]: I0203 12:05:05.440226 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/cf99e305-aa5b-4171-94f6-1e64f20414dd-run-ovn\") pod \"ovnkube-node-75mwm\" (UID: \"cf99e305-aa5b-4171-94f6-1e64f20414dd\") " pod="openshift-ovn-kubernetes/ovnkube-node-75mwm" Feb 03 12:05:05 crc kubenswrapper[4820]: I0203 12:05:05.440208 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/c6da6dd5-2847-482b-adc1-d82ead0af3e9-os-release\") pod \"multus-dkfwm\" (UID: \"c6da6dd5-2847-482b-adc1-d82ead0af3e9\") " pod="openshift-multus/multus-dkfwm" Feb 03 12:05:05 crc kubenswrapper[4820]: I0203 
12:05:05.440597 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/cf99e305-aa5b-4171-94f6-1e64f20414dd-ovnkube-script-lib\") pod \"ovnkube-node-75mwm\" (UID: \"cf99e305-aa5b-4171-94f6-1e64f20414dd\") " pod="openshift-ovn-kubernetes/ovnkube-node-75mwm" Feb 03 12:05:05 crc kubenswrapper[4820]: I0203 12:05:05.440706 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/c6da6dd5-2847-482b-adc1-d82ead0af3e9-multus-daemon-config\") pod \"multus-dkfwm\" (UID: \"c6da6dd5-2847-482b-adc1-d82ead0af3e9\") " pod="openshift-multus/multus-dkfwm" Feb 03 12:05:05 crc kubenswrapper[4820]: I0203 12:05:05.440723 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/cf99e305-aa5b-4171-94f6-1e64f20414dd-env-overrides\") pod \"ovnkube-node-75mwm\" (UID: \"cf99e305-aa5b-4171-94f6-1e64f20414dd\") " pod="openshift-ovn-kubernetes/ovnkube-node-75mwm" Feb 03 12:05:05 crc kubenswrapper[4820]: I0203 12:05:05.441207 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/d93ec7bc-4029-44a4-894d-03eff1388683-cni-binary-copy\") pod \"multus-additional-cni-plugins-b5qz9\" (UID: \"d93ec7bc-4029-44a4-894d-03eff1388683\") " pod="openshift-multus/multus-additional-cni-plugins-b5qz9" Feb 03 12:05:05 crc kubenswrapper[4820]: I0203 12:05:05.441728 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/d93ec7bc-4029-44a4-894d-03eff1388683-tuning-conf-dir\") pod \"multus-additional-cni-plugins-b5qz9\" (UID: \"d93ec7bc-4029-44a4-894d-03eff1388683\") " pod="openshift-multus/multus-additional-cni-plugins-b5qz9" Feb 03 12:05:05 crc kubenswrapper[4820]: I0203 12:05:05.445986 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/cf99e305-aa5b-4171-94f6-1e64f20414dd-ovn-node-metrics-cert\") pod \"ovnkube-node-75mwm\" (UID: \"cf99e305-aa5b-4171-94f6-1e64f20414dd\") " pod="openshift-ovn-kubernetes/ovnkube-node-75mwm" Feb 03 12:05:05 crc kubenswrapper[4820]: I0203 12:05:05.458405 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p6z8l\" (UniqueName: \"kubernetes.io/projected/d93ec7bc-4029-44a4-894d-03eff1388683-kube-api-access-p6z8l\") pod \"multus-additional-cni-plugins-b5qz9\" (UID: \"d93ec7bc-4029-44a4-894d-03eff1388683\") " pod="openshift-multus/multus-additional-cni-plugins-b5qz9" Feb 03 12:05:05 crc kubenswrapper[4820]: I0203 12:05:05.460770 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hpcc5\" (UniqueName: \"kubernetes.io/projected/c6da6dd5-2847-482b-adc1-d82ead0af3e9-kube-api-access-hpcc5\") pod \"multus-dkfwm\" (UID: \"c6da6dd5-2847-482b-adc1-d82ead0af3e9\") " pod="openshift-multus/multus-dkfwm" Feb 03 12:05:05 crc kubenswrapper[4820]: I0203 12:05:05.468098 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nk788\" (UniqueName: \"kubernetes.io/projected/cf99e305-aa5b-4171-94f6-1e64f20414dd-kube-api-access-nk788\") pod \"ovnkube-node-75mwm\" (UID: \"cf99e305-aa5b-4171-94f6-1e64f20414dd\") " pod="openshift-ovn-kubernetes/ovnkube-node-75mwm" Feb 03 12:05:05 crc kubenswrapper[4820]: I0203 12:05:05.497087 4820 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bbd1daf12ccc77f2658cac173f36c9af6a1c64072fc6bf13302a460f71f05e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5236ae101bb18d9bd1b8d8ac09705f96e5fd78d742e5174e88072570a7448ee8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:05Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:05 crc kubenswrapper[4820]: I0203 12:05:05.522974 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-b5qz9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d93ec7bc-4029-44a4-894d-03eff1388683\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"
name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\
\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-b5qz9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:05Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:05 crc kubenswrapper[4820]: I0203 12:05:05.537744 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:05Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:05 crc kubenswrapper[4820]: I0203 12:05:05.550787 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:05Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:05 crc kubenswrapper[4820]: I0203 12:05:05.560369 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-dkfwm" Feb 03 12:05:05 crc kubenswrapper[4820]: I0203 12:05:05.565921 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dkfwm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6da6dd5-2847-482b-adc1-d82ead0af3e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpcc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dkfwm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:05Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:05 crc kubenswrapper[4820]: W0203 12:05:05.570104 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc6da6dd5_2847_482b_adc1_d82ead0af3e9.slice/crio-c749b689925dcb117ddf6d77fb9153e92c0f51f610740538a28762f5d84dfdb9 WatchSource:0}: Error finding container c749b689925dcb117ddf6d77fb9153e92c0f51f610740538a28762f5d84dfdb9: Status 404 returned error can't find the container with id c749b689925dcb117ddf6d77fb9153e92c0f51f610740538a28762f5d84dfdb9 Feb 03 12:05:05 crc kubenswrapper[4820]: I0203 12:05:05.580322 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-b5qz9" Feb 03 12:05:05 crc kubenswrapper[4820]: I0203 12:05:05.580620 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d140e30-6304-49be-a1a3-2d6b23f9aef3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48852428653b274f524077be71e20c4271b365d821b8f3235df2d2b41a9e8af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09641bd2ac341aeecdba4211412070d2ee33f42eb670fe75ab88b9a58931c289\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74f9a4c19a0ffe2d3fc0a77b57873721a29f0a1f70d578ba2f68bcceb68ccce2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-
cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38554e01896341101d88815c0d8cd45a0114da19790e02a53b5ba3a91ee70e37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38554e01896341101d88815c0d8cd45a0114da19790e02a53b5ba3a91ee70e37\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"message\\\":\\\"le observer\\\\nW0203 12:05:02.811560 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0203 12:05:02.811703 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 12:05:02.814694 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3622958325/tls.crt::/tmp/serving-cert-3622958325/tls.key\\\\\\\"\\\\nI0203 12:05:03.097815 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0203 12:05:03.106823 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0203 12:05:03.106852 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0203 12:05:03.106874 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0203 12:05:03.106880 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0203 12:05:03.113994 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0203 12:05:03.114027 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0203 12:05:03.114019 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0203 12:05:03.114034 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 12:05:03.114043 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0203 12:05:03.114053 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0203 12:05:03.114057 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0203 12:05:03.114061 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0203 12:05:03.116858 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60992833ab89aebd2446f960ab6ae3565f9a17f4e86921d3b2bc68adc9704640\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b34fc810cf94c235cea364cd2441c73f56597b91b0a19b8e02498db5895a3f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b34fc810cf94c235cea364cd2441c73f56597b91b0a19b8e02498db5895a3f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:04:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:04:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:05Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:05 crc kubenswrapper[4820]: I0203 12:05:05.586724 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-75mwm" Feb 03 12:05:05 crc kubenswrapper[4820]: I0203 12:05:05.595969 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0a8e1f86d24e8f09d6250cc9003db189b705c0285fbaab40a8f1a4ac4793137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:05Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:05 crc kubenswrapper[4820]: W0203 12:05:05.608611 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd93ec7bc_4029_44a4_894d_03eff1388683.slice/crio-823cbae1056e670bb6a684afaa13990fb5b12d496ec449d796c6f958257e916d WatchSource:0}: Error finding container 823cbae1056e670bb6a684afaa13990fb5b12d496ec449d796c6f958257e916d: Status 404 returned error can't find the container with id 823cbae1056e670bb6a684afaa13990fb5b12d496ec449d796c6f958257e916d Feb 03 12:05:05 crc kubenswrapper[4820]: W0203 12:05:05.609134 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcf99e305_aa5b_4171_94f6_1e64f20414dd.slice/crio-56dd112bfcb1d6294c5e1231ef6bf898cf47881a19f6db9df36e8d4bb8cb4bd0 WatchSource:0}: Error finding container 56dd112bfcb1d6294c5e1231ef6bf898cf47881a19f6db9df36e8d4bb8cb4bd0: Status 404 returned error can't find the container with id 56dd112bfcb1d6294c5e1231ef6bf898cf47881a19f6db9df36e8d4bb8cb4bd0 Feb 03 12:05:05 crc kubenswrapper[4820]: I0203 12:05:05.610067 4820 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:05Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:05 crc kubenswrapper[4820]: I0203 12:05:05.630027 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:05Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:05 crc kubenswrapper[4820]: I0203 12:05:05.640985 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p5mx8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe0bc53e-6abb-4194-ae3d-109a4fd80372\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9eca081cbf8145867bf5df6d6bacdb9746e546e54aa81874fbfc2927958bc1b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrchn\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p5mx8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:05Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:05 crc kubenswrapper[4820]: I0203 12:05:05.653174 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c02def6-29f2-448e-80ec-0c8ee290f053\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xjx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xjx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qj7xr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:05Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:05 crc kubenswrapper[4820]: I0203 12:05:05.671931 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-75mwm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf99e305-aa5b-4171-94f6-1e64f20414dd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-75mwm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:05Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:06 crc kubenswrapper[4820]: 
I0203 12:05:06.100946 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 10:03:15.558899368 +0000 UTC Feb 03 12:05:06 crc kubenswrapper[4820]: I0203 12:05:06.107280 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 03 12:05:06 crc kubenswrapper[4820]: I0203 12:05:06.110677 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 03 12:05:06 crc kubenswrapper[4820]: I0203 12:05:06.114482 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Feb 03 12:05:06 crc kubenswrapper[4820]: I0203 12:05:06.127490 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a4e0669-c148-4af8-99eb-1de50da6d574\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d6f3a37b440a1db10c04386289033a0deab673f71efbdfba81597baf6ce5e86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fec48b5ddf675fb07150ebb88962534caead99f26403cf8ab51c17e2c6c09bf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/
kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a4c1209e1301b6e55cd4ea8bb06678ee4c7657c163a610f0cc2dd25f2cb3e61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75e1f7cf1c2120c222793819705a3b465a387c2aaf33fe146fd46d9c76520aea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12446f64d4189185cf6fe19d38637e090482ef8da5fb5e68600cc00ba37bc2a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bb0a6b382c36acc46be9258e93be71317c26777d770cde9098402054303947c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bb0a6b382c36acc46be9258e93be71317c26777d770cde9098402054303947c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:04:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}},\\\"volumeMoun
ts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19153b79978b8e8cc53a7ab9cb95a70c9870c6b0babac711bce13809609f6ac5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19153b79978b8e8cc53a7ab9cb95a70c9870c6b0babac711bce13809609f6ac5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e75ec4662cf007f64da3ecaaff0cb4c45aa88746ff717403092583356953526\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e75ec4662cf007f64da3ecaaff0cb4c45aa88746ff717403092583356953526\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:04:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:04:43Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:06Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:06 crc kubenswrapper[4820]: I0203 12:05:06.139796 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bbd1daf12ccc77f2658cac173f36c9af6a1c64072fc6bf13302a460f71f05e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5236ae101bb18d9bd1b8d8ac09705f96e5fd78d742e5174e88072570a7448ee8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:06Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:06 crc kubenswrapper[4820]: I0203 12:05:06.152981 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-b5qz9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d93ec7bc-4029-44a4-894d-03eff1388683\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"
name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\
\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-b5qz9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:06Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:06 crc kubenswrapper[4820]: I0203 12:05:06.166338 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:06Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:06 crc kubenswrapper[4820]: I0203 12:05:06.178187 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:06Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:06 crc kubenswrapper[4820]: I0203 12:05:06.189303 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c02def6-29f2-448e-80ec-0c8ee290f053\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xjx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xjx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qj7xr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:06Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:06 crc kubenswrapper[4820]: I0203 12:05:06.203417 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dkfwm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6da6dd5-2847-482b-adc1-d82ead0af3e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpcc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dkfwm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:06Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:06 crc kubenswrapper[4820]: I0203 12:05:06.219196 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d140e30-6304-49be-a1a3-2d6b23f9aef3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48852428653b274f524077be71e20c4271b365d821b8f3235df2d2b41a9e8af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09641bd2ac341aeecdba4211412070d2ee33f42eb670fe75ab88b9a58931c289\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74f9a4c19a0ffe2d3fc0a77b57873721a29f0a1f70d578ba2f68bcceb68ccce2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38554e01896341101d88815c0d8cd45a0114da19790e02a53b5ba3a91ee70e37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://38554e01896341101d88815c0d8cd45a0114da19790e02a53b5ba3a91ee70e37\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"message\\\":\\\"le observer\\\\nW0203 12:05:02.811560 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0203 12:05:02.811703 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 12:05:02.814694 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3622958325/tls.crt::/tmp/serving-cert-3622958325/tls.key\\\\\\\"\\\\nI0203 12:05:03.097815 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0203 12:05:03.106823 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0203 12:05:03.106852 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0203 12:05:03.106874 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0203 12:05:03.106880 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0203 12:05:03.113994 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0203 12:05:03.114027 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0203 12:05:03.114019 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0203 12:05:03.114034 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 12:05:03.114043 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0203 12:05:03.114053 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0203 12:05:03.114057 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0203 12:05:03.114061 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0203 12:05:03.116858 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60992833ab89aebd2446f960ab6ae3565f9a17f4e86921d3b2bc68adc9704640\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b34fc810cf94c235cea364cd2441c73f56597b91b0a19b8e02498db5895a3f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b34fc810cf94c235cea364cd2441c73f56597b91b0a19b8e02498db5895a3f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:04:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:04:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:06Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:06 crc kubenswrapper[4820]: I0203 12:05:06.232988 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0a8e1f86d24e8f09d6250cc9003db189b705c0285fbaab40a8f1a4ac4793137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:06Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:06 crc kubenswrapper[4820]: I0203 12:05:06.244236 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:06Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:06 crc kubenswrapper[4820]: I0203 12:05:06.257665 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:06Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:06 crc kubenswrapper[4820]: I0203 12:05:06.267463 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p5mx8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe0bc53e-6abb-4194-ae3d-109a4fd80372\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9eca081cbf8145867bf5df6d6bacdb9746e546e54aa81874fbfc2927958bc1b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrchn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p5mx8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:06Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:06 crc kubenswrapper[4820]: I0203 12:05:06.271968 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-dkfwm" event={"ID":"c6da6dd5-2847-482b-adc1-d82ead0af3e9","Type":"ContainerStarted","Data":"b50e93f1825fc82dc2e0fc70a417d2e4412db367319656bfcea8fa9daf9fe152"} Feb 03 12:05:06 crc kubenswrapper[4820]: I0203 12:05:06.272008 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-dkfwm" event={"ID":"c6da6dd5-2847-482b-adc1-d82ead0af3e9","Type":"ContainerStarted","Data":"c749b689925dcb117ddf6d77fb9153e92c0f51f610740538a28762f5d84dfdb9"} Feb 03 12:05:06 crc kubenswrapper[4820]: I0203 12:05:06.273713 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"dac06e9d9db1822ef050113f787ff46db678f4916bc9817ac03f61a509a61b6e"} Feb 03 12:05:06 crc kubenswrapper[4820]: I0203 12:05:06.275261 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" event={"ID":"2c02def6-29f2-448e-80ec-0c8ee290f053","Type":"ContainerStarted","Data":"c1643e498a576d565172492302a641bc81eae658b1611df707da1d833ea2c84f"} Feb 03 12:05:06 crc kubenswrapper[4820]: I0203 12:05:06.275287 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" event={"ID":"2c02def6-29f2-448e-80ec-0c8ee290f053","Type":"ContainerStarted","Data":"7e3612cc50c75efc4721ade62e3caf8a6cbcb41c49183b50da3639d7fce97b9d"} Feb 03 12:05:06 crc kubenswrapper[4820]: I0203 12:05:06.276542 4820 generic.go:334] "Generic (PLEG): container finished" podID="d93ec7bc-4029-44a4-894d-03eff1388683" containerID="b171b1b37fa4899684005dd60f249e4cdc2ac58cb12658abe09d6d17007f3002" exitCode=0 Feb 03 12:05:06 crc kubenswrapper[4820]: I0203 12:05:06.276607 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-b5qz9" event={"ID":"d93ec7bc-4029-44a4-894d-03eff1388683","Type":"ContainerDied","Data":"b171b1b37fa4899684005dd60f249e4cdc2ac58cb12658abe09d6d17007f3002"} Feb 03 12:05:06 crc kubenswrapper[4820]: I0203 12:05:06.276640 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-b5qz9" event={"ID":"d93ec7bc-4029-44a4-894d-03eff1388683","Type":"ContainerStarted","Data":"823cbae1056e670bb6a684afaa13990fb5b12d496ec449d796c6f958257e916d"} Feb 03 12:05:06 crc kubenswrapper[4820]: I0203 12:05:06.278407 4820 generic.go:334] "Generic (PLEG): container finished" podID="cf99e305-aa5b-4171-94f6-1e64f20414dd" containerID="707b3fd62e0f11f4f22464594158f7f1bbcbb9b37c1e49c70e2a0d1a6b5b07c3" exitCode=0 Feb 03 12:05:06 crc kubenswrapper[4820]: I0203 12:05:06.278474 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-75mwm" event={"ID":"cf99e305-aa5b-4171-94f6-1e64f20414dd","Type":"ContainerDied","Data":"707b3fd62e0f11f4f22464594158f7f1bbcbb9b37c1e49c70e2a0d1a6b5b07c3"} Feb 03 12:05:06 crc kubenswrapper[4820]: I0203 12:05:06.278508 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-75mwm" 
event={"ID":"cf99e305-aa5b-4171-94f6-1e64f20414dd","Type":"ContainerStarted","Data":"56dd112bfcb1d6294c5e1231ef6bf898cf47881a19f6db9df36e8d4bb8cb4bd0"} Feb 03 12:05:06 crc kubenswrapper[4820]: I0203 12:05:06.290060 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-75mwm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf99e305-aa5b-4171-94f6-1e64f20414dd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshif
t-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log
/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursive
ReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-75mwm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:06Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:06 crc kubenswrapper[4820]: I0203 12:05:06.310594 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d140e30-6304-49be-a1a3-2d6b23f9aef3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48852428653b274f524077be71e20c4271b365d821b8f3235df2d2b41a9e8af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09641bd2ac341aeecdba4211412070d2ee33f42eb670fe75ab88b9a58931c289\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74f9a4c19a0ffe2d3fc0a77b57873721a29f0a1f70d578ba2f68bcceb68ccce2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38554e01896341101d88815c0d8cd45a0114da19790e02a53b5ba3a91ee70e37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38554e01896341101d88815c0d8cd45a0114da19790e02a53b5ba3a91ee70e37\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"message\\\":\\\"le observer\\\\nW0203 12:05:02.811560 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0203 12:05:02.811703 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 12:05:02.814694 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3622958325/tls.crt::/tmp/serving-cert-3622958325/tls.key\\\\\\\"\\\\nI0203 12:05:03.097815 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0203 12:05:03.106823 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0203 12:05:03.106852 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0203 12:05:03.106874 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0203 12:05:03.106880 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0203 12:05:03.113994 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0203 12:05:03.114027 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0203 12:05:03.114019 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0203 12:05:03.114034 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 12:05:03.114043 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0203 12:05:03.114053 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0203 12:05:03.114057 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0203 12:05:03.114061 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0203 12:05:03.116858 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60992833ab89aebd2446f960ab6ae3565f9a17f4e86921d3b2bc68adc9704640\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b34fc810cf94c235cea364cd2441c73f56597b91b0a19b8e02498db5895a3f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b34fc810cf94c235cea364cd2441c73f56597b91b0a19b8e02498db5895a3f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:04:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:04:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:06Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:06 crc kubenswrapper[4820]: I0203 12:05:06.323376 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0a8e1f86d24e8f09d6250cc9003db189b705c0285fbaab40a8f1a4ac4793137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:06Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:06 crc kubenswrapper[4820]: I0203 12:05:06.336429 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:06Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:06 crc kubenswrapper[4820]: I0203 12:05:06.349082 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:06Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:06 crc kubenswrapper[4820]: I0203 12:05:06.358855 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p5mx8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe0bc53e-6abb-4194-ae3d-109a4fd80372\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9eca081cbf8145867bf5df6d6bacdb9746e546e54aa81874fbfc2927958bc1b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrchn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p5mx8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:06Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:06 crc kubenswrapper[4820]: I0203 12:05:06.371990 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c02def6-29f2-448e-80ec-0c8ee290f053\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1643e498a576d565172492302a641bc81eae658b1611df707da1d833ea2c84f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xjx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e3612cc50c75efc4721ade62e3caf8a6cbcb41c49183b50da3639d7fce97b9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xjx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qj7xr\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:06Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:06 crc kubenswrapper[4820]: I0203 12:05:06.398582 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dkfwm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6da6dd5-2847-482b-adc1-d82ead0af3e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b50e93f1825fc82dc2e0fc70a417d2e4412db367319656bfcea8fa9daf9fe152\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/ser
viceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpcc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dkfwm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:06Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:06 crc kubenswrapper[4820]: I0203 12:05:06.415028 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-75mwm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf99e305-aa5b-4171-94f6-1e64f20414dd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://707b3fd62e0f11f4f22464594158f7f1bbcbb9b37c1e49c70e2a0d1a6b5b07c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://707b3fd62e0f11f4f22464594158f7f1bbcbb9b37c1e49c70e2a0d1a6b5b07c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-75mwm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:06Z 
is after 2025-08-24T17:21:41Z" Feb 03 12:05:06 crc kubenswrapper[4820]: I0203 12:05:06.435643 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a4e0669-c148-4af8-99eb-1de50da6d574\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d6f3a37b440a1db10c04386289033a0deab673f71efbdfba81597baf6ce5e86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fec48b5ddf675fb07150ebb88962534caead99f26403cf8ab51c17e2c6c09bf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a4c1209e1301b6e55cd4ea8bb06678ee4c7657c163a610f0cc2dd25f2cb3e61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"
/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75e1f7cf1c2120c222793819705a3b465a387c2aaf33fe146fd46d9c76520aea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12446f64d4189185cf6fe19d38637e090482ef8da5fb5e68600cc00ba37bc2a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bb0a6b382c36acc46be9258e93be71317c26777d770cde9098402054303947c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bb0a6b382c36acc46be9258e93be71317c26777d770cde9098402054303947c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:04:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19153b79978b8e8cc53a7ab9cb95a70c9870c6b0babac711bce13809609f6ac5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19153b79978b8e8cc53a7ab9cb95a70c9870c6b0babac711bce13809609f6ac5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\"
:\\\"2026-02-03T12:04:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e75ec4662cf007f64da3ecaaff0cb4c45aa88746ff717403092583356953526\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e75ec4662cf007f64da3ecaaff0cb4c45aa88746ff717403092583356953526\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:04:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:04:43Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:06Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:06 crc kubenswrapper[4820]: I0203 12:05:06.447668 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cabb5864-1cc1-4b08-aed3-4eeee9e85bf9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d070e438f7f207c929ca6d110046fa8d320d1a7d97ef22a0e5e1372ff626729d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06e5365791987f1586b93dfe4db8bfe19415d8c400d6c0d75f312cc0f62f7661\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb5d088ef2ec1b90ef2d7af53d61c2789516bf4ef02e9ef06e1dc0c93326dcc9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2783e3045fb1f1e8da75e1a83f88bc0f24ce47eab79fb2b31b794c10f201588\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:04:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:06Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:06 crc kubenswrapper[4820]: I0203 12:05:06.460160 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bbd1daf12ccc77f2658cac173f36c9af6a1c64072fc6bf13302a460f71f05e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5236ae101bb18d9bd1b8d8ac09705f96e5fd78d742e5174e88072570a7448ee8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:06Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:06 crc kubenswrapper[4820]: I0203 12:05:06.472756 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-b5qz9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d93ec7bc-4029-44a4-894d-03eff1388683\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b171b1b37fa4899684005dd60f249e4cdc2ac58cb12658abe09d6d17007f3002\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b171b1b37fa4899684005dd60f249e4cdc2ac58cb12658abe09d6d17007f3002\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reaso
n\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-b5qz9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:06Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:06 crc 
kubenswrapper[4820]: I0203 12:05:06.485535 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:06Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:06 crc kubenswrapper[4820]: I0203 12:05:06.497434 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dac06e9d9db1822ef050113f787ff46db678f4916bc9817ac03f61a509a61b6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:06Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:06 crc kubenswrapper[4820]: I0203 12:05:06.651517 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 12:05:06 crc kubenswrapper[4820]: E0203 12:05:06.651685 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 12:05:10.651654398 +0000 UTC m=+28.174730262 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 03 12:05:06 crc kubenswrapper[4820]: I0203 12:05:06.752643 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 03 12:05:06 crc kubenswrapper[4820]: I0203 12:05:06.752689 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 03 12:05:06 crc kubenswrapper[4820]: I0203 12:05:06.752722 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 03 12:05:06 crc kubenswrapper[4820]: I0203 12:05:06.752762 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 03 12:05:06 crc kubenswrapper[4820]: E0203 12:05:06.752842 4820 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Feb 03 12:05:06 crc kubenswrapper[4820]: E0203 12:05:06.752864 4820 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Feb 03 12:05:06 crc kubenswrapper[4820]: E0203 12:05:06.752929 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-03 12:05:10.752911182 +0000 UTC m=+28.275987046 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Feb 03 12:05:06 crc kubenswrapper[4820]: E0203 12:05:06.752948 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-03 12:05:10.752939073 +0000 UTC m=+28.276014937 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Feb 03 12:05:06 crc kubenswrapper[4820]: E0203 12:05:06.753048 4820 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Feb 03 12:05:06 crc kubenswrapper[4820]: E0203 12:05:06.753087 4820 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Feb 03 12:05:06 crc kubenswrapper[4820]: E0203 12:05:06.753106 4820 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Feb 03 12:05:06 crc kubenswrapper[4820]: E0203 12:05:06.753120 4820 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Feb 03 12:05:06 crc kubenswrapper[4820]: E0203 12:05:06.753135 4820 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 03 12:05:06 crc kubenswrapper[4820]: E0203 12:05:06.753136 4820 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 03 12:05:06 crc kubenswrapper[4820]: E0203 12:05:06.753240 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-03 12:05:10.753207119 +0000 UTC m=+28.276283153 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 03 12:05:06 crc kubenswrapper[4820]: E0203 12:05:06.753351 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-03 12:05:10.753327292 +0000 UTC m=+28.276403286 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 03 12:05:07 crc kubenswrapper[4820]: I0203 12:05:07.101747 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 20:29:17.085434923 +0000 UTC
Feb 03 12:05:07 crc kubenswrapper[4820]: I0203 12:05:07.142331 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 03 12:05:07 crc kubenswrapper[4820]: I0203 12:05:07.142369 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 03 12:05:07 crc kubenswrapper[4820]: E0203 12:05:07.142432 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 03 12:05:07 crc kubenswrapper[4820]: I0203 12:05:07.142499 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 03 12:05:07 crc kubenswrapper[4820]: E0203 12:05:07.142649 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 03 12:05:07 crc kubenswrapper[4820]: E0203 12:05:07.142503 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 03 12:05:07 crc kubenswrapper[4820]: I0203 12:05:07.285216 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-75mwm" event={"ID":"cf99e305-aa5b-4171-94f6-1e64f20414dd","Type":"ContainerStarted","Data":"4e8e47b3a2a50b4eccff391ccb45bf979a5fe46983b014cb03f465db928172ec"}
Feb 03 12:05:07 crc kubenswrapper[4820]: I0203 12:05:07.285518 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-75mwm" event={"ID":"cf99e305-aa5b-4171-94f6-1e64f20414dd","Type":"ContainerStarted","Data":"0defe98bc51b1f567900f2d90138ceec07b4cae414fd3b448df503dba9a9460f"}
Feb 03 12:05:07 crc kubenswrapper[4820]: I0203 12:05:07.285529 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-75mwm" event={"ID":"cf99e305-aa5b-4171-94f6-1e64f20414dd","Type":"ContainerStarted","Data":"f5b0072b2eccb422316702a70fb54f21fcd020e1627a625d1a7c63d46b71e48c"}
Feb 03 12:05:07 crc kubenswrapper[4820]: I0203 12:05:07.285539 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-75mwm" event={"ID":"cf99e305-aa5b-4171-94f6-1e64f20414dd","Type":"ContainerStarted","Data":"c998566abb62f49bb7527ea2c4b023c87a9e3db89ff5fa25b68896424924ea76"}
Feb 03 12:05:07 crc kubenswrapper[4820]: I0203 12:05:07.285548 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-75mwm" event={"ID":"cf99e305-aa5b-4171-94f6-1e64f20414dd","Type":"ContainerStarted","Data":"7a67f31bdc8d6710cfbca063e2a811d4fbace42a9d3c3a53fba0f47c30ee00e4"}
Feb 03 12:05:07 crc kubenswrapper[4820]: I0203 12:05:07.285556 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-75mwm" event={"ID":"cf99e305-aa5b-4171-94f6-1e64f20414dd","Type":"ContainerStarted","Data":"b8b54903e9478f26cfa1770a0e06298325e9b9f198a3aeaa8ed6dcc192c0cdb7"}
Feb 03 12:05:07 crc kubenswrapper[4820]: I0203 12:05:07.287327 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-b5qz9" event={"ID":"d93ec7bc-4029-44a4-894d-03eff1388683","Type":"ContainerStarted","Data":"e0a45a325e96479b1bb5913e1a78221c3b88c62fc434441e03d6b2e9ed476700"}
Feb 03 12:05:07 crc kubenswrapper[4820]: I0203 12:05:07.308117 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cabb5864-1cc1-4b08-aed3-4eeee9e85bf9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d070e438f7f207c929ca6d110046fa8d320d1a7d97ef22a0e5e1372ff626729d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06e5365791987f1586b93dfe4db8bfe19415d8c400d6c0d75f312cc0f62f7661\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb5d088ef2ec1b90ef2d7af53d61c2789516bf4ef02e9ef06e1dc0c93326dcc9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2783e3045fb1f1e8da75e1a83f88bc0f24ce47eab79fb2b31b794c10f201588\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:04:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:07Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:07 crc kubenswrapper[4820]: I0203 12:05:07.319966 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bbd1daf12ccc77f2658cac173f36c9af6a1c64072fc6bf13302a460f71f05e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5236ae101bb18d9bd1b8d8ac09705f96e5fd78d742e5174e88072570a7448ee8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:07Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:07 crc kubenswrapper[4820]: I0203 12:05:07.332585 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-b5qz9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d93ec7bc-4029-44a4-894d-03eff1388683\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b171b1b37fa4899684005dd60f249e4cdc2ac58cb12658abe09d6d17007f3002\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b171b1b37fa4899684005dd60f249e4cdc2ac58cb12658abe09d6d17007f3002\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0a45a325e96479b1bb5913e1a78221c3b88c62fc434441e03d6b2e9ed476700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64
b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-b5qz9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:07Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:07 crc kubenswrapper[4820]: I0203 12:05:07.345633 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:07Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:07 crc kubenswrapper[4820]: I0203 12:05:07.356150 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dac06e9d9db1822ef050113f787ff46db678f4916bc9817ac03f61a509a61b6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:07Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:07 crc kubenswrapper[4820]: I0203 12:05:07.366322 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p5mx8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe0bc53e-6abb-4194-ae3d-109a4fd80372\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9eca081cbf8145867bf5df6d6bacdb9746e546e54aa81874fbfc2927958bc1b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrchn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p5mx8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:07Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:07 crc kubenswrapper[4820]: I0203 12:05:07.377387 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c02def6-29f2-448e-80ec-0c8ee290f053\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1643e498a576d565172492302a641bc81eae658b1611df707da1d833ea2c84f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xjx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e3612cc50c75efc4721ade62e3caf8a6cbcb41c49183b50da3639d7fce97b9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xjx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qj7xr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:07Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:07 crc kubenswrapper[4820]: I0203 12:05:07.390372 4820 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-dkfwm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6da6dd5-2847-482b-adc1-d82ead0af3e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b50e93f1825fc82dc2e0fc70a417d2e4412db367319656bfcea8fa9daf9fe152\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpcc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:05Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-dkfwm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:07Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:07 crc kubenswrapper[4820]: I0203 12:05:07.407353 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d140e30-6304-49be-a1a3-2d6b23f9aef3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48852428653b274f524077be71e20c4271b365d821b8f3235df2d2b41a9e8af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09641bd2ac341aeecdba4211412070d2ee33f42eb670fe75ab88b9a58931c289\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74f9a4c19a0ffe2d3fc0a77b57873721a29f0a1f70d578ba2f68bcceb68ccce2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc27
6e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38554e01896341101d88815c0d8cd45a0114da19790e02a53b5ba3a91ee70e37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38554e01896341101d88815c0d8cd45a0114da19790e02a53b5ba3a91ee70e37\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"message\\\":\\\"le observer\\\\nW0203 12:05:02.811560 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0203 12:05:02.811703 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 12:05:02.814694 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3622958325/tls.crt::/tmp/serving-cert-3622958325/tls.key\\\\\\\"\\\\nI0203 12:05:03.097815 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0203 12:05:03.106823 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0203 12:05:03.106852 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0203 12:05:03.106874 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0203 12:05:03.106880 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0203 12:05:03.113994 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0203 12:05:03.114027 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0203 12:05:03.114019 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0203 12:05:03.114034 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 12:05:03.114043 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0203 12:05:03.114053 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0203 12:05:03.114057 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0203 12:05:03.114061 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0203 12:05:03.116858 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60992833ab89aebd2446f960ab6ae3565f9a17f4e86921d3b2bc68adc9704640\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b34fc810cf94c235cea364cd2441c73f56597b91b0a19b8e02498db5895a3f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b34fc810cf94c235cea364cd2441c73f56597b91b0a19b8e02498db5895a3f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:04:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:04:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:07Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:07 crc kubenswrapper[4820]: I0203 12:05:07.420298 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0a8e1f86d24e8f09d6250cc9003db189b705c0285fbaab40a8f1a4ac4793137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:07Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:07 crc kubenswrapper[4820]: I0203 12:05:07.430447 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:07Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:07 crc kubenswrapper[4820]: I0203 12:05:07.441565 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:07Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:07 crc kubenswrapper[4820]: I0203 12:05:07.459849 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-75mwm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf99e305-aa5b-4171-94f6-1e64f20414dd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://707b3fd62e0f11f4f22464594158f7f1bbcbb9b37c1e49c70e2a0d1a6b5b07c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://707b3fd62e0f11f4f22464594158f7f1bbcbb9b37c1e49c70e2a0d1a6b5b07c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-75mwm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:07Z 
is after 2025-08-24T17:21:41Z" Feb 03 12:05:07 crc kubenswrapper[4820]: I0203 12:05:07.476526 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a4e0669-c148-4af8-99eb-1de50da6d574\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d6f3a37b440a1db10c04386289033a0deab673f71efbdfba81597baf6ce5e86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fec48b5ddf675fb07150ebb88962534caead99f26403cf8ab51c17e2c6c09bf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a4c1209e1301b6e55cd4ea8bb06678ee4c7657c163a610f0cc2dd25f2cb3e61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"
/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75e1f7cf1c2120c222793819705a3b465a387c2aaf33fe146fd46d9c76520aea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12446f64d4189185cf6fe19d38637e090482ef8da5fb5e68600cc00ba37bc2a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bb0a6b382c36acc46be9258e93be71317c26777d770cde9098402054303947c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bb0a6b382c36acc46be9258e93be71317c26777d770cde9098402054303947c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:04:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19153b79978b8e8cc53a7ab9cb95a70c9870c6b0babac711bce13809609f6ac5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19153b79978b8e8cc53a7ab9cb95a70c9870c6b0babac711bce13809609f6ac5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\"
:\\\"2026-02-03T12:04:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e75ec4662cf007f64da3ecaaff0cb4c45aa88746ff717403092583356953526\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e75ec4662cf007f64da3ecaaff0cb4c45aa88746ff717403092583356953526\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:04:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:04:43Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:07Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:08 crc kubenswrapper[4820]: I0203 12:05:08.102410 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 02:30:01.018575657 +0000 UTC Feb 03 12:05:08 crc kubenswrapper[4820]: I0203 12:05:08.292031 4820 generic.go:334] "Generic (PLEG): container finished" podID="d93ec7bc-4029-44a4-894d-03eff1388683" containerID="e0a45a325e96479b1bb5913e1a78221c3b88c62fc434441e03d6b2e9ed476700" exitCode=0 Feb 03 12:05:08 crc kubenswrapper[4820]: I0203 12:05:08.292072 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-b5qz9" event={"ID":"d93ec7bc-4029-44a4-894d-03eff1388683","Type":"ContainerDied","Data":"e0a45a325e96479b1bb5913e1a78221c3b88c62fc434441e03d6b2e9ed476700"} Feb 03 12:05:08 crc kubenswrapper[4820]: I0203 12:05:08.312750 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a4e0669-c148-4af8-99eb-1de50da6d574\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d6f3a37b440a1db10c04386289033a0deab673f71efbdfba81597baf6ce5e86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fec48b5ddf675fb07150ebb88962534caead99f26403cf8ab51c17e2c6c09bf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a4c1209e1301b6e55cd4ea8bb06678ee4c7657c163a610f0cc2dd25f2cb3e61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75e1f7cf1c2120c222793819705a3b465a387c2
aaf33fe146fd46d9c76520aea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12446f64d4189185cf6fe19d38637e090482ef8da5fb5e68600cc00ba37bc2a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bb0a6b382c36acc46be9258e93be71317c26777d770cde9098402054303947c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bb0a6b382c36acc46be9258e93be71317c26777d770cde9098402054303947c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:04:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19153b79978b8e8cc53a7ab9cb95a70c9870c6b0babac711bce13809609f6ac5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19153b79978b8e8cc53a7ab9cb95a70c9870c6b0babac711bce13809609f6ac5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e75ec4662cf007f64da3ecaaff0cb4c45aa88746ff717403092583356953526\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e75ec4662cf007f64da3ecaaff0cb4c45aa88746ff717403092583356953526\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:04:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:04:43Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:08Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:08 crc kubenswrapper[4820]: I0203 12:05:08.327722 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cabb5864-1cc1-4b08-aed3-4eeee9e85bf9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d070e438f7f207c929ca6d110046fa8d320d1a7d97ef22a0e5e1372ff626729d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06e5365791987f1586b93dfe4db8bfe19415d8c400d6c0d75f312cc0f62f7661\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb5d088ef2ec1b90ef2d7af53d61c2789516bf4ef02e9ef06e1dc0c93326dcc9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2783e3045fb1f1e8da75e1a83f88bc0f24ce47eab79fb2b31b794c10f201588\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:04:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:08Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:08 crc kubenswrapper[4820]: I0203 12:05:08.341166 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bbd1daf12ccc77f2658cac173f36c9af6a1c64072fc6bf13302a460f71f05e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5236ae101bb18d9bd1b8d8ac09705f96e5fd78d742e5174e88072570a7448ee8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:08Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:08 crc kubenswrapper[4820]: I0203 12:05:08.356251 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-b5qz9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d93ec7bc-4029-44a4-894d-03eff1388683\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b171b1b37fa4899684005dd60f249e4cdc2ac58cb12658abe09d6d17007f3002\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b171b1b37fa4899684005dd60f249e4cdc2ac58cb12658abe09d6d17007f3002\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0a45a325e96479b1bb5913e1a78221c3b88c62fc434441e03d6b2e9ed476700\\\",\\\"image\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0a45a325e96479b1bb5913e1a78221c3b88c62fc434441e03d6b2e9ed476700\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"moun
tPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-b5qz9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:08Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:08 crc kubenswrapper[4820]: I0203 12:05:08.369838 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:08Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:08 crc kubenswrapper[4820]: I0203 12:05:08.382931 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dac06e9d9db1822ef050113f787ff46db678f4916bc9817ac03f61a509a61b6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:08Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:08 crc kubenswrapper[4820]: I0203 12:05:08.393421 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p5mx8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe0bc53e-6abb-4194-ae3d-109a4fd80372\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9eca081cbf8145867bf5df6d6bacdb9746e546e54aa81874fbfc2927958bc1b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrchn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p5mx8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:08Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:08 crc kubenswrapper[4820]: I0203 12:05:08.406601 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c02def6-29f2-448e-80ec-0c8ee290f053\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1643e498a576d565172492302a641bc81eae658b1611df707da1d833ea2c84f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xjx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e3612cc50c75efc4721ade62e3caf8a6cbcb41c49183b50da3639d7fce97b9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xjx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qj7xr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:08Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:08 crc kubenswrapper[4820]: I0203 12:05:08.420975 4820 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-dkfwm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6da6dd5-2847-482b-adc1-d82ead0af3e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b50e93f1825fc82dc2e0fc70a417d2e4412db367319656bfcea8fa9daf9fe152\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpcc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:05Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-dkfwm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:08Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:08 crc kubenswrapper[4820]: I0203 12:05:08.435010 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d140e30-6304-49be-a1a3-2d6b23f9aef3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48852428653b274f524077be71e20c4271b365d821b8f3235df2d2b41a9e8af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09641bd2ac341aeecdba4211412070d2ee33f42eb670fe75ab88b9a58931c289\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74f9a4c19a0ffe2d3fc0a77b57873721a29f0a1f70d578ba2f68bcceb68ccce2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc27
6e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38554e01896341101d88815c0d8cd45a0114da19790e02a53b5ba3a91ee70e37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38554e01896341101d88815c0d8cd45a0114da19790e02a53b5ba3a91ee70e37\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"message\\\":\\\"le observer\\\\nW0203 12:05:02.811560 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0203 12:05:02.811703 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 12:05:02.814694 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3622958325/tls.crt::/tmp/serving-cert-3622958325/tls.key\\\\\\\"\\\\nI0203 12:05:03.097815 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0203 12:05:03.106823 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0203 12:05:03.106852 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0203 12:05:03.106874 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0203 12:05:03.106880 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0203 12:05:03.113994 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0203 12:05:03.114027 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0203 12:05:03.114019 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0203 12:05:03.114034 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 12:05:03.114043 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0203 12:05:03.114053 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0203 12:05:03.114057 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0203 12:05:03.114061 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0203 12:05:03.116858 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60992833ab89aebd2446f960ab6ae3565f9a17f4e86921d3b2bc68adc9704640\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b34fc810cf94c235cea364cd2441c73f56597b91b0a19b8e02498db5895a3f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b34fc810cf94c235cea364cd2441c73f56597b91b0a19b8e02498db5895a3f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:04:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:04:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:08Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:08 crc kubenswrapper[4820]: I0203 12:05:08.447284 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0a8e1f86d24e8f09d6250cc9003db189b705c0285fbaab40a8f1a4ac4793137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:08Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:08 crc kubenswrapper[4820]: I0203 12:05:08.461138 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:08Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:08 crc kubenswrapper[4820]: I0203 12:05:08.474068 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:08Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:08 crc kubenswrapper[4820]: I0203 12:05:08.497105 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-75mwm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf99e305-aa5b-4171-94f6-1e64f20414dd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://707b3fd62e0f11f4f22464594158f7f1bbcbb9b37c1e49c70e2a0d1a6b5b07c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://707b3fd62e0f11f4f22464594158f7f1bbcbb9b37c1e49c70e2a0d1a6b5b07c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-75mwm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:08Z 
is after 2025-08-24T17:21:41Z"
Feb 03 12:05:08 crc kubenswrapper[4820]: I0203 12:05:08.544467 4820 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 03 12:05:08 crc kubenswrapper[4820]: I0203 12:05:08.545179 4820 scope.go:117] "RemoveContainer" containerID="38554e01896341101d88815c0d8cd45a0114da19790e02a53b5ba3a91ee70e37"
Feb 03 12:05:08 crc kubenswrapper[4820]: E0203 12:05:08.545552 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792"
Feb 03 12:05:08 crc kubenswrapper[4820]: I0203 12:05:08.600162 4820 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 03 12:05:08 crc kubenswrapper[4820]: I0203 12:05:08.601834 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 12:05:08 crc kubenswrapper[4820]: I0203 12:05:08.601874 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 12:05:08 crc kubenswrapper[4820]: I0203 12:05:08.601912 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 12:05:08 crc kubenswrapper[4820]: I0203 12:05:08.602011 4820 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Feb 03 12:05:08 crc kubenswrapper[4820]: I0203 12:05:08.607301 4820 kubelet_node_status.go:115] "Node was previously registered" node="crc"
Feb 03 12:05:08 crc kubenswrapper[4820]: I0203 12:05:08.607486 4820 kubelet_node_status.go:79] "Successfully registered node" node="crc"
Feb 03 12:05:08 crc kubenswrapper[4820]: I0203 12:05:08.608345 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 12:05:08 crc kubenswrapper[4820]: I0203 12:05:08.608377 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 12:05:08 crc kubenswrapper[4820]: I0203 12:05:08.608389 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 12:05:08 crc kubenswrapper[4820]: I0203 12:05:08.608405 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 12:05:08 crc kubenswrapper[4820]: I0203 12:05:08.608416 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:08Z","lastTransitionTime":"2026-02-03T12:05:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Feb 03 12:05:08 crc kubenswrapper[4820]: E0203 12:05:08.622983 4820 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:05:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:05:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:05:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:05:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"83c4bcff-fd36-4e8a-96f0-3320ea01106a\\\",\\\"systemUUID\\\":\\\"a4221fcb-5776-4539-8cb5-9da3bff4d7a8\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:08Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:08 crc kubenswrapper[4820]: I0203 12:05:08.625668 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:08 crc kubenswrapper[4820]: I0203 12:05:08.625691 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 03 12:05:08 crc kubenswrapper[4820]: I0203 12:05:08.625700 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:08 crc kubenswrapper[4820]: I0203 12:05:08.625714 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:08 crc kubenswrapper[4820]: I0203 12:05:08.625723 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:08Z","lastTransitionTime":"2026-02-03T12:05:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:08 crc kubenswrapper[4820]: E0203 12:05:08.638626 4820 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:05:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:05:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:05:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:05:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"83c4bcff-fd36-4e8a-96f0-3320ea01106a\\\",\\\"systemUUID\\\":\\\"a4221fcb-5776-4539-8cb5-9da3bff4d7a8\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:08Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:08 crc kubenswrapper[4820]: I0203 12:05:08.641766 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:08 crc kubenswrapper[4820]: I0203 12:05:08.641786 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 03 12:05:08 crc kubenswrapper[4820]: I0203 12:05:08.641794 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:08 crc kubenswrapper[4820]: I0203 12:05:08.641808 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:08 crc kubenswrapper[4820]: I0203 12:05:08.641817 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:08Z","lastTransitionTime":"2026-02-03T12:05:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:08 crc kubenswrapper[4820]: E0203 12:05:08.658684 4820 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:05:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:05:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:05:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:05:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"83c4bcff-fd36-4e8a-96f0-3320ea01106a\\\",\\\"systemUUID\\\":\\\"a4221fcb-5776-4539-8cb5-9da3bff4d7a8\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:08Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:08 crc kubenswrapper[4820]: I0203 12:05:08.662015 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:08 crc kubenswrapper[4820]: I0203 12:05:08.662082 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 03 12:05:08 crc kubenswrapper[4820]: I0203 12:05:08.662102 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:08 crc kubenswrapper[4820]: I0203 12:05:08.662126 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:08 crc kubenswrapper[4820]: I0203 12:05:08.662144 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:08Z","lastTransitionTime":"2026-02-03T12:05:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:08 crc kubenswrapper[4820]: E0203 12:05:08.678571 4820 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:05:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:05:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:05:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:05:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"83c4bcff-fd36-4e8a-96f0-3320ea01106a\\\",\\\"systemUUID\\\":\\\"a4221fcb-5776-4539-8cb5-9da3bff4d7a8\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:08Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:08 crc kubenswrapper[4820]: I0203 12:05:08.681670 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:08 crc kubenswrapper[4820]: I0203 12:05:08.681696 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 03 12:05:08 crc kubenswrapper[4820]: I0203 12:05:08.681704 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:08 crc kubenswrapper[4820]: I0203 12:05:08.681718 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:08 crc kubenswrapper[4820]: I0203 12:05:08.681728 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:08Z","lastTransitionTime":"2026-02-03T12:05:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:08 crc kubenswrapper[4820]: E0203 12:05:08.692725 4820 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:05:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:05:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:05:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:05:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"83c4bcff-fd36-4e8a-96f0-3320ea01106a\\\",\\\"systemUUID\\\":\\\"a4221fcb-5776-4539-8cb5-9da3bff4d7a8\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:08Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:08 crc kubenswrapper[4820]: E0203 12:05:08.692914 4820 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 03 12:05:08 crc kubenswrapper[4820]: I0203 12:05:08.695185 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 03 12:05:08 crc kubenswrapper[4820]: I0203 12:05:08.695224 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:08 crc kubenswrapper[4820]: I0203 12:05:08.695237 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:08 crc kubenswrapper[4820]: I0203 12:05:08.695298 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:08 crc kubenswrapper[4820]: I0203 12:05:08.695311 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:08Z","lastTransitionTime":"2026-02-03T12:05:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:08 crc kubenswrapper[4820]: I0203 12:05:08.798277 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:08 crc kubenswrapper[4820]: I0203 12:05:08.798608 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:08 crc kubenswrapper[4820]: I0203 12:05:08.798620 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:08 crc kubenswrapper[4820]: I0203 12:05:08.798633 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:08 crc kubenswrapper[4820]: I0203 12:05:08.798641 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:08Z","lastTransitionTime":"2026-02-03T12:05:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:08 crc kubenswrapper[4820]: I0203 12:05:08.900727 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:08 crc kubenswrapper[4820]: I0203 12:05:08.900779 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:08 crc kubenswrapper[4820]: I0203 12:05:08.900788 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:08 crc kubenswrapper[4820]: I0203 12:05:08.900803 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:08 crc kubenswrapper[4820]: I0203 12:05:08.900817 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:08Z","lastTransitionTime":"2026-02-03T12:05:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Feb 03 12:05:09 crc kubenswrapper[4820]: I0203 12:05:09.003755 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 12:05:09 crc kubenswrapper[4820]: I0203 12:05:09.003791 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 12:05:09 crc kubenswrapper[4820]: I0203 12:05:09.003800 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 12:05:09 crc kubenswrapper[4820]: I0203 12:05:09.003816 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 12:05:09 crc kubenswrapper[4820]: I0203 12:05:09.003826 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:09Z","lastTransitionTime":"2026-02-03T12:05:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 12:05:09 crc kubenswrapper[4820]: I0203 12:05:09.102914 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 18:49:15.28255209 +0000 UTC
Feb 03 12:05:09 crc kubenswrapper[4820]: I0203 12:05:09.105591 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 12:05:09 crc kubenswrapper[4820]: I0203 12:05:09.105617 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 12:05:09 crc kubenswrapper[4820]: I0203 12:05:09.105624 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 12:05:09 crc kubenswrapper[4820]: I0203 12:05:09.105636 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 12:05:09 crc kubenswrapper[4820]: I0203 12:05:09.105645 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:09Z","lastTransitionTime":"2026-02-03T12:05:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 12:05:09 crc kubenswrapper[4820]: I0203 12:05:09.142150 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 03 12:05:09 crc kubenswrapper[4820]: I0203 12:05:09.142260 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 03 12:05:09 crc kubenswrapper[4820]: E0203 12:05:09.142315 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
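The status patches bracketing these entries all fail with the same TLS error: the serving certificate for the node.network-node-identity.openshift.io webhook expired on 2025-08-24T17:21:41Z while the node clock reads 2026-02-03. Below is a self-contained Go sketch of the validity-window comparison behind the "x509: certificate has expired or is not yet valid" message; the PEM path is a hypothetical placeholder for the webhook's serving certificate, not a path taken from this log.

// certcheck.go: sketch of the x509 validity-window check that every TLS
// handshake applies, and that each retry in this log fails.
package main

import (
    "crypto/x509"
    "encoding/pem"
    "fmt"
    "os"
    "time"
)

func main() {
    data, err := os.ReadFile("/path/to/webhook-serving.crt") // hypothetical path
    if err != nil {
        fmt.Println("read:", err)
        return
    }
    block, _ := pem.Decode(data)
    if block == nil {
        fmt.Println("no PEM block found")
        return
    }
    cert, err := x509.ParseCertificate(block.Bytes)
    if err != nil {
        fmt.Println("parse:", err)
        return
    }
    now := time.Now().UTC()
    switch {
    case now.After(cert.NotAfter): // the case seen in this log
        fmt.Printf("certificate has expired: current time %s is after %s\n",
            now.Format(time.RFC3339), cert.NotAfter.UTC().Format(time.RFC3339))
    case now.Before(cert.NotBefore):
        fmt.Printf("certificate is not yet valid: current time %s is before %s\n",
            now.Format(time.RFC3339), cert.NotBefore.UTC().Format(time.RFC3339))
    default:
        fmt.Println("certificate is within its validity window")
    }
}

Because the same comparison runs inside every handshake, each retry fails identically until the webhook certificate is rotated or the node clock is corrected.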
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 12:05:09 crc kubenswrapper[4820]: I0203 12:05:09.142355 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 12:05:09 crc kubenswrapper[4820]: E0203 12:05:09.142490 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 12:05:09 crc kubenswrapper[4820]: E0203 12:05:09.142691 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 12:05:09 crc kubenswrapper[4820]: I0203 12:05:09.208102 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:09 crc kubenswrapper[4820]: I0203 12:05:09.208143 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:09 crc kubenswrapper[4820]: I0203 12:05:09.208157 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:09 crc kubenswrapper[4820]: I0203 12:05:09.208180 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:09 crc kubenswrapper[4820]: I0203 12:05:09.208203 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:09Z","lastTransitionTime":"2026-02-03T12:05:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:09 crc kubenswrapper[4820]: I0203 12:05:09.235255 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-z8xrk"] Feb 03 12:05:09 crc kubenswrapper[4820]: I0203 12:05:09.235706 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-z8xrk" Feb 03 12:05:09 crc kubenswrapper[4820]: I0203 12:05:09.237717 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Feb 03 12:05:09 crc kubenswrapper[4820]: I0203 12:05:09.237934 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 03 12:05:09 crc kubenswrapper[4820]: I0203 12:05:09.238147 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Feb 03 12:05:09 crc kubenswrapper[4820]: I0203 12:05:09.239533 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 03 12:05:09 crc kubenswrapper[4820]: I0203 12:05:09.250192 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c02def6-29f2-448e-80ec-0c8ee290f053\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1643e498a576d565172492302a641bc81eae658b1611df707da1d833ea2c84f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xjx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e3612cc50c75efc4721ade62e3caf8a6cbcb41c49183b50da3639d7fce97b9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:05Z\\\"}},\\\"
volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xjx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qj7xr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:09Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:09 crc kubenswrapper[4820]: I0203 12:05:09.262985 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dkfwm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6da6dd5-2847-482b-adc1-d82ead0af3e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b50e93f1825fc82dc2e0fc70a417d2e4412db367319656bfcea8fa9daf9fe152\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\
"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpcc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dkfwm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:09Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:09 crc kubenswrapper[4820]: I0203 12:05:09.276244 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d140e30-6304-49be-a1a3-2d6b23f9aef3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48852428653b274f524077be71e20c4271b365d821b8f3235df2d2b41a9e8af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09641bd2ac341aeecdba4211412070d2ee33f42eb670fe75ab88b9a58931c289\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74f9a4c19a0ffe2d3fc0a77b57873721a29f0a1f70d578ba2f68bcceb68ccce2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38554e01896341101d88815c0d8cd45a0114da19790e02a53b5ba3a91ee70e37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38554e01896341101d88815c0d8cd45a0114da19790e02a53b5ba3a91ee70e37\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"message\\\":\\\"le observer\\\\nW0203 12:05:02.811560 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0203 12:05:02.811703 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 12:05:02.814694 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3622958325/tls.crt::/tmp/serving-cert-3622958325/tls.key\\\\\\\"\\\\nI0203 12:05:03.097815 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0203 12:05:03.106823 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0203 12:05:03.106852 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0203 12:05:03.106874 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0203 12:05:03.106880 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0203 12:05:03.113994 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0203 12:05:03.114027 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0203 12:05:03.114019 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0203 12:05:03.114034 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 12:05:03.114043 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0203 12:05:03.114053 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0203 12:05:03.114057 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0203 12:05:03.114061 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0203 12:05:03.116858 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60992833ab89aebd2446f960ab6ae3565f9a17f4e86921d3b2bc68adc9704640\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b34fc810cf94c235cea364cd2441c73f56597b91b0a19b8e02498db5895a3f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b34fc810cf94c235cea364cd2441c73f56597b91b0a19b8e02498db5895a3f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:04:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:04:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:09Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:09 crc kubenswrapper[4820]: I0203 12:05:09.277428 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c1cca669-281b-4756-8da8-3860684d3410-host\") pod \"node-ca-z8xrk\" (UID: \"c1cca669-281b-4756-8da8-3860684d3410\") " pod="openshift-image-registry/node-ca-z8xrk" Feb 03 12:05:09 crc kubenswrapper[4820]: I0203 12:05:09.277461 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxwqh\" (UniqueName: \"kubernetes.io/projected/c1cca669-281b-4756-8da8-3860684d3410-kube-api-access-dxwqh\") pod \"node-ca-z8xrk\" (UID: \"c1cca669-281b-4756-8da8-3860684d3410\") " pod="openshift-image-registry/node-ca-z8xrk" Feb 03 12:05:09 crc kubenswrapper[4820]: I0203 12:05:09.277483 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: 
\"kubernetes.io/configmap/c1cca669-281b-4756-8da8-3860684d3410-serviceca\") pod \"node-ca-z8xrk\" (UID: \"c1cca669-281b-4756-8da8-3860684d3410\") " pod="openshift-image-registry/node-ca-z8xrk" Feb 03 12:05:09 crc kubenswrapper[4820]: I0203 12:05:09.289611 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0a8e1f86d24e8f09d6250cc9003db189b705c0285fbaab40a8f1a4ac4793137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:09Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:09 crc kubenswrapper[4820]: I0203 12:05:09.297648 4820 generic.go:334] "Generic (PLEG): container finished" podID="d93ec7bc-4029-44a4-894d-03eff1388683" containerID="7a25830d512e5d0dbbfa4775f85af51e80915adca718997b34e926220ff9377d" exitCode=0 Feb 03 12:05:09 crc kubenswrapper[4820]: I0203 12:05:09.297734 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-b5qz9" event={"ID":"d93ec7bc-4029-44a4-894d-03eff1388683","Type":"ContainerDied","Data":"7a25830d512e5d0dbbfa4775f85af51e80915adca718997b34e926220ff9377d"} Feb 03 12:05:09 crc kubenswrapper[4820]: I0203 12:05:09.304976 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:09Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:09 crc kubenswrapper[4820]: I0203 12:05:09.310775 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:09 crc kubenswrapper[4820]: I0203 12:05:09.310833 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:09 crc kubenswrapper[4820]: I0203 12:05:09.310844 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:09 crc kubenswrapper[4820]: I0203 12:05:09.310863 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:09 crc kubenswrapper[4820]: I0203 12:05:09.310878 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:09Z","lastTransitionTime":"2026-02-03T12:05:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:05:09 crc kubenswrapper[4820]: I0203 12:05:09.319031 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:09Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:09 crc kubenswrapper[4820]: I0203 12:05:09.329971 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p5mx8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe0bc53e-6abb-4194-ae3d-109a4fd80372\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9eca081cbf8145867bf5df6d6bacdb9746e546e54aa81874fbfc2927958bc1b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrchn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p5mx8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:09Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:09 crc kubenswrapper[4820]: I0203 12:05:09.350232 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-75mwm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf99e305-aa5b-4171-94f6-1e64f20414dd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://707b3fd62e0f11f4f22464594158f7f1bbcbb9b37c1e49c70e2a0d1a6b5b07c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://707b3fd62e0f11f4f22464594158f7f1bbcbb9b37c1e49c70e2a0d1a6b5b07c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:05
Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-75mwm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:09Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:09 crc kubenswrapper[4820]: I0203 12:05:09.363390 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-z8xrk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1cca669-281b-4756-8da8-3860684d3410\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:09Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:09Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dxwqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:09Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-z8xrk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:09Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:09 crc kubenswrapper[4820]: I0203 12:05:09.377937 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/c1cca669-281b-4756-8da8-3860684d3410-serviceca\") pod \"node-ca-z8xrk\" (UID: \"c1cca669-281b-4756-8da8-3860684d3410\") " pod="openshift-image-registry/node-ca-z8xrk" Feb 03 12:05:09 crc kubenswrapper[4820]: I0203 12:05:09.378061 4820 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c1cca669-281b-4756-8da8-3860684d3410-host\") pod \"node-ca-z8xrk\" (UID: \"c1cca669-281b-4756-8da8-3860684d3410\") " pod="openshift-image-registry/node-ca-z8xrk" Feb 03 12:05:09 crc kubenswrapper[4820]: I0203 12:05:09.378095 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dxwqh\" (UniqueName: \"kubernetes.io/projected/c1cca669-281b-4756-8da8-3860684d3410-kube-api-access-dxwqh\") pod \"node-ca-z8xrk\" (UID: \"c1cca669-281b-4756-8da8-3860684d3410\") " pod="openshift-image-registry/node-ca-z8xrk" Feb 03 12:05:09 crc kubenswrapper[4820]: I0203 12:05:09.378985 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c1cca669-281b-4756-8da8-3860684d3410-host\") pod \"node-ca-z8xrk\" (UID: \"c1cca669-281b-4756-8da8-3860684d3410\") " pod="openshift-image-registry/node-ca-z8xrk" Feb 03 12:05:09 crc kubenswrapper[4820]: I0203 12:05:09.379783 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/c1cca669-281b-4756-8da8-3860684d3410-serviceca\") pod \"node-ca-z8xrk\" (UID: \"c1cca669-281b-4756-8da8-3860684d3410\") " pod="openshift-image-registry/node-ca-z8xrk" Feb 03 12:05:09 crc kubenswrapper[4820]: I0203 12:05:09.384043 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a4e0669-c148-4af8-99eb-1de50da6d574\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d6f3a37b440a1db10c04386289033a0deab673f71efbdfba81597baf6ce5e86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fec48b5ddf675fb07150ebb88962534caead99f26403cf8ab51c17e2c6c09bf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c687
7441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a4c1209e1301b6e55cd4ea8bb06678ee4c7657c163a610f0cc2dd25f2cb3e61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75e1f7cf1c2120c222793819705a3b465a387c2aaf33fe146fd46d9c76520aea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12446f64d4189185cf6fe19d38637e090482ef8da5fb5e68600cc00ba37bc2a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bb0a6b382c36acc46be9258e93be71317c26777d770cde9098402054303947c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\
\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bb0a6b382c36acc46be9258e93be71317c26777d770cde9098402054303947c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:04:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19153b79978b8e8cc53a7ab9cb95a70c9870c6b0babac711bce13809609f6ac5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19153b79978b8e8cc53a7ab9cb95a70c9870c6b0babac711bce13809609f6ac5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e75ec4662cf007f64da3ecaaff0cb4c45aa88746ff717403092583356953526\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e75ec4662cf007f64da3ecaaff0cb4c45aa88746ff717403092583356953526\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:04:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:04:43Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:09Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:09 crc kubenswrapper[4820]: I0203 12:05:09.394781 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cabb5864-1cc1-4b08-aed3-4eeee9e85bf9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d070e438f7f207c929ca6d110046fa8d320d1a7d97ef22a0e5e1372ff626729d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06e5365791987f1586b93dfe4db8bfe19415d8c400d6c0d75f312cc0f62f7661\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb5d088ef2ec1b90ef2d7af53d61c2789516bf4ef02e9ef06e1dc0c93326dcc9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2783e3045fb1f1e8da75e1a83f88bc0f24ce47eab79fb2b31b794c10f201588\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:04:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:09Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:09 crc kubenswrapper[4820]: I0203 12:05:09.395483 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dxwqh\" (UniqueName: \"kubernetes.io/projected/c1cca669-281b-4756-8da8-3860684d3410-kube-api-access-dxwqh\") pod \"node-ca-z8xrk\" (UID: \"c1cca669-281b-4756-8da8-3860684d3410\") " pod="openshift-image-registry/node-ca-z8xrk" Feb 03 12:05:09 crc kubenswrapper[4820]: I0203 12:05:09.408994 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bbd1daf12ccc77f2658cac173f36c9af6a1c64072fc6bf13302a460f71f05e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5236ae101bb18d9bd1b8d8ac09705f96e5fd78d742e5174e88072570a7448ee8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:09Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:09 crc kubenswrapper[4820]: I0203 12:05:09.412655 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:09 crc kubenswrapper[4820]: I0203 12:05:09.412702 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:09 crc kubenswrapper[4820]: I0203 12:05:09.412714 4820 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Feb 03 12:05:09 crc kubenswrapper[4820]: I0203 12:05:09.412733 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:09 crc kubenswrapper[4820]: I0203 12:05:09.412746 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:09Z","lastTransitionTime":"2026-02-03T12:05:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:09 crc kubenswrapper[4820]: I0203 12:05:09.425108 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-b5qz9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d93ec7bc-4029-44a4-894d-03eff1388683\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b171b1b37fa4899684005dd60f249e4cdc2ac58cb12658abe09d6d17007f3002\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b171b1b37fa4899684005dd60f249e4cdc2ac58cb12658abe09d6d17007f3002\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0a45a325e96479b1bb5913e1a78221c3b88c62fc434441e03d6b2e9ed476700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0a45a325e96479b1bb5913e1a78221c3b88c62fc434441e03d6b2e9ed476700\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-
03T12:05:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-b5qz9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:09Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:09 crc kubenswrapper[4820]: I0203 12:05:09.438909 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:09Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:09 crc kubenswrapper[4820]: I0203 12:05:09.452665 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dac06e9d9db1822ef050113f787ff46db678f4916bc9817ac03f61a509a61b6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:09Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:09 crc kubenswrapper[4820]: I0203 12:05:09.471431 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a4e0669-c148-4af8-99eb-1de50da6d574\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d6f3a37b440a1db10c04386289033a0deab673f71efbdfba81597baf6ce5e86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fec48b5ddf675fb07150ebb88962534caead99f26403cf8ab51c17e2c6c09bf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a4c1209e1301b6e55cd4ea8bb06678ee4c7657c163a610f0cc2dd25f2cb3e61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75e1f7cf1c2120c222793819705a3b465a387c2
aaf33fe146fd46d9c76520aea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12446f64d4189185cf6fe19d38637e090482ef8da5fb5e68600cc00ba37bc2a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bb0a6b382c36acc46be9258e93be71317c26777d770cde9098402054303947c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bb0a6b382c36acc46be9258e93be71317c26777d770cde9098402054303947c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:04:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19153b79978b8e8cc53a7ab9cb95a70c9870c6b0babac711bce13809609f6ac5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19153b79978b8e8cc53a7ab9cb95a70c9870c6b0babac711bce13809609f6ac5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e75ec4662cf007f64da3ecaaff0cb4c45aa88746ff717403092583356953526\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e75ec4662cf007f64da3ecaaff0cb4c45aa88746ff717403092583356953526\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:04:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:04:43Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:09Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:09 crc kubenswrapper[4820]: I0203 12:05:09.483469 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cabb5864-1cc1-4b08-aed3-4eeee9e85bf9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d070e438f7f207c929ca6d110046fa8d320d1a7d97ef22a0e5e1372ff626729d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06e5365791987f1586b93dfe4db8bfe19415d8c400d6c0d75f312cc0f62f7661\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb5d088ef2ec1b90ef2d7af53d61c2789516bf4ef02e9ef06e1dc0c93326dcc9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2783e3045fb1f1e8da75e1a83f88bc0f24ce47eab79fb2b31b794c10f201588\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:04:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:09Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:09 crc kubenswrapper[4820]: I0203 12:05:09.497615 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bbd1daf12ccc77f2658cac173f36c9af6a1c64072fc6bf13302a460f71f05e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5236ae101bb18d9bd1b8d8ac09705f96e5fd78d742e5174e88072570a7448ee8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:09Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:09 crc kubenswrapper[4820]: I0203 12:05:09.510422 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-b5qz9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d93ec7bc-4029-44a4-894d-03eff1388683\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b171b1b37fa4899684005dd60f249e4cdc2ac58cb12658abe09d6d17007f3002\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b171b1b37fa4899684005dd60f249e4cdc2ac58cb12658abe09d6d17007f3002\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0a45a325e96479b1bb5913e1a78221c3b88c62fc434441e03d6b2e9ed476700\\\",\\\"image\\\":\\\"quay.io/open
shift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0a45a325e96479b1bb5913e1a78221c3b88c62fc434441e03d6b2e9ed476700\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a25830d512e5d0dbbfa4775f85af51e80915adca718997b34e926220ff9377d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a25830d512e5d0dbbfa4775f85af51e80915adca718997b34e926220ff9377d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev
@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-b5qz9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:09Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:09 crc kubenswrapper[4820]: I0203 12:05:09.515109 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:09 crc kubenswrapper[4820]: I0203 12:05:09.515146 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:09 crc kubenswrapper[4820]: I0203 12:05:09.515157 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:09 crc kubenswrapper[4820]: I0203 12:05:09.515172 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:09 crc kubenswrapper[4820]: I0203 12:05:09.515183 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:09Z","lastTransitionTime":"2026-02-03T12:05:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:05:09 crc kubenswrapper[4820]: I0203 12:05:09.523335 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:09Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:09 crc kubenswrapper[4820]: I0203 12:05:09.538128 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dac06e9d9db1822ef050113f787ff46db678f4916bc9817ac03f61a509a61b6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:09Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:09 crc kubenswrapper[4820]: I0203 12:05:09.550195 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d140e30-6304-49be-a1a3-2d6b23f9aef3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48852428653b274f524077be71e20c4271b365d821b8f3235df2d2b41a9e8af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09641bd2ac341aeecdba4211412070d2ee33f42eb670fe75ab88b9a58931c289\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74f9a4c19a0ffe2d3fc0a77b57873721a29f0a1f70d578ba2f68bcceb68ccce2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38554e01896341101d88815c0d8cd45a0114da19790e02a53b5ba3a91ee70e37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38554e01896341101d88815c0d8cd45a0114da19790e02a53b5ba3a91ee70e37\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"message\\\":\\\"le observer\\\\nW0203 12:05:02.811560 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0203 12:05:02.811703 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 12:05:02.814694 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3622958325/tls.crt::/tmp/serving-cert-3622958325/tls.key\\\\\\\"\\\\nI0203 12:05:03.097815 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0203 12:05:03.106823 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0203 12:05:03.106852 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0203 12:05:03.106874 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0203 12:05:03.106880 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0203 12:05:03.113994 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0203 12:05:03.114027 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0203 12:05:03.114019 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0203 12:05:03.114034 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 12:05:03.114043 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0203 12:05:03.114053 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0203 12:05:03.114057 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0203 12:05:03.114061 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0203 12:05:03.116858 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60992833ab89aebd2446f960ab6ae3565f9a17f4e86921d3b2bc68adc9704640\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b34fc810cf94c235cea364cd2441c73f56597b91b0a19b8e02498db5895a3f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b34fc810cf94c235cea364cd2441c73f56597b91b0a19b8e02498db5895a3f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:04:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:04:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:09Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:09 crc kubenswrapper[4820]: I0203 12:05:09.552028 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-z8xrk" Feb 03 12:05:09 crc kubenswrapper[4820]: I0203 12:05:09.563562 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0a8e1f86d24e8f09d6250cc9003db189b705c0285fbaab40a8f1a4ac4793137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:09Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:09 crc kubenswrapper[4820]: W0203 12:05:09.570724 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc1cca669_281b_4756_8da8_3860684d3410.slice/crio-d9e42d4231768cf2b4e59953cb76b144f21423f7a4615eaff79bad6d69edcb3b WatchSource:0}: Error finding container d9e42d4231768cf2b4e59953cb76b144f21423f7a4615eaff79bad6d69edcb3b: Status 404 returned error can't find the container with id d9e42d4231768cf2b4e59953cb76b144f21423f7a4615eaff79bad6d69edcb3b Feb 03 12:05:09 crc kubenswrapper[4820]: I0203 12:05:09.577056 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:09Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:09 crc kubenswrapper[4820]: I0203 12:05:09.588818 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:09Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:09 crc kubenswrapper[4820]: I0203 12:05:09.600275 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p5mx8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe0bc53e-6abb-4194-ae3d-109a4fd80372\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9eca081cbf8145867bf5df6d6bacdb9746e546e54aa81874fbfc2927958bc1b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrchn\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p5mx8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:09Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:09 crc kubenswrapper[4820]: I0203 12:05:09.615171 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c02def6-29f2-448e-80ec-0c8ee290f053\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1643e498a576d565172492302a641bc81eae658b1611df707da1d833ea2c84f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xjx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e3612cc50c75efc4721ade62e3caf8a6cbcb41c49183b50da3639d7fce97b9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serv
iceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xjx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qj7xr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:09Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:09 crc kubenswrapper[4820]: I0203 12:05:09.617014 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:09 crc kubenswrapper[4820]: I0203 12:05:09.617051 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:09 crc kubenswrapper[4820]: I0203 12:05:09.617060 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:09 crc kubenswrapper[4820]: I0203 12:05:09.617075 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:09 crc kubenswrapper[4820]: I0203 12:05:09.617086 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:09Z","lastTransitionTime":"2026-02-03T12:05:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:05:09 crc kubenswrapper[4820]: I0203 12:05:09.629155 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dkfwm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6da6dd5-2847-482b-adc1-d82ead0af3e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b50e93f1825fc82dc2e0fc70a417d2e4412db367319656bfcea8fa9daf9fe152\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpcc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dkfwm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:09Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:09 crc kubenswrapper[4820]: I0203 12:05:09.645961 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-75mwm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf99e305-aa5b-4171-94f6-1e64f20414dd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://707b3fd62e0f11f4f22464594158f7f1bbcbb9b37c1e49c70e2a0d1a6b5b07c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://707b3fd62e0f11f4f22464594158f7f1bbcbb9b37c1e49c70e2a0d1a6b5b07c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-75mwm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:09Z 
is after 2025-08-24T17:21:41Z" Feb 03 12:05:09 crc kubenswrapper[4820]: I0203 12:05:09.656775 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-z8xrk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1cca669-281b-4756-8da8-3860684d3410\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:09Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:09Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dxwqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:09Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-z8xrk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:09Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:09 crc kubenswrapper[4820]: I0203 12:05:09.719588 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:09 crc kubenswrapper[4820]: I0203 12:05:09.719623 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:09 crc kubenswrapper[4820]: I0203 12:05:09.719633 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:09 crc kubenswrapper[4820]: I0203 12:05:09.719646 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:09 crc kubenswrapper[4820]: I0203 12:05:09.719656 4820 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:09Z","lastTransitionTime":"2026-02-03T12:05:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:09 crc kubenswrapper[4820]: I0203 12:05:09.822186 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:09 crc kubenswrapper[4820]: I0203 12:05:09.822220 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:09 crc kubenswrapper[4820]: I0203 12:05:09.822233 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:09 crc kubenswrapper[4820]: I0203 12:05:09.822248 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:09 crc kubenswrapper[4820]: I0203 12:05:09.822259 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:09Z","lastTransitionTime":"2026-02-03T12:05:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:09 crc kubenswrapper[4820]: I0203 12:05:09.924524 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:09 crc kubenswrapper[4820]: I0203 12:05:09.924566 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:09 crc kubenswrapper[4820]: I0203 12:05:09.924576 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:09 crc kubenswrapper[4820]: I0203 12:05:09.924594 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:09 crc kubenswrapper[4820]: I0203 12:05:09.924605 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:09Z","lastTransitionTime":"2026-02-03T12:05:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:05:10 crc kubenswrapper[4820]: I0203 12:05:10.027659 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:10 crc kubenswrapper[4820]: I0203 12:05:10.028021 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:10 crc kubenswrapper[4820]: I0203 12:05:10.028034 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:10 crc kubenswrapper[4820]: I0203 12:05:10.028057 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:10 crc kubenswrapper[4820]: I0203 12:05:10.028070 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:10Z","lastTransitionTime":"2026-02-03T12:05:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:10 crc kubenswrapper[4820]: I0203 12:05:10.103474 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 04:52:20.849454113 +0000 UTC Feb 03 12:05:10 crc kubenswrapper[4820]: I0203 12:05:10.130568 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:10 crc kubenswrapper[4820]: I0203 12:05:10.130617 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:10 crc kubenswrapper[4820]: I0203 12:05:10.130628 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:10 crc kubenswrapper[4820]: I0203 12:05:10.130644 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:10 crc kubenswrapper[4820]: I0203 12:05:10.130657 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:10Z","lastTransitionTime":"2026-02-03T12:05:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:05:10 crc kubenswrapper[4820]: I0203 12:05:10.233775 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:10 crc kubenswrapper[4820]: I0203 12:05:10.233815 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:10 crc kubenswrapper[4820]: I0203 12:05:10.233827 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:10 crc kubenswrapper[4820]: I0203 12:05:10.233842 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:10 crc kubenswrapper[4820]: I0203 12:05:10.233853 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:10Z","lastTransitionTime":"2026-02-03T12:05:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:10 crc kubenswrapper[4820]: I0203 12:05:10.302357 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-z8xrk" event={"ID":"c1cca669-281b-4756-8da8-3860684d3410","Type":"ContainerStarted","Data":"09a38521ccb70b0b152fc51267f6d38cca610e2e8c9d12915482118f3a431b22"} Feb 03 12:05:10 crc kubenswrapper[4820]: I0203 12:05:10.302401 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-z8xrk" event={"ID":"c1cca669-281b-4756-8da8-3860684d3410","Type":"ContainerStarted","Data":"d9e42d4231768cf2b4e59953cb76b144f21423f7a4615eaff79bad6d69edcb3b"} Feb 03 12:05:10 crc kubenswrapper[4820]: I0203 12:05:10.319468 4820 generic.go:334] "Generic (PLEG): container finished" podID="d93ec7bc-4029-44a4-894d-03eff1388683" containerID="ff3da1549bd61df7cc473a1a0a17f147b940c3f88f0b72828357ea7982406088" exitCode=0 Feb 03 12:05:10 crc kubenswrapper[4820]: I0203 12:05:10.319630 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-b5qz9" event={"ID":"d93ec7bc-4029-44a4-894d-03eff1388683","Type":"ContainerDied","Data":"ff3da1549bd61df7cc473a1a0a17f147b940c3f88f0b72828357ea7982406088"} Feb 03 12:05:10 crc kubenswrapper[4820]: I0203 12:05:10.320374 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dkfwm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6da6dd5-2847-482b-adc1-d82ead0af3e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b50e93f1825fc82dc2e0fc70a417d2e4412db367319656bfcea8fa9daf9fe152\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpcc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dkfwm\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:10Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:10 crc kubenswrapper[4820]: I0203 12:05:10.330918 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-75mwm" event={"ID":"cf99e305-aa5b-4171-94f6-1e64f20414dd","Type":"ContainerStarted","Data":"4c9f050db727793c24f2f8bf3b387813e313f3b764be003c6849947f98a61d4f"} Feb 03 12:05:10 crc kubenswrapper[4820]: I0203 12:05:10.335609 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d140e30-6304-49be-a1a3-2d6b23f9aef3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48852428653b274f524077be71e20c4271b365d821b8f3235df2d2b41a9e8af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09641bd2ac341aeecdba4211412070d2ee33f42eb670fe75ab88b9a58931c289\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",
\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74f9a4c19a0ffe2d3fc0a77b57873721a29f0a1f70d578ba2f68bcceb68ccce2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38554e01896341101d88815c0d8cd45a0114da19790e02a53b5ba3a91ee70e37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38554e01896341101d88815c0d8cd45a0114da19790e02a53b5ba3a91ee70e37\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"message\\\":\\\"le observer\\\\nW0203 12:05:02.811560 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0203 12:05:02.811703 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 12:05:02.814694 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3622958325/tls.crt::/tmp/serving-cert-3622958325/tls.key\\\\\\\"\\\\nI0203 12:05:03.097815 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0203 12:05:03.106823 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0203 12:05:03.106852 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0203 12:05:03.106874 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0203 12:05:03.106880 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0203 12:05:03.113994 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0203 12:05:03.114027 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0203 12:05:03.114019 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0203 12:05:03.114034 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 12:05:03.114043 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0203 12:05:03.114053 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0203 12:05:03.114057 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0203 12:05:03.114061 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' 
detected.\\\\nF0203 12:05:03.116858 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60992833ab89aebd2446f960ab6ae3565f9a17f4e86921d3b2bc68adc9704640\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b34fc810cf94c235cea364cd2441c73f56597b91b0a19b8e02498db5895a3f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b34fc810cf94c235cea364cd2441c73f56597b91b0a19b8e02498db5895a3f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:04:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:04:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:10Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:10 crc kubenswrapper[4820]: I0203 12:05:10.335932 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:10 crc kubenswrapper[4820]: I0203 12:05:10.335961 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:10 crc kubenswrapper[4820]: I0203 12:05:10.335972 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:10 crc kubenswrapper[4820]: I0203 12:05:10.336018 4820 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:10 crc kubenswrapper[4820]: I0203 12:05:10.336044 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:10Z","lastTransitionTime":"2026-02-03T12:05:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:10 crc kubenswrapper[4820]: I0203 12:05:10.349445 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0a8e1f86d24e8f09d6250cc9003db189b705c0285fbaab40a8f1a4ac4793137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:10Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:10 crc kubenswrapper[4820]: I0203 12:05:10.359645 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:10Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:10 crc kubenswrapper[4820]: I0203 12:05:10.372695 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:10Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:10 crc kubenswrapper[4820]: I0203 12:05:10.383849 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p5mx8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe0bc53e-6abb-4194-ae3d-109a4fd80372\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9eca081cbf8145867bf5df6d6bacdb9746e546e54aa81874fbfc2927958bc1b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrchn\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p5mx8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:10Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:10 crc kubenswrapper[4820]: I0203 12:05:10.395245 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c02def6-29f2-448e-80ec-0c8ee290f053\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1643e498a576d565172492302a641bc81eae658b1611df707da1d833ea2c84f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xjx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e3612cc50c75efc4721ade62e3caf8a6cbcb41c49183b50da3639d7fce97b9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serv
iceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xjx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qj7xr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:10Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:10 crc kubenswrapper[4820]: I0203 12:05:10.418116 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-75mwm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf99e305-aa5b-4171-94f6-1e64f20414dd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://707b3fd62e0f11f4f22464594158f7f1bbcbb9b37c1e49c70e2a0d1a6b5b07c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://707b3fd62e0f11f4f22464594158f7f1bbcbb9b37c1e49c70e2a0d1a6b5b07c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-75mwm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:10Z 
is after 2025-08-24T17:21:41Z" Feb 03 12:05:10 crc kubenswrapper[4820]: I0203 12:05:10.429588 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-z8xrk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1cca669-281b-4756-8da8-3860684d3410\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09a38521ccb70b0b152fc51267f6d38cca610e2e8c9d12915482118f3a431b22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dxwqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:09Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-z8xrk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:10Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:10 crc kubenswrapper[4820]: I0203 12:05:10.448212 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:10 crc kubenswrapper[4820]: I0203 12:05:10.448247 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:10 crc kubenswrapper[4820]: I0203 12:05:10.448257 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:10 crc kubenswrapper[4820]: I0203 12:05:10.448273 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:10 crc kubenswrapper[4820]: I0203 12:05:10.448284 4820 setters.go:603] "Node became not ready" 
node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:10Z","lastTransitionTime":"2026-02-03T12:05:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:10 crc kubenswrapper[4820]: I0203 12:05:10.451111 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a4e0669-c148-4af8-99eb-1de50da6d574\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d6f3a37b440a1db10c04386289033a0deab673f71efbdfba81597baf6ce5e86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fec48b5ddf675fb07150ebb88962534caead99f26403cf8ab51c17e2c6c09bf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a4c1209e1301b6e55cd4ea8bb06678ee4c7657c163a610f0cc2dd25f2cb3e61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75e1f7cf1c2120c222793819705a3b465a387c2aaf33fe146fd46d9c76520aea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12446f64d4189185cf6fe19d38637e090482ef8da5fb5e68600cc00ba37bc2a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bb0a6b382c36acc46be9258e93be71317c26777d770cde9098402054303947c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bb0a6b382c36acc46be9258e93be71317c26777d770cde9098402054303947c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:04:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19153b79978b8e8cc53a7ab9cb95a70c9870c6b0babac711bce13809609f6ac5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7
c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19153b79978b8e8cc53a7ab9cb95a70c9870c6b0babac711bce13809609f6ac5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e75ec4662cf007f64da3ecaaff0cb4c45aa88746ff717403092583356953526\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e75ec4662cf007f64da3ecaaff0cb4c45aa88746ff717403092583356953526\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:04:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:04:43Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:10Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:10 crc kubenswrapper[4820]: I0203 12:05:10.462398 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cabb5864-1cc1-4b08-aed3-4eeee9e85bf9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d070e438f7f207c929ca6d110046fa8d320d1a7d97ef22a0e5e1372ff626729d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06e5365791987f1586b93dfe4db8bfe19415d8c400d6c0d75f312cc0f62f7661\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb5d088ef2ec1b90ef2d7af53d61c2789516bf4ef02e9ef06e1dc0c93326dcc9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2783e3045fb1f1e8da75e1a83f88bc0f24ce47eab79fb2b31b794c10f201588\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:04:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:10Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:10 crc kubenswrapper[4820]: I0203 12:05:10.473113 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bbd1daf12ccc77f2658cac173f36c9af6a1c64072fc6bf13302a460f71f05e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5236ae101bb18d9bd1b8d8ac09705f96e5fd78d742e5174e88072570a7448ee8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:10Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:10 crc kubenswrapper[4820]: I0203 12:05:10.485767 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-b5qz9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d93ec7bc-4029-44a4-894d-03eff1388683\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b171b1b37fa4899684005dd60f249e4cdc2ac58cb12658abe09d6d17007f3002\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b171b1b37fa4899684005dd60f249e4cdc2ac58cb12658abe09d6d17007f3002\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0a45a325e96479b1bb5913e1a78221c3b88c62fc434441e03d6b2e9ed476700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0a45a325e96479b1bb5913e1a78221c3b88c62fc434441e03d6b2e9ed476700\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a25830d512e5d0dbbfa4775f85af51e80915adca718997b34e926220ff9377d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a25830d512e5d0dbbfa4775f85af51e80915adca718997b34e926220ff9377d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/
cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-b5qz9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:10Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:10 crc kubenswrapper[4820]: I0203 12:05:10.497284 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:10Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:10 crc kubenswrapper[4820]: I0203 12:05:10.507347 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dac06e9d9db1822ef050113f787ff46db678f4916bc9817ac03f61a509a61b6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:10Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:10 crc kubenswrapper[4820]: I0203 12:05:10.524250 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a4e0669-c148-4af8-99eb-1de50da6d574\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d6f3a37b440a1db10c04386289033a0deab673f71efbdfba81597baf6ce5e86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fec48b5ddf675fb07150ebb88962534caead99f26403cf8ab51c17e2c6c09bf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a4c1209e1301b6e55cd4ea8bb06678ee4c7657c163a610f0cc2dd25f2cb3e61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75e1f7cf1c2120c222793819705a3b465a387c2
aaf33fe146fd46d9c76520aea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12446f64d4189185cf6fe19d38637e090482ef8da5fb5e68600cc00ba37bc2a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bb0a6b382c36acc46be9258e93be71317c26777d770cde9098402054303947c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bb0a6b382c36acc46be9258e93be71317c26777d770cde9098402054303947c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:04:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19153b79978b8e8cc53a7ab9cb95a70c9870c6b0babac711bce13809609f6ac5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19153b79978b8e8cc53a7ab9cb95a70c9870c6b0babac711bce13809609f6ac5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e75ec4662cf007f64da3ecaaff0cb4c45aa88746ff717403092583356953526\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e75ec4662cf007f64da3ecaaff0cb4c45aa88746ff717403092583356953526\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:04:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:04:43Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:10Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:10 crc kubenswrapper[4820]: I0203 12:05:10.536153 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cabb5864-1cc1-4b08-aed3-4eeee9e85bf9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d070e438f7f207c929ca6d110046fa8d320d1a7d97ef22a0e5e1372ff626729d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06e5365791987f1586b93dfe4db8bfe19415d8c400d6c0d75f312cc0f62f7661\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb5d088ef2ec1b90ef2d7af53d61c2789516bf4ef02e9ef06e1dc0c93326dcc9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2783e3045fb1f1e8da75e1a83f88bc0f24ce47eab79fb2b31b794c10f201588\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:04:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:10Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:10 crc kubenswrapper[4820]: I0203 12:05:10.547611 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bbd1daf12ccc77f2658cac173f36c9af6a1c64072fc6bf13302a460f71f05e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5236ae101bb18d9bd1b8d8ac09705f96e5fd78d742e5174e88072570a7448ee8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:10Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:10 crc kubenswrapper[4820]: I0203 12:05:10.550148 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:10 crc kubenswrapper[4820]: I0203 12:05:10.550183 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:10 crc kubenswrapper[4820]: I0203 12:05:10.550195 4820 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Feb 03 12:05:10 crc kubenswrapper[4820]: I0203 12:05:10.550211 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:10 crc kubenswrapper[4820]: I0203 12:05:10.550252 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:10Z","lastTransitionTime":"2026-02-03T12:05:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:10 crc kubenswrapper[4820]: I0203 12:05:10.566281 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-b5qz9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d93ec7bc-4029-44a4-894d-03eff1388683\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b171b1b37fa4899684005dd60f249e4cdc2ac58cb12658abe09d6d17007f3002\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b171b1b37fa4899684005dd60f249e4cdc2ac58cb12658abe09d6d17007f3002\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0a45a325e96479b1bb5913e1a78221c3b88c62fc434441e03d6b2e9ed476700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0a45a325e96479b1bb5913e1a78221c3b88c62fc434441e03d6b2e9ed476700\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a25830d512e5d0dbbfa4775f85af51e80915adca718997b34e926220ff9377d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a25830d512e5d0dbbfa4775f85af51e80915adca718997b34e926220ff9377d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff3da1549bd61df7cc473a1a0a17f147b940c3f88f0b72828357ea7982406088\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff3da1549bd61df7cc473a1a0a17f147b940c3f88f0b72828357ea7982406088\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disa
bled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-b5qz9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:10Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:10 crc kubenswrapper[4820]: I0203 12:05:10.577371 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:10Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:10 crc kubenswrapper[4820]: I0203 12:05:10.587366 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dac06e9d9db1822ef050113f787ff46db678f4916bc9817ac03f61a509a61b6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:10Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:10 crc kubenswrapper[4820]: I0203 12:05:10.597346 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p5mx8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe0bc53e-6abb-4194-ae3d-109a4fd80372\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9eca081cbf8145867bf5df6d6bacdb9746e546e54aa81874fbfc2927958bc1b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrchn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p5mx8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:10Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:10 crc kubenswrapper[4820]: I0203 12:05:10.607933 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c02def6-29f2-448e-80ec-0c8ee290f053\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1643e498a576d565172492302a641bc81eae658b1611df707da1d833ea2c84f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xjx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e3612cc50c75efc4721ade62e3caf8a6cbcb41c49183b50da3639d7fce97b9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xjx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qj7xr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:10Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:10 crc kubenswrapper[4820]: I0203 12:05:10.619121 4820 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-dkfwm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6da6dd5-2847-482b-adc1-d82ead0af3e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b50e93f1825fc82dc2e0fc70a417d2e4412db367319656bfcea8fa9daf9fe152\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpcc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:05Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-dkfwm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:10Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:10 crc kubenswrapper[4820]: I0203 12:05:10.630471 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d140e30-6304-49be-a1a3-2d6b23f9aef3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48852428653b274f524077be71e20c4271b365d821b8f3235df2d2b41a9e8af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09641bd2ac341aeecdba4211412070d2ee33f42eb670fe75ab88b9a58931c289\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74f9a4c19a0ffe2d3fc0a77b57873721a29f0a1f70d578ba2f68bcceb68ccce2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc27
6e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38554e01896341101d88815c0d8cd45a0114da19790e02a53b5ba3a91ee70e37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38554e01896341101d88815c0d8cd45a0114da19790e02a53b5ba3a91ee70e37\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"message\\\":\\\"le observer\\\\nW0203 12:05:02.811560 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0203 12:05:02.811703 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 12:05:02.814694 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3622958325/tls.crt::/tmp/serving-cert-3622958325/tls.key\\\\\\\"\\\\nI0203 12:05:03.097815 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0203 12:05:03.106823 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0203 12:05:03.106852 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0203 12:05:03.106874 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0203 12:05:03.106880 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0203 12:05:03.113994 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0203 12:05:03.114027 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0203 12:05:03.114019 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0203 12:05:03.114034 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 12:05:03.114043 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0203 12:05:03.114053 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0203 12:05:03.114057 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0203 12:05:03.114061 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0203 12:05:03.116858 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60992833ab89aebd2446f960ab6ae3565f9a17f4e86921d3b2bc68adc9704640\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b34fc810cf94c235cea364cd2441c73f56597b91b0a19b8e02498db5895a3f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b34fc810cf94c235cea364cd2441c73f56597b91b0a19b8e02498db5895a3f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:04:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:04:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:10Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:10 crc kubenswrapper[4820]: I0203 12:05:10.642517 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0a8e1f86d24e8f09d6250cc9003db189b705c0285fbaab40a8f1a4ac4793137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:10Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:10 crc kubenswrapper[4820]: I0203 12:05:10.652633 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:10 crc kubenswrapper[4820]: I0203 12:05:10.652664 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:10 crc kubenswrapper[4820]: I0203 12:05:10.652673 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:10 crc kubenswrapper[4820]: I0203 12:05:10.652686 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:10 crc kubenswrapper[4820]: I0203 12:05:10.652695 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:10Z","lastTransitionTime":"2026-02-03T12:05:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:05:10 crc kubenswrapper[4820]: I0203 12:05:10.654260 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:10Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:10 crc kubenswrapper[4820]: I0203 12:05:10.670352 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:10Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:10 crc kubenswrapper[4820]: I0203 12:05:10.686662 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-75mwm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf99e305-aa5b-4171-94f6-1e64f20414dd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://707b3fd62e0f11f4f22464594158f7f1bbcbb9b37c1e49c70e2a0d1a6b5b07c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://707b3fd62e0f11f4f22464594158f7f1bbcbb9b37c1e49c70e2a0d1a6b5b07c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-75mwm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:10Z 
is after 2025-08-24T17:21:41Z" Feb 03 12:05:10 crc kubenswrapper[4820]: I0203 12:05:10.689173 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 12:05:10 crc kubenswrapper[4820]: E0203 12:05:10.689415 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 12:05:18.689385623 +0000 UTC m=+36.212461487 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:05:10 crc kubenswrapper[4820]: I0203 12:05:10.695558 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-z8xrk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1cca669-281b-4756-8da8-3860684d3410\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09a38521ccb70b0b152fc51267f6d38cca610e2e8c9d12915482118f3a431b22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dxwqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\
\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:09Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-z8xrk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:10Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:10 crc kubenswrapper[4820]: I0203 12:05:10.755817 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:10 crc kubenswrapper[4820]: I0203 12:05:10.755870 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:10 crc kubenswrapper[4820]: I0203 12:05:10.755919 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:10 crc kubenswrapper[4820]: I0203 12:05:10.755951 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:10 crc kubenswrapper[4820]: I0203 12:05:10.755976 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:10Z","lastTransitionTime":"2026-02-03T12:05:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:10 crc kubenswrapper[4820]: I0203 12:05:10.790035 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 12:05:10 crc kubenswrapper[4820]: I0203 12:05:10.790118 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 12:05:10 crc kubenswrapper[4820]: I0203 12:05:10.790176 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 12:05:10 crc kubenswrapper[4820]: I0203 12:05:10.790212 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 12:05:10 crc kubenswrapper[4820]: E0203 12:05:10.790241 4820 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object 
"openshift-network-console"/"networking-console-plugin-cert" not registered Feb 03 12:05:10 crc kubenswrapper[4820]: E0203 12:05:10.790319 4820 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 03 12:05:10 crc kubenswrapper[4820]: E0203 12:05:10.790334 4820 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 03 12:05:10 crc kubenswrapper[4820]: E0203 12:05:10.790352 4820 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 03 12:05:10 crc kubenswrapper[4820]: E0203 12:05:10.790364 4820 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 03 12:05:10 crc kubenswrapper[4820]: E0203 12:05:10.790325 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-03 12:05:18.79030614 +0000 UTC m=+36.313382024 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 03 12:05:10 crc kubenswrapper[4820]: E0203 12:05:10.790409 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-03 12:05:18.790397882 +0000 UTC m=+36.313473746 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 03 12:05:10 crc kubenswrapper[4820]: E0203 12:05:10.790429 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-03 12:05:18.790423522 +0000 UTC m=+36.313499386 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 03 12:05:10 crc kubenswrapper[4820]: E0203 12:05:10.790478 4820 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 03 12:05:10 crc kubenswrapper[4820]: E0203 12:05:10.790500 4820 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 03 12:05:10 crc kubenswrapper[4820]: E0203 12:05:10.790518 4820 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 03 12:05:10 crc kubenswrapper[4820]: E0203 12:05:10.790584 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-03 12:05:18.790562305 +0000 UTC m=+36.313638209 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 03 12:05:10 crc kubenswrapper[4820]: I0203 12:05:10.858006 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:10 crc kubenswrapper[4820]: I0203 12:05:10.858053 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:10 crc kubenswrapper[4820]: I0203 12:05:10.858065 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:10 crc kubenswrapper[4820]: I0203 12:05:10.858083 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:10 crc kubenswrapper[4820]: I0203 12:05:10.858095 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:10Z","lastTransitionTime":"2026-02-03T12:05:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:05:10 crc kubenswrapper[4820]: I0203 12:05:10.961391 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:10 crc kubenswrapper[4820]: I0203 12:05:10.961434 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:10 crc kubenswrapper[4820]: I0203 12:05:10.961445 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:10 crc kubenswrapper[4820]: I0203 12:05:10.961462 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:10 crc kubenswrapper[4820]: I0203 12:05:10.961475 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:10Z","lastTransitionTime":"2026-02-03T12:05:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:11 crc kubenswrapper[4820]: I0203 12:05:11.063850 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:11 crc kubenswrapper[4820]: I0203 12:05:11.063914 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:11 crc kubenswrapper[4820]: I0203 12:05:11.063931 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:11 crc kubenswrapper[4820]: I0203 12:05:11.063949 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:11 crc kubenswrapper[4820]: I0203 12:05:11.063962 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:11Z","lastTransitionTime":"2026-02-03T12:05:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:11 crc kubenswrapper[4820]: I0203 12:05:11.104599 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 14:40:05.009888917 +0000 UTC Feb 03 12:05:11 crc kubenswrapper[4820]: I0203 12:05:11.142193 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 12:05:11 crc kubenswrapper[4820]: I0203 12:05:11.142193 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 12:05:11 crc kubenswrapper[4820]: E0203 12:05:11.142433 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 12:05:11 crc kubenswrapper[4820]: E0203 12:05:11.142541 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 12:05:11 crc kubenswrapper[4820]: I0203 12:05:11.143109 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 12:05:11 crc kubenswrapper[4820]: E0203 12:05:11.143316 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 12:05:11 crc kubenswrapper[4820]: I0203 12:05:11.166948 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:11 crc kubenswrapper[4820]: I0203 12:05:11.167005 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:11 crc kubenswrapper[4820]: I0203 12:05:11.167023 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:11 crc kubenswrapper[4820]: I0203 12:05:11.167050 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:11 crc kubenswrapper[4820]: I0203 12:05:11.167068 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:11Z","lastTransitionTime":"2026-02-03T12:05:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:05:11 crc kubenswrapper[4820]: I0203 12:05:11.269714 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:11 crc kubenswrapper[4820]: I0203 12:05:11.269776 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:11 crc kubenswrapper[4820]: I0203 12:05:11.269793 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:11 crc kubenswrapper[4820]: I0203 12:05:11.269815 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:11 crc kubenswrapper[4820]: I0203 12:05:11.269829 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:11Z","lastTransitionTime":"2026-02-03T12:05:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:11 crc kubenswrapper[4820]: I0203 12:05:11.338110 4820 generic.go:334] "Generic (PLEG): container finished" podID="d93ec7bc-4029-44a4-894d-03eff1388683" containerID="4a1698d43c6b74e8ebb9ff13da372c4025ac2637c1f5a0376e4a998e7bf20359" exitCode=0 Feb 03 12:05:11 crc kubenswrapper[4820]: I0203 12:05:11.338170 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-b5qz9" event={"ID":"d93ec7bc-4029-44a4-894d-03eff1388683","Type":"ContainerDied","Data":"4a1698d43c6b74e8ebb9ff13da372c4025ac2637c1f5a0376e4a998e7bf20359"} Feb 03 12:05:11 crc kubenswrapper[4820]: I0203 12:05:11.355930 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p5mx8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe0bc53e-6abb-4194-ae3d-109a4fd80372\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9eca081cbf8145867bf5df6d6bacdb9746e546e54aa81874fbfc2927958bc1b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrchn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p5mx8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:11Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:11 crc kubenswrapper[4820]: I0203 12:05:11.372407 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:11 crc kubenswrapper[4820]: I0203 12:05:11.372500 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:11 crc kubenswrapper[4820]: I0203 12:05:11.372520 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:11 crc kubenswrapper[4820]: I0203 12:05:11.372549 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:11 crc kubenswrapper[4820]: I0203 12:05:11.372521 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c02def6-29f2-448e-80ec-0c8ee290f053\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1643e498a576d565172492302a641bc81eae658b1611df707da1d833ea2c84f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xjx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e3612cc50c75efc4721ade62e3caf8a6cbcb41c49183b50da3639d7fce97b9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xjx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qj7xr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:11Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:11 crc kubenswrapper[4820]: I0203 12:05:11.372571 4820 setters.go:603] "Node became not 
ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:11Z","lastTransitionTime":"2026-02-03T12:05:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:11 crc kubenswrapper[4820]: I0203 12:05:11.395166 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dkfwm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6da6dd5-2847-482b-adc1-d82ead0af3e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b50e93f1825fc82dc2e0fc70a417d2e4412db367319656bfcea8fa9daf9fe152\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"nam
e\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpcc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dkfwm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:11Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:11 crc kubenswrapper[4820]: I0203 12:05:11.415269 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d140e30-6304-49be-a1a3-2d6b23f9aef3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48852428653b274f524077be71e20c4271b365d821b8f3235df2d2b41a9e8af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09641bd2ac341aeecdba4211412070d2ee33f42eb670fe75ab88b9a58931c289\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74f9a4c19a0ffe2d3fc0a77b57873721a29f0a1f70d578ba2f68bcceb68ccce2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38554e01896341101d88815c0d8cd45a0114da19790e02a53b5ba3a91ee70e37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38554e01896341101d88815c0d8cd45a0114da19790e02a53b5ba3a91ee70e37\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"message\\\":\\\"le observer\\\\nW0203 12:05:02.811560 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0203 12:05:02.811703 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 12:05:02.814694 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3622958325/tls.crt::/tmp/serving-cert-3622958325/tls.key\\\\\\\"\\\\nI0203 12:05:03.097815 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0203 12:05:03.106823 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0203 12:05:03.106852 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0203 12:05:03.106874 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0203 12:05:03.106880 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0203 12:05:03.113994 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0203 12:05:03.114027 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0203 12:05:03.114019 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0203 12:05:03.114034 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 12:05:03.114043 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0203 12:05:03.114053 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0203 12:05:03.114057 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0203 12:05:03.114061 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0203 12:05:03.116858 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60992833ab89aebd2446f960ab6ae3565f9a17f4e86921d3b2bc68adc9704640\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b34fc810cf94c235cea364cd2441c73f56597b91b0a19b8e02498db5895a3f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b34fc810cf94c235cea364cd2441c73f56597b91b0a19b8e02498db5895a3f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:04:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:04:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:11Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:11 crc kubenswrapper[4820]: I0203 12:05:11.428843 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0a8e1f86d24e8f09d6250cc9003db189b705c0285fbaab40a8f1a4ac4793137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:11Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:11 crc kubenswrapper[4820]: I0203 12:05:11.444071 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:11Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:11 crc kubenswrapper[4820]: I0203 12:05:11.454331 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:11Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:11 crc kubenswrapper[4820]: I0203 12:05:11.472873 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-75mwm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf99e305-aa5b-4171-94f6-1e64f20414dd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://707b3fd62e0f11f4f22464594158f7f1bbcbb9b37c1e49c70e2a0d1a6b5b07c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://707b3fd62e0f11f4f22464594158f7f1bbcbb9b37c1e49c70e2a0d1a6b5b07c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-75mwm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:11Z 
is after 2025-08-24T17:21:41Z" Feb 03 12:05:11 crc kubenswrapper[4820]: I0203 12:05:11.475442 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:11 crc kubenswrapper[4820]: I0203 12:05:11.475542 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:11 crc kubenswrapper[4820]: I0203 12:05:11.475598 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:11 crc kubenswrapper[4820]: I0203 12:05:11.475655 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:11 crc kubenswrapper[4820]: I0203 12:05:11.475724 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:11Z","lastTransitionTime":"2026-02-03T12:05:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:11 crc kubenswrapper[4820]: I0203 12:05:11.481866 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-z8xrk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1cca669-281b-4756-8da8-3860684d3410\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09a38521ccb70b0b152fc51267f6d38cca610e2e8c9d12915482118f3a431b22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dxwqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-
03T12:05:09Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-z8xrk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:11Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:11 crc kubenswrapper[4820]: I0203 12:05:11.499088 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a4e0669-c148-4af8-99eb-1de50da6d574\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d6f3a37b440a1db10c04386289033a0deab673f71efbdfba81597baf6ce5e86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fec48b5ddf675fb07150ebb88962534caead99f26403cf8ab51c17e2c6c09bf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a4c1209e1301b6e55cd4ea8bb06678ee4c7657c163a610f0cc2dd25f2cb3e61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75e1f7cf1c2120c222793819705a3b465a387c2aaf33fe146fd46d9c76520aea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12446f64d4189185cf6fe19d38637e090482ef8da5fb5e68600cc00ba37bc2a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bb0a6b382c36acc46be9258e93be71317c26777d770cde9098402054303947c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bb0a6b382c36acc46be9258e93be71317c26777d770cde9098402054303947c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:04:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19153b79978b8e8cc53a7ab9cb95a70c9870c6b0babac711bce13809609f6ac5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c68
77441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19153b79978b8e8cc53a7ab9cb95a70c9870c6b0babac711bce13809609f6ac5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e75ec4662cf007f64da3ecaaff0cb4c45aa88746ff717403092583356953526\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e75ec4662cf007f64da3ecaaff0cb4c45aa88746ff717403092583356953526\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:04:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:04:43Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:11Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:11 crc kubenswrapper[4820]: I0203 12:05:11.511482 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cabb5864-1cc1-4b08-aed3-4eeee9e85bf9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d070e438f7f207c929ca6d110046fa8d320d1a7d97ef22a0e5e1372ff626729d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06e5365791987f1586b93dfe4db8bfe19415d8c400d6c0d75f312cc0f62f7661\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb5d088ef2ec1b90ef2d7af53d61c2789516bf4ef02e9ef06e1dc0c93326dcc9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2783e3045fb1f1e8da75e1a83f88bc0f24ce47eab79fb2b31b794c10f201588\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:04:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:11Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:11 crc kubenswrapper[4820]: I0203 12:05:11.523697 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bbd1daf12ccc77f2658cac173f36c9af6a1c64072fc6bf13302a460f71f05e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5236ae101bb18d9bd1b8d8ac09705f96e5fd78d742e5174e88072570a7448ee8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:11Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:11 crc kubenswrapper[4820]: I0203 12:05:11.540198 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-b5qz9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d93ec7bc-4029-44a4-894d-03eff1388683\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b171b1b37fa4899684005dd60f249e4cdc2ac58cb12658abe09d6d17007f3002\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b171b1b37fa4899684005dd60f249e4cdc2ac58cb12658abe09d6d17007f3002\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0a45a325e96479b1bb5913e1a78221c3b88c62fc434441e03d6b2e9ed476700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0a45a325e96479b1bb5913e1a78221c3b88c62fc434441e03d6b2e9ed476700\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a25830d512e5d0dbbfa4775f85af51e80915adca718997b34e926220ff9377d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a25830d512e5d0dbbfa4775f85af51e80915adca718997b34e926220ff9377d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff3da1549bd61df7cc473a1a0a17f147b940c3f88f0b72828357ea7982406088\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff3da1549bd61df7cc473a1a0a17f147b940c3f88f0b72828357ea7982406088\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a1698d43c6b74e8ebb9ff13da372c4025ac2637c1f5a0376e4a998e7bf20359\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a1698d43c6b74e8ebb9ff13da372c4025ac2637c1f5a0376e4a998e7bf20359\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-b5qz9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:11Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:11 crc kubenswrapper[4820]: I0203 12:05:11.552155 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:11Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:11 crc kubenswrapper[4820]: I0203 12:05:11.564685 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dac06e9d9db1822ef050113f787ff46db678f4916bc9817ac03f61a509a61b6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:11Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:11 crc kubenswrapper[4820]: I0203 12:05:11.577403 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:11 crc kubenswrapper[4820]: I0203 12:05:11.577438 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 03 12:05:11 crc kubenswrapper[4820]: I0203 12:05:11.577448 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:11 crc kubenswrapper[4820]: I0203 12:05:11.577461 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:11 crc kubenswrapper[4820]: I0203 12:05:11.577471 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:11Z","lastTransitionTime":"2026-02-03T12:05:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:11 crc kubenswrapper[4820]: I0203 12:05:11.679753 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:11 crc kubenswrapper[4820]: I0203 12:05:11.679796 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:11 crc kubenswrapper[4820]: I0203 12:05:11.679809 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:11 crc kubenswrapper[4820]: I0203 12:05:11.679825 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:11 crc kubenswrapper[4820]: I0203 12:05:11.679839 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:11Z","lastTransitionTime":"2026-02-03T12:05:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:11 crc kubenswrapper[4820]: I0203 12:05:11.784548 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:11 crc kubenswrapper[4820]: I0203 12:05:11.784588 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:11 crc kubenswrapper[4820]: I0203 12:05:11.784597 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:11 crc kubenswrapper[4820]: I0203 12:05:11.784612 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:11 crc kubenswrapper[4820]: I0203 12:05:11.784621 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:11Z","lastTransitionTime":"2026-02-03T12:05:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 03 12:05:11 crc kubenswrapper[4820]: I0203 12:05:11.887212 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 12:05:11 crc kubenswrapper[4820]: I0203 12:05:11.887247 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 12:05:11 crc kubenswrapper[4820]: I0203 12:05:11.887256 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 12:05:11 crc kubenswrapper[4820]: I0203 12:05:11.887274 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 12:05:11 crc kubenswrapper[4820]: I0203 12:05:11.887284 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:11Z","lastTransitionTime":"2026-02-03T12:05:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 12:05:11 crc kubenswrapper[4820]: I0203 12:05:11.989124 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 12:05:11 crc kubenswrapper[4820]: I0203 12:05:11.989164 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 12:05:11 crc kubenswrapper[4820]: I0203 12:05:11.989172 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 12:05:11 crc kubenswrapper[4820]: I0203 12:05:11.989186 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 12:05:11 crc kubenswrapper[4820]: I0203 12:05:11.989197 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:11Z","lastTransitionTime":"2026-02-03T12:05:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 12:05:12 crc kubenswrapper[4820]: I0203 12:05:12.091130 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 12:05:12 crc kubenswrapper[4820]: I0203 12:05:12.091194 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 12:05:12 crc kubenswrapper[4820]: I0203 12:05:12.091211 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 12:05:12 crc kubenswrapper[4820]: I0203 12:05:12.091238 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 12:05:12 crc kubenswrapper[4820]: I0203 12:05:12.091256 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:12Z","lastTransitionTime":"2026-02-03T12:05:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Feb 03 12:05:12 crc kubenswrapper[4820]: I0203 12:05:12.104718 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 08:35:46.999942584 +0000 UTC
Feb 03 12:05:12 crc kubenswrapper[4820]: I0203 12:05:12.193451 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 12:05:12 crc kubenswrapper[4820]: I0203 12:05:12.193512 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 12:05:12 crc kubenswrapper[4820]: I0203 12:05:12.193530 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 12:05:12 crc kubenswrapper[4820]: I0203 12:05:12.193558 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 12:05:12 crc kubenswrapper[4820]: I0203 12:05:12.193576 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:12Z","lastTransitionTime":"2026-02-03T12:05:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 12:05:12 crc kubenswrapper[4820]: I0203 12:05:12.295550 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 12:05:12 crc kubenswrapper[4820]: I0203 12:05:12.295612 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 12:05:12 crc kubenswrapper[4820]: I0203 12:05:12.295631 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 12:05:12 crc kubenswrapper[4820]: I0203 12:05:12.295656 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 12:05:12 crc kubenswrapper[4820]: I0203 12:05:12.295674 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:12Z","lastTransitionTime":"2026-02-03T12:05:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Feb 03 12:05:12 crc kubenswrapper[4820]: I0203 12:05:12.344653 4820 generic.go:334] "Generic (PLEG): container finished" podID="d93ec7bc-4029-44a4-894d-03eff1388683" containerID="8af3494a09f6242fa8fab8cba8b3db5476625376c0f8f382e2140ca27ef329f3" exitCode=0
Feb 03 12:05:12 crc kubenswrapper[4820]: I0203 12:05:12.344720 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-b5qz9" event={"ID":"d93ec7bc-4029-44a4-894d-03eff1388683","Type":"ContainerDied","Data":"8af3494a09f6242fa8fab8cba8b3db5476625376c0f8f382e2140ca27ef329f3"}
Feb 03 12:05:12 crc kubenswrapper[4820]: I0203 12:05:12.349985 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-75mwm" event={"ID":"cf99e305-aa5b-4171-94f6-1e64f20414dd","Type":"ContainerStarted","Data":"3fe28479c3e6fe91f32b394f521e23bfd1cd3d0d7f0d053c200ea4b020f09382"}
Feb 03 12:05:12 crc kubenswrapper[4820]: I0203 12:05:12.350386 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-75mwm"
Feb 03 12:05:12 crc kubenswrapper[4820]: I0203 12:05:12.350421 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-75mwm"
Feb 03 12:05:12 crc kubenswrapper[4820]: I0203 12:05:12.350434 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-75mwm"
Feb 03 12:05:12 crc kubenswrapper[4820]: I0203 12:05:12.366559 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted.
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:12Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:12 crc kubenswrapper[4820]: I0203 12:05:12.378410 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-75mwm" Feb 03 12:05:12 crc kubenswrapper[4820]: I0203 12:05:12.380029 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-75mwm" Feb 03 12:05:12 crc kubenswrapper[4820]: I0203 12:05:12.380755 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dac06e9d9db1822ef050113f787ff46db678f4916bc9817ac03f61a509a61b6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-02-03T12:05:12Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:12 crc kubenswrapper[4820]: I0203 12:05:12.394986 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dkfwm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6da6dd5-2847-482b-adc1-d82ead0af3e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b50e93f1825fc82dc2e0fc70a417d2e4412db367319656bfcea8fa9daf9fe152\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpcc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP
\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dkfwm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:12Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:12 crc kubenswrapper[4820]: I0203 12:05:12.397774 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:12 crc kubenswrapper[4820]: I0203 12:05:12.397823 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:12 crc kubenswrapper[4820]: I0203 12:05:12.397833 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:12 crc kubenswrapper[4820]: I0203 12:05:12.397852 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:12 crc kubenswrapper[4820]: I0203 12:05:12.397864 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:12Z","lastTransitionTime":"2026-02-03T12:05:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:12 crc kubenswrapper[4820]: I0203 12:05:12.412807 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d140e30-6304-49be-a1a3-2d6b23f9aef3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48852428653b274f524077be71e20c4271b365d821b8f3235df2d2b41a9e8af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09641bd2ac341aeecdba4211412070d2ee33f42eb670fe75ab88b9a58931c289\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74f9a4c19a0ffe2d3fc0a77b57873721a29f0a1f70d578ba2f68bcceb68ccce2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38554e01896341101d88815c0d8cd45a0114da19790e02a53b5ba3a91ee70e37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38554e01896341101d88815c0d8cd45a0114da19790e02a53b5ba3a91ee70e37\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"message\\\":\\\"le observer\\\\nW0203 12:05:02.811560 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0203 12:05:02.811703 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 12:05:02.814694 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3622958325/tls.crt::/tmp/serving-cert-3622958325/tls.key\\\\\\\"\\\\nI0203 12:05:03.097815 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0203 12:05:03.106823 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0203 12:05:03.106852 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0203 12:05:03.106874 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0203 12:05:03.106880 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0203 12:05:03.113994 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0203 12:05:03.114027 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0203 12:05:03.114019 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0203 12:05:03.114034 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 12:05:03.114043 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0203 12:05:03.114053 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0203 12:05:03.114057 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0203 12:05:03.114061 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0203 12:05:03.116858 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60992833ab89aebd2446f960ab6ae3565f9a17f4e86921d3b2bc68adc9704640\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b34fc810cf94c235cea364cd2441c73f56597b91b0a19b8e02498db5895a3f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b34fc810cf94c235cea364cd2441c73f56597b91b0a19b8e02498db5895a3f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:04:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:04:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:12Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:12 crc kubenswrapper[4820]: I0203 12:05:12.428292 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0a8e1f86d24e8f09d6250cc9003db189b705c0285fbaab40a8f1a4ac4793137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:12Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:12 crc kubenswrapper[4820]: I0203 12:05:12.440770 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:12Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:12 crc kubenswrapper[4820]: I0203 12:05:12.453758 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:12Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:12 crc kubenswrapper[4820]: I0203 12:05:12.464841 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p5mx8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe0bc53e-6abb-4194-ae3d-109a4fd80372\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9eca081cbf8145867bf5df6d6bacdb9746e546e54aa81874fbfc2927958bc1b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrchn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p5mx8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:12Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:12 crc kubenswrapper[4820]: I0203 12:05:12.475752 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c02def6-29f2-448e-80ec-0c8ee290f053\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1643e498a576d565172492302a641bc81eae658b1611df707da1d833ea2c84f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xjx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e3612cc50c75efc4721ade62e3caf8a6cbcb41c49183b50da3639d7fce97b9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xjx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qj7xr\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:12Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:12 crc kubenswrapper[4820]: I0203 12:05:12.492631 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-75mwm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf99e305-aa5b-4171-94f6-1e64f20414dd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"
}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrid
es\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\
\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://707b3fd62e0f11f4f22464594158f7f1bbcbb9b37c1e49c70e2a0d1a6b5b07c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://707b3fd62e0f11f4f22464594158f7f1bbcbb9b37c1e49c70e2a0d1a6b5b07c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-75mwm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:12Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:12 crc kubenswrapper[4820]: I0203 12:05:12.499989 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:12 crc kubenswrapper[4820]: I0203 12:05:12.500022 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:12 crc kubenswrapper[4820]: I0203 12:05:12.500031 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:12 crc kubenswrapper[4820]: I0203 12:05:12.500598 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:12 crc kubenswrapper[4820]: I0203 12:05:12.500644 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:12Z","lastTransitionTime":"2026-02-03T12:05:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:05:12 crc kubenswrapper[4820]: I0203 12:05:12.502282 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-z8xrk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1cca669-281b-4756-8da8-3860684d3410\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09a38521ccb70b0b152fc51267f6d38cca610e2e8c9d12915482118f3a431b22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dxwqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:09Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-z8xrk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:12Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:12 crc kubenswrapper[4820]: I0203 12:05:12.519832 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a4e0669-c148-4af8-99eb-1de50da6d574\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d6f3a37b440a1db10c04386289033a0deab673f71efbdfba81597baf6ce5e86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fec48b5ddf675fb07150ebb88962534caead99f26403cf8ab51c17e2c6c09bf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a4c1209e1301b6e55cd4ea8bb06678ee4c7657c163a610f0cc2dd25f2cb3e61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75e1f7cf1c2120c222793819705a3b465a387c2
aaf33fe146fd46d9c76520aea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12446f64d4189185cf6fe19d38637e090482ef8da5fb5e68600cc00ba37bc2a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bb0a6b382c36acc46be9258e93be71317c26777d770cde9098402054303947c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bb0a6b382c36acc46be9258e93be71317c26777d770cde9098402054303947c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:04:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19153b79978b8e8cc53a7ab9cb95a70c9870c6b0babac711bce13809609f6ac5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19153b79978b8e8cc53a7ab9cb95a70c9870c6b0babac711bce13809609f6ac5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e75ec4662cf007f64da3ecaaff0cb4c45aa88746ff717403092583356953526\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e75ec4662cf007f64da3ecaaff0cb4c45aa88746ff717403092583356953526\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:04:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:04:43Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:12Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:12 crc kubenswrapper[4820]: I0203 12:05:12.531448 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cabb5864-1cc1-4b08-aed3-4eeee9e85bf9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d070e438f7f207c929ca6d110046fa8d320d1a7d97ef22a0e5e1372ff626729d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06e5365791987f1586b93dfe4db8bfe19415d8c400d6c0d75f312cc0f62f7661\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb5d088ef2ec1b90ef2d7af53d61c2789516bf4ef02e9ef06e1dc0c93326dcc9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2783e3045fb1f1e8da75e1a83f88bc0f24ce47eab79fb2b31b794c10f201588\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:04:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:12Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:12 crc kubenswrapper[4820]: I0203 12:05:12.544417 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bbd1daf12ccc77f2658cac173f36c9af6a1c64072fc6bf13302a460f71f05e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5236ae101bb18d9bd1b8d8ac09705f96e5fd78d742e5174e88072570a7448ee8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:12Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:12 crc kubenswrapper[4820]: I0203 12:05:12.558504 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-b5qz9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d93ec7bc-4029-44a4-894d-03eff1388683\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b171b1b37fa4899684005dd60f249e4cdc2ac58cb12658abe09d6d17007f3002\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b171b1b37fa4899684005dd60f249e4cdc2ac58cb12658abe09d6d17007f3002\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0a45a325e96479b1bb5913e1a78221c3b88c62fc434441e03d6b2e9ed476700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0a45a325e96479b1bb5913e1a78221c3b88c62fc434441e03d6b2e9ed476700\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a25830d512e5d0dbbfa4775f85af51e80915adca718997b34e926220ff9377d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a25830d512e5d0dbbfa4775f85af51e80915adca718997b34e926220ff9377d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff3da1549bd61df7cc473a1a0a17f147b940c3f88f0b72828357ea7982406088\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff3da1549bd61df7cc473a1a0a17f147b940c3f88f0b72828357ea7982406088\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"D
isabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a1698d43c6b74e8ebb9ff13da372c4025ac2637c1f5a0376e4a998e7bf20359\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a1698d43c6b74e8ebb9ff13da372c4025ac2637c1f5a0376e4a998e7bf20359\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8af3494a09f6242fa8fab8cba8b3db5476625376c0f8f382e2140ca27ef329f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8af3494a09f6242fa8fab8cba8b3db5476625376c0f8f382e2140ca27ef329f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-b5qz9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:12Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:12 crc kubenswrapper[4820]: I0203 12:05:12.573871 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:12Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:12 crc kubenswrapper[4820]: I0203 12:05:12.584599 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dac06e9d9db1822ef050113f787ff46db678f4916bc9817ac03f61a509a61b6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:12Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:12 crc kubenswrapper[4820]: I0203 12:05:12.596991 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d140e30-6304-49be-a1a3-2d6b23f9aef3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48852428653b274f524077be71e20c4271b365d821b8f3235df2d2b41a9e8af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09641bd2ac341aeecdba4211412070d2ee33f42eb670fe75ab88b9a58931c289\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74f9a4c19a0ffe2d3fc0a77b57873721a29f0a1f70d578ba2f68bcceb68ccce2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38554e01896341101d88815c0d8cd45a0114da19790e02a53b5ba3a91ee70e37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38554e01896341101d88815c0d8cd45a0114da19790e02a53b5ba3a91ee70e37\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"message\\\":\\\"le observer\\\\nW0203 12:05:02.811560 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0203 12:05:02.811703 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 12:05:02.814694 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3622958325/tls.crt::/tmp/serving-cert-3622958325/tls.key\\\\\\\"\\\\nI0203 12:05:03.097815 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0203 12:05:03.106823 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0203 12:05:03.106852 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0203 12:05:03.106874 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0203 12:05:03.106880 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0203 12:05:03.113994 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0203 12:05:03.114027 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0203 12:05:03.114019 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0203 12:05:03.114034 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 12:05:03.114043 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0203 12:05:03.114053 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0203 12:05:03.114057 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0203 12:05:03.114061 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0203 12:05:03.116858 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60992833ab89aebd2446f960ab6ae3565f9a17f4e86921d3b2bc68adc9704640\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b34fc810cf94c235cea364cd2441c73f56597b91b0a19b8e02498db5895a3f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b34fc810cf94c235cea364cd2441c73f56597b91b0a19b8e02498db5895a3f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:04:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:04:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:12Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:12 crc kubenswrapper[4820]: I0203 12:05:12.603746 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:12 crc kubenswrapper[4820]: I0203 12:05:12.603785 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:12 crc kubenswrapper[4820]: I0203 12:05:12.603798 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:12 crc kubenswrapper[4820]: I0203 12:05:12.603815 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:12 crc kubenswrapper[4820]: I0203 12:05:12.603828 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:12Z","lastTransitionTime":"2026-02-03T12:05:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:12 crc kubenswrapper[4820]: I0203 12:05:12.614352 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0a8e1f86d24e8f09d6250cc9003db189b705c0285fbaab40a8f1a4ac4793137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:12Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:12 crc kubenswrapper[4820]: I0203 12:05:12.629011 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:12Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:12 crc kubenswrapper[4820]: I0203 12:05:12.641221 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:12Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:12 crc kubenswrapper[4820]: I0203 12:05:12.652623 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p5mx8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe0bc53e-6abb-4194-ae3d-109a4fd80372\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9eca081cbf8145867bf5df6d6bacdb9746e546e54aa81874fbfc2927958bc1b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrchn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p5mx8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:12Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:12 crc kubenswrapper[4820]: I0203 12:05:12.676703 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c02def6-29f2-448e-80ec-0c8ee290f053\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1643e498a576d565172492302a641bc81eae658b1611df707da1d833ea2c84f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xjx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e3612cc50c75efc4721ade62e3caf8a6cbcb41c49183b50da3639d7fce97b9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xjx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qj7xr\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:12Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:12 crc kubenswrapper[4820]: I0203 12:05:12.695736 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dkfwm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6da6dd5-2847-482b-adc1-d82ead0af3e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b50e93f1825fc82dc2e0fc70a417d2e4412db367319656bfcea8fa9daf9fe152\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/ser
viceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpcc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dkfwm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:12Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:12 crc kubenswrapper[4820]: I0203 12:05:12.706521 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:12 crc kubenswrapper[4820]: I0203 12:05:12.706565 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:12 crc kubenswrapper[4820]: I0203 12:05:12.706575 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:12 crc kubenswrapper[4820]: I0203 12:05:12.706590 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:12 crc kubenswrapper[4820]: I0203 12:05:12.706602 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:12Z","lastTransitionTime":"2026-02-03T12:05:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:05:12 crc kubenswrapper[4820]: I0203 12:05:12.721762 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-75mwm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf99e305-aa5b-4171-94f6-1e64f20414dd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c998566abb62f49bb7527ea2c4b023c87a9e3db89ff5fa25b68896424924ea76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5b0072b2eccb422316702a70fb54f21fcd020e1627a625d1a7c63d46b71e48c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://4e8e47b3a2a50b4eccff391ccb45bf979a5fe46983b014cb03f465db928172ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0defe98bc51b1f567900f2d90138ceec07b4cae414fd3b448df503dba9a9460f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a67f31bdc8d6710cfbca063e2a811d4fbace42a9d3c3a53fba0f47c30ee00e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8b54903e9478f26cfa1770a0e06298325e9b9f198a3aeaa8ed6dcc192c0cdb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fe28479c3e6fe91f32b394f521e23bfd1cd3d0d7f0d053c200ea4b020f09382\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\
"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c9f050db727793c24f2f8bf3b387813e313f3b764be003c6849947f98a61d4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://707b3fd62e0f11f4f22464594158f7f1bbcbb9b37c1e49c70e2a0d1a6b5b07c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://707b3fd62e0f11f4f22464594158f7f1bbcbb9b37c1e49c70e2a0d1a6b5b07c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-75mwm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:12Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:12 crc kubenswrapper[4820]: I0203 12:05:12.731541 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-z8xrk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1cca669-281b-4756-8da8-3860684d3410\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09a38521ccb70b0b152fc51267f6d38cca610e2e8c9d12915482118f3a431b22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dxwqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:09Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-z8xrk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:12Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:12 crc kubenswrapper[4820]: I0203 12:05:12.749697 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a4e0669-c148-4af8-99eb-1de50da6d574\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d6f3a37b440a1db10c04386289033a0deab673f71efbdfba81597baf6ce5e86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fec48b5ddf675fb07150ebb88962534caead99f26403cf8ab51c17e2c6c09bf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a4c1209e1301b6e55cd4ea8bb06678ee4c7657c163a610f0cc2dd25f2cb3e61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75e1f7cf1c2120c222793819705a3b465a387c2
aaf33fe146fd46d9c76520aea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12446f64d4189185cf6fe19d38637e090482ef8da5fb5e68600cc00ba37bc2a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bb0a6b382c36acc46be9258e93be71317c26777d770cde9098402054303947c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bb0a6b382c36acc46be9258e93be71317c26777d770cde9098402054303947c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:04:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19153b79978b8e8cc53a7ab9cb95a70c9870c6b0babac711bce13809609f6ac5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19153b79978b8e8cc53a7ab9cb95a70c9870c6b0babac711bce13809609f6ac5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e75ec4662cf007f64da3ecaaff0cb4c45aa88746ff717403092583356953526\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e75ec4662cf007f64da3ecaaff0cb4c45aa88746ff717403092583356953526\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:04:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:04:43Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:12Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:12 crc kubenswrapper[4820]: I0203 12:05:12.761643 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cabb5864-1cc1-4b08-aed3-4eeee9e85bf9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d070e438f7f207c929ca6d110046fa8d320d1a7d97ef22a0e5e1372ff626729d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06e5365791987f1586b93dfe4db8bfe19415d8c400d6c0d75f312cc0f62f7661\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb5d088ef2ec1b90ef2d7af53d61c2789516bf4ef02e9ef06e1dc0c93326dcc9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2783e3045fb1f1e8da75e1a83f88bc0f24ce47eab79fb2b31b794c10f201588\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:04:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:12Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:12 crc kubenswrapper[4820]: I0203 12:05:12.777212 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bbd1daf12ccc77f2658cac173f36c9af6a1c64072fc6bf13302a460f71f05e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5236ae101bb18d9bd1b8d8ac09705f96e5fd78d742e5174e88072570a7448ee8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:12Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:12 crc kubenswrapper[4820]: I0203 12:05:12.790856 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-b5qz9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d93ec7bc-4029-44a4-894d-03eff1388683\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b171b1b37fa4899684005dd60f249e4cdc2ac58cb12658abe09d6d17007f3002\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b171b1b37fa4899684005dd60f249e4cdc2ac58cb12658abe09d6d17007f3002\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0a45a325e96479b1bb5913e1a78221c3b88c62fc434441e03d6b2e9ed476700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0a45a325e96479b1bb5913e1a78221c3b88c62fc434441e03d6b2e9ed476700\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a25830d512e5d0dbbfa4775f85af51e80915adca718997b34e926220ff9377d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a25830d512e5d0dbbfa4775f85af51e80915adca718997b34e926220ff9377d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff3da1549bd61df7cc473a1a0a17f147b940c3f88f0b72828357ea7982406088\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff3da1549bd61df7cc473a1a0a17f147b940c3f88f0b72828357ea7982406088\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"D
isabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a1698d43c6b74e8ebb9ff13da372c4025ac2637c1f5a0376e4a998e7bf20359\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a1698d43c6b74e8ebb9ff13da372c4025ac2637c1f5a0376e4a998e7bf20359\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8af3494a09f6242fa8fab8cba8b3db5476625376c0f8f382e2140ca27ef329f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8af3494a09f6242fa8fab8cba8b3db5476625376c0f8f382e2140ca27ef329f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-b5qz9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:12Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:12 crc kubenswrapper[4820]: I0203 12:05:12.808493 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:12 crc kubenswrapper[4820]: I0203 12:05:12.808534 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:12 crc 
kubenswrapper[4820]: I0203 12:05:12.808546 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 12:05:12 crc kubenswrapper[4820]: I0203 12:05:12.808563 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 12:05:12 crc kubenswrapper[4820]: I0203 12:05:12.808576 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:12Z","lastTransitionTime":"2026-02-03T12:05:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 12:05:12 crc kubenswrapper[4820]: I0203 12:05:12.910797 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 12:05:12 crc kubenswrapper[4820]: I0203 12:05:12.910849 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 12:05:12 crc kubenswrapper[4820]: I0203 12:05:12.910864 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 12:05:12 crc kubenswrapper[4820]: I0203 12:05:12.910913 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 12:05:12 crc kubenswrapper[4820]: I0203 12:05:12.910927 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:12Z","lastTransitionTime":"2026-02-03T12:05:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 12:05:12 crc kubenswrapper[4820]: I0203 12:05:12.942539 4820 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials"
Feb 03 12:05:13 crc kubenswrapper[4820]: I0203 12:05:13.012713 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 12:05:13 crc kubenswrapper[4820]: I0203 12:05:13.012742 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 12:05:13 crc kubenswrapper[4820]: I0203 12:05:13.012751 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 12:05:13 crc kubenswrapper[4820]: I0203 12:05:13.012763 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 12:05:13 crc kubenswrapper[4820]: I0203 12:05:13.012771 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:13Z","lastTransitionTime":"2026-02-03T12:05:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 12:05:13 crc kubenswrapper[4820]: I0203 12:05:13.105648 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 05:32:58.414842175 +0000 UTC
Feb 03 12:05:13 crc kubenswrapper[4820]: I0203 12:05:13.115052 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 12:05:13 crc kubenswrapper[4820]: I0203 12:05:13.115095 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 12:05:13 crc kubenswrapper[4820]: I0203 12:05:13.115106 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 12:05:13 crc kubenswrapper[4820]: I0203 12:05:13.115121 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 12:05:13 crc kubenswrapper[4820]: I0203 12:05:13.115131 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:13Z","lastTransitionTime":"2026-02-03T12:05:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 12:05:13 crc kubenswrapper[4820]: I0203 12:05:13.141616 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 03 12:05:13 crc kubenswrapper[4820]: I0203 12:05:13.141719 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 03 12:05:13 crc kubenswrapper[4820]: E0203 12:05:13.141765 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 03 12:05:13 crc kubenswrapper[4820]: E0203 12:05:13.141926 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 03 12:05:13 crc kubenswrapper[4820]: I0203 12:05:13.142003 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 03 12:05:13 crc kubenswrapper[4820]: E0203 12:05:13.142136 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 12:05:13 crc kubenswrapper[4820]: I0203 12:05:13.163516 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:13Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:13 crc kubenswrapper[4820]: I0203 12:05:13.178814 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dac06e9d9db1822ef050113f787ff46db678f4916bc9817ac03f61a509a61b6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:13Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:13 crc kubenswrapper[4820]: I0203 12:05:13.193127 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0a8e1f86d24e8f09d6250cc9003db189b705c0285fbaab40a8f1a4ac4793137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:13Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:13 crc kubenswrapper[4820]: I0203 12:05:13.211803 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:13Z is after 2025-08-24T17:21:41Z"
Feb 03 12:05:13 crc kubenswrapper[4820]: I0203 12:05:13.217208 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 12:05:13 crc kubenswrapper[4820]: I0203 12:05:13.217252 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 12:05:13 crc kubenswrapper[4820]: I0203 12:05:13.217263 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 12:05:13 crc kubenswrapper[4820]: I0203 12:05:13.217280 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 12:05:13 crc kubenswrapper[4820]: I0203 12:05:13.217292 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:13Z","lastTransitionTime":"2026-02-03T12:05:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:05:13 crc kubenswrapper[4820]: I0203 12:05:13.231448 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:13Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:13 crc kubenswrapper[4820]: I0203 12:05:13.247188 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p5mx8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe0bc53e-6abb-4194-ae3d-109a4fd80372\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9eca081cbf8145867bf5df6d6bacdb9746e546e54aa81874fbfc2927958bc1b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrchn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p5mx8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:13Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:13 crc kubenswrapper[4820]: I0203 12:05:13.259388 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c02def6-29f2-448e-80ec-0c8ee290f053\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1643e498a576d565172492302a641bc81eae658b1611df707da1d833ea2c84f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xjx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e3612cc50c75efc4721ade62e3caf8a6cbcb41c49183b50da3639d7fce97b9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xjx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qj7xr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:13Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:13 crc kubenswrapper[4820]: I0203 12:05:13.271409 4820 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-dkfwm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6da6dd5-2847-482b-adc1-d82ead0af3e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b50e93f1825fc82dc2e0fc70a417d2e4412db367319656bfcea8fa9daf9fe152\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpcc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:05Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-dkfwm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:13Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:13 crc kubenswrapper[4820]: I0203 12:05:13.284177 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d140e30-6304-49be-a1a3-2d6b23f9aef3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48852428653b274f524077be71e20c4271b365d821b8f3235df2d2b41a9e8af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09641bd2ac341aeecdba4211412070d2ee33f42eb670fe75ab88b9a58931c289\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74f9a4c19a0ffe2d3fc0a77b57873721a29f0a1f70d578ba2f68bcceb68ccce2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc27
6e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38554e01896341101d88815c0d8cd45a0114da19790e02a53b5ba3a91ee70e37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38554e01896341101d88815c0d8cd45a0114da19790e02a53b5ba3a91ee70e37\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"message\\\":\\\"le observer\\\\nW0203 12:05:02.811560 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0203 12:05:02.811703 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 12:05:02.814694 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3622958325/tls.crt::/tmp/serving-cert-3622958325/tls.key\\\\\\\"\\\\nI0203 12:05:03.097815 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0203 12:05:03.106823 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0203 12:05:03.106852 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0203 12:05:03.106874 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0203 12:05:03.106880 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0203 12:05:03.113994 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0203 12:05:03.114027 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0203 12:05:03.114019 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0203 12:05:03.114034 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 12:05:03.114043 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0203 12:05:03.114053 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0203 12:05:03.114057 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0203 12:05:03.114061 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0203 12:05:03.116858 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60992833ab89aebd2446f960ab6ae3565f9a17f4e86921d3b2bc68adc9704640\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b34fc810cf94c235cea364cd2441c73f56597b91b0a19b8e02498db5895a3f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b34fc810cf94c235cea364cd2441c73f56597b91b0a19b8e02498db5895a3f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:04:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:04:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:13Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:13 crc kubenswrapper[4820]: I0203 12:05:13.300650 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-75mwm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf99e305-aa5b-4171-94f6-1e64f20414dd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c998566abb62f49bb7527ea2c4b023c87a9e3db89ff5fa25b68896424924ea76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5b0072b2eccb422316702a70fb54f21fcd020e1627a625d1a7c63d46b71e48c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e8e47b3a2a50b4eccff391ccb45bf979a5fe46983b014cb03f465db928172ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0defe98bc51b1f567900f2d90138ceec07b4cae414fd3b448df503dba9a9460f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a67f31bdc8d6710cfbca063e2a811d4fbace42a9d3c3a53fba0f47c30ee00e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8b54903e9478f26cfa1770a0e06298325e9b9f198a3aeaa8ed6dcc192c0cdb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fe28479c3e6fe91f32b394f521e23bfd1cd3d0d7f0d053c200ea4b020f09382\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"D
isabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c9f050db727793c24f2f8bf3b387813e313f3b764be003c6849947f98a61d4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://707b3fd62e0f11f4f22464594158f7f1bbcbb9b37c1e49c70e2a0d1a6b5b07c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://707b3fd62e0f11f4f22464594158f7f1bbcbb9b37c1e49c70e2a0d1a6b5b07c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-75mwm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:13Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:13 crc kubenswrapper[4820]: I0203 12:05:13.310652 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-z8xrk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1cca669-281b-4756-8da8-3860684d3410\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09a38521ccb70b0b152fc51267f6d38cca610e2e8c9d12915482118f3a431b22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dxwqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:09Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-z8xrk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:13Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:13 crc kubenswrapper[4820]: I0203 12:05:13.319514 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:13 crc kubenswrapper[4820]: I0203 12:05:13.319552 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:13 crc kubenswrapper[4820]: I0203 12:05:13.319561 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:13 crc kubenswrapper[4820]: I0203 12:05:13.319575 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:13 crc kubenswrapper[4820]: I0203 12:05:13.319586 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:13Z","lastTransitionTime":"2026-02-03T12:05:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:13 crc kubenswrapper[4820]: I0203 12:05:13.329780 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a4e0669-c148-4af8-99eb-1de50da6d574\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d6f3a37b440a1db10c04386289033a0deab673f71efbdfba81597baf6ce5e86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fec48b5ddf675fb07150ebb88962534caead99f26403cf8ab51c17e2c6c09bf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a4c1209e1301b6e55cd4ea8bb06678ee4c7657c163a610f0cc2dd25f2cb3e61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":
0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75e1f7cf1c2120c222793819705a3b465a387c2aaf33fe146fd46d9c76520aea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12446f64d4189185cf6fe19d38637e090482ef8da5fb5e68600cc00ba37bc2a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bb0a6b382c36acc46be9258e93be71317c26777d770cde9098402054303947c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bb0a6b382c36acc46be9258e93be71317c26777d770cde9098402054303947c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:04:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19153b79978b8e8cc53a7ab9cb95a70c9870c6b0babac711bce13809609f6ac5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"termi
nated\\\":{\\\"containerID\\\":\\\"cri-o://19153b79978b8e8cc53a7ab9cb95a70c9870c6b0babac711bce13809609f6ac5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e75ec4662cf007f64da3ecaaff0cb4c45aa88746ff717403092583356953526\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e75ec4662cf007f64da3ecaaff0cb4c45aa88746ff717403092583356953526\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:04:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:04:43Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:13Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:13 crc kubenswrapper[4820]: I0203 12:05:13.342705 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bbd1daf12ccc77f2658cac173f36c9af6a1c64072fc6bf13302a460f71f05e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5236ae101bb18d9bd1b8d8ac09705f96e5fd78d742e5174e88072570a7448ee8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:13Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:13 crc kubenswrapper[4820]: I0203 12:05:13.355811 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-b5qz9" event={"ID":"d93ec7bc-4029-44a4-894d-03eff1388683","Type":"ContainerStarted","Data":"c8c2ef0f942d970aadc1837e5ab6eac2bdee9ab81c7b852514fc142796328a62"} Feb 03 12:05:13 crc kubenswrapper[4820]: I0203 12:05:13.359530 4820 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/multus-additional-cni-plugins-b5qz9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d93ec7bc-4029-44a4-894d-03eff1388683\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b171b1b37fa4899684005dd60f249e4cdc2ac58cb12658abe09d6d17007f3002\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b171b1b37fa4899684005dd60f249e4cdc2ac58cb12658abe09d6d17007f3002\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0a45a325e96479b1bb5913e1a78221c3b88c62fc434441e03d6b2e9ed476700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2da
ed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0a45a325e96479b1bb5913e1a78221c3b88c62fc434441e03d6b2e9ed476700\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a25830d512e5d0dbbfa4775f85af51e80915adca718997b34e926220ff9377d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a25830d512e5d0dbbfa4775f85af51e80915adca718997b34e926220ff9377d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff3da1549bd61df7cc473a1a0a17f147b940c3f88f0b72828357ea7982406088\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff3da1549bd61df7cc473a1a0a17f147b940c3f88f0b72828357ea7982406088\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\
",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a1698d43c6b74e8ebb9ff13da372c4025ac2637c1f5a0376e4a998e7bf20359\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a1698d43c6b74e8ebb9ff13da372c4025ac2637c1f5a0376e4a998e7bf20359\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8af3494a09f6242fa8fab8cba8b3db5476625376c0f8f382e2140ca27ef329f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8af3494a09f6242fa8fab8cba8b3db5476625376c0f8f382e2140ca27ef329f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-b5qz9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:13Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:13 crc kubenswrapper[4820]: I0203 12:05:13.371996 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cabb5864-1cc1-4b08-aed3-4eeee9e85bf9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d070e438f7f207c929ca6d110046fa8d320d1a7d97ef22a0e5e1372ff626729d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06e5365791987f1586b93dfe4db8bfe19415d8c400d6c0d75f312cc0f62f7661\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb5d088ef2ec1b90ef2d7af53d61c2789516bf4ef02e9ef06e1dc0c93326dcc9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2783e3045fb1f1e8da75e1a83f88bc0f24ce47eab79fb2b31b794c10f201588\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:04:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:13Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:13 crc kubenswrapper[4820]: I0203 12:05:13.382981 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cabb5864-1cc1-4b08-aed3-4eeee9e85bf9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d070e438f7f207c929ca6d110046fa8d320d1a7d97ef22a0e5e1372ff626729d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06e5365791987f1586b93dfe4db8bfe19415d8c400d6c0d75f312cc0f62f7661\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift
-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb5d088ef2ec1b90ef2d7af53d61c2789516bf4ef02e9ef06e1dc0c93326dcc9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2783e3045fb1f1e8da75e1a83f88bc0f24ce47eab79fb2b31b794c10f201588\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:04:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:13Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:13 crc kubenswrapper[4820]: I0203 12:05:13.393198 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bbd1daf12ccc77f2658cac173f36c9af6a1c64072fc6bf13302a460f71f05e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5236ae101bb18d9bd1b8d8ac09705f96e5fd78d742e5174e88072570a7448ee8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:13Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:13 crc kubenswrapper[4820]: I0203 12:05:13.404541 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-b5qz9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d93ec7bc-4029-44a4-894d-03eff1388683\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8c2ef0f942d970aadc1837e5ab6eac2bdee9ab81c7b852514fc142796328a62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b171b1b37fa4899684005dd60f249e4cdc2ac58cb12658abe09d6d17007f3002\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b171b1b37fa4899684005dd60f249e4cdc2ac58cb12658abe09d6d17007f3002\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0a45a325e96479b1bb5913e1a78221c3b88c62fc434441e03d6b2e9ed476700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0a45a325e96479b1bb5913e1a78221c3b88c62fc434441e03d6b2e9ed476700\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a25830d512e5d0dbbfa4775f85af51e80915adca718997b34e926220ff9377d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a25830d512e5d0dbbfa4775f85af51e80915adca718997b34e926220ff9377d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff3da1549bd61df7cc473a1a0a17f147b940c3f88f0b72828357ea7982406088\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff3da1549bd61df7cc473a1a0a17f147b940c3f88f0b72828357ea7982406088\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a1698d43c6b74e8ebb9ff13da372c4025ac2637c1f5a0376e4a998e7bf20359\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a1698d43c6b74e8ebb9ff13da372c4025ac2637c1f5a0376e4a998e7bf20359\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8af3494a09f6242fa8fab8cba8b3db5476625376c0f8f382e2140ca27ef329f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8af3494a09f6242fa8fab8cba8b3db5476625376c0f8f382e2140ca27ef329f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-b5qz9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:13Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:13 crc kubenswrapper[4820]: I0203 12:05:13.415247 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:13Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:13 crc kubenswrapper[4820]: I0203 12:05:13.422223 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:13 crc kubenswrapper[4820]: I0203 12:05:13.422250 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:13 crc kubenswrapper[4820]: I0203 12:05:13.422259 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:13 crc kubenswrapper[4820]: I0203 12:05:13.422273 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:13 crc kubenswrapper[4820]: I0203 12:05:13.422284 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:13Z","lastTransitionTime":"2026-02-03T12:05:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:05:13 crc kubenswrapper[4820]: I0203 12:05:13.435213 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dac06e9d9db1822ef050113f787ff46db678f4916bc9817ac03f61a509a61b6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:13Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:13 crc kubenswrapper[4820]: I0203 12:05:13.448311 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dkfwm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6da6dd5-2847-482b-adc1-d82ead0af3e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b50e93f1825fc82dc2e0fc70a417d2e4412db367319656bfcea8fa9daf9fe152\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpcc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dkfwm\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:13Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:13 crc kubenswrapper[4820]: I0203 12:05:13.462160 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d140e30-6304-49be-a1a3-2d6b23f9aef3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48852428653b274f524077be71e20c4271b365d821b8f3235df2d2b41a9e8af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09641bd2ac341aeecdba4211412070d2ee33f42eb670fe75ab88b9a58931c289\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74f9a4c19a0ffe2d3fc0a77b57873721a29f0a1f70d578ba2f68bcceb68ccce2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.i
o/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38554e01896341101d88815c0d8cd45a0114da19790e02a53b5ba3a91ee70e37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38554e01896341101d88815c0d8cd45a0114da19790e02a53b5ba3a91ee70e37\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"message\\\":\\\"le observer\\\\nW0203 12:05:02.811560 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0203 12:05:02.811703 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 12:05:02.814694 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3622958325/tls.crt::/tmp/serving-cert-3622958325/tls.key\\\\\\\"\\\\nI0203 12:05:03.097815 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0203 12:05:03.106823 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0203 12:05:03.106852 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0203 12:05:03.106874 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0203 12:05:03.106880 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0203 12:05:03.113994 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0203 12:05:03.114027 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0203 12:05:03.114019 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0203 12:05:03.114034 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 12:05:03.114043 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0203 12:05:03.114053 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0203 12:05:03.114057 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0203 12:05:03.114061 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0203 12:05:03.116858 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60992833ab89aebd2446f960ab6ae3565f9a17f4e86921d3b2bc68adc9704640\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b34fc810cf94c235cea364cd2441c73f56597b91b0a19b8e02498db5895a3f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b34fc810cf94c235cea364cd2441c73f56597b91b0a19b8e02498db5895a3f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:04:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:04:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:13Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:13 crc kubenswrapper[4820]: I0203 12:05:13.472947 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0a8e1f86d24e8f09d6250cc9003db189b705c0285fbaab40a8f1a4ac4793137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:13Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:13 crc kubenswrapper[4820]: I0203 12:05:13.483641 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:13Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:13 crc kubenswrapper[4820]: I0203 12:05:13.493805 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:13Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:13 crc kubenswrapper[4820]: I0203 12:05:13.502224 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p5mx8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe0bc53e-6abb-4194-ae3d-109a4fd80372\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9eca081cbf8145867bf5df6d6bacdb9746e546e54aa81874fbfc2927958bc1b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrchn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p5mx8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:13Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:13 crc kubenswrapper[4820]: I0203 12:05:13.513265 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c02def6-29f2-448e-80ec-0c8ee290f053\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1643e498a576d565172492302a641bc81eae658b1611df707da1d833ea2c84f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xjx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e3612cc50c75efc4721ade62e3caf8a6cbcb41c49183b50da3639d7fce97b9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xjx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qj7xr\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:13Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:13 crc kubenswrapper[4820]: I0203 12:05:13.524230 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:13 crc kubenswrapper[4820]: I0203 12:05:13.524276 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:13 crc kubenswrapper[4820]: I0203 12:05:13.524288 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:13 crc kubenswrapper[4820]: I0203 12:05:13.524304 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:13 crc kubenswrapper[4820]: I0203 12:05:13.524314 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:13Z","lastTransitionTime":"2026-02-03T12:05:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:13 crc kubenswrapper[4820]: I0203 12:05:13.532433 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-75mwm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf99e305-aa5b-4171-94f6-1e64f20414dd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c998566abb62f49bb7527ea2c4b023c87a9e3db89ff5fa25b68896424924ea76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5b0072b2eccb422316702a70fb54f21fcd020e1627a625d1a7c63d46b71e48c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e8e47b3a2a50b4eccff391ccb45bf979a5fe46983b014cb03f465db928172ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0defe98bc51b1f567900f2d90138ceec07b4cae414fd3b448df503dba9a9460f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a67f31bdc8d6710cfbca063e2a811d4fbace42a9d3c3a53fba0f47c30ee00e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8b54903e9478f26cfa1770a0e06298325e9b9f198a3aeaa8ed6dcc192c0cdb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fe28479c3e6fe91f32b394f521e23bfd1cd3d0d
7f0d053c200ea4b020f09382\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c9f050db727793c24f2f8bf3b387813e313f3b764be003c6849947f98a61d4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://707b3fd62e0f11f4f22464594158f7f1bbcbb9b37c1e49c70e2a0d1a6b5b07c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://707b3fd62e0f11f4f22464594158f7f1bbcbb9b37c1e49c70e2a0d1a6b5b07c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-75mwm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:13Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:13 crc kubenswrapper[4820]: I0203 12:05:13.543228 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-z8xrk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1cca669-281b-4756-8da8-3860684d3410\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09a38521ccb70b0b152fc51267f6d38cca610e2e8c9d12915482118f3a431b22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dxwqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:09Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-z8xrk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:13Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:13 crc kubenswrapper[4820]: I0203 12:05:13.564827 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a4e0669-c148-4af8-99eb-1de50da6d574\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d6f3a37b440a1db10c04386289033a0deab673f71efbdfba81597baf6ce5e86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fec48b5ddf675fb07150ebb88962534caead99f26403cf8ab51c17e2c6c09bf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a4c1209e1301b6e55cd4ea8bb06678ee4c7657c163a610f0cc2dd25f2cb3e61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75e1f7cf1c2120c222793819705a3b465a387c2
aaf33fe146fd46d9c76520aea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12446f64d4189185cf6fe19d38637e090482ef8da5fb5e68600cc00ba37bc2a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bb0a6b382c36acc46be9258e93be71317c26777d770cde9098402054303947c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bb0a6b382c36acc46be9258e93be71317c26777d770cde9098402054303947c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:04:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19153b79978b8e8cc53a7ab9cb95a70c9870c6b0babac711bce13809609f6ac5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19153b79978b8e8cc53a7ab9cb95a70c9870c6b0babac711bce13809609f6ac5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e75ec4662cf007f64da3ecaaff0cb4c45aa88746ff717403092583356953526\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e75ec4662cf007f64da3ecaaff0cb4c45aa88746ff717403092583356953526\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:04:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:04:43Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:13Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:13 crc kubenswrapper[4820]: I0203 12:05:13.627202 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:13 crc kubenswrapper[4820]: I0203 12:05:13.627504 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:13 crc kubenswrapper[4820]: I0203 12:05:13.627640 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:13 crc kubenswrapper[4820]: I0203 12:05:13.627819 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:13 crc kubenswrapper[4820]: I0203 12:05:13.627988 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:13Z","lastTransitionTime":"2026-02-03T12:05:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 03 12:05:13 crc kubenswrapper[4820]: I0203 12:05:13.730405 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 12:05:13 crc kubenswrapper[4820]: I0203 12:05:13.730772 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 12:05:13 crc kubenswrapper[4820]: I0203 12:05:13.730968 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 12:05:13 crc kubenswrapper[4820]: I0203 12:05:13.731111 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 12:05:13 crc kubenswrapper[4820]: I0203 12:05:13.731322 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:13Z","lastTransitionTime":"2026-02-03T12:05:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 12:05:13 crc kubenswrapper[4820]: I0203 12:05:13.834570 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 12:05:13 crc kubenswrapper[4820]: I0203 12:05:13.834831 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 12:05:13 crc kubenswrapper[4820]: I0203 12:05:13.834937 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 12:05:13 crc kubenswrapper[4820]: I0203 12:05:13.835015 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 12:05:13 crc kubenswrapper[4820]: I0203 12:05:13.835082 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:13Z","lastTransitionTime":"2026-02-03T12:05:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 12:05:13 crc kubenswrapper[4820]: I0203 12:05:13.937487 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 12:05:13 crc kubenswrapper[4820]: I0203 12:05:13.937536 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 12:05:13 crc kubenswrapper[4820]: I0203 12:05:13.937550 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 12:05:13 crc kubenswrapper[4820]: I0203 12:05:13.937574 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 12:05:13 crc kubenswrapper[4820]: I0203 12:05:13.937587 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:13Z","lastTransitionTime":"2026-02-03T12:05:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 12:05:14 crc kubenswrapper[4820]: I0203 12:05:14.039969 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 12:05:14 crc kubenswrapper[4820]: I0203 12:05:14.040002 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 12:05:14 crc kubenswrapper[4820]: I0203 12:05:14.040013 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 12:05:14 crc kubenswrapper[4820]: I0203 12:05:14.040028 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 12:05:14 crc kubenswrapper[4820]: I0203 12:05:14.040040 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:14Z","lastTransitionTime":"2026-02-03T12:05:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 12:05:14 crc kubenswrapper[4820]: I0203 12:05:14.105961 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 15:03:44.382602254 +0000 UTC
Feb 03 12:05:14 crc kubenswrapper[4820]: I0203 12:05:14.141993 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 12:05:14 crc kubenswrapper[4820]: I0203 12:05:14.142043 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 12:05:14 crc kubenswrapper[4820]: I0203 12:05:14.142059 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 12:05:14 crc kubenswrapper[4820]: I0203 12:05:14.142080 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 12:05:14 crc kubenswrapper[4820]: I0203 12:05:14.142094 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:14Z","lastTransitionTime":"2026-02-03T12:05:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 12:05:14 crc kubenswrapper[4820]: I0203 12:05:14.244464 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 12:05:14 crc kubenswrapper[4820]: I0203 12:05:14.245220 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 12:05:14 crc kubenswrapper[4820]: I0203 12:05:14.245253 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 12:05:14 crc kubenswrapper[4820]: I0203 12:05:14.245275 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 12:05:14 crc kubenswrapper[4820]: I0203 12:05:14.245290 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:14Z","lastTransitionTime":"2026-02-03T12:05:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 12:05:14 crc kubenswrapper[4820]: I0203 12:05:14.347369 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 12:05:14 crc kubenswrapper[4820]: I0203 12:05:14.347397 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 12:05:14 crc kubenswrapper[4820]: I0203 12:05:14.347407 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 12:05:14 crc kubenswrapper[4820]: I0203 12:05:14.347420 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 12:05:14 crc kubenswrapper[4820]: I0203 12:05:14.347428 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:14Z","lastTransitionTime":"2026-02-03T12:05:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 12:05:14 crc kubenswrapper[4820]: I0203 12:05:14.449739 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 12:05:14 crc kubenswrapper[4820]: I0203 12:05:14.449774 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 12:05:14 crc kubenswrapper[4820]: I0203 12:05:14.449784 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 12:05:14 crc kubenswrapper[4820]: I0203 12:05:14.449796 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 12:05:14 crc kubenswrapper[4820]: I0203 12:05:14.449805 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:14Z","lastTransitionTime":"2026-02-03T12:05:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 12:05:14 crc kubenswrapper[4820]: I0203 12:05:14.551988 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 12:05:14 crc kubenswrapper[4820]: I0203 12:05:14.552057 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 12:05:14 crc kubenswrapper[4820]: I0203 12:05:14.552070 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 12:05:14 crc kubenswrapper[4820]: I0203 12:05:14.552087 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 12:05:14 crc kubenswrapper[4820]: I0203 12:05:14.552099 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:14Z","lastTransitionTime":"2026-02-03T12:05:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 12:05:14 crc kubenswrapper[4820]: I0203 12:05:14.654678 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 12:05:14 crc kubenswrapper[4820]: I0203 12:05:14.654711 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 12:05:14 crc kubenswrapper[4820]: I0203 12:05:14.654720 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 12:05:14 crc kubenswrapper[4820]: I0203 12:05:14.654733 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 12:05:14 crc kubenswrapper[4820]: I0203 12:05:14.654742 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:14Z","lastTransitionTime":"2026-02-03T12:05:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 12:05:14 crc kubenswrapper[4820]: I0203 12:05:14.757209 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 12:05:14 crc kubenswrapper[4820]: I0203 12:05:14.757270 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 12:05:14 crc kubenswrapper[4820]: I0203 12:05:14.757290 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 12:05:14 crc kubenswrapper[4820]: I0203 12:05:14.757314 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 12:05:14 crc kubenswrapper[4820]: I0203 12:05:14.757333 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:14Z","lastTransitionTime":"2026-02-03T12:05:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 12:05:14 crc kubenswrapper[4820]: I0203 12:05:14.860697 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 12:05:14 crc kubenswrapper[4820]: I0203 12:05:14.860760 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 12:05:14 crc kubenswrapper[4820]: I0203 12:05:14.860800 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 12:05:14 crc kubenswrapper[4820]: I0203 12:05:14.860854 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 12:05:14 crc kubenswrapper[4820]: I0203 12:05:14.860879 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:14Z","lastTransitionTime":"2026-02-03T12:05:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 12:05:14 crc kubenswrapper[4820]: I0203 12:05:14.963795 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 12:05:14 crc kubenswrapper[4820]: I0203 12:05:14.963860 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 12:05:14 crc kubenswrapper[4820]: I0203 12:05:14.963922 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 12:05:14 crc kubenswrapper[4820]: I0203 12:05:14.963954 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 12:05:14 crc kubenswrapper[4820]: I0203 12:05:14.963972 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:14Z","lastTransitionTime":"2026-02-03T12:05:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 12:05:15 crc kubenswrapper[4820]: I0203 12:05:15.066124 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 12:05:15 crc kubenswrapper[4820]: I0203 12:05:15.066160 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 12:05:15 crc kubenswrapper[4820]: I0203 12:05:15.066171 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 12:05:15 crc kubenswrapper[4820]: I0203 12:05:15.066187 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 12:05:15 crc kubenswrapper[4820]: I0203 12:05:15.066199 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:15Z","lastTransitionTime":"2026-02-03T12:05:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 12:05:15 crc kubenswrapper[4820]: I0203 12:05:15.107174 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 22:38:47.674364123 +0000 UTC
Feb 03 12:05:15 crc kubenswrapper[4820]: I0203 12:05:15.142024 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 03 12:05:15 crc kubenswrapper[4820]: I0203 12:05:15.142108 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 03 12:05:15 crc kubenswrapper[4820]: I0203 12:05:15.142024 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 03 12:05:15 crc kubenswrapper[4820]: E0203 12:05:15.142235 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 03 12:05:15 crc kubenswrapper[4820]: E0203 12:05:15.142414 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 03 12:05:15 crc kubenswrapper[4820]: E0203 12:05:15.142564 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 03 12:05:15 crc kubenswrapper[4820]: I0203 12:05:15.169073 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 12:05:15 crc kubenswrapper[4820]: I0203 12:05:15.169141 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 12:05:15 crc kubenswrapper[4820]: I0203 12:05:15.169158 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 12:05:15 crc kubenswrapper[4820]: I0203 12:05:15.169183 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 12:05:15 crc kubenswrapper[4820]: I0203 12:05:15.169200 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:15Z","lastTransitionTime":"2026-02-03T12:05:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 12:05:15 crc kubenswrapper[4820]: I0203 12:05:15.272197 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 12:05:15 crc kubenswrapper[4820]: I0203 12:05:15.272247 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 12:05:15 crc kubenswrapper[4820]: I0203 12:05:15.272263 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 12:05:15 crc kubenswrapper[4820]: I0203 12:05:15.272287 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 12:05:15 crc kubenswrapper[4820]: I0203 12:05:15.272303 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:15Z","lastTransitionTime":"2026-02-03T12:05:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 12:05:15 crc kubenswrapper[4820]: I0203 12:05:15.366625 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-75mwm_cf99e305-aa5b-4171-94f6-1e64f20414dd/ovnkube-controller/0.log"
Feb 03 12:05:15 crc kubenswrapper[4820]: I0203 12:05:15.369971 4820 generic.go:334] "Generic (PLEG): container finished" podID="cf99e305-aa5b-4171-94f6-1e64f20414dd" containerID="3fe28479c3e6fe91f32b394f521e23bfd1cd3d0d7f0d053c200ea4b020f09382" exitCode=1
Feb 03 12:05:15 crc kubenswrapper[4820]: I0203 12:05:15.370014 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-75mwm" event={"ID":"cf99e305-aa5b-4171-94f6-1e64f20414dd","Type":"ContainerDied","Data":"3fe28479c3e6fe91f32b394f521e23bfd1cd3d0d7f0d053c200ea4b020f09382"}
Feb 03 12:05:15 crc kubenswrapper[4820]: I0203 12:05:15.370677 4820 scope.go:117] "RemoveContainer" containerID="3fe28479c3e6fe91f32b394f521e23bfd1cd3d0d7f0d053c200ea4b020f09382"
Feb 03 12:05:15 crc kubenswrapper[4820]: I0203 12:05:15.374308 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 12:05:15 crc kubenswrapper[4820]: I0203 12:05:15.374387 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 12:05:15 crc kubenswrapper[4820]: I0203 12:05:15.374412 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 12:05:15 crc kubenswrapper[4820]: I0203 12:05:15.374446 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 12:05:15 crc kubenswrapper[4820]: I0203 12:05:15.374470 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:15Z","lastTransitionTime":"2026-02-03T12:05:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:05:15 crc kubenswrapper[4820]: I0203 12:05:15.383878 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dac06e9d9db1822ef050113f787ff46db678f4916bc9817ac03f61a509a61b6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:15Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:15 crc kubenswrapper[4820]: I0203 12:05:15.406612 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:15Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:15 crc kubenswrapper[4820]: I0203 12:05:15.427493 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0a8e1f86d24e8f09d6250cc9003db189b705c0285fbaab40a8f1a4ac4793137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:15Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:15 crc kubenswrapper[4820]: I0203 12:05:15.442258 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:15Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:15 crc kubenswrapper[4820]: I0203 12:05:15.454323 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"message\\\":\\\"containers 
with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:15Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:15 crc kubenswrapper[4820]: I0203 12:05:15.465340 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p5mx8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe0bc53e-6abb-4194-ae3d-109a4fd80372\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9eca081cbf8145867bf5df6d6bacdb9746e546e54aa81874fbfc2927958bc1b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrchn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p5mx8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:15Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:15 crc kubenswrapper[4820]: I0203 12:05:15.476282 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:15 crc kubenswrapper[4820]: I0203 12:05:15.476325 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:15 crc kubenswrapper[4820]: I0203 12:05:15.476337 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:15 crc kubenswrapper[4820]: I0203 12:05:15.476355 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:15 crc kubenswrapper[4820]: I0203 12:05:15.476366 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:15Z","lastTransitionTime":"2026-02-03T12:05:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:15 crc kubenswrapper[4820]: I0203 12:05:15.476387 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c02def6-29f2-448e-80ec-0c8ee290f053\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1643e498a576d565172492302a641bc81eae658b1611df707da1d833ea2c84f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xjx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e3612cc50c75efc4721ade62e3caf8a6cbcb41c49183b50da3639d7fce97b9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xjx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qj7xr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:15Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:15 crc kubenswrapper[4820]: I0203 12:05:15.493342 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dkfwm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6da6dd5-2847-482b-adc1-d82ead0af3e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b50e93f1825fc82dc2e0fc70a417d2e4412db367319656bfcea8fa9daf9fe152\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpcc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly
\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dkfwm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:15Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:15 crc kubenswrapper[4820]: I0203 12:05:15.508440 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d140e30-6304-49be-a1a3-2d6b23f9aef3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48852428653b274f524077be71e20c4271b365d821b8f3235df2d2b41a9e8af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09641bd2ac341aeecdba4211412070d2ee33f42eb670fe75ab88b9a58931c289\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}},\\\"volumeMounts\\\":[
{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74f9a4c19a0ffe2d3fc0a77b57873721a29f0a1f70d578ba2f68bcceb68ccce2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38554e01896341101d88815c0d8cd45a0114da19790e02a53b5ba3a91ee70e37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38554e01896341101d88815c0d8cd45a0114da19790e02a53b5ba3a91ee70e37\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"message\\\":\\\"le observer\\\\nW0203 12:05:02.811560 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0203 12:05:02.811703 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 12:05:02.814694 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3622958325/tls.crt::/tmp/serving-cert-3622958325/tls.key\\\\\\\"\\\\nI0203 12:05:03.097815 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0203 12:05:03.106823 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0203 12:05:03.106852 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0203 12:05:03.106874 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0203 12:05:03.106880 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0203 12:05:03.113994 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0203 12:05:03.114027 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0203 12:05:03.114019 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0203 12:05:03.114034 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 12:05:03.114043 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0203 12:05:03.114053 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0203 12:05:03.114057 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0203 12:05:03.114061 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0203 12:05:03.116858 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60992833ab89aebd2446f960ab6ae3565f9a17f4e86921d3b2bc68adc9704640\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b34fc810cf94c235cea364cd2441c73f56597b91b0a19b8e02498db5895a3f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b34fc810cf94c235cea364cd2441c73f56597b91b0a19b8e02498db5895a3f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:04:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:04:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:15Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:15 crc kubenswrapper[4820]: I0203 12:05:15.520952 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-z8xrk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1cca669-281b-4756-8da8-3860684d3410\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09a38521ccb70b0b152fc51267f6d38cca610e2e8c9d12915482118f3a431b22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dxwqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:09Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-z8xrk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:15Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:15 crc kubenswrapper[4820]: I0203 12:05:15.538477 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-75mwm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf99e305-aa5b-4171-94f6-1e64f20414dd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c998566abb62f49bb7527ea2c4b023c87a9e3db89ff5fa25b68896424924ea76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5b0072b2eccb422316702a70fb54f21fcd020e1627a625d1a7c63d46b71e48c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e8e47b3a2a50b4eccff391ccb45bf979a5fe46983b014cb03f465db928172ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0defe98bc51b1f567900f2d90138ceec07b4cae414fd3b448df503dba9a9460f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a67f31bdc8d6710cfbca063e2a811d4fbace42a9d3c3a53fba0f47c30ee00e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8b54903e9478f26cfa1770a0e06298325e9b9f198a3aeaa8ed6dcc192c0cdb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fe28479c3e6fe91f32b394f521e23bfd1cd3d0d7f0d053c200ea4b020f09382\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3fe28479c3e6fe91f32b394f521e23bfd1cd3d0d7f0d053c200ea4b020f09382\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T12:05:14Z\\\",\\\"message\\\":\\\"42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-authentication/oauth-openshift]} name:Service_openshift-authentication/oauth-openshift_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.222:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c0c2f725-e461-454e-a88c-c8350d62e1ef}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0203 12:05:14.573188 6106 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0203 12:05:14.573149 6106 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-apiserver-operator/metrics\\\\\\\"}\\\\nI0203 12:05:14.573229 6106 services_controller.go:360] Finished syncing service metrics on namespace openshift-apiserver-operator for network=default : 713.096µs\\\\nI0203 12:05:14.573566 6106 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0203 12:05:14.573602 6106 ovnkube.go:599] Stopped ovnkube\\\\nI0203 12:05:14.573620 6106 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0203 12:05:14.573675 6106 
ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c9f050db727793c24f2f8bf3b387813e313f3b764be003c6849947f98a61d4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://707b3fd62e0f11f4f22464594158f7f1bbcbb9b37c1e49c70e2a0d1a6b5b07c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd4
7ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://707b3fd62e0f11f4f22464594158f7f1bbcbb9b37c1e49c70e2a0d1a6b5b07c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-75mwm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:15Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:15 crc kubenswrapper[4820]: I0203 12:05:15.556202 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a4e0669-c148-4af8-99eb-1de50da6d574\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d6f3a37b440a1db10c04386289033a0deab673f71efbdfba81597baf6ce5e86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fec48b5ddf675fb07150ebb88962534caead99f26403cf8ab51c17e2c6c09b
f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a4c1209e1301b6e55cd4ea8bb06678ee4c7657c163a610f0cc2dd25f2cb3e61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75e1f7cf1c2120c222793819705a3b465a387c2aaf33fe146fd46d9c76520aea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12446f64d4189185cf6fe19d38637e090482ef8da5fb5e68600cc00ba37bc2a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bb0a6b382c36acc46be9258e93be71317c26777d770cde9098402054303947c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bb0a6b382c36acc46be9258e93be71317c26777d770cde9098402054303947c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:04:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19153b79978b8e8cc53a7ab9cb95a70c9870c6b0babac711bce13809609f6ac5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19153b79978b8e8cc53a7ab9cb95a70c9870c6b0babac711bce13809609f6ac5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e75ec4662cf007f64da3ecaaff0cb4c45aa88746ff717403092583356953526\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e75ec4662cf007f64da3ecaaff0cb4c45aa88746ff717403092583356953526\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:04:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:04:43Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:15Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:15 crc kubenswrapper[4820]: I0203 12:05:15.569552 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cabb5864-1cc1-4b08-aed3-4eeee9e85bf9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d070e438f7f207c929ca6d110046fa8d320d1a7d97ef22a0e5e1372ff626729d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06e5365791987f1586b93dfe4db8bfe19415d8c400d6c0d75f312cc0f62f7661\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb5d088ef2ec1b90ef2d7af53d61c2789516bf4ef02e9ef06e1dc0c93326dcc9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2783e3045fb1f1e8da75e1a83f88bc0f24ce47eab79fb2b31b794c10f201588\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:04:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:15Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:15 crc kubenswrapper[4820]: I0203 12:05:15.579117 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:15 crc kubenswrapper[4820]: I0203 12:05:15.579173 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:15 crc kubenswrapper[4820]: I0203 12:05:15.579189 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:15 crc kubenswrapper[4820]: I0203 12:05:15.579210 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:15 crc kubenswrapper[4820]: I0203 12:05:15.579225 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:15Z","lastTransitionTime":"2026-02-03T12:05:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:05:15 crc kubenswrapper[4820]: I0203 12:05:15.583208 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bbd1daf12ccc77f2658cac173f36c9af6a1c64072fc6bf13302a460f71f05e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5236ae101bb18d9bd1b8d8ac09705f96e5fd78d742e5174e88072570a7448ee8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:15Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:15 crc kubenswrapper[4820]: I0203 12:05:15.601372 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-b5qz9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d93ec7bc-4029-44a4-894d-03eff1388683\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8c2ef0f942d970aadc1837e5ab6eac2bdee9ab81c7b852514fc142796328a62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b171b1b37fa4899684005dd60f249e4cdc2ac58cb12658abe09d6d17007f3002\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b171b1b37fa4899684005dd60f249e4cdc2ac58cb12658abe09d6d17007f3002\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0a45a325e96479b1bb5913e1a78221c3b88c62fc434441e03d6b2e9ed476700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0a45a325e96479b1bb5913e1a78221c3b88c62fc434441e03d6b2e9ed476700\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a25830d512e5d0dbbfa4775f85af51e80915adca718997b34e926220ff9377d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a25830d512e5d0dbbfa4775f85af51e80915adca718997b34e926220ff9377d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff3da1549bd61df7cc473a1a0a17f147b940c3f88f0b72828357ea7982406088\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff3da1549bd61df7cc473a1a0a17f147b940c3f88f0b72828357ea7982406088\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a1698d43c6b74e8ebb9ff13da372c4025ac2637c1f5a0376e4a998e7bf20359\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a1698d43c6b74e8ebb9ff13da372c4025ac2637c1f5a0376e4a998e7bf20359\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8af3494a09f6242fa8fab8cba8b3db5476625376c0f8f382e2140ca27ef329f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8af3494a09f6242fa8fab8cba8b3db5476625376c0f8f382e2140ca27ef329f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-b5qz9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:15Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:15 crc kubenswrapper[4820]: I0203 12:05:15.681588 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:15 crc kubenswrapper[4820]: I0203 12:05:15.681624 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:15 crc 
kubenswrapper[4820]: I0203 12:05:15.681632    4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 12:05:15 crc kubenswrapper[4820]: I0203 12:05:15.681647    4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 12:05:15 crc kubenswrapper[4820]: I0203 12:05:15.681659    4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:15Z","lastTransitionTime":"2026-02-03T12:05:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 12:05:15 crc kubenswrapper[4820]: I0203 12:05:15.783548    4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 12:05:15 crc kubenswrapper[4820]: I0203 12:05:15.783603    4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 12:05:15 crc kubenswrapper[4820]: I0203 12:05:15.783614    4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 12:05:15 crc kubenswrapper[4820]: I0203 12:05:15.783632    4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 12:05:15 crc kubenswrapper[4820]: I0203 12:05:15.783644    4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:15Z","lastTransitionTime":"2026-02-03T12:05:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 12:05:15 crc kubenswrapper[4820]: I0203 12:05:15.886472    4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 12:05:15 crc kubenswrapper[4820]: I0203 12:05:15.886510    4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 12:05:15 crc kubenswrapper[4820]: I0203 12:05:15.886519    4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 12:05:15 crc kubenswrapper[4820]: I0203 12:05:15.886532    4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 12:05:15 crc kubenswrapper[4820]: I0203 12:05:15.886541    4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:15Z","lastTransitionTime":"2026-02-03T12:05:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 12:05:15 crc kubenswrapper[4820]: I0203 12:05:15.989014    4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 12:05:15 crc kubenswrapper[4820]: I0203 12:05:15.989053    4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 12:05:15 crc kubenswrapper[4820]: I0203 12:05:15.989062    4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 12:05:15 crc kubenswrapper[4820]: I0203 12:05:15.989077    4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 12:05:15 crc kubenswrapper[4820]: I0203 12:05:15.989088    4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:15Z","lastTransitionTime":"2026-02-03T12:05:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 12:05:16 crc kubenswrapper[4820]: I0203 12:05:16.091558    4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 12:05:16 crc kubenswrapper[4820]: I0203 12:05:16.091607    4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 12:05:16 crc kubenswrapper[4820]: I0203 12:05:16.091622    4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 12:05:16 crc kubenswrapper[4820]: I0203 12:05:16.091643    4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 12:05:16 crc kubenswrapper[4820]: I0203 12:05:16.091659    4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:16Z","lastTransitionTime":"2026-02-03T12:05:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 12:05:16 crc kubenswrapper[4820]: I0203 12:05:16.108263    4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 19:43:59.513789228 +0000 UTC
Feb 03 12:05:16 crc kubenswrapper[4820]: I0203 12:05:16.194360    4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 12:05:16 crc kubenswrapper[4820]: I0203 12:05:16.194391    4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 12:05:16 crc kubenswrapper[4820]: I0203 12:05:16.194399    4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 12:05:16 crc kubenswrapper[4820]: I0203 12:05:16.194413    4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 12:05:16 crc kubenswrapper[4820]: I0203 12:05:16.194423    4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:16Z","lastTransitionTime":"2026-02-03T12:05:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 12:05:16 crc kubenswrapper[4820]: I0203 12:05:16.297059    4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 12:05:16 crc kubenswrapper[4820]: I0203 12:05:16.297114    4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 12:05:16 crc kubenswrapper[4820]: I0203 12:05:16.297126    4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 12:05:16 crc kubenswrapper[4820]: I0203 12:05:16.297143    4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 12:05:16 crc kubenswrapper[4820]: I0203 12:05:16.297155    4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:16Z","lastTransitionTime":"2026-02-03T12:05:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 12:05:16 crc kubenswrapper[4820]: I0203 12:05:16.374777    4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-75mwm_cf99e305-aa5b-4171-94f6-1e64f20414dd/ovnkube-controller/0.log"
Feb 03 12:05:16 crc kubenswrapper[4820]: I0203 12:05:16.377704    4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-75mwm" event={"ID":"cf99e305-aa5b-4171-94f6-1e64f20414dd","Type":"ContainerStarted","Data":"555c29829adf28ac93542af1cb8ab28456a9414b416a9913602ee1dbd0bc27a6"}
Feb 03 12:05:16 crc kubenswrapper[4820]: I0203 12:05:16.378135    4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-75mwm"
Feb 03 12:05:16 crc kubenswrapper[4820]: I0203 12:05:16.392443    4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:16Z is after 2025-08-24T17:21:41Z"
Feb 03 12:05:16 crc kubenswrapper[4820]: I0203 12:05:16.399374    4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 12:05:16 crc kubenswrapper[4820]: I0203 12:05:16.399423    4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 12:05:16 crc kubenswrapper[4820]: I0203 12:05:16.399436    4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 12:05:16 crc kubenswrapper[4820]: I0203 12:05:16.399452    4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 12:05:16 crc kubenswrapper[4820]: I0203 12:05:16.399464    4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:16Z","lastTransitionTime":"2026-02-03T12:05:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 12:05:16 crc kubenswrapper[4820]: I0203 12:05:16.405163    4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:16Z is after 2025-08-24T17:21:41Z"
Feb 03 12:05:16 crc kubenswrapper[4820]: I0203 12:05:16.417995    4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p5mx8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe0bc53e-6abb-4194-ae3d-109a4fd80372\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9eca081cbf8145867bf5df6d6bacdb9746e546e54aa81874fbfc2927958bc1b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrchn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p5mx8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:16Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:16 crc kubenswrapper[4820]: I0203 12:05:16.428180 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c02def6-29f2-448e-80ec-0c8ee290f053\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1643e498a576d565172492302a641bc81eae658b1611df707da1d833ea2c84f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xjx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e3612cc50c75efc4721ade62e3caf8a6cbcb41c49183b50da3639d7fce97b9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xjx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qj7xr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:16Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:16 crc kubenswrapper[4820]: I0203 12:05:16.440124 4820 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-dkfwm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6da6dd5-2847-482b-adc1-d82ead0af3e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b50e93f1825fc82dc2e0fc70a417d2e4412db367319656bfcea8fa9daf9fe152\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpcc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:05Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-dkfwm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:16Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:16 crc kubenswrapper[4820]: I0203 12:05:16.453535 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d140e30-6304-49be-a1a3-2d6b23f9aef3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48852428653b274f524077be71e20c4271b365d821b8f3235df2d2b41a9e8af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09641bd2ac341aeecdba4211412070d2ee33f42eb670fe75ab88b9a58931c289\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74f9a4c19a0ffe2d3fc0a77b57873721a29f0a1f70d578ba2f68bcceb68ccce2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc27
6e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38554e01896341101d88815c0d8cd45a0114da19790e02a53b5ba3a91ee70e37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38554e01896341101d88815c0d8cd45a0114da19790e02a53b5ba3a91ee70e37\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"message\\\":\\\"le observer\\\\nW0203 12:05:02.811560 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0203 12:05:02.811703 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 12:05:02.814694 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3622958325/tls.crt::/tmp/serving-cert-3622958325/tls.key\\\\\\\"\\\\nI0203 12:05:03.097815 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0203 12:05:03.106823 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0203 12:05:03.106852 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0203 12:05:03.106874 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0203 12:05:03.106880 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0203 12:05:03.113994 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0203 12:05:03.114027 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0203 12:05:03.114019 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0203 12:05:03.114034 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 12:05:03.114043 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0203 12:05:03.114053 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0203 12:05:03.114057 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0203 12:05:03.114061 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0203 12:05:03.116858 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60992833ab89aebd2446f960ab6ae3565f9a17f4e86921d3b2bc68adc9704640\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b34fc810cf94c235cea364cd2441c73f56597b91b0a19b8e02498db5895a3f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b34fc810cf94c235cea364cd2441c73f56597b91b0a19b8e02498db5895a3f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:04:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:04:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:16Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:16 crc kubenswrapper[4820]: I0203 12:05:16.465564 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0a8e1f86d24e8f09d6250cc9003db189b705c0285fbaab40a8f1a4ac4793137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:16Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:16 crc kubenswrapper[4820]: I0203 12:05:16.482791 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-75mwm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf99e305-aa5b-4171-94f6-1e64f20414dd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c998566abb62f49bb7527ea2c4b023c87a9e3db89ff5fa25b68896424924ea76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5b0072b2eccb422316702a70fb54f21fcd020e1627a625d1a7c63d46b71e48c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e8e47b3a2a50b4eccff391ccb45bf979a5fe46983b014cb03f465db928172ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0defe98bc51b1f567900f2d90138ceec07b4cae414fd3b448df503dba9a9460f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a67f31bdc8d6710cfbca063e2a811d4fbace42a9d3c3a53fba0f47c30ee00e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8b54903e9478f26cfa1770a0e06298325e9b9f198a3aeaa8ed6dcc192c0cdb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://555c29829adf28ac93542af1cb8ab28456a9414b
416a9913602ee1dbd0bc27a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3fe28479c3e6fe91f32b394f521e23bfd1cd3d0d7f0d053c200ea4b020f09382\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T12:05:14Z\\\",\\\"message\\\":\\\"42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-authentication/oauth-openshift]} name:Service_openshift-authentication/oauth-openshift_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.222:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c0c2f725-e461-454e-a88c-c8350d62e1ef}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0203 12:05:14.573188 6106 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0203 12:05:14.573149 6106 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-apiserver-operator/metrics\\\\\\\"}\\\\nI0203 12:05:14.573229 6106 services_controller.go:360] Finished syncing service metrics on namespace openshift-apiserver-operator for network=default : 713.096µs\\\\nI0203 12:05:14.573566 6106 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0203 12:05:14.573602 6106 ovnkube.go:599] Stopped ovnkube\\\\nI0203 12:05:14.573620 6106 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0203 12:05:14.573675 6106 
ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:11Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c9f050db727793c24f2f8bf3b387813e313f3b764be003c6849947f98a61d4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":
[{\\\"containerID\\\":\\\"cri-o://707b3fd62e0f11f4f22464594158f7f1bbcbb9b37c1e49c70e2a0d1a6b5b07c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://707b3fd62e0f11f4f22464594158f7f1bbcbb9b37c1e49c70e2a0d1a6b5b07c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-75mwm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:16Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:16 crc kubenswrapper[4820]: I0203 12:05:16.492211 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-z8xrk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1cca669-281b-4756-8da8-3860684d3410\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09a38521ccb70b0b152fc51267f6d38cca610e2e8c9d12915482118f3a431b22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dxwqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:09Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-z8xrk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:16Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:16 crc kubenswrapper[4820]: I0203 12:05:16.503024 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:16 crc kubenswrapper[4820]: I0203 12:05:16.503073 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:16 crc kubenswrapper[4820]: I0203 12:05:16.503085 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:16 crc kubenswrapper[4820]: I0203 12:05:16.503099 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:16 crc kubenswrapper[4820]: I0203 12:05:16.503108 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:16Z","lastTransitionTime":"2026-02-03T12:05:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:16 crc kubenswrapper[4820]: I0203 12:05:16.510822 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a4e0669-c148-4af8-99eb-1de50da6d574\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d6f3a37b440a1db10c04386289033a0deab673f71efbdfba81597baf6ce5e86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fec48b5ddf675fb07150ebb88962534caead99f26403cf8ab51c17e2c6c09bf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a4c1209e1301b6e55cd4ea8bb06678ee4c7657c163a610f0cc2dd25f2cb3e61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":
0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75e1f7cf1c2120c222793819705a3b465a387c2aaf33fe146fd46d9c76520aea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12446f64d4189185cf6fe19d38637e090482ef8da5fb5e68600cc00ba37bc2a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bb0a6b382c36acc46be9258e93be71317c26777d770cde9098402054303947c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bb0a6b382c36acc46be9258e93be71317c26777d770cde9098402054303947c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:04:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19153b79978b8e8cc53a7ab9cb95a70c9870c6b0babac711bce13809609f6ac5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"termi
nated\\\":{\\\"containerID\\\":\\\"cri-o://19153b79978b8e8cc53a7ab9cb95a70c9870c6b0babac711bce13809609f6ac5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e75ec4662cf007f64da3ecaaff0cb4c45aa88746ff717403092583356953526\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e75ec4662cf007f64da3ecaaff0cb4c45aa88746ff717403092583356953526\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:04:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:04:43Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:16Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:16 crc kubenswrapper[4820]: I0203 12:05:16.523022 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-b5qz9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d93ec7bc-4029-44a4-894d-03eff1388683\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8c2ef0f942d970aadc1837e5ab6eac2bdee9ab81c7b852514fc142796328a62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b171b1b37fa4899684005dd60f249e4cdc2ac58cb12658abe09d6d17007f3002\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b171b1b37fa4899684005dd60f249e4cdc2ac58cb12658abe09d6d17007f3002\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0a45a325e96479b1bb5913e1a78221c3b88c62fc434441e03d6b2e9ed476700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0a45a325e96479b1bb5913e1a78221c3b88c62fc434441e03d6b2e9ed476700\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a25830d512e5d0dbbfa4775f85af51e80915adca718997b34e926220ff9377d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a25830d512e5d0dbbfa4775f85af51e80915adca718997b34e926220ff9377d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff3da1549bd61df7cc473a1a0a17f147b940c3f88f0b72828357ea7982406088\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff3da1549bd61df7cc473a1a0a17f147b940c3f88f0b72828357ea7982406088\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a1698d43c6b74e8ebb9ff13da372c4025ac2637c1f5a0376e4a998e7bf20359\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a1698d43c6b74e8ebb9ff13da372c4025ac2637c1f5a0376e4a998e7bf20359\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8af3494a09f6242fa8fab8cba8b3db5476625376c0f8f382e2140ca27ef329f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8af3494a09f6242fa8fab8cba8b3db5476625376c0f8f382e2140ca27ef329f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-b5qz9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:16Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:16 crc kubenswrapper[4820]: I0203 12:05:16.533758 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cabb5864-1cc1-4b08-aed3-4eeee9e85bf9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d070e438f7f207c929ca6d110046fa8d320d1a7d97ef22a0e5e1372ff626729d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06e5365791987f1586b93dfe4db8bfe19415d8c400d6c0d75f312cc0f62f7661\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb5d088ef2ec1b90ef2d7af53d61c2789516bf4ef02e9ef06e1dc0c93326dcc9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2783e3045fb1f1e8da75e1a83f88bc0f24ce47eab79fb2b31b794c10f201588\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:04:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:16Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:16 crc kubenswrapper[4820]: I0203 12:05:16.547424 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bbd1daf12ccc77f2658cac173f36c9af6a1c64072fc6bf13302a460f71f05e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5236ae101bb18d9bd1b8d8ac09705f96e5fd78d742e5174e88072570a7448ee8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:16Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:16 crc kubenswrapper[4820]: I0203 12:05:16.559699 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:16Z is after 2025-08-24T17:21:41Z"
Feb 03 12:05:16 crc kubenswrapper[4820]: I0203 12:05:16.570643    4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dac06e9d9db1822ef050113f787ff46db678f4916bc9817ac03f61a509a61b6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:16Z is after 2025-08-24T17:21:41Z"
Feb 03 12:05:16 crc kubenswrapper[4820]: I0203 12:05:16.605588    4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 12:05:16 crc kubenswrapper[4820]: I0203 12:05:16.605628    4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 12:05:16 crc kubenswrapper[4820]: I0203 12:05:16.605638    4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 12:05:16 crc kubenswrapper[4820]: I0203 12:05:16.605652    4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 12:05:16 crc kubenswrapper[4820]: I0203 12:05:16.605661    4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:16Z","lastTransitionTime":"2026-02-03T12:05:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 12:05:16 crc kubenswrapper[4820]: I0203 12:05:16.624945    4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8bbpp"]
Feb 03 12:05:16 crc kubenswrapper[4820]: I0203 12:05:16.625373    4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8bbpp"
Feb 03 12:05:16 crc kubenswrapper[4820]: I0203 12:05:16.627205    4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd"
Feb 03 12:05:16 crc kubenswrapper[4820]: I0203 12:05:16.627420    4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert"
Feb 03 12:05:16 crc kubenswrapper[4820]: I0203 12:05:16.643761    4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cabb5864-1cc1-4b08-aed3-4eeee9e85bf9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d070e438f7f207c929ca6d110046fa8d320d1a7d97ef22a0e5e1372ff626729d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06e5365791987f1586b93dfe4db8bfe19415d8c400d6c0d75f312cc0f62f7661\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb5d088ef2ec1b90ef2d7af53d61c2789516bf4ef02e9ef06e1dc0c93326dcc9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2783e3045fb1f1e8da75e1a83f88bc0f24ce47eab79fb2b31b794c10f201588\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:04:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:16Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:16 crc kubenswrapper[4820]: I0203 12:05:16.655096 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bbd1daf12ccc77f2658cac173f36c9af6a1c64072fc6bf13302a460f71f05e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5236ae101bb18d9bd1b8d8ac09705f96e5fd78d742e5174e88072570a7448ee8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:16Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:16 crc kubenswrapper[4820]: I0203 12:05:16.667552 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-b5qz9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d93ec7bc-4029-44a4-894d-03eff1388683\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8c2ef0f942d970aadc1837e5ab6eac2bdee9ab81c7b852514fc142796328a62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b171b1b37fa4899684005dd60f249e4cdc2ac58cb12658abe09d6d17007f3002\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5
f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b171b1b37fa4899684005dd60f249e4cdc2ac58cb12658abe09d6d17007f3002\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0a45a325e96479b1bb5913e1a78221c3b88c62fc434441e03d6b2e9ed476700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0a45a325e96479b1bb5913e1a78221c3b88c62fc434441e03d6b2e9ed476700\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a25830d512e5d0dbbfa4775f85af51e80915adca718997b34e926220ff9377d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a25830d512e5d0dbbfa4775f85af51e80915adca718997b34e926220ff9377d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/va
r/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff3da1549bd61df7cc473a1a0a17f147b940c3f88f0b72828357ea7982406088\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff3da1549bd61df7cc473a1a0a17f147b940c3f88f0b72828357ea7982406088\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a1698d43c6b74e8ebb9ff13da372c4025ac2637c1f5a0376e4a998e7bf20359\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a1698d43c6b74e8ebb9ff13da372c4025ac2637c1f5a0376e4a998e7bf20359\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8af3494a09f6242fa8fab8cba8b3db5476625376c0f8f382e2140ca27ef329f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8af3494a09f6242fa8fab8cba8b3db5476625376c0f8f382e2140ca27ef329f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:11Z\\\"}},\\
\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-b5qz9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:16Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:16 crc kubenswrapper[4820]: I0203 12:05:16.678429 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8bbpp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d8005fd9-8efc-4707-a3dd-60cd20607d42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xg9sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xg9sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8bbpp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:16Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:16 crc kubenswrapper[4820]: I0203 12:05:16.688628 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dac06e9d9db1822ef050113f787ff46db678f4916bc9817ac03f61a509a61b6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:16Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:16 crc kubenswrapper[4820]: I0203 12:05:16.699143 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:16Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:16 crc kubenswrapper[4820]: I0203 12:05:16.707461 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:16 crc kubenswrapper[4820]: I0203 12:05:16.707551 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:16 crc kubenswrapper[4820]: I0203 12:05:16.707566 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:16 crc kubenswrapper[4820]: I0203 12:05:16.707586 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:16 crc kubenswrapper[4820]: I0203 12:05:16.707596 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:16Z","lastTransitionTime":"2026-02-03T12:05:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:05:16 crc kubenswrapper[4820]: I0203 12:05:16.710100 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0a8e1f86d24e8f09d6250cc9003db189b705c0285fbaab40a8f1a4ac4793137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:16Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:16 crc kubenswrapper[4820]: I0203 12:05:16.720972 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:16Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:16 crc kubenswrapper[4820]: I0203 12:05:16.731517 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:16Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:16 crc kubenswrapper[4820]: I0203 12:05:16.741735 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p5mx8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe0bc53e-6abb-4194-ae3d-109a4fd80372\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9eca081cbf8145867bf5df6d6bacdb9746e546e54aa81874fbfc2927958bc1b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrchn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p5mx8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:16Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:16 crc kubenswrapper[4820]: I0203 12:05:16.747366 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d8005fd9-8efc-4707-a3dd-60cd20607d42-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-8bbpp\" (UID: \"d8005fd9-8efc-4707-a3dd-60cd20607d42\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8bbpp" Feb 03 12:05:16 crc kubenswrapper[4820]: I0203 12:05:16.747410 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d8005fd9-8efc-4707-a3dd-60cd20607d42-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-8bbpp\" (UID: \"d8005fd9-8efc-4707-a3dd-60cd20607d42\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8bbpp" Feb 03 12:05:16 crc kubenswrapper[4820]: I0203 12:05:16.747434 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xg9sc\" (UniqueName: \"kubernetes.io/projected/d8005fd9-8efc-4707-a3dd-60cd20607d42-kube-api-access-xg9sc\") pod \"ovnkube-control-plane-749d76644c-8bbpp\" (UID: \"d8005fd9-8efc-4707-a3dd-60cd20607d42\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8bbpp" Feb 03 12:05:16 crc kubenswrapper[4820]: I0203 12:05:16.747477 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d8005fd9-8efc-4707-a3dd-60cd20607d42-env-overrides\") pod \"ovnkube-control-plane-749d76644c-8bbpp\" (UID: \"d8005fd9-8efc-4707-a3dd-60cd20607d42\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8bbpp" Feb 03 12:05:16 crc kubenswrapper[4820]: I0203 12:05:16.752192 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c02def6-29f2-448e-80ec-0c8ee290f053\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1643e498a576d565172492302a641bc81eae658b1611df707da1d833ea2c84f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xjx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e3612cc50c75efc4721ade62e3caf8a6cbcb41c49183b50da3639d7fce97b9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xjx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qj7xr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:16Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:16 crc kubenswrapper[4820]: I0203 12:05:16.763976 4820 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-dkfwm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6da6dd5-2847-482b-adc1-d82ead0af3e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b50e93f1825fc82dc2e0fc70a417d2e4412db367319656bfcea8fa9daf9fe152\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpcc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:05Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-dkfwm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:16Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:16 crc kubenswrapper[4820]: I0203 12:05:16.777916 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d140e30-6304-49be-a1a3-2d6b23f9aef3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48852428653b274f524077be71e20c4271b365d821b8f3235df2d2b41a9e8af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09641bd2ac341aeecdba4211412070d2ee33f42eb670fe75ab88b9a58931c289\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74f9a4c19a0ffe2d3fc0a77b57873721a29f0a1f70d578ba2f68bcceb68ccce2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc27
6e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38554e01896341101d88815c0d8cd45a0114da19790e02a53b5ba3a91ee70e37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38554e01896341101d88815c0d8cd45a0114da19790e02a53b5ba3a91ee70e37\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"message\\\":\\\"le observer\\\\nW0203 12:05:02.811560 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0203 12:05:02.811703 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 12:05:02.814694 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3622958325/tls.crt::/tmp/serving-cert-3622958325/tls.key\\\\\\\"\\\\nI0203 12:05:03.097815 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0203 12:05:03.106823 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0203 12:05:03.106852 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0203 12:05:03.106874 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0203 12:05:03.106880 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0203 12:05:03.113994 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0203 12:05:03.114027 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0203 12:05:03.114019 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0203 12:05:03.114034 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 12:05:03.114043 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0203 12:05:03.114053 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0203 12:05:03.114057 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0203 12:05:03.114061 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0203 12:05:03.116858 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60992833ab89aebd2446f960ab6ae3565f9a17f4e86921d3b2bc68adc9704640\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b34fc810cf94c235cea364cd2441c73f56597b91b0a19b8e02498db5895a3f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b34fc810cf94c235cea364cd2441c73f56597b91b0a19b8e02498db5895a3f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:04:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:04:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:16Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:16 crc kubenswrapper[4820]: I0203 12:05:16.791950 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-z8xrk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1cca669-281b-4756-8da8-3860684d3410\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09a38521ccb70b0b152fc51267f6d38cca610e2e8c9d12915482118f3a431b22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dxwqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:09Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-z8xrk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:16Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:16 crc kubenswrapper[4820]: I0203 12:05:16.810703 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:16 crc kubenswrapper[4820]: I0203 12:05:16.810788 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:16 crc kubenswrapper[4820]: I0203 12:05:16.810820 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:16 crc kubenswrapper[4820]: I0203 12:05:16.810851 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:16 crc kubenswrapper[4820]: I0203 12:05:16.810876 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:16Z","lastTransitionTime":"2026-02-03T12:05:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:16 crc kubenswrapper[4820]: I0203 12:05:16.817531 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-75mwm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf99e305-aa5b-4171-94f6-1e64f20414dd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c998566abb62f49bb7527ea2c4b023c87a9e3db89ff5fa25b68896424924ea76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5b0072b2eccb422316702a70fb54f21fcd020e1627a625d1a7c63d46b71e48c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/servic
eaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e8e47b3a2a50b4eccff391ccb45bf979a5fe46983b014cb03f465db928172ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0defe98bc51b1f567900f2d90138ceec07b4cae414fd3b448df503dba9a9460f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a67f31bdc8d6710cfbca063e2a811d4fbace42a9d3c3a53fba0f47c30ee00e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8b54903e9478f26cfa1770a0e06298325e9b9f198a3aeaa8ed6dcc192c0cdb7\\\",\\\"image\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://555c29829adf28ac93542af1cb8ab28456a9414b416a9913602ee1dbd0bc27a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3fe28479c3e6fe91f32b394f521e23bfd1cd3d0d7f0d053c200ea4b020f09382\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T12:05:14Z\\\",\\\"message\\\":\\\"42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-authentication/oauth-openshift]} name:Service_openshift-authentication/oauth-openshift_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.222:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c0c2f725-e461-454e-a88c-c8350d62e1ef}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0203 12:05:14.573188 6106 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0203 12:05:14.573149 6106 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-apiserver-operator/metrics\\\\\\\"}\\\\nI0203 12:05:14.573229 6106 services_controller.go:360] Finished syncing service metrics on namespace openshift-apiserver-operator for network=default : 713.096µs\\\\nI0203 12:05:14.573566 6106 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0203 12:05:14.573602 6106 ovnkube.go:599] Stopped ovnkube\\\\nI0203 12:05:14.573620 6106 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0203 12:05:14.573675 6106 
ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:11Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c9f050db727793c24f2f8bf3b387813e313f3b764be003c6849947f98a61d4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":
[{\\\"containerID\\\":\\\"cri-o://707b3fd62e0f11f4f22464594158f7f1bbcbb9b37c1e49c70e2a0d1a6b5b07c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://707b3fd62e0f11f4f22464594158f7f1bbcbb9b37c1e49c70e2a0d1a6b5b07c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-75mwm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:16Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:16 crc kubenswrapper[4820]: I0203 12:05:16.840453 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a4e0669-c148-4af8-99eb-1de50da6d574\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d6f3a37b440a1db10c04386289033a0deab673f71efbdfba81597baf6ce5e86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fec48b5ddf675fb07150ebb88962534caead99f26403cf8ab51c17e2c6c09bf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a4c1209e1301b6e55cd4ea8bb06678ee4c7657c163a610f0cc2dd25f2cb3e61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75e1f7cf1c2120c222793819705a3b465a387c2
aaf33fe146fd46d9c76520aea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12446f64d4189185cf6fe19d38637e090482ef8da5fb5e68600cc00ba37bc2a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bb0a6b382c36acc46be9258e93be71317c26777d770cde9098402054303947c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bb0a6b382c36acc46be9258e93be71317c26777d770cde9098402054303947c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:04:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19153b79978b8e8cc53a7ab9cb95a70c9870c6b0babac711bce13809609f6ac5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19153b79978b8e8cc53a7ab9cb95a70c9870c6b0babac711bce13809609f6ac5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e75ec4662cf007f64da3ecaaff0cb4c45aa88746ff717403092583356953526\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e75ec4662cf007f64da3ecaaff0cb4c45aa88746ff717403092583356953526\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:04:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:04:43Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:16Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:16 crc kubenswrapper[4820]: I0203 12:05:16.848578 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d8005fd9-8efc-4707-a3dd-60cd20607d42-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-8bbpp\" (UID: \"d8005fd9-8efc-4707-a3dd-60cd20607d42\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8bbpp" Feb 03 12:05:16 crc kubenswrapper[4820]: I0203 12:05:16.848627 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xg9sc\" (UniqueName: \"kubernetes.io/projected/d8005fd9-8efc-4707-a3dd-60cd20607d42-kube-api-access-xg9sc\") pod \"ovnkube-control-plane-749d76644c-8bbpp\" (UID: \"d8005fd9-8efc-4707-a3dd-60cd20607d42\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8bbpp" Feb 03 12:05:16 crc kubenswrapper[4820]: I0203 12:05:16.848675 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d8005fd9-8efc-4707-a3dd-60cd20607d42-env-overrides\") pod \"ovnkube-control-plane-749d76644c-8bbpp\" (UID: \"d8005fd9-8efc-4707-a3dd-60cd20607d42\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8bbpp" Feb 03 12:05:16 crc kubenswrapper[4820]: I0203 12:05:16.848728 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d8005fd9-8efc-4707-a3dd-60cd20607d42-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-8bbpp\" (UID: \"d8005fd9-8efc-4707-a3dd-60cd20607d42\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8bbpp" Feb 03 12:05:16 crc kubenswrapper[4820]: I0203 12:05:16.849470 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d8005fd9-8efc-4707-a3dd-60cd20607d42-env-overrides\") pod \"ovnkube-control-plane-749d76644c-8bbpp\" (UID: 
\"d8005fd9-8efc-4707-a3dd-60cd20607d42\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8bbpp" Feb 03 12:05:16 crc kubenswrapper[4820]: I0203 12:05:16.849516 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d8005fd9-8efc-4707-a3dd-60cd20607d42-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-8bbpp\" (UID: \"d8005fd9-8efc-4707-a3dd-60cd20607d42\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8bbpp" Feb 03 12:05:16 crc kubenswrapper[4820]: I0203 12:05:16.854342 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d8005fd9-8efc-4707-a3dd-60cd20607d42-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-8bbpp\" (UID: \"d8005fd9-8efc-4707-a3dd-60cd20607d42\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8bbpp" Feb 03 12:05:16 crc kubenswrapper[4820]: I0203 12:05:16.865235 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xg9sc\" (UniqueName: \"kubernetes.io/projected/d8005fd9-8efc-4707-a3dd-60cd20607d42-kube-api-access-xg9sc\") pod \"ovnkube-control-plane-749d76644c-8bbpp\" (UID: \"d8005fd9-8efc-4707-a3dd-60cd20607d42\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8bbpp" Feb 03 12:05:16 crc kubenswrapper[4820]: I0203 12:05:16.913554 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:16 crc kubenswrapper[4820]: I0203 12:05:16.913598 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:16 crc kubenswrapper[4820]: I0203 12:05:16.913610 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:16 crc kubenswrapper[4820]: I0203 12:05:16.913628 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:16 crc kubenswrapper[4820]: I0203 12:05:16.913640 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:16Z","lastTransitionTime":"2026-02-03T12:05:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:16 crc kubenswrapper[4820]: I0203 12:05:16.940777 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8bbpp" Feb 03 12:05:16 crc kubenswrapper[4820]: W0203 12:05:16.953348 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd8005fd9_8efc_4707_a3dd_60cd20607d42.slice/crio-2a0d48233b01a6ba1749432c29318e09b2997e0cbbd6751ae4681e63974ab30c WatchSource:0}: Error finding container 2a0d48233b01a6ba1749432c29318e09b2997e0cbbd6751ae4681e63974ab30c: Status 404 returned error can't find the container with id 2a0d48233b01a6ba1749432c29318e09b2997e0cbbd6751ae4681e63974ab30c Feb 03 12:05:17 crc kubenswrapper[4820]: I0203 12:05:17.016312 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:17 crc kubenswrapper[4820]: I0203 12:05:17.016373 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:17 crc kubenswrapper[4820]: I0203 12:05:17.016387 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:17 crc kubenswrapper[4820]: I0203 12:05:17.016407 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:17 crc kubenswrapper[4820]: I0203 12:05:17.016420 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:17Z","lastTransitionTime":"2026-02-03T12:05:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:17 crc kubenswrapper[4820]: I0203 12:05:17.108569 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 23:02:55.691768955 +0000 UTC Feb 03 12:05:17 crc kubenswrapper[4820]: I0203 12:05:17.118528 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:17 crc kubenswrapper[4820]: I0203 12:05:17.118562 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:17 crc kubenswrapper[4820]: I0203 12:05:17.118575 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:17 crc kubenswrapper[4820]: I0203 12:05:17.118595 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:17 crc kubenswrapper[4820]: I0203 12:05:17.118608 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:17Z","lastTransitionTime":"2026-02-03T12:05:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:17 crc kubenswrapper[4820]: I0203 12:05:17.142125 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 12:05:17 crc kubenswrapper[4820]: E0203 12:05:17.142503 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 12:05:17 crc kubenswrapper[4820]: I0203 12:05:17.142158 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 12:05:17 crc kubenswrapper[4820]: E0203 12:05:17.142755 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 12:05:17 crc kubenswrapper[4820]: I0203 12:05:17.142141 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 12:05:17 crc kubenswrapper[4820]: E0203 12:05:17.143025 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 12:05:17 crc kubenswrapper[4820]: I0203 12:05:17.220902 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:17 crc kubenswrapper[4820]: I0203 12:05:17.220945 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:17 crc kubenswrapper[4820]: I0203 12:05:17.221234 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:17 crc kubenswrapper[4820]: I0203 12:05:17.221267 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:17 crc kubenswrapper[4820]: I0203 12:05:17.221280 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:17Z","lastTransitionTime":"2026-02-03T12:05:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:05:17 crc kubenswrapper[4820]: I0203 12:05:17.323680 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:17 crc kubenswrapper[4820]: I0203 12:05:17.323718 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:17 crc kubenswrapper[4820]: I0203 12:05:17.323730 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:17 crc kubenswrapper[4820]: I0203 12:05:17.323748 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:17 crc kubenswrapper[4820]: I0203 12:05:17.323761 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:17Z","lastTransitionTime":"2026-02-03T12:05:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:17 crc kubenswrapper[4820]: I0203 12:05:17.382267 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-75mwm_cf99e305-aa5b-4171-94f6-1e64f20414dd/ovnkube-controller/1.log" Feb 03 12:05:17 crc kubenswrapper[4820]: I0203 12:05:17.383015 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-75mwm_cf99e305-aa5b-4171-94f6-1e64f20414dd/ovnkube-controller/0.log" Feb 03 12:05:17 crc kubenswrapper[4820]: I0203 12:05:17.385251 4820 generic.go:334] "Generic (PLEG): container finished" podID="cf99e305-aa5b-4171-94f6-1e64f20414dd" containerID="555c29829adf28ac93542af1cb8ab28456a9414b416a9913602ee1dbd0bc27a6" exitCode=1 Feb 03 12:05:17 crc kubenswrapper[4820]: I0203 12:05:17.385276 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-75mwm" event={"ID":"cf99e305-aa5b-4171-94f6-1e64f20414dd","Type":"ContainerDied","Data":"555c29829adf28ac93542af1cb8ab28456a9414b416a9913602ee1dbd0bc27a6"} Feb 03 12:05:17 crc kubenswrapper[4820]: I0203 12:05:17.385445 4820 scope.go:117] "RemoveContainer" containerID="3fe28479c3e6fe91f32b394f521e23bfd1cd3d0d7f0d053c200ea4b020f09382" Feb 03 12:05:17 crc kubenswrapper[4820]: I0203 12:05:17.386134 4820 scope.go:117] "RemoveContainer" containerID="555c29829adf28ac93542af1cb8ab28456a9414b416a9913602ee1dbd0bc27a6" Feb 03 12:05:17 crc kubenswrapper[4820]: E0203 12:05:17.386330 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-75mwm_openshift-ovn-kubernetes(cf99e305-aa5b-4171-94f6-1e64f20414dd)\"" pod="openshift-ovn-kubernetes/ovnkube-node-75mwm" podUID="cf99e305-aa5b-4171-94f6-1e64f20414dd" Feb 03 12:05:17 crc kubenswrapper[4820]: I0203 12:05:17.388058 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8bbpp" event={"ID":"d8005fd9-8efc-4707-a3dd-60cd20607d42","Type":"ContainerStarted","Data":"9020e42fc6a5c6c681b1854e93cc1e88f44b7582e28fe1ee40900bb2d39b4008"} Feb 03 12:05:17 crc kubenswrapper[4820]: I0203 12:05:17.388113 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8bbpp" event={"ID":"d8005fd9-8efc-4707-a3dd-60cd20607d42","Type":"ContainerStarted","Data":"2a0d48233b01a6ba1749432c29318e09b2997e0cbbd6751ae4681e63974ab30c"} Feb 03 12:05:17 crc kubenswrapper[4820]: I0203 12:05:17.397971 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8bbpp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d8005fd9-8efc-4707-a3dd-60cd20607d42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xg9sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xg9sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8bbpp\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:17Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:17 crc kubenswrapper[4820]: I0203 12:05:17.417800 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cabb5864-1cc1-4b08-aed3-4eeee9e85bf9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d070e438f7f207c929ca6d110046fa8d320d1a7d97ef22a0e5e1372ff626729d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06e5365791987f1586b93dfe4db8bfe19415d8c400d6c0d75f312cc0f62f7661\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb5d088ef2ec1b90ef2d7af53d61c2789516bf4ef02e9ef06e1dc0c93326dcc9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"st
arted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2783e3045fb1f1e8da75e1a83f88bc0f24ce47eab79fb2b31b794c10f201588\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:04:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:17Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:17 crc kubenswrapper[4820]: I0203 12:05:17.425786 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:17 crc kubenswrapper[4820]: I0203 12:05:17.426000 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:17 crc kubenswrapper[4820]: I0203 12:05:17.426085 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:17 crc kubenswrapper[4820]: I0203 12:05:17.426150 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:17 crc kubenswrapper[4820]: I0203 12:05:17.426207 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:17Z","lastTransitionTime":"2026-02-03T12:05:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:05:17 crc kubenswrapper[4820]: I0203 12:05:17.430947 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bbd1daf12ccc77f2658cac173f36c9af6a1c64072fc6bf13302a460f71f05e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5236ae101bb18d9bd1b8d8ac09705f96e5fd78d742e5174e88072570a7448ee8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:17Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:17 crc kubenswrapper[4820]: I0203 12:05:17.445976 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-b5qz9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d93ec7bc-4029-44a4-894d-03eff1388683\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8c2ef0f942d970aadc1837e5ab6eac2bdee9ab81c7b852514fc142796328a62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b171b1b37fa4899684005dd60f249e4cdc2ac58cb12658abe09d6d17007f3002\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b171b1b37fa4899684005dd60f249e4cdc2ac58cb12658abe09d6d17007f3002\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0a45a325e96479b1bb5913e1a78221c3b88c62fc434441e03d6b2e9ed476700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0a45a325e96479b1bb5913e1a78221c3b88c62fc434441e03d6b2e9ed476700\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a25830d512e5d0dbbfa4775f85af51e80915adca718997b34e926220ff9377d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a25830d512e5d0dbbfa4775f85af51e80915adca718997b34e926220ff9377d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff3da1549bd61df7cc473a1a0a17f147b940c3f88f0b72828357ea7982406088\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff3da1549bd61df7cc473a1a0a17f147b940c3f88f0b72828357ea7982406088\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a1698d43c6b74e8ebb9ff13da372c4025ac2637c1f5a0376e4a998e7bf20359\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a1698d43c6b74e8ebb9ff13da372c4025ac2637c1f5a0376e4a998e7bf20359\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8af3494a09f6242fa8fab8cba8b3db5476625376c0f8f382e2140ca27ef329f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8af3494a09f6242fa8fab8cba8b3db5476625376c0f8f382e2140ca27ef329f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-b5qz9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:17Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:17 crc kubenswrapper[4820]: I0203 12:05:17.458227 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:17Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:17 crc kubenswrapper[4820]: I0203 12:05:17.473279 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dac06e9d9db1822ef050113f787ff46db678f4916bc9817ac03f61a509a61b6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:17Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:17 crc kubenswrapper[4820]: I0203 12:05:17.486352 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:17Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:17 crc kubenswrapper[4820]: I0203 12:05:17.495420 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p5mx8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe0bc53e-6abb-4194-ae3d-109a4fd80372\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9eca081cbf8145867bf5df6d6bacdb9746e546e54aa81874fbfc2927958bc1b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrchn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p5mx8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:17Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:17 crc kubenswrapper[4820]: I0203 12:05:17.507508 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c02def6-29f2-448e-80ec-0c8ee290f053\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1643e498a576d565172492302a641bc81eae658b1611df707da1d833ea2c84f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xjx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e3612cc50c75efc4721ade62e3caf8a6cbcb41c49183b50da3639d7fce97b9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xjx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qj7xr\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:17Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:17 crc kubenswrapper[4820]: I0203 12:05:17.523838 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dkfwm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6da6dd5-2847-482b-adc1-d82ead0af3e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b50e93f1825fc82dc2e0fc70a417d2e4412db367319656bfcea8fa9daf9fe152\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/ser
viceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpcc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dkfwm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:17Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:17 crc kubenswrapper[4820]: I0203 12:05:17.528332 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:17 crc kubenswrapper[4820]: I0203 12:05:17.528373 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:17 crc kubenswrapper[4820]: I0203 12:05:17.528384 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:17 crc kubenswrapper[4820]: I0203 12:05:17.528402 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:17 crc kubenswrapper[4820]: I0203 12:05:17.528411 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:17Z","lastTransitionTime":"2026-02-03T12:05:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:05:17 crc kubenswrapper[4820]: I0203 12:05:17.539917 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d140e30-6304-49be-a1a3-2d6b23f9aef3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48852428653b274f524077be71e20c4271b365d821b8f3235df2d2b41a9e8af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09641bd2ac341aeecdba4211412070d2ee33f42eb670fe75ab88b9a58931c289\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74f9a4c19a0ffe2d3fc0a77b57873721a29f0a1f70d578ba2f68bcceb68ccce2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38554e01896341101d88815c0d8cd45a0114da19790e02a53b5ba3a91ee70e37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38554e01896341101d88815c0d8cd45a0114da19790e02a53b5ba3a91ee70e37\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"message\\\":\\\"le observer\\\\nW0203 12:05:02.811560 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0203 12:05:02.811703 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 12:05:02.814694 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3622958325/tls.crt::/tmp/serving-cert-3622958325/tls.key\\\\\\\"\\\\nI0203 12:05:03.097815 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0203 12:05:03.106823 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0203 12:05:03.106852 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0203 12:05:03.106874 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0203 12:05:03.106880 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0203 12:05:03.113994 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0203 12:05:03.114027 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0203 12:05:03.114019 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0203 12:05:03.114034 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 12:05:03.114043 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0203 12:05:03.114053 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0203 12:05:03.114057 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0203 12:05:03.114061 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0203 12:05:03.116858 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60992833ab89aebd2446f960ab6ae3565f9a17f4e86921d3b2bc68adc9704640\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b34fc810cf94c235cea364cd2441c73f56597b91b0a19b8e02498db5895a3f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b34fc810cf94c235cea364cd2441c73f56597b91b0a19b8e02498db5895a3f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:04:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:04:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:17Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:17 crc kubenswrapper[4820]: I0203 12:05:17.551551 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0a8e1f86d24e8f09d6250cc9003db189b705c0285fbaab40a8f1a4ac4793137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:17Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:17 crc kubenswrapper[4820]: I0203 12:05:17.563668 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:17Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:17 crc kubenswrapper[4820]: I0203 12:05:17.581803 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-75mwm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf99e305-aa5b-4171-94f6-1e64f20414dd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c998566abb62f49bb7527ea2c4b023c87a9e3db89ff5fa25b68896424924ea76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5b0072b2eccb422316702a70fb54f21fcd020e1627a625d1a7c63d46b71e48c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e8e47b3a2a50b4eccff391ccb45bf979a5fe46983b014cb03f465db928172ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0defe98bc51b1f567900f2d90138ceec07b4cae414fd3b448df503dba9a9460f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a67f31bdc8d6710cfbca063e2a811d4fbace42a9d3c3a53fba0f47c30ee00e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8b54903e9478f26cfa1770a0e06298325e9b9f198a3aeaa8ed6dcc192c0cdb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://555c29829adf28ac93542af1cb8ab28456a9414b
416a9913602ee1dbd0bc27a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3fe28479c3e6fe91f32b394f521e23bfd1cd3d0d7f0d053c200ea4b020f09382\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T12:05:14Z\\\",\\\"message\\\":\\\"42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-authentication/oauth-openshift]} name:Service_openshift-authentication/oauth-openshift_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.222:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c0c2f725-e461-454e-a88c-c8350d62e1ef}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0203 12:05:14.573188 6106 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0203 12:05:14.573149 6106 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-apiserver-operator/metrics\\\\\\\"}\\\\nI0203 12:05:14.573229 6106 services_controller.go:360] Finished syncing service metrics on namespace openshift-apiserver-operator for network=default : 713.096µs\\\\nI0203 12:05:14.573566 6106 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0203 12:05:14.573602 6106 ovnkube.go:599] Stopped ovnkube\\\\nI0203 12:05:14.573620 6106 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0203 12:05:14.573675 6106 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:11Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://555c29829adf28ac93542af1cb8ab28456a9414b416a9913602ee1dbd0bc27a6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T12:05:16Z\\\",\\\"message\\\":\\\" Retry successful for *v1.Pod openshift-machine-config-operator/machine-config-daemon-qj7xr after 0 failed attempt(s)\\\\nI0203 12:05:16.132830 6249 services_controller.go:360] Finished syncing service check-endpoints on namespace openshift-apiserver for network=default : 997.232µs\\\\nI0203 12:05:16.132706 6249 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0203 12:05:16.132841 6249 services_controller.go:356] Processing sync for service openshift-image-registry/image-registry for network=default\\\\nI0203 12:05:16.132752 6249 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-oauth-apiserver/api]} name:Service_openshift-oauth-apiserver/api_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.140:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where 
column _uuid == {fe46cb89-4e54-4175-a112-1c5224cd299e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0203 12:05:16.132849 6249 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c9f050db727793c24f2f8bf3b387813e313f3b764be003c6849947f98a61d4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"
Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://707b3fd62e0f11f4f22464594158f7f1bbcbb9b37c1e49c70e2a0d1a6b5b07c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://707b3fd62e0f11f4f22464594158f7f1bbcbb9b37c1e49c70e2a0d1a6b5b07c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-75mwm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:17Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:17 crc kubenswrapper[4820]: I0203 12:05:17.590977 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-z8xrk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1cca669-281b-4756-8da8-3860684d3410\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09a38521ccb70b0b152fc51267f6d38cca610e2e8c9d12915482118f3a431b22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dxwqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:09Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-z8xrk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:17Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:17 crc kubenswrapper[4820]: I0203 12:05:17.609552 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a4e0669-c148-4af8-99eb-1de50da6d574\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d6f3a37b440a1db10c04386289033a0deab673f71efbdfba81597baf6ce5e86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fec48b5ddf675fb07150ebb88962534caead99f26403cf8ab51c17e2c6c09bf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a4c1209e1301b6e55cd4ea8bb06678ee4c7657c163a610f0cc2dd25f2cb3e61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75e1f7cf1c2120c222793819705a3b465a387c2
aaf33fe146fd46d9c76520aea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12446f64d4189185cf6fe19d38637e090482ef8da5fb5e68600cc00ba37bc2a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bb0a6b382c36acc46be9258e93be71317c26777d770cde9098402054303947c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bb0a6b382c36acc46be9258e93be71317c26777d770cde9098402054303947c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:04:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19153b79978b8e8cc53a7ab9cb95a70c9870c6b0babac711bce13809609f6ac5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19153b79978b8e8cc53a7ab9cb95a70c9870c6b0babac711bce13809609f6ac5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e75ec4662cf007f64da3ecaaff0cb4c45aa88746ff717403092583356953526\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e75ec4662cf007f64da3ecaaff0cb4c45aa88746ff717403092583356953526\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:04:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:04:43Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:17Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:17 crc kubenswrapper[4820]: I0203 12:05:17.631006 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:17 crc kubenswrapper[4820]: I0203 12:05:17.631083 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:17 crc kubenswrapper[4820]: I0203 12:05:17.631096 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:17 crc kubenswrapper[4820]: I0203 12:05:17.631118 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:17 crc kubenswrapper[4820]: I0203 12:05:17.631136 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:17Z","lastTransitionTime":"2026-02-03T12:05:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:05:17 crc kubenswrapper[4820]: I0203 12:05:17.733389 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:17 crc kubenswrapper[4820]: I0203 12:05:17.733437 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:17 crc kubenswrapper[4820]: I0203 12:05:17.733450 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:17 crc kubenswrapper[4820]: I0203 12:05:17.733468 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:17 crc kubenswrapper[4820]: I0203 12:05:17.733481 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:17Z","lastTransitionTime":"2026-02-03T12:05:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:17 crc kubenswrapper[4820]: I0203 12:05:17.835802 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:17 crc kubenswrapper[4820]: I0203 12:05:17.835917 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:17 crc kubenswrapper[4820]: I0203 12:05:17.835937 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:17 crc kubenswrapper[4820]: I0203 12:05:17.835964 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:17 crc kubenswrapper[4820]: I0203 12:05:17.835983 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:17Z","lastTransitionTime":"2026-02-03T12:05:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:17 crc kubenswrapper[4820]: I0203 12:05:17.939438 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:17 crc kubenswrapper[4820]: I0203 12:05:17.939485 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:17 crc kubenswrapper[4820]: I0203 12:05:17.939496 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:17 crc kubenswrapper[4820]: I0203 12:05:17.939513 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:17 crc kubenswrapper[4820]: I0203 12:05:17.939525 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:17Z","lastTransitionTime":"2026-02-03T12:05:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:05:18 crc kubenswrapper[4820]: I0203 12:05:18.042222 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:18 crc kubenswrapper[4820]: I0203 12:05:18.042292 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:18 crc kubenswrapper[4820]: I0203 12:05:18.042304 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:18 crc kubenswrapper[4820]: I0203 12:05:18.042322 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:18 crc kubenswrapper[4820]: I0203 12:05:18.042335 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:18Z","lastTransitionTime":"2026-02-03T12:05:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:18 crc kubenswrapper[4820]: I0203 12:05:18.069695 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-7vz6k"] Feb 03 12:05:18 crc kubenswrapper[4820]: I0203 12:05:18.070211 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7vz6k" Feb 03 12:05:18 crc kubenswrapper[4820]: E0203 12:05:18.070290 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7vz6k" podUID="6351e457-e601-4889-853c-560646bc4b43" Feb 03 12:05:18 crc kubenswrapper[4820]: I0203 12:05:18.082170 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:18Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:18 crc kubenswrapper[4820]: I0203 12:05:18.093420 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dac06e9d9db1822ef050113f787ff46db678f4916bc9817ac03f61a509a61b6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:18Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:18 crc kubenswrapper[4820]: I0203 12:05:18.104579 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c02def6-29f2-448e-80ec-0c8ee290f053\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1643e498a576d565172492302a641bc81eae658b1611df707da1d833ea2c84f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xjx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e3612cc50c75efc4721ade62e3caf8a6cbcb41c49183b50da3639d7fce97b9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xjx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qj7xr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:18Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:18 crc kubenswrapper[4820]: I0203 12:05:18.109423 4820 certificate_manager.go:356] 
kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 19:56:29.420595336 +0000 UTC Feb 03 12:05:18 crc kubenswrapper[4820]: I0203 12:05:18.125013 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dkfwm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6da6dd5-2847-482b-adc1-d82ead0af3e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b50e93f1825fc82dc2e0fc70a417d2e4412db367319656bfcea8fa9daf9fe152\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpcc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\
\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dkfwm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:18Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:18 crc kubenswrapper[4820]: I0203 12:05:18.141916 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d140e30-6304-49be-a1a3-2d6b23f9aef3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48852428653b274f524077be71e20c4271b365d821b8f3235df2d2b41a9e8af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09641bd2ac341aeecdba4211412070d2ee33f42eb670fe75ab88b9a58931c289\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\
\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74f9a4c19a0ffe2d3fc0a77b57873721a29f0a1f70d578ba2f68bcceb68ccce2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38554e01896341101d88815c0d8cd45a0114da19790e02a53b5ba3a91ee70e37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38554e01896341101d88815c0d8cd45a0114da19790e02a53b5ba3a91ee70e37\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"message\\\":\\\"le observer\\\\nW0203 12:05:02.811560 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0203 12:05:02.811703 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 12:05:02.814694 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3622958325/tls.crt::/tmp/serving-cert-3622958325/tls.key\\\\\\\"\\\\nI0203 12:05:03.097815 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0203 12:05:03.106823 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0203 12:05:03.106852 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0203 12:05:03.106874 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0203 12:05:03.106880 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0203 12:05:03.113994 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0203 12:05:03.114027 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0203 12:05:03.114019 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0203 12:05:03.114034 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 12:05:03.114043 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0203 12:05:03.114053 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0203 12:05:03.114057 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0203 12:05:03.114061 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' 
detected.\\\\nF0203 12:05:03.116858 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60992833ab89aebd2446f960ab6ae3565f9a17f4e86921d3b2bc68adc9704640\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b34fc810cf94c235cea364cd2441c73f56597b91b0a19b8e02498db5895a3f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b34fc810cf94c235cea364cd2441c73f56597b91b0a19b8e02498db5895a3f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:04:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:04:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:18Z is after 2025-08-24T17:21:41Z"
Feb 03 12:05:18 crc kubenswrapper[4820]: I0203 12:05:18.144409 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 12:05:18 crc kubenswrapper[4820]: I0203 12:05:18.144474 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 12:05:18 crc kubenswrapper[4820]: I0203 12:05:18.144484 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 12:05:18 crc kubenswrapper[4820]: I0203 12:05:18.144498 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 12:05:18 crc kubenswrapper[4820]: I0203 12:05:18.144508 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:18Z","lastTransitionTime":"2026-02-03T12:05:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 12:05:18 crc kubenswrapper[4820]: I0203 12:05:18.158227 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0a8e1f86d24e8f09d6250cc9003db189b705c0285fbaab40a8f1a4ac4793137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:18Z is after 2025-08-24T17:21:41Z"
Feb 03 12:05:18 crc kubenswrapper[4820]: I0203 12:05:18.160624 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jbjk4\" (UniqueName: \"kubernetes.io/projected/6351e457-e601-4889-853c-560646bc4b43-kube-api-access-jbjk4\") pod \"network-metrics-daemon-7vz6k\" (UID: \"6351e457-e601-4889-853c-560646bc4b43\") " pod="openshift-multus/network-metrics-daemon-7vz6k"
Feb 03 12:05:18 crc kubenswrapper[4820]: I0203 12:05:18.160700 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/6351e457-e601-4889-853c-560646bc4b43-metrics-certs\") pod \"network-metrics-daemon-7vz6k\" (UID: \"6351e457-e601-4889-853c-560646bc4b43\") " pod="openshift-multus/network-metrics-daemon-7vz6k" Feb 03 12:05:18 crc kubenswrapper[4820]: I0203 12:05:18.173354 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:18Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:18 crc kubenswrapper[4820]: I0203 12:05:18.185820 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"message\\\":\\\"containers with 
unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:18Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:18 crc kubenswrapper[4820]: I0203 12:05:18.196267 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p5mx8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe0bc53e-6abb-4194-ae3d-109a4fd80372\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9eca081cbf8145867bf5df6d6bacdb9746e546e54aa81874fbfc2927958bc1b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrchn\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p5mx8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:18Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:18 crc kubenswrapper[4820]: I0203 12:05:18.212324 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-75mwm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf99e305-aa5b-4171-94f6-1e64f20414dd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c998566abb62f49bb7527ea2c4b023c87a9e3db89ff5fa25b68896424924ea76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5b0072b2eccb422316702a70fb54f21fcd020e1627a625d1a7c63d46b71e48c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e8e47b3a2a50b4eccff391ccb45bf979a5fe46983b014cb03f465db928172ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0defe98bc51b1f567900f2d90138ceec07b4cae414fd3b448df503dba9a9460f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a67f31bdc8d6710cfbca063e2a811d4fbace42a9d3c3a53fba0f47c30ee00e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8b54903e9478f26cfa1770a0e06298325e9b9f198a3aeaa8ed6dcc192c0cdb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://555c29829adf28ac93542af1cb8ab28456a9414b
416a9913602ee1dbd0bc27a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3fe28479c3e6fe91f32b394f521e23bfd1cd3d0d7f0d053c200ea4b020f09382\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T12:05:14Z\\\",\\\"message\\\":\\\"42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-authentication/oauth-openshift]} name:Service_openshift-authentication/oauth-openshift_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.222:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c0c2f725-e461-454e-a88c-c8350d62e1ef}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0203 12:05:14.573188 6106 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0203 12:05:14.573149 6106 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-apiserver-operator/metrics\\\\\\\"}\\\\nI0203 12:05:14.573229 6106 services_controller.go:360] Finished syncing service metrics on namespace openshift-apiserver-operator for network=default : 713.096µs\\\\nI0203 12:05:14.573566 6106 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0203 12:05:14.573602 6106 ovnkube.go:599] Stopped ovnkube\\\\nI0203 12:05:14.573620 6106 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0203 12:05:14.573675 6106 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:11Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://555c29829adf28ac93542af1cb8ab28456a9414b416a9913602ee1dbd0bc27a6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T12:05:16Z\\\",\\\"message\\\":\\\" Retry successful for *v1.Pod openshift-machine-config-operator/machine-config-daemon-qj7xr after 0 failed attempt(s)\\\\nI0203 12:05:16.132830 6249 services_controller.go:360] Finished syncing service check-endpoints on namespace openshift-apiserver for network=default : 997.232µs\\\\nI0203 12:05:16.132706 6249 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0203 12:05:16.132841 6249 services_controller.go:356] Processing sync for service openshift-image-registry/image-registry for network=default\\\\nI0203 12:05:16.132752 6249 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-oauth-apiserver/api]} name:Service_openshift-oauth-apiserver/api_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.140:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where 
column _uuid == {fe46cb89-4e54-4175-a112-1c5224cd299e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0203 12:05:16.132849 6249 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c9f050db727793c24f2f8bf3b387813e313f3b764be003c6849947f98a61d4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"
Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://707b3fd62e0f11f4f22464594158f7f1bbcbb9b37c1e49c70e2a0d1a6b5b07c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://707b3fd62e0f11f4f22464594158f7f1bbcbb9b37c1e49c70e2a0d1a6b5b07c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-75mwm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:18Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:18 crc kubenswrapper[4820]: I0203 12:05:18.220344 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-z8xrk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1cca669-281b-4756-8da8-3860684d3410\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09a38521ccb70b0b152fc51267f6d38cca610e2e8c9d12915482118f3a431b22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dxwqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:09Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-z8xrk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:18Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:18 crc kubenswrapper[4820]: I0203 12:05:18.235973 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a4e0669-c148-4af8-99eb-1de50da6d574\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d6f3a37b440a1db10c04386289033a0deab673f71efbdfba81597baf6ce5e86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fec48b5ddf675fb07150ebb88962534caead99f26403cf8ab51c17e2c6c09bf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a4c1209e1301b6e55cd4ea8bb06678ee4c7657c163a610f0cc2dd25f2cb3e61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75e1f7cf1c2120c222793819705a3b465a387c2
aaf33fe146fd46d9c76520aea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12446f64d4189185cf6fe19d38637e090482ef8da5fb5e68600cc00ba37bc2a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bb0a6b382c36acc46be9258e93be71317c26777d770cde9098402054303947c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bb0a6b382c36acc46be9258e93be71317c26777d770cde9098402054303947c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:04:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19153b79978b8e8cc53a7ab9cb95a70c9870c6b0babac711bce13809609f6ac5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19153b79978b8e8cc53a7ab9cb95a70c9870c6b0babac711bce13809609f6ac5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e75ec4662cf007f64da3ecaaff0cb4c45aa88746ff717403092583356953526\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e75ec4662cf007f64da3ecaaff0cb4c45aa88746ff717403092583356953526\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:04:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:04:43Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:18Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:18 crc kubenswrapper[4820]: I0203 12:05:18.247081 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:18 crc kubenswrapper[4820]: I0203 12:05:18.247126 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:18 crc kubenswrapper[4820]: I0203 12:05:18.247134 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:18 crc kubenswrapper[4820]: I0203 12:05:18.247149 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:18 crc kubenswrapper[4820]: I0203 12:05:18.247158 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:18Z","lastTransitionTime":"2026-02-03T12:05:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:05:18 crc kubenswrapper[4820]: I0203 12:05:18.248061 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cabb5864-1cc1-4b08-aed3-4eeee9e85bf9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d070e438f7f207c929ca6d110046fa8d320d1a7d97ef22a0e5e1372ff626729d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06e5365791987f1586b93dfe4db8bfe19415d8c400d6c0d75f312cc0f62f7661\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb5d088ef2ec1b90ef2d7af53d61c2789516bf4ef02e9ef06e1dc0c93326dcc9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2783e3045fb1f1e8da75e1a83f88bc0f24ce47eab79fb2b31b794c10f201588\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:04:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:18Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:18 crc kubenswrapper[4820]: I0203 12:05:18.260049 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bbd1daf12ccc77f2658cac173f36c9af6a1c64072fc6bf13302a460f71f05e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5236ae101bb18d9bd1b8d8ac09705f96e5fd78d742e5174e88072570a7448ee8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:18Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:18 crc kubenswrapper[4820]: I0203 12:05:18.261442 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6351e457-e601-4889-853c-560646bc4b43-metrics-certs\") pod \"network-metrics-daemon-7vz6k\" (UID: \"6351e457-e601-4889-853c-560646bc4b43\") " pod="openshift-multus/network-metrics-daemon-7vz6k" Feb 03 12:05:18 crc kubenswrapper[4820]: I0203 12:05:18.261502 4820 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jbjk4\" (UniqueName: \"kubernetes.io/projected/6351e457-e601-4889-853c-560646bc4b43-kube-api-access-jbjk4\") pod \"network-metrics-daemon-7vz6k\" (UID: \"6351e457-e601-4889-853c-560646bc4b43\") " pod="openshift-multus/network-metrics-daemon-7vz6k" Feb 03 12:05:18 crc kubenswrapper[4820]: E0203 12:05:18.261613 4820 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 03 12:05:18 crc kubenswrapper[4820]: E0203 12:05:18.261681 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6351e457-e601-4889-853c-560646bc4b43-metrics-certs podName:6351e457-e601-4889-853c-560646bc4b43 nodeName:}" failed. No retries permitted until 2026-02-03 12:05:18.76166361 +0000 UTC m=+36.284739474 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6351e457-e601-4889-853c-560646bc4b43-metrics-certs") pod "network-metrics-daemon-7vz6k" (UID: "6351e457-e601-4889-853c-560646bc4b43") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 03 12:05:18 crc kubenswrapper[4820]: I0203 12:05:18.272255 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-b5qz9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d93ec7bc-4029-44a4-894d-03eff1388683\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8c2ef0f942d970aadc1837e5ab6eac2bdee9ab81c7b852514fc142796328a62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b171b1b37fa4899684005dd60f249e4cdc2ac58cb12658abe09d6d17007f3002\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b14472
35388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b171b1b37fa4899684005dd60f249e4cdc2ac58cb12658abe09d6d17007f3002\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0a45a325e96479b1bb5913e1a78221c3b88c62fc434441e03d6b2e9ed476700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0a45a325e96479b1bb5913e1a78221c3b88c62fc434441e03d6b2e9ed476700\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a25830d512e5d0dbbfa4775f85af51e80915adca718997b34e926220ff9377d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a25830d512e5d0dbbfa4775f85af51e80915adca718997b34e926220ff9377d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\
",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff3da1549bd61df7cc473a1a0a17f147b940c3f88f0b72828357ea7982406088\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff3da1549bd61df7cc473a1a0a17f147b940c3f88f0b72828357ea7982406088\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a1698d43c6b74e8ebb9ff13da372c4025ac2637c1f5a0376e4a998e7bf20359\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a1698d43c6b74e8ebb9ff13da372c4025ac2637c1f5a0376e4a998e7bf20359\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8af3494a09f6242fa8fab8cba8b3db5476625376c0f8f382e2140ca27ef329f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8af3494a09f6242fa8fab8cba8b3db5476625376c0f8f382e2140ca27ef329f3\\\",\\\"exitCode\\\":0,\\\"fi
nishedAt\\\":\\\"2026-02-03T12:05:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-b5qz9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:18Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:18 crc kubenswrapper[4820]: I0203 12:05:18.277526 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jbjk4\" (UniqueName: \"kubernetes.io/projected/6351e457-e601-4889-853c-560646bc4b43-kube-api-access-jbjk4\") pod \"network-metrics-daemon-7vz6k\" (UID: \"6351e457-e601-4889-853c-560646bc4b43\") " pod="openshift-multus/network-metrics-daemon-7vz6k" Feb 03 12:05:18 crc kubenswrapper[4820]: I0203 12:05:18.284287 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8bbpp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d8005fd9-8efc-4707-a3dd-60cd20607d42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xg9sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xg9sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8bbpp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:18Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:18 crc kubenswrapper[4820]: I0203 12:05:18.295973 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7vz6k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6351e457-e601-4889-853c-560646bc4b43\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbjk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbjk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:18Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7vz6k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:18Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:18 crc kubenswrapper[4820]: I0203 12:05:18.349371 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:18 crc kubenswrapper[4820]: I0203 12:05:18.349406 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:18 crc kubenswrapper[4820]: I0203 12:05:18.349414 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:18 crc kubenswrapper[4820]: I0203 12:05:18.349427 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:18 crc kubenswrapper[4820]: I0203 12:05:18.349436 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:18Z","lastTransitionTime":"2026-02-03T12:05:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:05:18 crc kubenswrapper[4820]: I0203 12:05:18.392812 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8bbpp" event={"ID":"d8005fd9-8efc-4707-a3dd-60cd20607d42","Type":"ContainerStarted","Data":"a0333755fe31b49cea4cbfef987450b5800b3330c65cf84070036eea4194c13a"} Feb 03 12:05:18 crc kubenswrapper[4820]: I0203 12:05:18.394982 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-75mwm_cf99e305-aa5b-4171-94f6-1e64f20414dd/ovnkube-controller/1.log" Feb 03 12:05:18 crc kubenswrapper[4820]: I0203 12:05:18.398633 4820 scope.go:117] "RemoveContainer" containerID="555c29829adf28ac93542af1cb8ab28456a9414b416a9913602ee1dbd0bc27a6" Feb 03 12:05:18 crc kubenswrapper[4820]: E0203 12:05:18.398804 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-75mwm_openshift-ovn-kubernetes(cf99e305-aa5b-4171-94f6-1e64f20414dd)\"" pod="openshift-ovn-kubernetes/ovnkube-node-75mwm" podUID="cf99e305-aa5b-4171-94f6-1e64f20414dd" Feb 03 12:05:18 crc kubenswrapper[4820]: I0203 12:05:18.407067 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cabb5864-1cc1-4b08-aed3-4eeee9e85bf9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d070e438f7f207c929ca6d110046fa8d320d1a7d97ef22a0e5e1372ff626729d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06e5365791987f1586b93dfe4db8bfe19415d8c400d6c0d75f312cc0f62f7661\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb5d088ef2ec1b90ef2d7af53d61c2789516bf4ef02e9ef06e1dc0c93326dcc9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2783e3045fb1f1e8da75e1a83f88bc0f24ce47eab79fb2b31b794c10f201588\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:04:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:18Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:18 crc kubenswrapper[4820]: I0203 12:05:18.420074 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bbd1daf12ccc77f2658cac173f36c9af6a1c64072fc6bf13302a460f71f05e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5236ae101bb18d9bd1b8d8ac09705f96e5fd78d742e5174e88072570a7448ee8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:18Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:18 crc kubenswrapper[4820]: I0203 12:05:18.435214 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-b5qz9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d93ec7bc-4029-44a4-894d-03eff1388683\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8c2ef0f942d970aadc1837e5ab6eac2bdee9ab81c7b852514fc142796328a62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b171b1b37fa4899684005dd60f249e4cdc2ac58cb12658abe09d6d17007f3002\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b171b1b37fa4899684005dd60f249e4cdc2ac58cb12658abe09d6d17007f3002\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0a45a325e96479b1bb5913e1a78221c3b88c62fc434441e03d6b2e9ed476700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0a45a325e96479b1bb5913e1a78221c3b88c62fc434441e03d6b2e9ed476700\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a25830d512e5d0dbbfa4775f85af51e80915adca718997b34e926220ff9377d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a25830d512e5d0dbbfa4775f85af51e80915adca718997b34e926220ff9377d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff3da1549bd61df7cc473a1a0a17f147b940c3f88f0b72828357ea7982406088\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff3da1549bd61df7cc473a1a0a17f147b940c3f88f0b72828357ea7982406088\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a1698d43c6b74e8ebb9ff13da372c4025ac2637c1f5a0376e4a998e7bf20359\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a1698d43c6b74e8ebb9ff13da372c4025ac2637c1f5a0376e4a998e7bf20359\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8af3494a09f6242fa8fab8cba8b3db5476625376c0f8f382e2140ca27ef329f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8af3494a09f6242fa8fab8cba8b3db5476625376c0f8f382e2140ca27ef329f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-b5qz9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:18Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:18 crc kubenswrapper[4820]: I0203 12:05:18.446582 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8bbpp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d8005fd9-8efc-4707-a3dd-60cd20607d42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9020e42fc6a5c6c681b1854e93cc1e88f44b7582e28fe1ee40900bb2d39b4008\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xg9sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0333755fe31b49cea4cbfef987450b5800b3330c65cf84070036eea4194c13a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xg9sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8bbpp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:18Z is after 2025-08-24T17:21:41Z" Feb 03 
12:05:18 crc kubenswrapper[4820]: I0203 12:05:18.451450 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:18 crc kubenswrapper[4820]: I0203 12:05:18.451491 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:18 crc kubenswrapper[4820]: I0203 12:05:18.451499 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:18 crc kubenswrapper[4820]: I0203 12:05:18.451516 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:18 crc kubenswrapper[4820]: I0203 12:05:18.451525 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:18Z","lastTransitionTime":"2026-02-03T12:05:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:18 crc kubenswrapper[4820]: I0203 12:05:18.459382 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7vz6k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6351e457-e601-4889-853c-560646bc4b43\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbjk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbjk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:18Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7vz6k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:18Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:18 crc kubenswrapper[4820]: I0203 12:05:18.472345 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dac06e9d9db1822ef050113f787ff46db678f4916bc9817ac03f61a509a61b6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:18Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:18 crc kubenswrapper[4820]: I0203 12:05:18.482437 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:18Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:18 crc kubenswrapper[4820]: I0203 12:05:18.494263 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0a8e1f86d24e8f09d6250cc9003db189b705c0285fbaab40a8f1a4ac4793137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:18Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:18 crc kubenswrapper[4820]: I0203 12:05:18.505787 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:18Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:18 crc kubenswrapper[4820]: I0203 12:05:18.518565 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:18Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:18 crc kubenswrapper[4820]: I0203 12:05:18.529047 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p5mx8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe0bc53e-6abb-4194-ae3d-109a4fd80372\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9eca081cbf8145867bf5df6d6bacdb9746e546e54aa81874fbfc2927958bc1b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrchn\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p5mx8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:18Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:18 crc kubenswrapper[4820]: I0203 12:05:18.539442 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c02def6-29f2-448e-80ec-0c8ee290f053\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1643e498a576d565172492302a641bc81eae658b1611df707da1d833ea2c84f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xjx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e3612cc50c75efc4721ade62e3caf8a6cbcb41c49183b50da3639d7fce97b9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serv
iceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xjx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qj7xr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:18Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:18 crc kubenswrapper[4820]: I0203 12:05:18.554422 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:18 crc kubenswrapper[4820]: I0203 12:05:18.554488 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:18 crc kubenswrapper[4820]: I0203 12:05:18.554501 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:18 crc kubenswrapper[4820]: I0203 12:05:18.554519 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:18 crc kubenswrapper[4820]: I0203 12:05:18.554530 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:18Z","lastTransitionTime":"2026-02-03T12:05:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:05:18 crc kubenswrapper[4820]: I0203 12:05:18.573380 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dkfwm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6da6dd5-2847-482b-adc1-d82ead0af3e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b50e93f1825fc82dc2e0fc70a417d2e4412db367319656bfcea8fa9daf9fe152\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpcc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dkfwm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:18Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:18 crc kubenswrapper[4820]: I0203 12:05:18.598434 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d140e30-6304-49be-a1a3-2d6b23f9aef3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48852428653b274f524077be71e20c4271b365d821b8f3235df2d2b41a9e8af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09641bd2ac341aeecdba4211412070d2ee33f42eb670fe75ab88b9a58931c289\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74f9a4c19a0ffe2d3fc0a77b57873721a29f0a1f70d
578ba2f68bcceb68ccce2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38554e01896341101d88815c0d8cd45a0114da19790e02a53b5ba3a91ee70e37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38554e01896341101d88815c0d8cd45a0114da19790e02a53b5ba3a91ee70e37\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"message\\\":\\\"le observer\\\\nW0203 12:05:02.811560 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0203 12:05:02.811703 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 12:05:02.814694 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3622958325/tls.crt::/tmp/serving-cert-3622958325/tls.key\\\\\\\"\\\\nI0203 12:05:03.097815 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0203 12:05:03.106823 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0203 12:05:03.106852 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0203 12:05:03.106874 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0203 12:05:03.106880 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0203 12:05:03.113994 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0203 12:05:03.114027 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0203 12:05:03.114019 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0203 12:05:03.114034 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 12:05:03.114043 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0203 12:05:03.114053 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0203 12:05:03.114057 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0203 12:05:03.114061 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0203 12:05:03.116858 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60992833ab89aebd2446f960ab6ae3565f9a17f4e86921d3b2bc68adc9704640\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b34fc810cf94c235cea364cd2441c73f56597b91b0a19b8e02498db5895a3f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b34fc810cf94c235cea364cd2441c73f56597b91b0a19b8e02498db5895a3f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:04:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:04:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:18Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:18 crc kubenswrapper[4820]: I0203 12:05:18.612536 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-z8xrk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1cca669-281b-4756-8da8-3860684d3410\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09a38521ccb70b0b152fc51267f6d38cca610e2e8c9d12915482118f3a431b22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dxwqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:09Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-z8xrk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:18Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:18 crc kubenswrapper[4820]: I0203 12:05:18.630799 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-75mwm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf99e305-aa5b-4171-94f6-1e64f20414dd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c998566abb62f49bb7527ea2c4b023c87a9e3db89ff5fa25b68896424924ea76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5b0072b2eccb422316702a70fb54f21fcd020e1627a625d1a7c63d46b71e48c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e8e47b3a2a50b4eccff391ccb45bf979a5fe46983b014cb03f465db928172ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0defe98bc51b1f567900f2d90138ceec07b4cae414fd3b448df503dba9a9460f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a67f31bdc8d6710cfbca063e2a811d4fbace42a9d3c3a53fba0f47c30ee00e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8b54903e9478f26cfa1770a0e06298325e9b9f198a3aeaa8ed6dcc192c0cdb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://555c29829adf28ac93542af1cb8ab28456a9414b416a9913602ee1dbd0bc27a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3fe28479c3e6fe91f32b394f521e23bfd1cd3d0d7f0d053c200ea4b020f09382\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T12:05:14Z\\\",\\\"message\\\":\\\"42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-authentication/oauth-openshift]} name:Service_openshift-authentication/oauth-openshift_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.222:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c0c2f725-e461-454e-a88c-c8350d62e1ef}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0203 12:05:14.573188 6106 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0203 12:05:14.573149 6106 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-apiserver-operator/metrics\\\\\\\"}\\\\nI0203 12:05:14.573229 6106 services_controller.go:360] Finished syncing service metrics on namespace openshift-apiserver-operator for network=default : 713.096µs\\\\nI0203 12:05:14.573566 6106 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0203 12:05:14.573602 6106 ovnkube.go:599] Stopped ovnkube\\\\nI0203 12:05:14.573620 6106 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0203 12:05:14.573675 6106 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:11Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://555c29829adf28ac93542af1cb8ab28456a9414b416a9913602ee1dbd0bc27a6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T12:05:16Z\\\",\\\"message\\\":\\\" Retry successful for *v1.Pod openshift-machine-config-operator/machine-config-daemon-qj7xr after 0 failed attempt(s)\\\\nI0203 12:05:16.132830 6249 services_controller.go:360] Finished syncing service check-endpoints on namespace openshift-apiserver for network=default : 997.232µs\\\\nI0203 12:05:16.132706 6249 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0203 12:05:16.132841 6249 services_controller.go:356] Processing sync for service openshift-image-registry/image-registry for network=default\\\\nI0203 12:05:16.132752 6249 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-oauth-apiserver/api]} name:Service_openshift-oauth-apiserver/api_TCP_cluster 
options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.140:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {fe46cb89-4e54-4175-a112-1c5224cd299e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0203 12:05:16.132849 6249 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c9f050db727793c24f2f8bf3b387813e313f3b764be003c6849947f98a61d4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"
/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://707b3fd62e0f11f4f22464594158f7f1bbcbb9b37c1e49c70e2a0d1a6b5b07c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://707b3fd62e0f11f4f22464594158f7f1bbcbb9b37c1e49c70e2a0d1a6b5b07c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-75mwm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:18Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:18 crc kubenswrapper[4820]: I0203 12:05:18.648283 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a4e0669-c148-4af8-99eb-1de50da6d574\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d6f3a37b440a1db10c04386289033a0deab673f71efbdfba81597baf6ce5e86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fec48b5ddf675fb07150ebb88962534caead99f26403cf8ab51c17e2c6c09bf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a4c1209e1301b6e55cd4ea8bb06678ee4c7657c163a610f0cc2dd25f2cb3e61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75e1f7cf1c2120c222793819705a3b465a387c2
aaf33fe146fd46d9c76520aea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12446f64d4189185cf6fe19d38637e090482ef8da5fb5e68600cc00ba37bc2a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bb0a6b382c36acc46be9258e93be71317c26777d770cde9098402054303947c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bb0a6b382c36acc46be9258e93be71317c26777d770cde9098402054303947c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:04:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19153b79978b8e8cc53a7ab9cb95a70c9870c6b0babac711bce13809609f6ac5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19153b79978b8e8cc53a7ab9cb95a70c9870c6b0babac711bce13809609f6ac5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e75ec4662cf007f64da3ecaaff0cb4c45aa88746ff717403092583356953526\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e75ec4662cf007f64da3ecaaff0cb4c45aa88746ff717403092583356953526\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:04:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:04:43Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:18Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:18 crc kubenswrapper[4820]: I0203 12:05:18.656595 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:18 crc kubenswrapper[4820]: I0203 12:05:18.656640 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:18 crc kubenswrapper[4820]: I0203 12:05:18.656654 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:18 crc kubenswrapper[4820]: I0203 12:05:18.656669 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:18 crc kubenswrapper[4820]: I0203 12:05:18.656679 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:18Z","lastTransitionTime":"2026-02-03T12:05:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:05:18 crc kubenswrapper[4820]: I0203 12:05:18.660936 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cabb5864-1cc1-4b08-aed3-4eeee9e85bf9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d070e438f7f207c929ca6d110046fa8d320d1a7d97ef22a0e5e1372ff626729d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06e5365791987f1586b93dfe4db8bfe19415d8c400d6c0d75f312cc0f62f7661\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb5d088ef2ec1b90ef2d7af53d61c2789516bf4ef02e9ef06e1dc0c93326dcc9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2783e3045fb1f1e8da75e1a83f88bc0f24ce47eab79fb2b31b794c10f201588\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:04:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:18Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:18 crc kubenswrapper[4820]: I0203 12:05:18.671879 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bbd1daf12ccc77f2658cac173f36c9af6a1c64072fc6bf13302a460f71f05e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5236ae101bb18d9bd1b8d8ac09705f96e5fd78d742e5174e88072570a7448ee8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:18Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:18 crc kubenswrapper[4820]: I0203 12:05:18.689196 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-b5qz9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d93ec7bc-4029-44a4-894d-03eff1388683\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8c2ef0f942d970aadc1837e5ab6eac2bdee9ab81c7b852514fc142796328a62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b171b1b37fa4899684005dd60f249e4cdc2ac58cb12658abe09d6d17007f3002\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b171b1b37fa4899684005dd60f249e4cdc2ac58cb12658abe09d6d17007f3002\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0a45a325e96479b1bb5913e1a78221c3b88c62fc434441e03d6b2e9ed476700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0a45a325e96479b1bb5913e1a78221c3b88c62fc434441e03d6b2e9ed476700\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a25830d512e5d0dbbfa4775f85af51e80915adca718997b34e926220ff9377d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a25830d512e5d0dbbfa4775f85af51e80915adca718997b34e926220ff9377d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff3da1549bd61df7cc473a1a0a17f147b940c3f88f0b72828357ea7982406088\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff3da1549bd61df7cc473a1a0a17f147b940c3f88f0b72828357ea7982406088\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a1698d43c6b74e8ebb9ff13da372c4025ac2637c1f5a0376e4a998e7bf20359\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a1698d43c6b74e8ebb9ff13da372c4025ac2637c1f5a0376e4a998e7bf20359\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8af3494a09f6242fa8fab8cba8b3db5476625376c0f8f382e2140ca27ef329f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8af3494a09f6242fa8fab8cba8b3db5476625376c0f8f382e2140ca27ef329f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-b5qz9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:18Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:18 crc kubenswrapper[4820]: I0203 12:05:18.698450 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8bbpp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d8005fd9-8efc-4707-a3dd-60cd20607d42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9020e42fc6a5c6c681b1854e93cc1e88f44b7582e28fe1ee40900bb2d39b4008\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xg9sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0333755fe31b49cea4cbfef987450b5800b3330c65cf84070036eea4194c13a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xg9sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8bbpp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:18Z is after 2025-08-24T17:21:41Z" Feb 03 
12:05:18 crc kubenswrapper[4820]: I0203 12:05:18.708343 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7vz6k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6351e457-e601-4889-853c-560646bc4b43\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbjk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbjk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:18Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7vz6k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:18Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:18 crc kubenswrapper[4820]: I0203 12:05:18.719587 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:18Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:18 crc kubenswrapper[4820]: I0203 12:05:18.729358 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dac06e9d9db1822ef050113f787ff46db678f4916bc9817ac03f61a509a61b6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:18Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:18 crc kubenswrapper[4820]: I0203 12:05:18.742586 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c02def6-29f2-448e-80ec-0c8ee290f053\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1643e498a576d565172492302a641bc81eae658b1611df707da1d833ea2c84f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xjx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e3612cc50c75efc4721ade62e3caf8a6cbcb41c49183b50da3639d7fce97b9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xjx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qj7xr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:18Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:18 crc kubenswrapper[4820]: I0203 12:05:18.758204 4820 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-dkfwm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6da6dd5-2847-482b-adc1-d82ead0af3e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b50e93f1825fc82dc2e0fc70a417d2e4412db367319656bfcea8fa9daf9fe152\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpcc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:05Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-dkfwm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:18Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:18 crc kubenswrapper[4820]: I0203 12:05:18.759031 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:18 crc kubenswrapper[4820]: I0203 12:05:18.759075 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:18 crc kubenswrapper[4820]: I0203 12:05:18.759087 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:18 crc kubenswrapper[4820]: I0203 12:05:18.759103 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:18 crc kubenswrapper[4820]: I0203 12:05:18.759115 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:18Z","lastTransitionTime":"2026-02-03T12:05:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:18 crc kubenswrapper[4820]: I0203 12:05:18.766475 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 12:05:18 crc kubenswrapper[4820]: I0203 12:05:18.766606 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6351e457-e601-4889-853c-560646bc4b43-metrics-certs\") pod \"network-metrics-daemon-7vz6k\" (UID: \"6351e457-e601-4889-853c-560646bc4b43\") " pod="openshift-multus/network-metrics-daemon-7vz6k" Feb 03 12:05:18 crc kubenswrapper[4820]: E0203 12:05:18.766783 4820 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 03 12:05:18 crc kubenswrapper[4820]: E0203 12:05:18.766847 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6351e457-e601-4889-853c-560646bc4b43-metrics-certs podName:6351e457-e601-4889-853c-560646bc4b43 nodeName:}" failed. No retries permitted until 2026-02-03 12:05:19.766828477 +0000 UTC m=+37.289904341 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6351e457-e601-4889-853c-560646bc4b43-metrics-certs") pod "network-metrics-daemon-7vz6k" (UID: "6351e457-e601-4889-853c-560646bc4b43") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 03 12:05:18 crc kubenswrapper[4820]: E0203 12:05:18.767051 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-03 12:05:34.767037132 +0000 UTC m=+52.290113006 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:05:18 crc kubenswrapper[4820]: I0203 12:05:18.773452 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d140e30-6304-49be-a1a3-2d6b23f9aef3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48852428653b274f524077be71e20c4271b365d821b8f3235df2d2b41a9e8af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09641bd2ac341aeecdba4211412070d2ee33f42eb670fe75ab88b9a58931c289\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir
\\\"}]},{\\\"containerID\\\":\\\"cri-o://74f9a4c19a0ffe2d3fc0a77b57873721a29f0a1f70d578ba2f68bcceb68ccce2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38554e01896341101d88815c0d8cd45a0114da19790e02a53b5ba3a91ee70e37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38554e01896341101d88815c0d8cd45a0114da19790e02a53b5ba3a91ee70e37\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"message\\\":\\\"le observer\\\\nW0203 12:05:02.811560 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0203 12:05:02.811703 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 12:05:02.814694 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3622958325/tls.crt::/tmp/serving-cert-3622958325/tls.key\\\\\\\"\\\\nI0203 12:05:03.097815 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0203 12:05:03.106823 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0203 12:05:03.106852 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0203 12:05:03.106874 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0203 12:05:03.106880 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0203 12:05:03.113994 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0203 12:05:03.114027 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0203 12:05:03.114019 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0203 12:05:03.114034 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 12:05:03.114043 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0203 12:05:03.114053 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0203 12:05:03.114057 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0203 12:05:03.114061 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0203 12:05:03.116858 
1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60992833ab89aebd2446f960ab6ae3565f9a17f4e86921d3b2bc68adc9704640\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b34fc810cf94c235cea364cd2441c73f56597b91b0a19b8e02498db5895a3f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b34fc810cf94c235cea364cd2441c73f56597b91b0a19b8e02498db5895a3f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:04:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:04:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:18Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:18 crc kubenswrapper[4820]: I0203 12:05:18.791086 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0a8e1f86d24e8f09d6250cc9003db189b705c0285fbaab40a8f1a4ac4793137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:18Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:18 crc kubenswrapper[4820]: I0203 12:05:18.809220 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:18Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:18 crc kubenswrapper[4820]: I0203 12:05:18.826758 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:18Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:18 crc kubenswrapper[4820]: I0203 12:05:18.840855 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p5mx8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe0bc53e-6abb-4194-ae3d-109a4fd80372\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9eca081cbf8145867bf5df6d6bacdb9746e546e54aa81874fbfc2927958bc1b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrchn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p5mx8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:18Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:18 crc kubenswrapper[4820]: I0203 12:05:18.862261 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:18 crc kubenswrapper[4820]: I0203 12:05:18.862309 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:18 crc kubenswrapper[4820]: I0203 12:05:18.862318 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:18 crc kubenswrapper[4820]: I0203 12:05:18.862334 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:18 crc kubenswrapper[4820]: I0203 12:05:18.862344 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:18Z","lastTransitionTime":"2026-02-03T12:05:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:18 crc kubenswrapper[4820]: I0203 12:05:18.863507 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-75mwm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf99e305-aa5b-4171-94f6-1e64f20414dd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c998566abb62f49bb7527ea2c4b023c87a9e3db89ff5fa25b68896424924ea76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5b0072b2eccb422316702a70fb54f21fcd020e1627a625d1a7c63d46b71e48c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e8e47b3a2a50b4eccff391ccb45bf979a5fe46983b014cb03f465db928172ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0defe98bc51b1f567900f2d90138ceec07b4cae414fd3b448df503dba9a9460f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a67f31bdc8d6710cfbca063e2a811d4fbace42a9d3c3a53fba0f47c30ee00e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8b54903e9478f26cfa1770a0e06298325e9b9f198a3aeaa8ed6dcc192c0cdb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://555c29829adf28ac93542af1cb8ab28456a9414b
416a9913602ee1dbd0bc27a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://555c29829adf28ac93542af1cb8ab28456a9414b416a9913602ee1dbd0bc27a6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T12:05:16Z\\\",\\\"message\\\":\\\" Retry successful for *v1.Pod openshift-machine-config-operator/machine-config-daemon-qj7xr after 0 failed attempt(s)\\\\nI0203 12:05:16.132830 6249 services_controller.go:360] Finished syncing service check-endpoints on namespace openshift-apiserver for network=default : 997.232µs\\\\nI0203 12:05:16.132706 6249 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0203 12:05:16.132841 6249 services_controller.go:356] Processing sync for service openshift-image-registry/image-registry for network=default\\\\nI0203 12:05:16.132752 6249 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-oauth-apiserver/api]} name:Service_openshift-oauth-apiserver/api_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.140:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {fe46cb89-4e54-4175-a112-1c5224cd299e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0203 12:05:16.132849 6249 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:15Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-75mwm_openshift-ovn-kubernetes(cf99e305-aa5b-4171-94f6-1e64f20414dd)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c9f050db727793c24f2f8bf3b387813e313f3b764be003c6849947f98a61d4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://707b3fd62e0f11f4f22464594158f7f1bbcbb9b37c1e49c70e2a0d1a6b5b07c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://707b3fd62e0f11f4f22464594158f7f1bbcbb9b37c1e49c70e2a0d1a6b5b07c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-75mwm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:18Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:18 crc kubenswrapper[4820]: I0203 12:05:18.868027 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 12:05:18 crc kubenswrapper[4820]: I0203 12:05:18.868085 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 12:05:18 crc kubenswrapper[4820]: I0203 12:05:18.868124 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 12:05:18 crc kubenswrapper[4820]: I0203 12:05:18.868150 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 12:05:18 crc kubenswrapper[4820]: E0203 12:05:18.868252 4820 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 03 12:05:18 crc kubenswrapper[4820]: E0203 12:05:18.868316 4820 projected.go:288] 
Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 03 12:05:18 crc kubenswrapper[4820]: E0203 12:05:18.868361 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-03 12:05:34.868331357 +0000 UTC m=+52.391407261 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 03 12:05:18 crc kubenswrapper[4820]: E0203 12:05:18.868366 4820 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 03 12:05:18 crc kubenswrapper[4820]: E0203 12:05:18.868376 4820 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 03 12:05:18 crc kubenswrapper[4820]: E0203 12:05:18.868395 4820 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 03 12:05:18 crc kubenswrapper[4820]: E0203 12:05:18.868482 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-03 12:05:34.868454729 +0000 UTC m=+52.391530703 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 03 12:05:18 crc kubenswrapper[4820]: E0203 12:05:18.868324 4820 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 03 12:05:18 crc kubenswrapper[4820]: E0203 12:05:18.868635 4820 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 03 12:05:18 crc kubenswrapper[4820]: E0203 12:05:18.868654 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-03 12:05:34.868607253 +0000 UTC m=+52.391683147 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 03 12:05:18 crc kubenswrapper[4820]: E0203 12:05:18.868661 4820 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 03 12:05:18 crc kubenswrapper[4820]: E0203 12:05:18.868753 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-03 12:05:34.868734516 +0000 UTC m=+52.391810420 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 03 12:05:18 crc kubenswrapper[4820]: I0203 12:05:18.879194 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-z8xrk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1cca669-281b-4756-8da8-3860684d3410\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09a38521ccb70b0b152fc51267f6d38cca610e2e8c9d12915482118f3a431b22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dxwqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:09Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-z8xrk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:18Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:18 crc kubenswrapper[4820]: I0203 12:05:18.899851 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a4e0669-c148-4af8-99eb-1de50da6d574\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d6f3a37b440a1db10c04386289033a0deab673f71efbdfba81597baf6ce5e86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fec48b5ddf675fb07150ebb88962534caead99f26403cf8ab51c17e2c6c09bf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a4c1209e1301b6e55cd4ea8bb06678ee4c7657c163a610f0cc2dd25f2cb3e61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75e1f7cf1c2120c222793819705a3b465a387c2
aaf33fe146fd46d9c76520aea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12446f64d4189185cf6fe19d38637e090482ef8da5fb5e68600cc00ba37bc2a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bb0a6b382c36acc46be9258e93be71317c26777d770cde9098402054303947c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bb0a6b382c36acc46be9258e93be71317c26777d770cde9098402054303947c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:04:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19153b79978b8e8cc53a7ab9cb95a70c9870c6b0babac711bce13809609f6ac5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19153b79978b8e8cc53a7ab9cb95a70c9870c6b0babac711bce13809609f6ac5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e75ec4662cf007f64da3ecaaff0cb4c45aa88746ff717403092583356953526\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e75ec4662cf007f64da3ecaaff0cb4c45aa88746ff717403092583356953526\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:04:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:04:43Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:18Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:18 crc kubenswrapper[4820]: I0203 12:05:18.964547 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:18 crc kubenswrapper[4820]: I0203 12:05:18.964587 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:18 crc kubenswrapper[4820]: I0203 12:05:18.964598 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:18 crc kubenswrapper[4820]: I0203 12:05:18.964613 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:18 crc kubenswrapper[4820]: I0203 12:05:18.964626 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:18Z","lastTransitionTime":"2026-02-03T12:05:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:05:18 crc kubenswrapper[4820]: I0203 12:05:18.997491 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:18 crc kubenswrapper[4820]: I0203 12:05:18.997535 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:18 crc kubenswrapper[4820]: I0203 12:05:18.997544 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:18 crc kubenswrapper[4820]: I0203 12:05:18.997562 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:18 crc kubenswrapper[4820]: I0203 12:05:18.997572 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:18Z","lastTransitionTime":"2026-02-03T12:05:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:19 crc kubenswrapper[4820]: E0203 12:05:19.012552 4820 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:05:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:05:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:05:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:05:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"83c4bcff-fd36-4e8a-96f0-3320ea01106a\\\",\\\"systemUUID\\\":\\\"a4221fcb-5776-4539-8cb5-9da3bff4d7a8\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:19Z is after 
2025-08-24T17:21:41Z" Feb 03 12:05:19 crc kubenswrapper[4820]: I0203 12:05:19.017370 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:19 crc kubenswrapper[4820]: I0203 12:05:19.017397 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:19 crc kubenswrapper[4820]: I0203 12:05:19.017406 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:19 crc kubenswrapper[4820]: I0203 12:05:19.017418 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:19 crc kubenswrapper[4820]: I0203 12:05:19.017427 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:19Z","lastTransitionTime":"2026-02-03T12:05:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:19 crc kubenswrapper[4820]: E0203 12:05:19.028791 4820 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:05:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:05:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:05:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:05:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"83c4bcff-fd36-4e8a-96f0-3320ea01106a\\\",\\\"systemUUID\\\":\\\"a4221fcb-5776-4539-8cb5-9da3bff4d7a8\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:19Z is after 
2025-08-24T17:21:41Z" Feb 03 12:05:19 crc kubenswrapper[4820]: I0203 12:05:19.033110 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:19 crc kubenswrapper[4820]: I0203 12:05:19.033172 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:19 crc kubenswrapper[4820]: I0203 12:05:19.033183 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:19 crc kubenswrapper[4820]: I0203 12:05:19.033206 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:19 crc kubenswrapper[4820]: I0203 12:05:19.033224 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:19Z","lastTransitionTime":"2026-02-03T12:05:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:19 crc kubenswrapper[4820]: E0203 12:05:19.048552 4820 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:05:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:05:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:05:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:05:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"83c4bcff-fd36-4e8a-96f0-3320ea01106a\\\",\\\"systemUUID\\\":\\\"a4221fcb-5776-4539-8cb5-9da3bff4d7a8\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:19Z is after 
2025-08-24T17:21:41Z" Feb 03 12:05:19 crc kubenswrapper[4820]: I0203 12:05:19.053166 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:19 crc kubenswrapper[4820]: I0203 12:05:19.053224 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:19 crc kubenswrapper[4820]: I0203 12:05:19.053243 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:19 crc kubenswrapper[4820]: I0203 12:05:19.053272 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:19 crc kubenswrapper[4820]: I0203 12:05:19.053295 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:19Z","lastTransitionTime":"2026-02-03T12:05:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:19 crc kubenswrapper[4820]: E0203 12:05:19.068510 4820 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:05:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:05:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:05:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:05:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"83c4bcff-fd36-4e8a-96f0-3320ea01106a\\\",\\\"systemUUID\\\":\\\"a4221fcb-5776-4539-8cb5-9da3bff4d7a8\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:19Z is after 
2025-08-24T17:21:41Z" Feb 03 12:05:19 crc kubenswrapper[4820]: I0203 12:05:19.073604 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:19 crc kubenswrapper[4820]: I0203 12:05:19.073646 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:19 crc kubenswrapper[4820]: I0203 12:05:19.073657 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:19 crc kubenswrapper[4820]: I0203 12:05:19.073677 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:19 crc kubenswrapper[4820]: I0203 12:05:19.073690 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:19Z","lastTransitionTime":"2026-02-03T12:05:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:19 crc kubenswrapper[4820]: E0203 12:05:19.088487 4820 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:05:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:05:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:05:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:05:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"83c4bcff-fd36-4e8a-96f0-3320ea01106a\\\",\\\"systemUUID\\\":\\\"a4221fcb-5776-4539-8cb5-9da3bff4d7a8\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:19Z is after 
2025-08-24T17:21:41Z" Feb 03 12:05:19 crc kubenswrapper[4820]: E0203 12:05:19.088656 4820 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 03 12:05:19 crc kubenswrapper[4820]: I0203 12:05:19.090347 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:19 crc kubenswrapper[4820]: I0203 12:05:19.090402 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:19 crc kubenswrapper[4820]: I0203 12:05:19.090413 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:19 crc kubenswrapper[4820]: I0203 12:05:19.090426 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:19 crc kubenswrapper[4820]: I0203 12:05:19.090435 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:19Z","lastTransitionTime":"2026-02-03T12:05:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:19 crc kubenswrapper[4820]: I0203 12:05:19.110633 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 06:34:13.018562581 +0000 UTC Feb 03 12:05:19 crc kubenswrapper[4820]: I0203 12:05:19.142749 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 12:05:19 crc kubenswrapper[4820]: I0203 12:05:19.142846 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 12:05:19 crc kubenswrapper[4820]: I0203 12:05:19.142857 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 12:05:19 crc kubenswrapper[4820]: E0203 12:05:19.142969 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 12:05:19 crc kubenswrapper[4820]: E0203 12:05:19.143102 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 12:05:19 crc kubenswrapper[4820]: I0203 12:05:19.143169 4820 scope.go:117] "RemoveContainer" containerID="38554e01896341101d88815c0d8cd45a0114da19790e02a53b5ba3a91ee70e37" Feb 03 12:05:19 crc kubenswrapper[4820]: E0203 12:05:19.143198 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 12:05:19 crc kubenswrapper[4820]: I0203 12:05:19.193379 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:19 crc kubenswrapper[4820]: I0203 12:05:19.193445 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:19 crc kubenswrapper[4820]: I0203 12:05:19.193460 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:19 crc kubenswrapper[4820]: I0203 12:05:19.193481 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:19 crc kubenswrapper[4820]: I0203 12:05:19.193494 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:19Z","lastTransitionTime":"2026-02-03T12:05:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:19 crc kubenswrapper[4820]: I0203 12:05:19.299352 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:19 crc kubenswrapper[4820]: I0203 12:05:19.299634 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:19 crc kubenswrapper[4820]: I0203 12:05:19.299651 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:19 crc kubenswrapper[4820]: I0203 12:05:19.299667 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:19 crc kubenswrapper[4820]: I0203 12:05:19.299679 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:19Z","lastTransitionTime":"2026-02-03T12:05:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:05:19 crc kubenswrapper[4820]: I0203 12:05:19.401799 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:19 crc kubenswrapper[4820]: I0203 12:05:19.401845 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:19 crc kubenswrapper[4820]: I0203 12:05:19.401857 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:19 crc kubenswrapper[4820]: I0203 12:05:19.401874 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:19 crc kubenswrapper[4820]: I0203 12:05:19.401883 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:19Z","lastTransitionTime":"2026-02-03T12:05:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:19 crc kubenswrapper[4820]: I0203 12:05:19.404670 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 03 12:05:19 crc kubenswrapper[4820]: I0203 12:05:19.406396 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"672f46bf03b8e74586edeb641cb45d9d7d8d4389c0b2cbb7a8eec43d6b4d801b"} Feb 03 12:05:19 crc kubenswrapper[4820]: I0203 12:05:19.406825 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 03 12:05:19 crc kubenswrapper[4820]: I0203 12:05:19.426940 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0a8e1f86d24e8f09d6250cc9003db189b705c0285fbaab40a8f1a4ac4793137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:19Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:19 crc kubenswrapper[4820]: I0203 12:05:19.440511 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:19Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:19 crc kubenswrapper[4820]: I0203 12:05:19.453925 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:19Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:19 crc kubenswrapper[4820]: I0203 12:05:19.468216 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p5mx8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe0bc53e-6abb-4194-ae3d-109a4fd80372\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9eca081cbf8145867bf5df6d6bacdb9746e546e54aa81874fbfc2927958bc1b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrchn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p5mx8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:19Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:19 crc kubenswrapper[4820]: I0203 12:05:19.478820 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c02def6-29f2-448e-80ec-0c8ee290f053\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1643e498a576d565172492302a641bc81eae658b1611df707da1d833ea2c84f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xjx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e3612cc50c75efc4721ade62e3caf8a6cbcb41c49183b50da3639d7fce97b9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xjx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qj7xr\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:19Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:19 crc kubenswrapper[4820]: I0203 12:05:19.492732 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dkfwm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6da6dd5-2847-482b-adc1-d82ead0af3e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b50e93f1825fc82dc2e0fc70a417d2e4412db367319656bfcea8fa9daf9fe152\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/ser
viceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpcc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dkfwm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:19Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:19 crc kubenswrapper[4820]: I0203 12:05:19.504669 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:19 crc kubenswrapper[4820]: I0203 12:05:19.504734 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:19 crc kubenswrapper[4820]: I0203 12:05:19.504753 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:19 crc kubenswrapper[4820]: I0203 12:05:19.504777 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:19 crc kubenswrapper[4820]: I0203 12:05:19.504795 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:19Z","lastTransitionTime":"2026-02-03T12:05:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:05:19 crc kubenswrapper[4820]: I0203 12:05:19.510441 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d140e30-6304-49be-a1a3-2d6b23f9aef3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48852428653b274f524077be71e20c4271b365d821b8f3235df2d2b41a9e8af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09641bd2ac341aeecdba4211412070d2ee33f42eb670fe75ab88b9a58931c289\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74f9a4c19a0ffe2d3fc0a77b57873721a29f0a1f70d578ba2f68bcceb68ccce2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://672f46bf03b8e74586edeb641cb45d9d7d8d4389c0b2cbb7a8eec43d6b4d801b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38554e01896341101d88815c0d8cd45a0114da19790e02a53b5ba3a91ee70e37\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"message\\\":\\\"le observer\\\\nW0203 12:05:02.811560 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0203 12:05:02.811703 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 12:05:02.814694 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3622958325/tls.crt::/tmp/serving-cert-3622958325/tls.key\\\\\\\"\\\\nI0203 12:05:03.097815 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0203 12:05:03.106823 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0203 12:05:03.106852 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0203 12:05:03.106874 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0203 12:05:03.106880 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0203 12:05:03.113994 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0203 12:05:03.114027 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0203 12:05:03.114019 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0203 12:05:03.114034 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 12:05:03.114043 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0203 12:05:03.114053 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0203 12:05:03.114057 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0203 12:05:03.114061 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0203 12:05:03.116858 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60992833ab89aebd2446f960ab6ae3565f9a17f4e86921d3b2bc68adc9704640\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b34fc810cf94c235cea364cd2441c73f56597b91b0a19b8e02498db5895a3f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b34fc810cf94c235cea364cd2441c73f56597b91b0a19b8e02498db5895a3f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:04:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:04:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:19Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:19 crc kubenswrapper[4820]: I0203 12:05:19.525532 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-z8xrk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1cca669-281b-4756-8da8-3860684d3410\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09a38521ccb70b0b152fc51267f6d38cca610e2e8c9d12915482118f3a431b22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dxwqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:09Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-z8xrk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:19Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:19 crc kubenswrapper[4820]: I0203 12:05:19.548703 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-75mwm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf99e305-aa5b-4171-94f6-1e64f20414dd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c998566abb62f49bb7527ea2c4b023c87a9e3db89ff5fa25b68896424924ea76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5b0072b2eccb422316702a70fb54f21fcd020e1627a625d1a7c63d46b71e48c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e8e47b3a2a50b4eccff391ccb45bf979a5fe46983b014cb03f465db928172ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0defe98bc51b1f567900f2d90138ceec07b4cae414fd3b448df503dba9a9460f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a67f31bdc8d6710cfbca063e2a811d4fbace42a9d3c3a53fba0f47c30ee00e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8b54903e9478f26cfa1770a0e06298325e9b9f198a3aeaa8ed6dcc192c0cdb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://555c29829adf28ac93542af1cb8ab28456a9414b416a9913602ee1dbd0bc27a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://555c29829adf28ac93542af1cb8ab28456a9414b416a9913602ee1dbd0bc27a6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T12:05:16Z\\\",\\\"message\\\":\\\" Retry successful for *v1.Pod openshift-machine-config-operator/machine-config-daemon-qj7xr after 0 failed attempt(s)\\\\nI0203 12:05:16.132830 6249 services_controller.go:360] Finished syncing service check-endpoints on namespace openshift-apiserver for network=default : 997.232µs\\\\nI0203 12:05:16.132706 6249 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0203 12:05:16.132841 6249 services_controller.go:356] Processing sync for service openshift-image-registry/image-registry for network=default\\\\nI0203 12:05:16.132752 6249 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-oauth-apiserver/api]} name:Service_openshift-oauth-apiserver/api_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.140:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {fe46cb89-4e54-4175-a112-1c5224cd299e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0203 12:05:16.132849 6249 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:15Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-75mwm_openshift-ovn-kubernetes(cf99e305-aa5b-4171-94f6-1e64f20414dd)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c9f050db727793c24f2f8bf3b387813e313f3b764be003c6849947f98a61d4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://707b3fd62e0f11f4f22464594158f7f1bbcbb9b37c1e49c70e2a0d1a6b5b07c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://707b3fd62e0f11f4f22464594158f7f1bbcbb9b37c1e49c70e2a0d1a6b5b07c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-75mwm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:19Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:19 crc kubenswrapper[4820]: I0203 12:05:19.571254 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a4e0669-c148-4af8-99eb-1de50da6d574\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d6f3a37b440a1db10c04386289033a0deab673f71efbdfba81597baf6ce5e86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fec48b5ddf675fb07150ebb8
8962534caead99f26403cf8ab51c17e2c6c09bf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a4c1209e1301b6e55cd4ea8bb06678ee4c7657c163a610f0cc2dd25f2cb3e61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75e1f7cf1c2120c222793819705a3b465a387c2aaf33fe146fd46d9c76520aea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12446f64d4189185cf6fe19d38637e090482ef8da5fb5e68600cc00ba37bc2a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bb0a6b382c36acc46be9258e93be71317c26777d770cde9098402054303947c\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bb0a6b382c36acc46be9258e93be71317c26777d770cde9098402054303947c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:04:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19153b79978b8e8cc53a7ab9cb95a70c9870c6b0babac711bce13809609f6ac5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19153b79978b8e8cc53a7ab9cb95a70c9870c6b0babac711bce13809609f6ac5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e75ec4662cf007f64da3ecaaff0cb4c45aa88746ff717403092583356953526\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e75ec4662cf007f64da3ecaaff0cb4c45aa88746ff717403092583356953526\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:04:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:04:43Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:19Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:19 crc kubenswrapper[4820]: I0203 12:05:19.585793 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cabb5864-1cc1-4b08-aed3-4eeee9e85bf9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d070e438f7f207c929ca6d110046fa8d320d1a7d97ef22a0e5e1372ff626729d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06e5365791987f1586b93dfe4db8bfe19415d8c400d6c0d75f312cc0f62f7661\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb5d088ef2ec1b90ef2d7af53d61c2789516bf4ef02e9ef06e1dc0c93326dcc9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2783e3045fb1f1e8da75e1a83f88bc0f24ce47eab79fb2b31b794c10f201588\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:04:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:19Z is after 2025-08-24T17:21:41Z"
Feb 03 12:05:19 crc kubenswrapper[4820]: I0203 12:05:19.600051 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bbd1daf12ccc77f2658cac173f36c9af6a1c64072fc6bf13302a460f71f05e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5236ae101bb18d9bd1b8d8ac09705f96e5fd78d742e5174e88072570a7448ee8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:19Z is after 2025-08-24T17:21:41Z"
Feb 03 12:05:19 crc kubenswrapper[4820]: I0203 12:05:19.606953 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 12:05:19 crc kubenswrapper[4820]: I0203 12:05:19.606984 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 12:05:19 crc kubenswrapper[4820]: I0203 12:05:19.606993 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 12:05:19 crc kubenswrapper[4820]: I0203 12:05:19.607006 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 12:05:19 crc kubenswrapper[4820]: I0203 12:05:19.607014 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:19Z","lastTransitionTime":"2026-02-03T12:05:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 12:05:19 crc kubenswrapper[4820]: I0203 12:05:19.613797 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-b5qz9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d93ec7bc-4029-44a4-894d-03eff1388683\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8c2ef0f942d970aadc1837e5ab6eac2bdee9ab81c7b852514fc142796328a62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b171b1b37fa4899684005dd60f249e4cdc2ac58cb12658abe09d6d17007f3002\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b171b1b37fa4899684005dd60f249e4cdc2ac58cb12658abe09d6d17007f3002\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0a45a325e96479b1bb5913e1a78221c3b88c62fc434441e03d6b2e9ed476700\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0a45a325e96479b1bb5913e1a78221c3b88c62fc434441e03d6b2e9ed476700\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a25830d512e5d0dbbfa4775f85af51e80915adca718997b34e926220ff9377d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a25830d512e5d0dbbfa4775f85af51e80915adca718997b34e926220ff9377d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff3da1549bd61df7cc473a1a0a17f147b940c3f88f0b72828357ea7982406088\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff3da1549bd61df7cc473a1a0a17f147b940c3f88f0b72828357ea7982406088\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a1698d43c6b74e8ebb9ff13da372c4025ac2637c1f5a0376e4a998e7bf20359\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a1698d43c6b74e8ebb9ff13da372c4025ac2637c1f5a0376e4a998e7bf20359\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8af3494a09f6242fa8fab8cba8b3db5476625376c0f8f382e2140ca27ef329f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8af3494a09f6242fa8fab8cba8b3db5476625376c0f8f382e2140ca27ef329f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-b5qz9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:19Z is after 2025-08-24T17:21:41Z"
Feb 03 12:05:19 crc kubenswrapper[4820]: I0203 12:05:19.624422 4820 status_manager.go:875] "Failed to update status for pod"
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8bbpp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d8005fd9-8efc-4707-a3dd-60cd20607d42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9020e42fc6a5c6c681b1854e93cc1e88f44b7582e28fe1ee40900bb2d39b4008\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xg9sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0333755fe31b49cea4cbfef987450b5800b3330c65cf84070036eea4194c13a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xg9sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8bbpp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-03T12:05:19Z is after 2025-08-24T17:21:41Z"
Feb 03 12:05:19 crc kubenswrapper[4820]: I0203 12:05:19.633815 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7vz6k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6351e457-e601-4889-853c-560646bc4b43\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbjk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbjk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:18Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7vz6k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:19Z is after 2025-08-24T17:21:41Z"
Feb 03 12:05:19 crc kubenswrapper[4820]: I0203 12:05:19.644179 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dac06e9d9db1822ef050113f787ff46db678f4916bc9817ac03f61a509a61b6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:19Z is after 2025-08-24T17:21:41Z"
Feb 03 12:05:19 crc kubenswrapper[4820]: I0203 12:05:19.653748 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:19Z is after 2025-08-24T17:21:41Z"
Feb 03 12:05:19 crc kubenswrapper[4820]: I0203 12:05:19.709399 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 12:05:19 crc kubenswrapper[4820]: I0203 12:05:19.709461 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 12:05:19 crc kubenswrapper[4820]: I0203 12:05:19.709471 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 12:05:19 crc kubenswrapper[4820]: I0203 12:05:19.709489 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 12:05:19 crc kubenswrapper[4820]: I0203 12:05:19.709502 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:19Z","lastTransitionTime":"2026-02-03T12:05:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 12:05:19 crc kubenswrapper[4820]: I0203 12:05:19.777845 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6351e457-e601-4889-853c-560646bc4b43-metrics-certs\") pod \"network-metrics-daemon-7vz6k\" (UID: \"6351e457-e601-4889-853c-560646bc4b43\") " pod="openshift-multus/network-metrics-daemon-7vz6k"
Feb 03 12:05:19 crc kubenswrapper[4820]: E0203 12:05:19.778061 4820 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 03 12:05:19 crc kubenswrapper[4820]: E0203 12:05:19.778142 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6351e457-e601-4889-853c-560646bc4b43-metrics-certs podName:6351e457-e601-4889-853c-560646bc4b43 nodeName:}" failed. No retries permitted until 2026-02-03 12:05:21.778125383 +0000 UTC m=+39.301201327 (durationBeforeRetry 2s).
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6351e457-e601-4889-853c-560646bc4b43-metrics-certs") pod "network-metrics-daemon-7vz6k" (UID: "6351e457-e601-4889-853c-560646bc4b43") : object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 03 12:05:19 crc kubenswrapper[4820]: I0203 12:05:19.812618 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 12:05:19 crc kubenswrapper[4820]: I0203 12:05:19.812676 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 12:05:19 crc kubenswrapper[4820]: I0203 12:05:19.812692 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 12:05:19 crc kubenswrapper[4820]: I0203 12:05:19.812715 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 12:05:19 crc kubenswrapper[4820]: I0203 12:05:19.812730 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:19Z","lastTransitionTime":"2026-02-03T12:05:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 12:05:19 crc kubenswrapper[4820]: I0203 12:05:19.915457 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 12:05:19 crc kubenswrapper[4820]: I0203 12:05:19.915506 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 12:05:19 crc kubenswrapper[4820]: I0203 12:05:19.915518 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 12:05:19 crc kubenswrapper[4820]: I0203 12:05:19.915537 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 12:05:19 crc kubenswrapper[4820]: I0203 12:05:19.915549 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:19Z","lastTransitionTime":"2026-02-03T12:05:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Feb 03 12:05:20 crc kubenswrapper[4820]: I0203 12:05:20.018604 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 12:05:20 crc kubenswrapper[4820]: I0203 12:05:20.018645 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 12:05:20 crc kubenswrapper[4820]: I0203 12:05:20.018656 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 12:05:20 crc kubenswrapper[4820]: I0203 12:05:20.018670 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 12:05:20 crc kubenswrapper[4820]: I0203 12:05:20.018683 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:20Z","lastTransitionTime":"2026-02-03T12:05:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 12:05:20 crc kubenswrapper[4820]: I0203 12:05:20.111623 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 11:07:03.347196858 +0000 UTC
Feb 03 12:05:20 crc kubenswrapper[4820]: I0203 12:05:20.121358 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 12:05:20 crc kubenswrapper[4820]: I0203 12:05:20.121392 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 12:05:20 crc kubenswrapper[4820]: I0203 12:05:20.121408 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 12:05:20 crc kubenswrapper[4820]: I0203 12:05:20.121423 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 12:05:20 crc kubenswrapper[4820]: I0203 12:05:20.121433 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:20Z","lastTransitionTime":"2026-02-03T12:05:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 12:05:20 crc kubenswrapper[4820]: I0203 12:05:20.142161 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7vz6k"
Feb 03 12:05:20 crc kubenswrapper[4820]: E0203 12:05:20.142292 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-multus/network-metrics-daemon-7vz6k" podUID="6351e457-e601-4889-853c-560646bc4b43" Feb 03 12:05:20 crc kubenswrapper[4820]: I0203 12:05:20.224776 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:20 crc kubenswrapper[4820]: I0203 12:05:20.224863 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:20 crc kubenswrapper[4820]: I0203 12:05:20.224874 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:20 crc kubenswrapper[4820]: I0203 12:05:20.224920 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:20 crc kubenswrapper[4820]: I0203 12:05:20.224936 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:20Z","lastTransitionTime":"2026-02-03T12:05:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:20 crc kubenswrapper[4820]: I0203 12:05:20.327254 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:20 crc kubenswrapper[4820]: I0203 12:05:20.327294 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:20 crc kubenswrapper[4820]: I0203 12:05:20.327303 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:20 crc kubenswrapper[4820]: I0203 12:05:20.327317 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:20 crc kubenswrapper[4820]: I0203 12:05:20.327328 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:20Z","lastTransitionTime":"2026-02-03T12:05:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 03 12:05:20 crc kubenswrapper[4820]: I0203 12:05:20.430401 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 12:05:20 crc kubenswrapper[4820]: I0203 12:05:20.430454 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 12:05:20 crc kubenswrapper[4820]: I0203 12:05:20.430468 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 12:05:20 crc kubenswrapper[4820]: I0203 12:05:20.430496 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 12:05:20 crc kubenswrapper[4820]: I0203 12:05:20.430515 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:20Z","lastTransitionTime":"2026-02-03T12:05:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 12:05:20 crc kubenswrapper[4820]: I0203 12:05:20.533624 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 12:05:20 crc kubenswrapper[4820]: I0203 12:05:20.533710 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 12:05:20 crc kubenswrapper[4820]: I0203 12:05:20.533721 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 12:05:20 crc kubenswrapper[4820]: I0203 12:05:20.533744 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 12:05:20 crc kubenswrapper[4820]: I0203 12:05:20.533757 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:20Z","lastTransitionTime":"2026-02-03T12:05:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 12:05:20 crc kubenswrapper[4820]: I0203 12:05:20.636996 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 12:05:20 crc kubenswrapper[4820]: I0203 12:05:20.637058 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 12:05:20 crc kubenswrapper[4820]: I0203 12:05:20.637069 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 12:05:20 crc kubenswrapper[4820]: I0203 12:05:20.637087 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 12:05:20 crc kubenswrapper[4820]: I0203 12:05:20.637097 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:20Z","lastTransitionTime":"2026-02-03T12:05:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Feb 03 12:05:20 crc kubenswrapper[4820]: I0203 12:05:20.739625 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 12:05:20 crc kubenswrapper[4820]: I0203 12:05:20.739668 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 12:05:20 crc kubenswrapper[4820]: I0203 12:05:20.739679 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 12:05:20 crc kubenswrapper[4820]: I0203 12:05:20.739694 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 12:05:20 crc kubenswrapper[4820]: I0203 12:05:20.739705 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:20Z","lastTransitionTime":"2026-02-03T12:05:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 12:05:20 crc kubenswrapper[4820]: I0203 12:05:20.843361 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 12:05:20 crc kubenswrapper[4820]: I0203 12:05:20.843399 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 12:05:20 crc kubenswrapper[4820]: I0203 12:05:20.843410 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 12:05:20 crc kubenswrapper[4820]: I0203 12:05:20.843428 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 12:05:20 crc kubenswrapper[4820]: I0203 12:05:20.843439 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:20Z","lastTransitionTime":"2026-02-03T12:05:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 12:05:20 crc kubenswrapper[4820]: I0203 12:05:20.946007 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 12:05:20 crc kubenswrapper[4820]: I0203 12:05:20.946063 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 12:05:20 crc kubenswrapper[4820]: I0203 12:05:20.946081 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 12:05:20 crc kubenswrapper[4820]: I0203 12:05:20.946096 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 12:05:20 crc kubenswrapper[4820]: I0203 12:05:20.946105 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:20Z","lastTransitionTime":"2026-02-03T12:05:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Feb 03 12:05:21 crc kubenswrapper[4820]: I0203 12:05:21.049976 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 12:05:21 crc kubenswrapper[4820]: I0203 12:05:21.050022 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 12:05:21 crc kubenswrapper[4820]: I0203 12:05:21.050033 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 12:05:21 crc kubenswrapper[4820]: I0203 12:05:21.050052 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 12:05:21 crc kubenswrapper[4820]: I0203 12:05:21.050063 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:21Z","lastTransitionTime":"2026-02-03T12:05:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 12:05:21 crc kubenswrapper[4820]: I0203 12:05:21.111881 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 20:47:57.340716091 +0000 UTC
Feb 03 12:05:21 crc kubenswrapper[4820]: I0203 12:05:21.142461 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 03 12:05:21 crc kubenswrapper[4820]: I0203 12:05:21.142492 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 03 12:05:21 crc kubenswrapper[4820]: I0203 12:05:21.142642 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 03 12:05:21 crc kubenswrapper[4820]: E0203 12:05:21.142617 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 03 12:05:21 crc kubenswrapper[4820]: E0203 12:05:21.142722 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 03 12:05:21 crc kubenswrapper[4820]: E0203 12:05:21.142787 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 12:05:21 crc kubenswrapper[4820]: I0203 12:05:21.154267 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:21 crc kubenswrapper[4820]: I0203 12:05:21.154317 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:21 crc kubenswrapper[4820]: I0203 12:05:21.154328 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:21 crc kubenswrapper[4820]: I0203 12:05:21.154386 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:21 crc kubenswrapper[4820]: I0203 12:05:21.154401 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:21Z","lastTransitionTime":"2026-02-03T12:05:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:21 crc kubenswrapper[4820]: I0203 12:05:21.257154 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:21 crc kubenswrapper[4820]: I0203 12:05:21.257201 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:21 crc kubenswrapper[4820]: I0203 12:05:21.257209 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:21 crc kubenswrapper[4820]: I0203 12:05:21.257224 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:21 crc kubenswrapper[4820]: I0203 12:05:21.257235 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:21Z","lastTransitionTime":"2026-02-03T12:05:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 03 12:05:21 crc kubenswrapper[4820]: I0203 12:05:21.359972 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 12:05:21 crc kubenswrapper[4820]: I0203 12:05:21.360036 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 12:05:21 crc kubenswrapper[4820]: I0203 12:05:21.360047 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 12:05:21 crc kubenswrapper[4820]: I0203 12:05:21.360063 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 12:05:21 crc kubenswrapper[4820]: I0203 12:05:21.360073 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:21Z","lastTransitionTime":"2026-02-03T12:05:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 12:05:21 crc kubenswrapper[4820]: I0203 12:05:21.462577 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 12:05:21 crc kubenswrapper[4820]: I0203 12:05:21.462670 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 12:05:21 crc kubenswrapper[4820]: I0203 12:05:21.462684 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 12:05:21 crc kubenswrapper[4820]: I0203 12:05:21.462702 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 12:05:21 crc kubenswrapper[4820]: I0203 12:05:21.462713 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:21Z","lastTransitionTime":"2026-02-03T12:05:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 12:05:21 crc kubenswrapper[4820]: I0203 12:05:21.565194 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 12:05:21 crc kubenswrapper[4820]: I0203 12:05:21.565229 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 12:05:21 crc kubenswrapper[4820]: I0203 12:05:21.565238 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 12:05:21 crc kubenswrapper[4820]: I0203 12:05:21.565253 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 12:05:21 crc kubenswrapper[4820]: I0203 12:05:21.565262 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:21Z","lastTransitionTime":"2026-02-03T12:05:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Feb 03 12:05:21 crc kubenswrapper[4820]: I0203 12:05:21.669167 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:21 crc kubenswrapper[4820]: I0203 12:05:21.669213 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:21 crc kubenswrapper[4820]: I0203 12:05:21.669230 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:21 crc kubenswrapper[4820]: I0203 12:05:21.669248 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:21 crc kubenswrapper[4820]: I0203 12:05:21.669259 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:21Z","lastTransitionTime":"2026-02-03T12:05:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:21 crc kubenswrapper[4820]: I0203 12:05:21.771844 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:21 crc kubenswrapper[4820]: I0203 12:05:21.771884 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:21 crc kubenswrapper[4820]: I0203 12:05:21.771926 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:21 crc kubenswrapper[4820]: I0203 12:05:21.771944 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:21 crc kubenswrapper[4820]: I0203 12:05:21.771958 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:21Z","lastTransitionTime":"2026-02-03T12:05:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:21 crc kubenswrapper[4820]: I0203 12:05:21.799598 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6351e457-e601-4889-853c-560646bc4b43-metrics-certs\") pod \"network-metrics-daemon-7vz6k\" (UID: \"6351e457-e601-4889-853c-560646bc4b43\") " pod="openshift-multus/network-metrics-daemon-7vz6k" Feb 03 12:05:21 crc kubenswrapper[4820]: E0203 12:05:21.799718 4820 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 03 12:05:21 crc kubenswrapper[4820]: E0203 12:05:21.799769 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6351e457-e601-4889-853c-560646bc4b43-metrics-certs podName:6351e457-e601-4889-853c-560646bc4b43 nodeName:}" failed. No retries permitted until 2026-02-03 12:05:25.799756023 +0000 UTC m=+43.322831877 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6351e457-e601-4889-853c-560646bc4b43-metrics-certs") pod "network-metrics-daemon-7vz6k" (UID: "6351e457-e601-4889-853c-560646bc4b43") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 03 12:05:21 crc kubenswrapper[4820]: I0203 12:05:21.874849 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:21 crc kubenswrapper[4820]: I0203 12:05:21.874916 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:21 crc kubenswrapper[4820]: I0203 12:05:21.874926 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:21 crc kubenswrapper[4820]: I0203 12:05:21.874942 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:21 crc kubenswrapper[4820]: I0203 12:05:21.874951 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:21Z","lastTransitionTime":"2026-02-03T12:05:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:21 crc kubenswrapper[4820]: I0203 12:05:21.977807 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:21 crc kubenswrapper[4820]: I0203 12:05:21.977862 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:21 crc kubenswrapper[4820]: I0203 12:05:21.977877 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:21 crc kubenswrapper[4820]: I0203 12:05:21.977921 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:21 crc kubenswrapper[4820]: I0203 12:05:21.977938 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:21Z","lastTransitionTime":"2026-02-03T12:05:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:05:22 crc kubenswrapper[4820]: I0203 12:05:22.080080 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:22 crc kubenswrapper[4820]: I0203 12:05:22.080116 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:22 crc kubenswrapper[4820]: I0203 12:05:22.080126 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:22 crc kubenswrapper[4820]: I0203 12:05:22.080139 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:22 crc kubenswrapper[4820]: I0203 12:05:22.080148 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:22Z","lastTransitionTime":"2026-02-03T12:05:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:22 crc kubenswrapper[4820]: I0203 12:05:22.112475 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 06:55:13.190077264 +0000 UTC Feb 03 12:05:22 crc kubenswrapper[4820]: I0203 12:05:22.141800 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7vz6k" Feb 03 12:05:22 crc kubenswrapper[4820]: E0203 12:05:22.141956 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7vz6k" podUID="6351e457-e601-4889-853c-560646bc4b43" Feb 03 12:05:22 crc kubenswrapper[4820]: I0203 12:05:22.182686 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:22 crc kubenswrapper[4820]: I0203 12:05:22.182720 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:22 crc kubenswrapper[4820]: I0203 12:05:22.182731 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:22 crc kubenswrapper[4820]: I0203 12:05:22.182744 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:22 crc kubenswrapper[4820]: I0203 12:05:22.182753 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:22Z","lastTransitionTime":"2026-02-03T12:05:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:05:22 crc kubenswrapper[4820]: I0203 12:05:22.285657 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:22 crc kubenswrapper[4820]: I0203 12:05:22.285731 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:22 crc kubenswrapper[4820]: I0203 12:05:22.285743 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:22 crc kubenswrapper[4820]: I0203 12:05:22.285761 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:22 crc kubenswrapper[4820]: I0203 12:05:22.285776 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:22Z","lastTransitionTime":"2026-02-03T12:05:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:22 crc kubenswrapper[4820]: I0203 12:05:22.388467 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:22 crc kubenswrapper[4820]: I0203 12:05:22.388505 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:22 crc kubenswrapper[4820]: I0203 12:05:22.388520 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:22 crc kubenswrapper[4820]: I0203 12:05:22.388534 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:22 crc kubenswrapper[4820]: I0203 12:05:22.388544 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:22Z","lastTransitionTime":"2026-02-03T12:05:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:22 crc kubenswrapper[4820]: I0203 12:05:22.492002 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:22 crc kubenswrapper[4820]: I0203 12:05:22.492049 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:22 crc kubenswrapper[4820]: I0203 12:05:22.492059 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:22 crc kubenswrapper[4820]: I0203 12:05:22.492089 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:22 crc kubenswrapper[4820]: I0203 12:05:22.492103 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:22Z","lastTransitionTime":"2026-02-03T12:05:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:05:22 crc kubenswrapper[4820]: I0203 12:05:22.594970 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:22 crc kubenswrapper[4820]: I0203 12:05:22.595010 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:22 crc kubenswrapper[4820]: I0203 12:05:22.595019 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:22 crc kubenswrapper[4820]: I0203 12:05:22.595032 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:22 crc kubenswrapper[4820]: I0203 12:05:22.595042 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:22Z","lastTransitionTime":"2026-02-03T12:05:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:22 crc kubenswrapper[4820]: I0203 12:05:22.696932 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:22 crc kubenswrapper[4820]: I0203 12:05:22.696971 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:22 crc kubenswrapper[4820]: I0203 12:05:22.696980 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:22 crc kubenswrapper[4820]: I0203 12:05:22.696995 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:22 crc kubenswrapper[4820]: I0203 12:05:22.697004 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:22Z","lastTransitionTime":"2026-02-03T12:05:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:22 crc kubenswrapper[4820]: I0203 12:05:22.799821 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:22 crc kubenswrapper[4820]: I0203 12:05:22.799862 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:22 crc kubenswrapper[4820]: I0203 12:05:22.799903 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:22 crc kubenswrapper[4820]: I0203 12:05:22.799919 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:22 crc kubenswrapper[4820]: I0203 12:05:22.799931 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:22Z","lastTransitionTime":"2026-02-03T12:05:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:05:22 crc kubenswrapper[4820]: I0203 12:05:22.902020 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:22 crc kubenswrapper[4820]: I0203 12:05:22.902052 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:22 crc kubenswrapper[4820]: I0203 12:05:22.902061 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:22 crc kubenswrapper[4820]: I0203 12:05:22.902073 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:22 crc kubenswrapper[4820]: I0203 12:05:22.902085 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:22Z","lastTransitionTime":"2026-02-03T12:05:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:23 crc kubenswrapper[4820]: I0203 12:05:23.005765 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:23 crc kubenswrapper[4820]: I0203 12:05:23.005813 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:23 crc kubenswrapper[4820]: I0203 12:05:23.005823 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:23 crc kubenswrapper[4820]: I0203 12:05:23.005843 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:23 crc kubenswrapper[4820]: I0203 12:05:23.005856 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:23Z","lastTransitionTime":"2026-02-03T12:05:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:23 crc kubenswrapper[4820]: I0203 12:05:23.108697 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:23 crc kubenswrapper[4820]: I0203 12:05:23.108741 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:23 crc kubenswrapper[4820]: I0203 12:05:23.108750 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:23 crc kubenswrapper[4820]: I0203 12:05:23.108766 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:23 crc kubenswrapper[4820]: I0203 12:05:23.108792 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:23Z","lastTransitionTime":"2026-02-03T12:05:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:05:23 crc kubenswrapper[4820]: I0203 12:05:23.112827 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 12:35:04.148465055 +0000 UTC Feb 03 12:05:23 crc kubenswrapper[4820]: I0203 12:05:23.141572 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 12:05:23 crc kubenswrapper[4820]: I0203 12:05:23.141692 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 12:05:23 crc kubenswrapper[4820]: I0203 12:05:23.141572 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 12:05:23 crc kubenswrapper[4820]: E0203 12:05:23.141718 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 12:05:23 crc kubenswrapper[4820]: E0203 12:05:23.141881 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 12:05:23 crc kubenswrapper[4820]: E0203 12:05:23.142226 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 12:05:23 crc kubenswrapper[4820]: I0203 12:05:23.154421 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:23Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:23 crc kubenswrapper[4820]: I0203 12:05:23.165353 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dac06e9d9db1822ef050113f787ff46db678f4916bc9817ac03f61a509a61b6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:23Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:23 crc kubenswrapper[4820]: I0203 12:05:23.186499 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d140e30-6304-49be-a1a3-2d6b23f9aef3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48852428653b274f524077be71e20c4271b365d821b8f3235df2d2b41a9e8af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09641bd2ac341aeecdba4211412070d2ee33f42eb670fe75ab88b9a58931c289\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74f9a4c19a0ffe2d3fc0a77b57873721a29f0a1f70d578ba2f68bcceb68ccce2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://672f46bf03b8e74586edeb641cb45d9d7d8d4389c0b2cbb7a8eec43d6b4d801b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38554e01896341101d88815c0d8cd45a0114da19790e02a53b5ba3a91ee70e37\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"message\\\":\\\"le observer\\\\nW0203 12:05:02.811560 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0203 12:05:02.811703 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 12:05:02.814694 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3622958325/tls.crt::/tmp/serving-cert-3622958325/tls.key\\\\\\\"\\\\nI0203 12:05:03.097815 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0203 12:05:03.106823 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0203 12:05:03.106852 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0203 12:05:03.106874 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0203 12:05:03.106880 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0203 12:05:03.113994 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0203 12:05:03.114027 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0203 12:05:03.114019 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0203 12:05:03.114034 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 12:05:03.114043 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0203 12:05:03.114053 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0203 12:05:03.114057 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0203 12:05:03.114061 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0203 12:05:03.116858 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60992833ab89aebd2446f960ab6ae3565f9a17f4e86921d3b2bc68adc9704640\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b34fc810cf94c235cea364cd2441c73f56597b91b0a19b8e02498db5895a3f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b34fc810cf94c235cea364cd2441c73f56597b91b0a19b8e02498db5895a3f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:04:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:04:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:23Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:23 crc kubenswrapper[4820]: I0203 12:05:23.199053 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0a8e1f86d24e8f09d6250cc9003db189b705c0285fbaab40a8f1a4ac4793137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:23Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:23 crc kubenswrapper[4820]: I0203 12:05:23.210550 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:23 crc kubenswrapper[4820]: I0203 12:05:23.210580 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:23 crc kubenswrapper[4820]: I0203 12:05:23.210592 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:23 crc kubenswrapper[4820]: I0203 12:05:23.210608 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:23 crc kubenswrapper[4820]: I0203 12:05:23.210620 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:23Z","lastTransitionTime":"2026-02-03T12:05:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:05:23 crc kubenswrapper[4820]: I0203 12:05:23.215879 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:23Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:23 crc kubenswrapper[4820]: I0203 12:05:23.228845 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:23Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:23 crc kubenswrapper[4820]: I0203 12:05:23.240978 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p5mx8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe0bc53e-6abb-4194-ae3d-109a4fd80372\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9eca081cbf8145867bf5df6d6bacdb9746e546e54aa81874fbfc2927958bc1b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrchn\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p5mx8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:23Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:23 crc kubenswrapper[4820]: I0203 12:05:23.253918 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c02def6-29f2-448e-80ec-0c8ee290f053\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1643e498a576d565172492302a641bc81eae658b1611df707da1d833ea2c84f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xjx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e3612cc50c75efc4721ade62e3caf8a6cbcb41c49183b50da3639d7fce97b9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serv
iceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xjx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qj7xr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:23Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:23 crc kubenswrapper[4820]: I0203 12:05:23.267575 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dkfwm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6da6dd5-2847-482b-adc1-d82ead0af3e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b50e93f1825fc82dc2e0fc70a417d2e4412db367319656bfcea8fa9daf9fe152\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mo
untPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpcc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dkfwm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:23Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:23 crc kubenswrapper[4820]: I0203 12:05:23.284732 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-75mwm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf99e305-aa5b-4171-94f6-1e64f20414dd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c998566abb62f49bb7527ea2c4b023c87a9e3db89ff5fa25b68896424924ea76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5b0072b2eccb422316702a70fb54f21fcd020e1627a625d1a7c63d46b71e48c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e8e47b3a2a50b4eccff391ccb45bf979a5fe46983b014cb03f465db928172ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0defe98bc51b1f567900f2d90138ceec07b4cae414fd3b448df503dba9a9460f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a67f31bdc8d6710cfbca063e2a811d4fbace42a9d3c3a53fba0f47c30ee00e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8b54903e9478f26cfa1770a0e06298325e9b9f198a3aeaa8ed6dcc192c0cdb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://555c29829adf28ac93542af1cb8ab28456a9414b
416a9913602ee1dbd0bc27a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://555c29829adf28ac93542af1cb8ab28456a9414b416a9913602ee1dbd0bc27a6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T12:05:16Z\\\",\\\"message\\\":\\\" Retry successful for *v1.Pod openshift-machine-config-operator/machine-config-daemon-qj7xr after 0 failed attempt(s)\\\\nI0203 12:05:16.132830 6249 services_controller.go:360] Finished syncing service check-endpoints on namespace openshift-apiserver for network=default : 997.232µs\\\\nI0203 12:05:16.132706 6249 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0203 12:05:16.132841 6249 services_controller.go:356] Processing sync for service openshift-image-registry/image-registry for network=default\\\\nI0203 12:05:16.132752 6249 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-oauth-apiserver/api]} name:Service_openshift-oauth-apiserver/api_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.140:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {fe46cb89-4e54-4175-a112-1c5224cd299e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0203 12:05:16.132849 6249 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:15Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-75mwm_openshift-ovn-kubernetes(cf99e305-aa5b-4171-94f6-1e64f20414dd)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c9f050db727793c24f2f8bf3b387813e313f3b764be003c6849947f98a61d4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://707b3fd62e0f11f4f22464594158f7f1bbcbb9b37c1e49c70e2a0d1a6b5b07c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://707b3fd62e0f11f4f22464594158f7f1bbcbb9b37c1e49c70e2a0d1a6b5b07c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-75mwm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:23Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:23 crc kubenswrapper[4820]: I0203 12:05:23.294035 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-z8xrk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1cca669-281b-4756-8da8-3860684d3410\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09a38521ccb70b0b152fc51267f6d38cca610e2e8c9d12915482118f3a431b22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dxwqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:09Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-z8xrk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:23Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:23 crc kubenswrapper[4820]: I0203 12:05:23.312403 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:23 crc kubenswrapper[4820]: I0203 12:05:23.312439 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:23 crc kubenswrapper[4820]: I0203 12:05:23.312449 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:23 crc kubenswrapper[4820]: I0203 12:05:23.312466 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:23 crc kubenswrapper[4820]: I0203 12:05:23.312475 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:23Z","lastTransitionTime":"2026-02-03T12:05:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:05:23 crc kubenswrapper[4820]: I0203 12:05:23.317140 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a4e0669-c148-4af8-99eb-1de50da6d574\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d6f3a37b440a1db10c04386289033a0deab673f71efbdfba81597baf6ce5e86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fec48b5ddf675fb07150ebb88962534caead99f26403cf8ab51c17e2c6c09bf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a4c1209e1301b6e55cd4ea8bb06678ee4c7657c163a610f0cc2dd25f2cb3e61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75e1f7cf1c2120c222793819705a3b465a387c2aaf33fe146fd46d9c76520aea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12446f64d4189185cf6fe19d38637e090482ef8da5fb5e68600cc00ba37bc2a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bb0a6b382c36acc46be9258e93be71317c26777d770cde9098402054303947c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bb0a6b382c36acc46be9258e93be71317c26777d770cde9098402054303947c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:04:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19153b79978b8e8cc53a7ab9cb95a70c9870c6b0babac711bce13809609f6ac5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19153b79978b8e8cc53a7ab9cb95a70c9870c6b0babac711bce13809609f6ac5\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e75ec4662cf007f64da3ecaaff0cb4c45aa88746ff717403092583356953526\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e75ec4662cf007f64da3ecaaff0cb4c45aa88746ff717403092583356953526\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:04:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:04:43Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:23Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:23 crc kubenswrapper[4820]: I0203 12:05:23.329945 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cabb5864-1cc1-4b08-aed3-4eeee9e85bf9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d070e438f7f207c929ca6d110046fa8d320d1a7d97ef22a0e5e1372ff626729d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06e5365791987f1586b93dfe4db8bfe19415d8c400d6c0d75f312cc0f62f7661\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb5d088ef2ec1b90ef2d7af53d61c2789516bf4ef02e9ef06e1dc0c93326dcc9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2783e3045fb1f1e8da75e1a83f88bc0f24ce47eab79fb2b31b794c10f201588\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:04:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:23Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:23 crc kubenswrapper[4820]: I0203 12:05:23.343724 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bbd1daf12ccc77f2658cac173f36c9af6a1c64072fc6bf13302a460f71f05e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5236ae101bb18d9bd1b8d8ac09705f96e5fd78d742e5174e88072570a7448ee8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:23Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:23 crc kubenswrapper[4820]: I0203 12:05:23.358773 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-b5qz9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d93ec7bc-4029-44a4-894d-03eff1388683\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8c2ef0f942d970aadc1837e5ab6eac2bdee9ab81c7b852514fc142796328a62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b171b1b37fa4899684005dd60f249e4cdc2ac58cb12658abe09d6d17007f3002\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5
f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b171b1b37fa4899684005dd60f249e4cdc2ac58cb12658abe09d6d17007f3002\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0a45a325e96479b1bb5913e1a78221c3b88c62fc434441e03d6b2e9ed476700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0a45a325e96479b1bb5913e1a78221c3b88c62fc434441e03d6b2e9ed476700\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a25830d512e5d0dbbfa4775f85af51e80915adca718997b34e926220ff9377d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a25830d512e5d0dbbfa4775f85af51e80915adca718997b34e926220ff9377d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/va
r/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff3da1549bd61df7cc473a1a0a17f147b940c3f88f0b72828357ea7982406088\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff3da1549bd61df7cc473a1a0a17f147b940c3f88f0b72828357ea7982406088\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a1698d43c6b74e8ebb9ff13da372c4025ac2637c1f5a0376e4a998e7bf20359\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a1698d43c6b74e8ebb9ff13da372c4025ac2637c1f5a0376e4a998e7bf20359\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8af3494a09f6242fa8fab8cba8b3db5476625376c0f8f382e2140ca27ef329f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8af3494a09f6242fa8fab8cba8b3db5476625376c0f8f382e2140ca27ef329f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:11Z\\\"}},\\
\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-b5qz9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:23Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:23 crc kubenswrapper[4820]: I0203 12:05:23.372902 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8bbpp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d8005fd9-8efc-4707-a3dd-60cd20607d42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9020e42fc6a5c6c681b1854e93cc1e88f44b7582e28fe1ee40900bb2d39b4008\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xg9sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0333755fe31b49cea4cbfef987450b5800b3330c65cf84070036eea4194c13a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restart
Count\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xg9sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8bbpp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:23Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:23 crc kubenswrapper[4820]: I0203 12:05:23.384749 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7vz6k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6351e457-e601-4889-853c-560646bc4b43\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbjk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbjk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:18Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7vz6k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:23Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:23 crc kubenswrapper[4820]: I0203 12:05:23.415347 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:23 crc kubenswrapper[4820]: I0203 12:05:23.415441 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:23 crc kubenswrapper[4820]: I0203 12:05:23.415459 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:23 crc kubenswrapper[4820]: I0203 12:05:23.415519 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:23 crc kubenswrapper[4820]: I0203 12:05:23.415537 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:23Z","lastTransitionTime":"2026-02-03T12:05:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:05:23 crc kubenswrapper[4820]: I0203 12:05:23.518679 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:23 crc kubenswrapper[4820]: I0203 12:05:23.518717 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:23 crc kubenswrapper[4820]: I0203 12:05:23.518746 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:23 crc kubenswrapper[4820]: I0203 12:05:23.518760 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:23 crc kubenswrapper[4820]: I0203 12:05:23.518771 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:23Z","lastTransitionTime":"2026-02-03T12:05:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:23 crc kubenswrapper[4820]: I0203 12:05:23.622230 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:23 crc kubenswrapper[4820]: I0203 12:05:23.622299 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:23 crc kubenswrapper[4820]: I0203 12:05:23.622315 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:23 crc kubenswrapper[4820]: I0203 12:05:23.622338 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:23 crc kubenswrapper[4820]: I0203 12:05:23.622354 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:23Z","lastTransitionTime":"2026-02-03T12:05:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:23 crc kubenswrapper[4820]: I0203 12:05:23.725026 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:23 crc kubenswrapper[4820]: I0203 12:05:23.725094 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:23 crc kubenswrapper[4820]: I0203 12:05:23.725106 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:23 crc kubenswrapper[4820]: I0203 12:05:23.725120 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:23 crc kubenswrapper[4820]: I0203 12:05:23.725131 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:23Z","lastTransitionTime":"2026-02-03T12:05:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:05:23 crc kubenswrapper[4820]: I0203 12:05:23.827282 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:23 crc kubenswrapper[4820]: I0203 12:05:23.827350 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:23 crc kubenswrapper[4820]: I0203 12:05:23.827368 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:23 crc kubenswrapper[4820]: I0203 12:05:23.827391 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:23 crc kubenswrapper[4820]: I0203 12:05:23.827403 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:23Z","lastTransitionTime":"2026-02-03T12:05:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:23 crc kubenswrapper[4820]: I0203 12:05:23.929787 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:23 crc kubenswrapper[4820]: I0203 12:05:23.929858 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:23 crc kubenswrapper[4820]: I0203 12:05:23.929872 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:23 crc kubenswrapper[4820]: I0203 12:05:23.929914 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:23 crc kubenswrapper[4820]: I0203 12:05:23.929929 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:23Z","lastTransitionTime":"2026-02-03T12:05:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:24 crc kubenswrapper[4820]: I0203 12:05:24.032619 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:24 crc kubenswrapper[4820]: I0203 12:05:24.032677 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:24 crc kubenswrapper[4820]: I0203 12:05:24.032688 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:24 crc kubenswrapper[4820]: I0203 12:05:24.032754 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:24 crc kubenswrapper[4820]: I0203 12:05:24.032776 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:24Z","lastTransitionTime":"2026-02-03T12:05:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Feb 03 12:05:24 crc kubenswrapper[4820]: I0203 12:05:24.113739 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 08:57:06.455573996 +0000 UTC
Feb 03 12:05:24 crc kubenswrapper[4820]: I0203 12:05:24.142319 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7vz6k"
Feb 03 12:05:24 crc kubenswrapper[4820]: E0203 12:05:24.142481 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7vz6k" podUID="6351e457-e601-4889-853c-560646bc4b43"
Feb 03 12:05:25 crc kubenswrapper[4820]: I0203 12:05:25.114414 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 12:02:43.518129978 +0000 UTC
Feb 03 12:05:25 crc kubenswrapper[4820]: I0203 12:05:25.142099 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 03 12:05:25 crc kubenswrapper[4820]: I0203 12:05:25.142099 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 03 12:05:25 crc kubenswrapper[4820]: I0203 12:05:25.142127 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 03 12:05:25 crc kubenswrapper[4820]: E0203 12:05:25.142254 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 03 12:05:25 crc kubenswrapper[4820]: E0203 12:05:25.142339 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 03 12:05:25 crc kubenswrapper[4820]: E0203 12:05:25.142441 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
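Every pod that still needs a sandbox is skipped with the same network-not-ready error while the CNI config is missing. A small Go sketch (illustrative, not kubelet code) that tallies these "Error syncing pod" records by pod and podUID from a saved copy of this log:

package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

// Matches the pod and podUID fields of the "Error syncing pod, skipping" records above.
var errSync = regexp.MustCompile(`"Error syncing pod, skipping".*pod="([^"]+)" podUID="([^"]+)"`)

func main() {
	counts := map[string]int{}
	sc := bufio.NewScanner(os.Stdin) // e.g. pipe kubelet.log in
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // these records can be very long
	for sc.Scan() {
		if m := errSync.FindStringSubmatch(sc.Text()); m != nil {
			counts[m[1]+" "+m[2]]++
		}
	}
	for k, n := range counts {
		fmt.Println(n, k)
	}
}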
Feb 03 12:05:25 crc kubenswrapper[4820]: I0203 12:05:25.842826 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6351e457-e601-4889-853c-560646bc4b43-metrics-certs\") pod \"network-metrics-daemon-7vz6k\" (UID: \"6351e457-e601-4889-853c-560646bc4b43\") " pod="openshift-multus/network-metrics-daemon-7vz6k"
Feb 03 12:05:25 crc kubenswrapper[4820]: E0203 12:05:25.843035 4820 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 03 12:05:25 crc kubenswrapper[4820]: E0203 12:05:25.843121 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6351e457-e601-4889-853c-560646bc4b43-metrics-certs podName:6351e457-e601-4889-853c-560646bc4b43 nodeName:}" failed. No retries permitted until 2026-02-03 12:05:33.843103987 +0000 UTC m=+51.366179851 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6351e457-e601-4889-853c-560646bc4b43-metrics-certs") pod "network-metrics-daemon-7vz6k" (UID: "6351e457-e601-4889-853c-560646bc4b43") : object "openshift-multus"/"metrics-daemon-secret" not registered
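The nestedpendingoperations record above schedules the next MountVolume attempt 8 s out ("durationBeforeRetry 8s"), consistent with a doubling backoff on repeated failures. A minimal sketch of that pattern, assuming a 500 ms seed and a two-minute cap (both assumptions; the log only shows the delay having reached 8 s):

package main

import (
	"fmt"
	"time"
)

func main() {
	delay := 500 * time.Millisecond // assumed initial delay
	maxDelay := 2 * time.Minute     // assumed upper bound
	for attempt := 1; attempt <= 8; attempt++ {
		fmt.Printf("attempt %d failed: no retries permitted for %v\n", attempt, delay)
		delay *= 2 // doubling backoff: 500ms, 1s, 2s, 4s, 8s, ...
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}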
Feb 03 12:05:26 crc kubenswrapper[4820]: I0203 12:05:26.114629 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 11:04:32.982042593 +0000 UTC
Feb 03 12:05:26 crc kubenswrapper[4820]: I0203 12:05:26.142081 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7vz6k"
Feb 03 12:05:26 crc kubenswrapper[4820]: E0203 12:05:26.142285 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7vz6k" podUID="6351e457-e601-4889-853c-560646bc4b43"
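The certificate_manager records keep the expiration fixed at 2026-02-24 while the rotation deadline jumps on every sync (2025-11-19, then 2025-11-24, then 2025-12-21 here), the signature of a deadline jittered within the certificate's lifetime. A minimal Go sketch of that idea, assuming the deadline is drawn uniformly from the 70-90% point of the lifetime (the window and the issuance date below are assumptions, not values read from the log):

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// rotationDeadline picks a random point in the last 10-30% of the cert's lifetime.
func rotationDeadline(notBefore, notAfter time.Time) time.Time {
	total := notAfter.Sub(notBefore)
	jittered := time.Duration(float64(total) * (0.7 + 0.2*rand.Float64()))
	return notBefore.Add(jittered)
}

func main() {
	notAfter := time.Date(2026, 2, 24, 5, 53, 3, 0, time.UTC) // expiration from the log
	notBefore := notAfter.Add(-365 * 24 * time.Hour)          // assumed issuance time
	for i := 0; i < 3; i++ {
		// Each call lands on a different date, like the deadlines in the records above.
		fmt.Println("rotation deadline:", rotationDeadline(notBefore, notAfter))
	}
}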
pod="openshift-multus/network-metrics-daemon-7vz6k" podUID="6351e457-e601-4889-853c-560646bc4b43" Feb 03 12:05:26 crc kubenswrapper[4820]: I0203 12:05:26.189248 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:26 crc kubenswrapper[4820]: I0203 12:05:26.189308 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:26 crc kubenswrapper[4820]: I0203 12:05:26.189321 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:26 crc kubenswrapper[4820]: I0203 12:05:26.189340 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:26 crc kubenswrapper[4820]: I0203 12:05:26.189352 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:26Z","lastTransitionTime":"2026-02-03T12:05:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:26 crc kubenswrapper[4820]: I0203 12:05:26.293414 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:26 crc kubenswrapper[4820]: I0203 12:05:26.293511 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:26 crc kubenswrapper[4820]: I0203 12:05:26.293535 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:26 crc kubenswrapper[4820]: I0203 12:05:26.293607 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:26 crc kubenswrapper[4820]: I0203 12:05:26.293633 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:26Z","lastTransitionTime":"2026-02-03T12:05:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:05:26 crc kubenswrapper[4820]: I0203 12:05:26.397015 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:26 crc kubenswrapper[4820]: I0203 12:05:26.397101 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:26 crc kubenswrapper[4820]: I0203 12:05:26.397119 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:26 crc kubenswrapper[4820]: I0203 12:05:26.397140 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:26 crc kubenswrapper[4820]: I0203 12:05:26.397186 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:26Z","lastTransitionTime":"2026-02-03T12:05:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:26 crc kubenswrapper[4820]: I0203 12:05:26.499289 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:26 crc kubenswrapper[4820]: I0203 12:05:26.499337 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:26 crc kubenswrapper[4820]: I0203 12:05:26.499349 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:26 crc kubenswrapper[4820]: I0203 12:05:26.499365 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:26 crc kubenswrapper[4820]: I0203 12:05:26.499424 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:26Z","lastTransitionTime":"2026-02-03T12:05:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:26 crc kubenswrapper[4820]: I0203 12:05:26.601506 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:26 crc kubenswrapper[4820]: I0203 12:05:26.601545 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:26 crc kubenswrapper[4820]: I0203 12:05:26.601555 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:26 crc kubenswrapper[4820]: I0203 12:05:26.601571 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:26 crc kubenswrapper[4820]: I0203 12:05:26.601584 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:26Z","lastTransitionTime":"2026-02-03T12:05:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:05:26 crc kubenswrapper[4820]: I0203 12:05:26.703861 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:26 crc kubenswrapper[4820]: I0203 12:05:26.703931 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:26 crc kubenswrapper[4820]: I0203 12:05:26.703944 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:26 crc kubenswrapper[4820]: I0203 12:05:26.703960 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:26 crc kubenswrapper[4820]: I0203 12:05:26.703972 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:26Z","lastTransitionTime":"2026-02-03T12:05:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:26 crc kubenswrapper[4820]: I0203 12:05:26.805946 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:26 crc kubenswrapper[4820]: I0203 12:05:26.805987 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:26 crc kubenswrapper[4820]: I0203 12:05:26.805996 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:26 crc kubenswrapper[4820]: I0203 12:05:26.806010 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:26 crc kubenswrapper[4820]: I0203 12:05:26.806019 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:26Z","lastTransitionTime":"2026-02-03T12:05:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:26 crc kubenswrapper[4820]: I0203 12:05:26.908496 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:26 crc kubenswrapper[4820]: I0203 12:05:26.908547 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:26 crc kubenswrapper[4820]: I0203 12:05:26.908558 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:26 crc kubenswrapper[4820]: I0203 12:05:26.908574 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:26 crc kubenswrapper[4820]: I0203 12:05:26.908585 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:26Z","lastTransitionTime":"2026-02-03T12:05:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Feb 03 12:05:27 crc kubenswrapper[4820]: I0203 12:05:27.115421 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 10:06:40.94050804 +0000 UTC
Feb 03 12:05:27 crc kubenswrapper[4820]: I0203 12:05:27.142190 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 03 12:05:27 crc kubenswrapper[4820]: I0203 12:05:27.142256 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 03 12:05:27 crc kubenswrapper[4820]: I0203 12:05:27.142217 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 03 12:05:27 crc kubenswrapper[4820]: E0203 12:05:27.142385 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 03 12:05:27 crc kubenswrapper[4820]: E0203 12:05:27.142454 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 03 12:05:27 crc kubenswrapper[4820]: E0203 12:05:27.142559 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 03 12:05:28 crc kubenswrapper[4820]: I0203 12:05:28.115959 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 06:58:05.216133242 +0000 UTC
Feb 03 12:05:28 crc kubenswrapper[4820]: I0203 12:05:28.141546 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7vz6k"
Feb 03 12:05:28 crc kubenswrapper[4820]: E0203 12:05:28.141689 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7vz6k" podUID="6351e457-e601-4889-853c-560646bc4b43"
pod="openshift-multus/network-metrics-daemon-7vz6k" podUID="6351e457-e601-4889-853c-560646bc4b43" Feb 03 12:05:28 crc kubenswrapper[4820]: I0203 12:05:28.142615 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:28 crc kubenswrapper[4820]: I0203 12:05:28.142665 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:28 crc kubenswrapper[4820]: I0203 12:05:28.142678 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:28 crc kubenswrapper[4820]: I0203 12:05:28.142697 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:28 crc kubenswrapper[4820]: I0203 12:05:28.142710 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:28Z","lastTransitionTime":"2026-02-03T12:05:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:28 crc kubenswrapper[4820]: I0203 12:05:28.245594 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:28 crc kubenswrapper[4820]: I0203 12:05:28.245633 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:28 crc kubenswrapper[4820]: I0203 12:05:28.245642 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:28 crc kubenswrapper[4820]: I0203 12:05:28.245657 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:28 crc kubenswrapper[4820]: I0203 12:05:28.245665 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:28Z","lastTransitionTime":"2026-02-03T12:05:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:05:28 crc kubenswrapper[4820]: I0203 12:05:28.348756 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:28 crc kubenswrapper[4820]: I0203 12:05:28.348809 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:28 crc kubenswrapper[4820]: I0203 12:05:28.348823 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:28 crc kubenswrapper[4820]: I0203 12:05:28.348845 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:28 crc kubenswrapper[4820]: I0203 12:05:28.348861 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:28Z","lastTransitionTime":"2026-02-03T12:05:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:28 crc kubenswrapper[4820]: I0203 12:05:28.451457 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:28 crc kubenswrapper[4820]: I0203 12:05:28.451531 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:28 crc kubenswrapper[4820]: I0203 12:05:28.451554 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:28 crc kubenswrapper[4820]: I0203 12:05:28.451582 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:28 crc kubenswrapper[4820]: I0203 12:05:28.451606 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:28Z","lastTransitionTime":"2026-02-03T12:05:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:28 crc kubenswrapper[4820]: I0203 12:05:28.553841 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:28 crc kubenswrapper[4820]: I0203 12:05:28.553920 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:28 crc kubenswrapper[4820]: I0203 12:05:28.553935 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:28 crc kubenswrapper[4820]: I0203 12:05:28.553955 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:28 crc kubenswrapper[4820]: I0203 12:05:28.553969 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:28Z","lastTransitionTime":"2026-02-03T12:05:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:05:28 crc kubenswrapper[4820]: I0203 12:05:28.656799 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:28 crc kubenswrapper[4820]: I0203 12:05:28.656878 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:28 crc kubenswrapper[4820]: I0203 12:05:28.656918 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:28 crc kubenswrapper[4820]: I0203 12:05:28.656994 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:28 crc kubenswrapper[4820]: I0203 12:05:28.657011 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:28Z","lastTransitionTime":"2026-02-03T12:05:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:28 crc kubenswrapper[4820]: I0203 12:05:28.760354 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:28 crc kubenswrapper[4820]: I0203 12:05:28.760400 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:28 crc kubenswrapper[4820]: I0203 12:05:28.760414 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:28 crc kubenswrapper[4820]: I0203 12:05:28.760437 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:28 crc kubenswrapper[4820]: I0203 12:05:28.760452 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:28Z","lastTransitionTime":"2026-02-03T12:05:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:28 crc kubenswrapper[4820]: I0203 12:05:28.862554 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:28 crc kubenswrapper[4820]: I0203 12:05:28.862589 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:28 crc kubenswrapper[4820]: I0203 12:05:28.862597 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:28 crc kubenswrapper[4820]: I0203 12:05:28.862612 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:28 crc kubenswrapper[4820]: I0203 12:05:28.862623 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:28Z","lastTransitionTime":"2026-02-03T12:05:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:05:28 crc kubenswrapper[4820]: I0203 12:05:28.964613 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:28 crc kubenswrapper[4820]: I0203 12:05:28.964680 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:28 crc kubenswrapper[4820]: I0203 12:05:28.964690 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:28 crc kubenswrapper[4820]: I0203 12:05:28.964705 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:28 crc kubenswrapper[4820]: I0203 12:05:28.964716 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:28Z","lastTransitionTime":"2026-02-03T12:05:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:29 crc kubenswrapper[4820]: I0203 12:05:29.067326 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:29 crc kubenswrapper[4820]: I0203 12:05:29.067384 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:29 crc kubenswrapper[4820]: I0203 12:05:29.067399 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:29 crc kubenswrapper[4820]: I0203 12:05:29.067421 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:29 crc kubenswrapper[4820]: I0203 12:05:29.067434 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:29Z","lastTransitionTime":"2026-02-03T12:05:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:29 crc kubenswrapper[4820]: I0203 12:05:29.116184 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 01:57:39.853666724 +0000 UTC Feb 03 12:05:29 crc kubenswrapper[4820]: I0203 12:05:29.141614 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 12:05:29 crc kubenswrapper[4820]: E0203 12:05:29.141735 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 12:05:29 crc kubenswrapper[4820]: I0203 12:05:29.141628 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 12:05:29 crc kubenswrapper[4820]: I0203 12:05:29.141936 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 12:05:29 crc kubenswrapper[4820]: E0203 12:05:29.142122 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 12:05:29 crc kubenswrapper[4820]: E0203 12:05:29.142277 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 12:05:29 crc kubenswrapper[4820]: I0203 12:05:29.170260 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:29 crc kubenswrapper[4820]: I0203 12:05:29.170295 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:29 crc kubenswrapper[4820]: I0203 12:05:29.170303 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:29 crc kubenswrapper[4820]: I0203 12:05:29.170316 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:29 crc kubenswrapper[4820]: I0203 12:05:29.170325 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:29Z","lastTransitionTime":"2026-02-03T12:05:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:05:29 crc kubenswrapper[4820]: I0203 12:05:29.272718 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:29 crc kubenswrapper[4820]: I0203 12:05:29.272772 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:29 crc kubenswrapper[4820]: I0203 12:05:29.272780 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:29 crc kubenswrapper[4820]: I0203 12:05:29.272795 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:29 crc kubenswrapper[4820]: I0203 12:05:29.272805 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:29Z","lastTransitionTime":"2026-02-03T12:05:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:29 crc kubenswrapper[4820]: I0203 12:05:29.315430 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:29 crc kubenswrapper[4820]: I0203 12:05:29.315484 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:29 crc kubenswrapper[4820]: I0203 12:05:29.315493 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:29 crc kubenswrapper[4820]: I0203 12:05:29.315506 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:29 crc kubenswrapper[4820]: I0203 12:05:29.315516 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:29Z","lastTransitionTime":"2026-02-03T12:05:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:05:29 crc kubenswrapper[4820]: E0203 12:05:29.334306 4820 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:05:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:05:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:05:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:05:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"83c4bcff-fd36-4e8a-96f0-3320ea01106a\\\",\\\"systemUUID\\\":\\\"a4221fcb-5776-4539-8cb5-9da3bff4d7a8\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:29Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:29 crc kubenswrapper[4820]: I0203 12:05:29.338305 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:29 crc kubenswrapper[4820]: I0203 12:05:29.338341 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 03 12:05:29 crc kubenswrapper[4820]: I0203 12:05:29.338352 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:29 crc kubenswrapper[4820]: I0203 12:05:29.338369 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:29 crc kubenswrapper[4820]: I0203 12:05:29.338380 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:29Z","lastTransitionTime":"2026-02-03T12:05:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:29 crc kubenswrapper[4820]: E0203 12:05:29.354574 4820 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:05:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:05:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:05:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:05:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"83c4bcff-fd36-4e8a-96f0-3320ea01106a\\\",\\\"systemUUID\\\":\\\"a4221fcb-5776-4539-8cb5-9da3bff4d7a8\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:29Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:29 crc kubenswrapper[4820]: I0203 12:05:29.357842 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:29 crc kubenswrapper[4820]: I0203 12:05:29.357941 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 03 12:05:29 crc kubenswrapper[4820]: I0203 12:05:29.357952 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:29 crc kubenswrapper[4820]: I0203 12:05:29.357968 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:29 crc kubenswrapper[4820]: I0203 12:05:29.357979 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:29Z","lastTransitionTime":"2026-02-03T12:05:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:29 crc kubenswrapper[4820]: E0203 12:05:29.372229 4820 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:05:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:05:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:05:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:05:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"83c4bcff-fd36-4e8a-96f0-3320ea01106a\\\",\\\"systemUUID\\\":\\\"a4221fcb-5776-4539-8cb5-9da3bff4d7a8\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:29Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:29 crc kubenswrapper[4820]: I0203 12:05:29.375340 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:29 crc kubenswrapper[4820]: I0203 12:05:29.375378 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 03 12:05:29 crc kubenswrapper[4820]: I0203 12:05:29.375390 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:29 crc kubenswrapper[4820]: I0203 12:05:29.375404 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:29 crc kubenswrapper[4820]: I0203 12:05:29.375414 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:29Z","lastTransitionTime":"2026-02-03T12:05:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:29 crc kubenswrapper[4820]: E0203 12:05:29.386472 4820 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:05:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:05:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:05:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:05:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"83c4bcff-fd36-4e8a-96f0-3320ea01106a\\\",\\\"systemUUID\\\":\\\"a4221fcb-5776-4539-8cb5-9da3bff4d7a8\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:29Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:29 crc kubenswrapper[4820]: I0203 12:05:29.389921 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:29 crc kubenswrapper[4820]: I0203 12:05:29.389992 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 03 12:05:29 crc kubenswrapper[4820]: I0203 12:05:29.390009 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:29 crc kubenswrapper[4820]: I0203 12:05:29.390034 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:29 crc kubenswrapper[4820]: I0203 12:05:29.390052 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:29Z","lastTransitionTime":"2026-02-03T12:05:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:29 crc kubenswrapper[4820]: E0203 12:05:29.406485 4820 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:05:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:05:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:05:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:05:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"83c4bcff-fd36-4e8a-96f0-3320ea01106a\\\",\\\"systemUUID\\\":\\\"a4221fcb-5776-4539-8cb5-9da3bff4d7a8\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:29Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:29 crc kubenswrapper[4820]: E0203 12:05:29.406731 4820 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 03 12:05:29 crc kubenswrapper[4820]: I0203 12:05:29.408417 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 03 12:05:29 crc kubenswrapper[4820]: I0203 12:05:29.408467 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:29 crc kubenswrapper[4820]: I0203 12:05:29.408480 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:29 crc kubenswrapper[4820]: I0203 12:05:29.408502 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:29 crc kubenswrapper[4820]: I0203 12:05:29.408516 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:29Z","lastTransitionTime":"2026-02-03T12:05:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:29 crc kubenswrapper[4820]: I0203 12:05:29.510971 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:29 crc kubenswrapper[4820]: I0203 12:05:29.511021 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:29 crc kubenswrapper[4820]: I0203 12:05:29.511032 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:29 crc kubenswrapper[4820]: I0203 12:05:29.511048 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:29 crc kubenswrapper[4820]: I0203 12:05:29.511059 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:29Z","lastTransitionTime":"2026-02-03T12:05:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:29 crc kubenswrapper[4820]: I0203 12:05:29.613436 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:29 crc kubenswrapper[4820]: I0203 12:05:29.613481 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:29 crc kubenswrapper[4820]: I0203 12:05:29.613496 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:29 crc kubenswrapper[4820]: I0203 12:05:29.613512 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:29 crc kubenswrapper[4820]: I0203 12:05:29.613524 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:29Z","lastTransitionTime":"2026-02-03T12:05:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:05:29 crc kubenswrapper[4820]: I0203 12:05:29.716227 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:29 crc kubenswrapper[4820]: I0203 12:05:29.716279 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:29 crc kubenswrapper[4820]: I0203 12:05:29.716292 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:29 crc kubenswrapper[4820]: I0203 12:05:29.716310 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:29 crc kubenswrapper[4820]: I0203 12:05:29.716321 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:29Z","lastTransitionTime":"2026-02-03T12:05:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:29 crc kubenswrapper[4820]: I0203 12:05:29.818796 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:29 crc kubenswrapper[4820]: I0203 12:05:29.818844 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:29 crc kubenswrapper[4820]: I0203 12:05:29.818854 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:29 crc kubenswrapper[4820]: I0203 12:05:29.818869 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:29 crc kubenswrapper[4820]: I0203 12:05:29.818880 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:29Z","lastTransitionTime":"2026-02-03T12:05:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:29 crc kubenswrapper[4820]: I0203 12:05:29.922279 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:29 crc kubenswrapper[4820]: I0203 12:05:29.922348 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:29 crc kubenswrapper[4820]: I0203 12:05:29.922364 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:29 crc kubenswrapper[4820]: I0203 12:05:29.922392 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:29 crc kubenswrapper[4820]: I0203 12:05:29.922409 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:29Z","lastTransitionTime":"2026-02-03T12:05:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:05:30 crc kubenswrapper[4820]: I0203 12:05:30.024774 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:30 crc kubenswrapper[4820]: I0203 12:05:30.024840 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:30 crc kubenswrapper[4820]: I0203 12:05:30.024857 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:30 crc kubenswrapper[4820]: I0203 12:05:30.024879 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:30 crc kubenswrapper[4820]: I0203 12:05:30.024926 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:30Z","lastTransitionTime":"2026-02-03T12:05:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:30 crc kubenswrapper[4820]: I0203 12:05:30.116939 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 04:35:18.241952635 +0000 UTC Feb 03 12:05:30 crc kubenswrapper[4820]: I0203 12:05:30.126974 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:30 crc kubenswrapper[4820]: I0203 12:05:30.127016 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:30 crc kubenswrapper[4820]: I0203 12:05:30.127030 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:30 crc kubenswrapper[4820]: I0203 12:05:30.127045 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:30 crc kubenswrapper[4820]: I0203 12:05:30.127258 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:30Z","lastTransitionTime":"2026-02-03T12:05:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:30 crc kubenswrapper[4820]: I0203 12:05:30.142261 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7vz6k" Feb 03 12:05:30 crc kubenswrapper[4820]: E0203 12:05:30.142403 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-7vz6k" podUID="6351e457-e601-4889-853c-560646bc4b43" Feb 03 12:05:30 crc kubenswrapper[4820]: I0203 12:05:30.229709 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:30 crc kubenswrapper[4820]: I0203 12:05:30.229749 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:30 crc kubenswrapper[4820]: I0203 12:05:30.229758 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:30 crc kubenswrapper[4820]: I0203 12:05:30.229775 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:30 crc kubenswrapper[4820]: I0203 12:05:30.229786 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:30Z","lastTransitionTime":"2026-02-03T12:05:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:30 crc kubenswrapper[4820]: I0203 12:05:30.332244 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:30 crc kubenswrapper[4820]: I0203 12:05:30.332289 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:30 crc kubenswrapper[4820]: I0203 12:05:30.332300 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:30 crc kubenswrapper[4820]: I0203 12:05:30.332317 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:30 crc kubenswrapper[4820]: I0203 12:05:30.332328 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:30Z","lastTransitionTime":"2026-02-03T12:05:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:05:30 crc kubenswrapper[4820]: I0203 12:05:30.435158 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:30 crc kubenswrapper[4820]: I0203 12:05:30.435209 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:30 crc kubenswrapper[4820]: I0203 12:05:30.435225 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:30 crc kubenswrapper[4820]: I0203 12:05:30.435246 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:30 crc kubenswrapper[4820]: I0203 12:05:30.435256 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:30Z","lastTransitionTime":"2026-02-03T12:05:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:30 crc kubenswrapper[4820]: I0203 12:05:30.446095 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 03 12:05:30 crc kubenswrapper[4820]: I0203 12:05:30.454387 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Feb 03 12:05:30 crc kubenswrapper[4820]: I0203 12:05:30.459205 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:30Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:30 crc kubenswrapper[4820]: I0203 12:05:30.469746 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dac06e9d9db1822ef050113f787ff46db678f4916bc9817ac03f61a509a61b6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:30Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:30 crc kubenswrapper[4820]: I0203 12:05:30.479669 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:30Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:30 crc kubenswrapper[4820]: I0203 12:05:30.490037 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:30Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:30 crc kubenswrapper[4820]: I0203 12:05:30.501123 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p5mx8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe0bc53e-6abb-4194-ae3d-109a4fd80372\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9eca081cbf8145867bf5df6d6bacdb9746e546e54aa81874fbfc2927958bc1b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrchn\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p5mx8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:30Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:30 crc kubenswrapper[4820]: I0203 12:05:30.513046 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c02def6-29f2-448e-80ec-0c8ee290f053\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1643e498a576d565172492302a641bc81eae658b1611df707da1d833ea2c84f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xjx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e3612cc50c75efc4721ade62e3caf8a6cbcb41c49183b50da3639d7fce97b9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serv
iceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xjx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qj7xr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:30Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:30 crc kubenswrapper[4820]: I0203 12:05:30.525445 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dkfwm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6da6dd5-2847-482b-adc1-d82ead0af3e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b50e93f1825fc82dc2e0fc70a417d2e4412db367319656bfcea8fa9daf9fe152\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mo
untPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpcc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dkfwm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:30Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:30 crc kubenswrapper[4820]: I0203 12:05:30.537931 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:30 crc kubenswrapper[4820]: I0203 12:05:30.537967 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:30 crc kubenswrapper[4820]: I0203 12:05:30.537978 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:30 crc kubenswrapper[4820]: I0203 12:05:30.537995 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:30 crc kubenswrapper[4820]: I0203 12:05:30.538008 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:30Z","lastTransitionTime":"2026-02-03T12:05:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:05:30 crc kubenswrapper[4820]: I0203 12:05:30.538155 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d140e30-6304-49be-a1a3-2d6b23f9aef3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48852428653b274f524077be71e20c4271b365d821b8f3235df2d2b41a9e8af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09641bd2ac341aeecdba4211412070d2ee33f42eb670fe75ab88b9a58931c289\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74f9a4c19a0ffe2d3fc0a77b57873721a29f0a1f70d578ba2f68bcceb68ccce2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://672f46bf03b8e74586edeb641cb45d9d7d8d4389c0b2cbb7a8eec43d6b4d801b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38554e01896341101d88815c0d8cd45a0114da19790e02a53b5ba3a91ee70e37\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"message\\\":\\\"le observer\\\\nW0203 12:05:02.811560 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0203 12:05:02.811703 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 12:05:02.814694 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3622958325/tls.crt::/tmp/serving-cert-3622958325/tls.key\\\\\\\"\\\\nI0203 12:05:03.097815 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0203 12:05:03.106823 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0203 12:05:03.106852 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0203 12:05:03.106874 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0203 12:05:03.106880 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0203 12:05:03.113994 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0203 12:05:03.114027 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0203 12:05:03.114019 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0203 12:05:03.114034 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 12:05:03.114043 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0203 12:05:03.114053 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0203 12:05:03.114057 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0203 12:05:03.114061 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0203 12:05:03.116858 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60992833ab89aebd2446f960ab6ae3565f9a17f4e86921d3b2bc68adc9704640\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b34fc810cf94c235cea364cd2441c73f56597b91b0a19b8e02498db5895a3f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b34fc810cf94c235cea364cd2441c73f56597b91b0a19b8e02498db5895a3f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:04:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:04:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:30Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:30 crc kubenswrapper[4820]: I0203 12:05:30.552651 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0a8e1f86d24e8f09d6250cc9003db189b705c0285fbaab40a8f1a4ac4793137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:30Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:30 crc kubenswrapper[4820]: I0203 12:05:30.571274 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-75mwm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf99e305-aa5b-4171-94f6-1e64f20414dd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c998566abb62f49bb7527ea2c4b023c87a9e3db89ff5fa25b68896424924ea76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5b0072b2eccb422316702a70fb54f21fcd020e1627a625d1a7c63d46b71e48c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e8e47b3a2a50b4eccff391ccb45bf979a5fe46983b014cb03f465db928172ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0defe98bc51b1f567900f2d90138ceec07b4cae414fd3b448df503dba9a9460f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a67f31bdc8d6710cfbca063e2a811d4fbace42a9d3c3a53fba0f47c30ee00e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8b54903e9478f26cfa1770a0e06298325e9b9f198a3aeaa8ed6dcc192c0cdb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://555c29829adf28ac93542af1cb8ab28456a9414b
416a9913602ee1dbd0bc27a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://555c29829adf28ac93542af1cb8ab28456a9414b416a9913602ee1dbd0bc27a6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T12:05:16Z\\\",\\\"message\\\":\\\" Retry successful for *v1.Pod openshift-machine-config-operator/machine-config-daemon-qj7xr after 0 failed attempt(s)\\\\nI0203 12:05:16.132830 6249 services_controller.go:360] Finished syncing service check-endpoints on namespace openshift-apiserver for network=default : 997.232µs\\\\nI0203 12:05:16.132706 6249 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0203 12:05:16.132841 6249 services_controller.go:356] Processing sync for service openshift-image-registry/image-registry for network=default\\\\nI0203 12:05:16.132752 6249 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-oauth-apiserver/api]} name:Service_openshift-oauth-apiserver/api_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.140:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {fe46cb89-4e54-4175-a112-1c5224cd299e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0203 12:05:16.132849 6249 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:15Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-75mwm_openshift-ovn-kubernetes(cf99e305-aa5b-4171-94f6-1e64f20414dd)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c9f050db727793c24f2f8bf3b387813e313f3b764be003c6849947f98a61d4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://707b3fd62e0f11f4f22464594158f7f1bbcbb9b37c1e49c70e2a0d1a6b5b07c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://707b3fd62e0f11f4f22464594158f7f1bbcbb9b37c1e49c70e2a0d1a6b5b07c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-75mwm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:30Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:30 crc kubenswrapper[4820]: I0203 12:05:30.581018 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-z8xrk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1cca669-281b-4756-8da8-3860684d3410\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09a38521ccb70b0b152fc51267f6d38cca610e2e8c9d12915482118f3a431b22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dxwqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:09Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-z8xrk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:30Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:30 crc kubenswrapper[4820]: I0203 12:05:30.597575 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a4e0669-c148-4af8-99eb-1de50da6d574\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d6f3a37b440a1db10c04386289033a0deab673f71efbdfba81597baf6ce5e86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fec48b5ddf675fb07150ebb88962534caead99f26403cf8ab51c17e2c6c09bf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"co
ntainerID\\\":\\\"cri-o://3a4c1209e1301b6e55cd4ea8bb06678ee4c7657c163a610f0cc2dd25f2cb3e61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75e1f7cf1c2120c222793819705a3b465a387c2aaf33fe146fd46d9c76520aea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12446f64d4189185cf6fe19d38637e090482ef8da5fb5e68600cc00ba37bc2a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bb0a6b382c36acc46be9258e93be71317c26777d770cde9098402054303947c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bb0a6b382c36acc46be9258e93be71317c26777d770cde9098402054303947c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:04:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19153b79978b8e8cc53a7ab9cb95a70
c9870c6b0babac711bce13809609f6ac5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19153b79978b8e8cc53a7ab9cb95a70c9870c6b0babac711bce13809609f6ac5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e75ec4662cf007f64da3ecaaff0cb4c45aa88746ff717403092583356953526\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e75ec4662cf007f64da3ecaaff0cb4c45aa88746ff717403092583356953526\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:04:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:04:43Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:30Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:30 crc kubenswrapper[4820]: I0203 12:05:30.614608 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-b5qz9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d93ec7bc-4029-44a4-894d-03eff1388683\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8c2ef0f942d970aadc1837e5ab6eac2bdee9ab81c7b852514fc142796328a62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b171b1b37fa4899684005dd60f249e4cdc2ac58cb12658abe09d6d17007f3002\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b171b1b37fa4899684005dd60f249e4cdc2ac58cb12658abe09d6d17007f3002\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0a45a325e96479b1bb5913e1a78221c3b88c62fc434441e03d6b2e9ed476700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0a45a325e96479b1bb5913e1a78221c3b88c62fc434441e03d6b2e9ed476700\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a25830d512e5d0dbbfa4775f85af51e80915adca718997b34e926220ff9377d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a25830d512e5d0dbbfa4775f85af51e80915adca718997b34e926220ff9377d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff3da1549bd61df7cc473a1a0a17f147b940c3f88f0b72828357ea7982406088\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff3da1549bd61df7cc473a1a0a17f147b940c3f88f0b72828357ea7982406088\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a1698d43c6b74e8ebb9ff13da372c4025ac2637c1f5a0376e4a998e7bf20359\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a1698d43c6b74e8ebb9ff13da372c4025ac2637c1f5a0376e4a998e7bf20359\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8af3494a09f6242fa8fab8cba8b3db5476625376c0f8f382e2140ca27ef329f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8af3494a09f6242fa8fab8cba8b3db5476625376c0f8f382e2140ca27ef329f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-b5qz9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:30Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:30 crc kubenswrapper[4820]: I0203 12:05:30.640426 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:30 crc kubenswrapper[4820]: I0203 12:05:30.640474 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:30 crc 
kubenswrapper[4820]: I0203 12:05:30.640485 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:30 crc kubenswrapper[4820]: I0203 12:05:30.640504 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:30 crc kubenswrapper[4820]: I0203 12:05:30.640515 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:30Z","lastTransitionTime":"2026-02-03T12:05:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:30 crc kubenswrapper[4820]: I0203 12:05:30.648522 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8bbpp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d8005fd9-8efc-4707-a3dd-60cd20607d42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9020e42fc6a5c6c681b1854e93cc1e88f44b7582e28fe1ee40900bb2d39b4008\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xg9sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0333755fe31b49cea4cbfef987450b5800b3330c65cf84070036eea4194c13a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:0
5:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xg9sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8bbpp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:30Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:30 crc kubenswrapper[4820]: I0203 12:05:30.665558 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7vz6k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6351e457-e601-4889-853c-560646bc4b43\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbjk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbjk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:18Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7vz6k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:30Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:30 crc kubenswrapper[4820]: I0203 12:05:30.680574 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cabb5864-1cc1-4b08-aed3-4eeee9e85bf9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d070e438f7f207c929ca6d110046fa8d320d1a7d97ef22a0e5e1372ff626729d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06e5365791987f1586b93dfe4db8bfe19415d8c400d6c0d75f312cc0f62f7661\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb5d088ef2ec1b90ef2d7af53d61c2789516bf4ef02e9ef06e1dc0c93326dcc9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2783e3045fb1f1e8da75e1a83f88bc0f24ce47eab79fb2b31b794c10f201588\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:04:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:30Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:30 crc kubenswrapper[4820]: I0203 12:05:30.695000 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bbd1daf12ccc77f2658cac173f36c9af6a1c64072fc6bf13302a460f71f05e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5236ae101bb18d9bd1b8d8ac09705f96e5fd78d742e5174e88072570a7448ee8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:30Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:30 crc kubenswrapper[4820]: I0203 12:05:30.743083 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:30 crc kubenswrapper[4820]: I0203 12:05:30.743114 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:30 crc kubenswrapper[4820]: I0203 12:05:30.743124 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:30 crc kubenswrapper[4820]: I0203 12:05:30.743137 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:30 crc kubenswrapper[4820]: I0203 12:05:30.743145 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:30Z","lastTransitionTime":"2026-02-03T12:05:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:30 crc kubenswrapper[4820]: I0203 12:05:30.845583 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:30 crc kubenswrapper[4820]: I0203 12:05:30.845619 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:30 crc kubenswrapper[4820]: I0203 12:05:30.845627 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:30 crc kubenswrapper[4820]: I0203 12:05:30.845642 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:30 crc kubenswrapper[4820]: I0203 12:05:30.845651 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:30Z","lastTransitionTime":"2026-02-03T12:05:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 03 12:05:30 crc kubenswrapper[4820]: I0203 12:05:30.947750 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 12:05:30 crc kubenswrapper[4820]: I0203 12:05:30.947796 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 12:05:30 crc kubenswrapper[4820]: I0203 12:05:30.947806 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 12:05:30 crc kubenswrapper[4820]: I0203 12:05:30.947819 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 12:05:30 crc kubenswrapper[4820]: I0203 12:05:30.947828 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:30Z","lastTransitionTime":"2026-02-03T12:05:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 12:05:31 crc kubenswrapper[4820]: I0203 12:05:31.051007 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 12:05:31 crc kubenswrapper[4820]: I0203 12:05:31.051051 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 12:05:31 crc kubenswrapper[4820]: I0203 12:05:31.051061 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 12:05:31 crc kubenswrapper[4820]: I0203 12:05:31.051075 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 12:05:31 crc kubenswrapper[4820]: I0203 12:05:31.051085 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:31Z","lastTransitionTime":"2026-02-03T12:05:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 12:05:31 crc kubenswrapper[4820]: I0203 12:05:31.117258 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 19:24:12.778086136 +0000 UTC
Feb 03 12:05:31 crc kubenswrapper[4820]: I0203 12:05:31.142223 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 03 12:05:31 crc kubenswrapper[4820]: I0203 12:05:31.142288 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 03 12:05:31 crc kubenswrapper[4820]: I0203 12:05:31.142321 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 03 12:05:31 crc kubenswrapper[4820]: E0203 12:05:31.142649 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 03 12:05:31 crc kubenswrapper[4820]: E0203 12:05:31.142877 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 03 12:05:31 crc kubenswrapper[4820]: I0203 12:05:31.142961 4820 scope.go:117] "RemoveContainer" containerID="555c29829adf28ac93542af1cb8ab28456a9414b416a9913602ee1dbd0bc27a6"
Feb 03 12:05:31 crc kubenswrapper[4820]: E0203 12:05:31.142942 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 03 12:05:31 crc kubenswrapper[4820]: I0203 12:05:31.153592 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 12:05:31 crc kubenswrapper[4820]: I0203 12:05:31.153628 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 12:05:31 crc kubenswrapper[4820]: I0203 12:05:31.153640 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 12:05:31 crc kubenswrapper[4820]: I0203 12:05:31.153658 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 12:05:31 crc kubenswrapper[4820]: I0203 12:05:31.153671 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:31Z","lastTransitionTime":"2026-02-03T12:05:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 12:05:31 crc kubenswrapper[4820]: I0203 12:05:31.256881 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 12:05:31 crc kubenswrapper[4820]: I0203 12:05:31.257366 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 12:05:31 crc kubenswrapper[4820]: I0203 12:05:31.257392 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 12:05:31 crc kubenswrapper[4820]: I0203 12:05:31.257421 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 12:05:31 crc kubenswrapper[4820]: I0203 12:05:31.257443 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:31Z","lastTransitionTime":"2026-02-03T12:05:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 12:05:31 crc kubenswrapper[4820]: I0203 12:05:31.359451 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 12:05:31 crc kubenswrapper[4820]: I0203 12:05:31.359510 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 12:05:31 crc kubenswrapper[4820]: I0203 12:05:31.359530 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 12:05:31 crc kubenswrapper[4820]: I0203 12:05:31.359588 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 12:05:31 crc kubenswrapper[4820]: I0203 12:05:31.359630 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:31Z","lastTransitionTime":"2026-02-03T12:05:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 12:05:31 crc kubenswrapper[4820]: I0203 12:05:31.462831 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 12:05:31 crc kubenswrapper[4820]: I0203 12:05:31.462873 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 12:05:31 crc kubenswrapper[4820]: I0203 12:05:31.462906 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 12:05:31 crc kubenswrapper[4820]: I0203 12:05:31.462926 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 12:05:31 crc kubenswrapper[4820]: I0203 12:05:31.462939 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:31Z","lastTransitionTime":"2026-02-03T12:05:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 12:05:31 crc kubenswrapper[4820]: I0203 12:05:31.564967 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 12:05:31 crc kubenswrapper[4820]: I0203 12:05:31.565018 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 12:05:31 crc kubenswrapper[4820]: I0203 12:05:31.565040 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 12:05:31 crc kubenswrapper[4820]: I0203 12:05:31.565061 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 12:05:31 crc kubenswrapper[4820]: I0203 12:05:31.565076 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:31Z","lastTransitionTime":"2026-02-03T12:05:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 12:05:31 crc kubenswrapper[4820]: I0203 12:05:31.667584 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 12:05:31 crc kubenswrapper[4820]: I0203 12:05:31.667631 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 12:05:31 crc kubenswrapper[4820]: I0203 12:05:31.667645 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 12:05:31 crc kubenswrapper[4820]: I0203 12:05:31.667661 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 12:05:31 crc kubenswrapper[4820]: I0203 12:05:31.667672 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:31Z","lastTransitionTime":"2026-02-03T12:05:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:05:31 crc kubenswrapper[4820]: I0203 12:05:31.763794 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 03 12:05:31 crc kubenswrapper[4820]: I0203 12:05:31.769813 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:31 crc kubenswrapper[4820]: I0203 12:05:31.769836 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:31 crc kubenswrapper[4820]: I0203 12:05:31.769844 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:31 crc kubenswrapper[4820]: I0203 12:05:31.769858 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:31 crc kubenswrapper[4820]: I0203 12:05:31.769867 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:31Z","lastTransitionTime":"2026-02-03T12:05:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:31 crc kubenswrapper[4820]: I0203 12:05:31.782969 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:31Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:31 crc kubenswrapper[4820]: I0203 12:05:31.796807 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dac06e9d9db1822ef050113f787ff46db678f4916bc9817ac03f61a509a61b6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:31Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:31 crc kubenswrapper[4820]: I0203 12:05:31.807105 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p5mx8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe0bc53e-6abb-4194-ae3d-109a4fd80372\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9eca081cbf8145867bf5df6d6bacdb9746e546e54aa81874fbfc2927958bc1b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrchn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p5mx8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:31Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:31 crc kubenswrapper[4820]: I0203 12:05:31.817419 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c02def6-29f2-448e-80ec-0c8ee290f053\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1643e498a576d565172492302a641bc81eae658b1611df707da1d833ea2c84f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xjx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e3612cc50c75efc4721ade62e3caf8a6cbcb41c49183b50da3639d7fce97b9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xjx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qj7xr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:31Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:31 crc kubenswrapper[4820]: I0203 12:05:31.828614 4820 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-dkfwm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6da6dd5-2847-482b-adc1-d82ead0af3e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b50e93f1825fc82dc2e0fc70a417d2e4412db367319656bfcea8fa9daf9fe152\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpcc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:05Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-dkfwm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:31Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:31 crc kubenswrapper[4820]: I0203 12:05:31.840803 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d140e30-6304-49be-a1a3-2d6b23f9aef3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48852428653b274f524077be71e20c4271b365d821b8f3235df2d2b41a9e8af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09641bd2ac341aeecdba4211412070d2ee33f42eb670fe75ab88b9a58931c289\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74f9a4c19a0ffe2d3fc0a77b57873721a29f0a1f70d578ba2f68bcceb68ccce2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-api
server-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://672f46bf03b8e74586edeb641cb45d9d7d8d4389c0b2cbb7a8eec43d6b4d801b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38554e01896341101d88815c0d8cd45a0114da19790e02a53b5ba3a91ee70e37\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"message\\\":\\\"le observer\\\\nW0203 12:05:02.811560 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0203 12:05:02.811703 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 12:05:02.814694 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3622958325/tls.crt::/tmp/serving-cert-3622958325/tls.key\\\\\\\"\\\\nI0203 12:05:03.097815 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0203 12:05:03.106823 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0203 12:05:03.106852 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0203 12:05:03.106874 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0203 12:05:03.106880 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0203 12:05:03.113994 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0203 12:05:03.114027 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0203 12:05:03.114019 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0203 12:05:03.114034 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 12:05:03.114043 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0203 12:05:03.114053 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0203 12:05:03.114057 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0203 12:05:03.114061 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0203 12:05:03.116858 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60992833ab89aebd2446f960ab6ae3565f9a17f4e86921d3b2bc68adc9704640\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b34fc810cf94c235cea364cd2441c73f56597b91b0a19b8e02498db5895a3f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b34fc810cf94c235cea364cd2441c73f56597b91b0a19b8e02498db5895a3f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:04:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:04:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:31Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:31 crc kubenswrapper[4820]: I0203 12:05:31.850672 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"76aa9d47-b80a-4058-8f92-4cdf0c41df48\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47438e7dc8d1eec9a8a061c1b6141e39f4c805dfafd1175c07235f7b9425719b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ade7b4944a0da0f0d18c5dd5a92ef407d6beac17985f1429d57672ff7e9beade\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://555f489476174f52e5f5d252e3f3f86f65e1593e641204d863bebdaa80b49bbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f84ddcad54789a48e27d495da8bf422e2349c09b477421d0323aae520546826\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f84ddcad54789a48e27d495da8bf422e2349c09b477421d0323aae520546826\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:04:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:04:43Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:31Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:31 crc kubenswrapper[4820]: I0203 12:05:31.862625 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0a8e1f86d24e8f09d6250cc9003db189b705c0285fbaab40a8f1a4ac4793137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:31Z is after 
2025-08-24T17:21:41Z" Feb 03 12:05:31 crc kubenswrapper[4820]: I0203 12:05:31.872503 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:31 crc kubenswrapper[4820]: I0203 12:05:31.872549 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:31 crc kubenswrapper[4820]: I0203 12:05:31.872561 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:31 crc kubenswrapper[4820]: I0203 12:05:31.872578 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:31 crc kubenswrapper[4820]: I0203 12:05:31.872589 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:31Z","lastTransitionTime":"2026-02-03T12:05:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:31 crc kubenswrapper[4820]: I0203 12:05:31.874232 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:31Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:31 crc kubenswrapper[4820]: I0203 12:05:31.884089 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:31Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:31 crc kubenswrapper[4820]: I0203 12:05:31.903816 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-75mwm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf99e305-aa5b-4171-94f6-1e64f20414dd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c998566abb62f49bb7527ea2c4b023c87a9e3db89ff5fa25b68896424924ea76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5b0072b2eccb422316702a70fb54f21fcd020e1627a625d1a7c63d46b71e48c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e8e47b3a2a50b4eccff391ccb45bf979a5fe46983b014cb03f465db928172ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0defe98bc51b1f567900f2d90138ceec07b4cae414fd3b448df503dba9a9460f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a67f31bdc8d6710cfbca063e2a811d4fbace42a9d3c3a53fba0f47c30ee00e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8b54903e9478f26cfa1770a0e06298325e9b9f198a3aeaa8ed6dcc192c0cdb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://555c29829adf28ac93542af1cb8ab28456a9414b
416a9913602ee1dbd0bc27a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://555c29829adf28ac93542af1cb8ab28456a9414b416a9913602ee1dbd0bc27a6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T12:05:16Z\\\",\\\"message\\\":\\\" Retry successful for *v1.Pod openshift-machine-config-operator/machine-config-daemon-qj7xr after 0 failed attempt(s)\\\\nI0203 12:05:16.132830 6249 services_controller.go:360] Finished syncing service check-endpoints on namespace openshift-apiserver for network=default : 997.232µs\\\\nI0203 12:05:16.132706 6249 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0203 12:05:16.132841 6249 services_controller.go:356] Processing sync for service openshift-image-registry/image-registry for network=default\\\\nI0203 12:05:16.132752 6249 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-oauth-apiserver/api]} name:Service_openshift-oauth-apiserver/api_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.140:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {fe46cb89-4e54-4175-a112-1c5224cd299e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0203 12:05:16.132849 6249 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:15Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-75mwm_openshift-ovn-kubernetes(cf99e305-aa5b-4171-94f6-1e64f20414dd)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c9f050db727793c24f2f8bf3b387813e313f3b764be003c6849947f98a61d4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://707b3fd62e0f11f4f22464594158f7f1bbcbb9b37c1e49c70e2a0d1a6b5b07c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://707b3fd62e0f11f4f22464594158f7f1bbcbb9b37c1e49c70e2a0d1a6b5b07c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-75mwm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:31Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:31 crc kubenswrapper[4820]: I0203 12:05:31.913283 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-z8xrk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1cca669-281b-4756-8da8-3860684d3410\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09a38521ccb70b0b152fc51267f6d38cca610e2e8c9d12915482118f3a431b22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dxwqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:09Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-z8xrk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:31Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:31 crc kubenswrapper[4820]: I0203 12:05:31.929815 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a4e0669-c148-4af8-99eb-1de50da6d574\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d6f3a37b440a1db10c04386289033a0deab673f71efbdfba81597baf6ce5e86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fec48b5ddf675fb07150ebb88962534caead99f26403cf8ab51c17e2c6c09bf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"co
ntainerID\\\":\\\"cri-o://3a4c1209e1301b6e55cd4ea8bb06678ee4c7657c163a610f0cc2dd25f2cb3e61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75e1f7cf1c2120c222793819705a3b465a387c2aaf33fe146fd46d9c76520aea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12446f64d4189185cf6fe19d38637e090482ef8da5fb5e68600cc00ba37bc2a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bb0a6b382c36acc46be9258e93be71317c26777d770cde9098402054303947c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bb0a6b382c36acc46be9258e93be71317c26777d770cde9098402054303947c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:04:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19153b79978b8e8cc53a7ab9cb95a70
c9870c6b0babac711bce13809609f6ac5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19153b79978b8e8cc53a7ab9cb95a70c9870c6b0babac711bce13809609f6ac5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e75ec4662cf007f64da3ecaaff0cb4c45aa88746ff717403092583356953526\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e75ec4662cf007f64da3ecaaff0cb4c45aa88746ff717403092583356953526\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:04:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:04:43Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:31Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:31 crc kubenswrapper[4820]: I0203 12:05:31.940208 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7vz6k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6351e457-e601-4889-853c-560646bc4b43\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbjk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbjk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:18Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7vz6k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:31Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:31 crc kubenswrapper[4820]: I0203 12:05:31.954159 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cabb5864-1cc1-4b08-aed3-4eeee9e85bf9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d070e438f7f207c929ca6d110046fa8d320d1a7d97ef22a0e5e1372ff626729d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06e5365791987f1586b93dfe4db8bfe19415d8c400d6c0d75f312cc0f62f7661\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb5d088ef2ec1b90ef2d7af53d61c2789516bf4ef02e9ef06e1dc0c93326dcc9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2783e3045fb1f1e8da75e1a83f88bc0f24ce47eab79fb2b31b794c10f201588\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:04:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:31Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:31 crc kubenswrapper[4820]: I0203 12:05:31.967217 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bbd1daf12ccc77f2658cac173f36c9af6a1c64072fc6bf13302a460f71f05e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5236ae101bb18d9bd1b8d8ac09705f96e5fd78d742e5174e88072570a7448ee8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:31Z is after 2025-08-24T17:21:41Z"
Feb 03 12:05:31 crc kubenswrapper[4820]: I0203 12:05:31.975351 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 12:05:31 crc kubenswrapper[4820]: I0203 12:05:31.975395 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 12:05:31 crc kubenswrapper[4820]: I0203 12:05:31.975406 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 12:05:31 crc kubenswrapper[4820]: I0203 12:05:31.975420 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 12:05:31 crc kubenswrapper[4820]: I0203 12:05:31.975429 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:31Z","lastTransitionTime":"2026-02-03T12:05:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:05:31 crc kubenswrapper[4820]: I0203 12:05:31.980635 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-b5qz9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d93ec7bc-4029-44a4-894d-03eff1388683\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8c2ef0f942d970aadc1837e5ab6eac2bdee9ab81c7b852514fc142796328a62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b171b1b37fa4899684005dd60f249e4cdc2ac58cb12658abe09d6d17007f3002\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b171b1b37fa4899684005dd60f249e4cdc2ac58cb12658abe09d6d17007f3002\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0a45a325e96479b1bb5913e1a78221c3b88c62fc434441e03d6b2e9ed476700\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0a45a325e96479b1bb5913e1a78221c3b88c62fc434441e03d6b2e9ed476700\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a25830d512e5d0dbbfa4775f85af51e80915adca718997b34e926220ff9377d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a25830d512e5d0dbbfa4775f85af51e80915adca718997b34e926220ff9377d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff3da1549bd61df7cc473a1a0a17f147b940c3f88f0b72828357ea7982406088\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff3da1549bd61df7cc473a1a0a17f147b940c3f88f0b72828357ea7982406088\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a1698d43c6b74e8ebb9ff13da372c4025ac2637c1f5a0376e4a998e7bf20359\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a1698d43c6b74e8ebb9ff13da372c4025ac2637c1f5a0376e4a998e7bf20359\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8af3494a09f6242fa8fab8cba8b3db5476625376c0f8f382e2140ca27ef329f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8af3494a09f6242fa8fab8cba8b3db5476625376c0f8f382e2140ca27ef329f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-b5qz9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:31Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:31 crc kubenswrapper[4820]: I0203 12:05:31.991093 4820 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8bbpp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d8005fd9-8efc-4707-a3dd-60cd20607d42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9020e42fc6a5c6c681b1854e93cc1e88f44b7582e28fe1ee40900bb2d39b4008\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xg9sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0333755fe31b49cea4cbfef987450b5800b3330c65cf84070036eea4194c13a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xg9sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8bbpp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-03T12:05:31Z is after 2025-08-24T17:21:41Z"
Feb 03 12:05:32 crc kubenswrapper[4820]: I0203 12:05:32.077816 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 12:05:32 crc kubenswrapper[4820]: I0203 12:05:32.077856 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 12:05:32 crc kubenswrapper[4820]: I0203 12:05:32.077864 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 12:05:32 crc kubenswrapper[4820]: I0203 12:05:32.077879 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 12:05:32 crc kubenswrapper[4820]: I0203 12:05:32.077909 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:32Z","lastTransitionTime":"2026-02-03T12:05:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 12:05:32 crc kubenswrapper[4820]: I0203 12:05:32.117400 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 12:52:46.003779587 +0000 UTC
Feb 03 12:05:32 crc kubenswrapper[4820]: I0203 12:05:32.141959 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7vz6k"
Feb 03 12:05:32 crc kubenswrapper[4820]: E0203 12:05:32.142086 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7vz6k" podUID="6351e457-e601-4889-853c-560646bc4b43"
Feb 03 12:05:32 crc kubenswrapper[4820]: I0203 12:05:32.180882 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 12:05:32 crc kubenswrapper[4820]: I0203 12:05:32.180987 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 12:05:32 crc kubenswrapper[4820]: I0203 12:05:32.181008 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 12:05:32 crc kubenswrapper[4820]: I0203 12:05:32.181031 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 12:05:32 crc kubenswrapper[4820]: I0203 12:05:32.181049 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:32Z","lastTransitionTime":"2026-02-03T12:05:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 12:05:32 crc kubenswrapper[4820]: I0203 12:05:32.284325 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 12:05:32 crc kubenswrapper[4820]: I0203 12:05:32.284366 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 12:05:32 crc kubenswrapper[4820]: I0203 12:05:32.284378 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 12:05:32 crc kubenswrapper[4820]: I0203 12:05:32.284396 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 12:05:32 crc kubenswrapper[4820]: I0203 12:05:32.284411 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:32Z","lastTransitionTime":"2026-02-03T12:05:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 12:05:32 crc kubenswrapper[4820]: I0203 12:05:32.387651 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 12:05:32 crc kubenswrapper[4820]: I0203 12:05:32.387690 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 12:05:32 crc kubenswrapper[4820]: I0203 12:05:32.387700 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 12:05:32 crc kubenswrapper[4820]: I0203 12:05:32.387718 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 12:05:32 crc kubenswrapper[4820]: I0203 12:05:32.387729 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:32Z","lastTransitionTime":"2026-02-03T12:05:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 03 12:05:32 crc kubenswrapper[4820]: I0203 12:05:32.446073 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-75mwm_cf99e305-aa5b-4171-94f6-1e64f20414dd/ovnkube-controller/1.log"
Feb 03 12:05:32 crc kubenswrapper[4820]: I0203 12:05:32.448413 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-75mwm" event={"ID":"cf99e305-aa5b-4171-94f6-1e64f20414dd","Type":"ContainerStarted","Data":"ebf188d2f9bb71f6217f5c2fbd0199f0d9539bbc773152b66fb63c41cc52c2bb"}
Feb 03 12:05:32 crc kubenswrapper[4820]: I0203 12:05:32.448916 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-75mwm"
Feb 03 12:05:32 crc kubenswrapper[4820]: I0203 12:05:32.461935 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d140e30-6304-49be-a1a3-2d6b23f9aef3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48852428653b274f524077be71e20c4271b365d821b8f3235df2d2b41a9e8af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09641bd2ac341aeecdba4211412070d2ee33f42eb670fe75ab88b9a58931c289\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74f9a4c19
a0ffe2d3fc0a77b57873721a29f0a1f70d578ba2f68bcceb68ccce2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://672f46bf03b8e74586edeb641cb45d9d7d8d4389c0b2cbb7a8eec43d6b4d801b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38554e01896341101d88815c0d8cd45a0114da19790e02a53b5ba3a91ee70e37\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"message\\\":\\\"le observer\\\\nW0203 12:05:02.811560 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0203 12:05:02.811703 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 12:05:02.814694 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3622958325/tls.crt::/tmp/serving-cert-3622958325/tls.key\\\\\\\"\\\\nI0203 12:05:03.097815 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0203 12:05:03.106823 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0203 12:05:03.106852 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0203 12:05:03.106874 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0203 12:05:03.106880 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0203 12:05:03.113994 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0203 12:05:03.114027 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0203 12:05:03.114019 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0203 12:05:03.114034 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 12:05:03.114043 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0203 12:05:03.114053 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0203 12:05:03.114057 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0203 12:05:03.114061 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0203 12:05:03.116858 1 cmd.go:182] pods 
\\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60992833ab89aebd2446f960ab6ae3565f9a17f4e86921d3b2bc68adc9704640\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b34fc810cf94c235cea364cd2441c73f56597b91b0a19b8e02498db5895a3f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b34fc810cf94c235cea364cd2441c73f56597b91b0a19b8e02498db5895a3f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:04:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:04:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:32Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:32 crc kubenswrapper[4820]: I0203 12:05:32.472122 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"76aa9d47-b80a-4058-8f92-4cdf0c41df48\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47438e7dc8d1eec9a8a061c1b6141e39f4c805dfafd1175c07235f7b9425719b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ade7b4944a0da0f0d18c5dd5a92ef407d6beac17985f1429d57672ff7e9beade\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://555f489476174f52e5f5d252e3f3f86f65e1593e641204d863bebdaa80b49bbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f84ddcad54789a48e27d495da8bf422e2349c09b477421d0323aae520546826\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f84ddcad54789a48e27d495da8bf422e2349c09b477421d0323aae520546826\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:04:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:04:43Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:32Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:32 crc kubenswrapper[4820]: I0203 12:05:32.484330 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0a8e1f86d24e8f09d6250cc9003db189b705c0285fbaab40a8f1a4ac4793137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:32Z is after 
2025-08-24T17:21:41Z" Feb 03 12:05:32 crc kubenswrapper[4820]: I0203 12:05:32.490330 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:32 crc kubenswrapper[4820]: I0203 12:05:32.490372 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:32 crc kubenswrapper[4820]: I0203 12:05:32.490384 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:32 crc kubenswrapper[4820]: I0203 12:05:32.490401 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:32 crc kubenswrapper[4820]: I0203 12:05:32.490412 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:32Z","lastTransitionTime":"2026-02-03T12:05:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:32 crc kubenswrapper[4820]: I0203 12:05:32.497573 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:32Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:32 crc kubenswrapper[4820]: I0203 12:05:32.508517 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:32Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:32 crc kubenswrapper[4820]: I0203 12:05:32.518317 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p5mx8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe0bc53e-6abb-4194-ae3d-109a4fd80372\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9eca081cbf8145867bf5df6d6bacdb9746e546e54aa81874fbfc2927958bc1b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrchn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p5mx8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:32Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:32 crc kubenswrapper[4820]: I0203 12:05:32.528684 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c02def6-29f2-448e-80ec-0c8ee290f053\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1643e498a576d565172492302a641bc81eae658b1611df707da1d833ea2c84f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xjx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e3612cc50c75efc4721ade62e3caf8a6cbcb41c49183b50da3639d7fce97b9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xjx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qj7xr\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:32Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:32 crc kubenswrapper[4820]: I0203 12:05:32.542950 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dkfwm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6da6dd5-2847-482b-adc1-d82ead0af3e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b50e93f1825fc82dc2e0fc70a417d2e4412db367319656bfcea8fa9daf9fe152\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/ser
viceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpcc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dkfwm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:32Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:32 crc kubenswrapper[4820]: I0203 12:05:32.563455 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-75mwm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf99e305-aa5b-4171-94f6-1e64f20414dd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c998566abb62f49bb7527ea2c4b023c87a9e3db89ff5fa25b68896424924ea76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5b0072b2eccb422316702a70fb54f21fcd020e1627a625d1a7c63d46b71e48c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e8e47b3a2a50b4eccff391ccb45bf979a5fe46983b014cb03f465db928172ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0defe98bc51b1f567900f2d90138ceec07b4cae414fd3b448df503dba9a9460f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a67f31bdc8d6710cfbca063e2a811d4fbace42a9d3c3a53fba0f47c30ee00e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8b54903e9478f26cfa1770a0e06298325e9b9f198a3aeaa8ed6dcc192c0cdb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebf188d2f9bb71f6217f5c2fbd0199f0d9539bbc
773152b66fb63c41cc52c2bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://555c29829adf28ac93542af1cb8ab28456a9414b416a9913602ee1dbd0bc27a6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T12:05:16Z\\\",\\\"message\\\":\\\" Retry successful for *v1.Pod openshift-machine-config-operator/machine-config-daemon-qj7xr after 0 failed attempt(s)\\\\nI0203 12:05:16.132830 6249 services_controller.go:360] Finished syncing service check-endpoints on namespace openshift-apiserver for network=default : 997.232µs\\\\nI0203 12:05:16.132706 6249 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0203 12:05:16.132841 6249 services_controller.go:356] Processing sync for service openshift-image-registry/image-registry for network=default\\\\nI0203 12:05:16.132752 6249 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-oauth-apiserver/api]} name:Service_openshift-oauth-apiserver/api_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.140:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {fe46cb89-4e54-4175-a112-1c5224cd299e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0203 12:05:16.132849 6249 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:15Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c9f050db727793c24f2f8bf3b387813e313f3b764be003c6849947f98a61d4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\
"containerID\\\":\\\"cri-o://707b3fd62e0f11f4f22464594158f7f1bbcbb9b37c1e49c70e2a0d1a6b5b07c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://707b3fd62e0f11f4f22464594158f7f1bbcbb9b37c1e49c70e2a0d1a6b5b07c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-75mwm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:32Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:32 crc kubenswrapper[4820]: I0203 12:05:32.574072 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-z8xrk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1cca669-281b-4756-8da8-3860684d3410\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09a38521ccb70b0b152fc51267f6d38cca610e2e8c9d12915482118f3a431b22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dxwqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:09Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-z8xrk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:32Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:32 crc kubenswrapper[4820]: I0203 12:05:32.595473 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a4e0669-c148-4af8-99eb-1de50da6d574\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d6f3a37b440a1db10c04386289033a0deab673f71efbdfba81597baf6ce5e86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fec48b5ddf675fb07150ebb88962534caead99f26403cf8ab51c17e2c6c09bf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a4c1209e1301b6e55cd4ea8bb06678ee4c7657c163a610f0cc2dd25f2cb3e61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75e1f7cf1c2120c222793819705a3b465a387c2
aaf33fe146fd46d9c76520aea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12446f64d4189185cf6fe19d38637e090482ef8da5fb5e68600cc00ba37bc2a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bb0a6b382c36acc46be9258e93be71317c26777d770cde9098402054303947c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bb0a6b382c36acc46be9258e93be71317c26777d770cde9098402054303947c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:04:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19153b79978b8e8cc53a7ab9cb95a70c9870c6b0babac711bce13809609f6ac5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19153b79978b8e8cc53a7ab9cb95a70c9870c6b0babac711bce13809609f6ac5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e75ec4662cf007f64da3ecaaff0cb4c45aa88746ff717403092583356953526\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e75ec4662cf007f64da3ecaaff0cb4c45aa88746ff717403092583356953526\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:04:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:04:43Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:32Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:32 crc kubenswrapper[4820]: I0203 12:05:32.595592 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:32 crc kubenswrapper[4820]: I0203 12:05:32.595630 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:32 crc kubenswrapper[4820]: I0203 12:05:32.595643 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:32 crc kubenswrapper[4820]: I0203 12:05:32.595660 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:32 crc kubenswrapper[4820]: I0203 12:05:32.595679 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:32Z","lastTransitionTime":"2026-02-03T12:05:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:05:32 crc kubenswrapper[4820]: I0203 12:05:32.610611 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cabb5864-1cc1-4b08-aed3-4eeee9e85bf9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d070e438f7f207c929ca6d110046fa8d320d1a7d97ef22a0e5e1372ff626729d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06e5365791987f1586b93dfe4db8bfe19415d8c400d6c0d75f312cc0f62f7661\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb5d088ef2ec1b90ef2d7af53d61c2789516bf4ef02e9ef06e1dc0c93326dcc9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2783e3045fb1f1e8da75e1a83f88bc0f24ce47eab79fb2b31b794c10f201588\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:04:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:32Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:32 crc kubenswrapper[4820]: I0203 12:05:32.625802 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bbd1daf12ccc77f2658cac173f36c9af6a1c64072fc6bf13302a460f71f05e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5236ae101bb18d9bd1b8d8ac09705f96e5fd78d742e5174e88072570a7448ee8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:32Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:32 crc kubenswrapper[4820]: I0203 12:05:32.640245 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-b5qz9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d93ec7bc-4029-44a4-894d-03eff1388683\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8c2ef0f942d970aadc1837e5ab6eac2bdee9ab81c7b852514fc142796328a62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b171b1b37fa4899684005dd60f249e4cdc2ac58cb12658abe09d6d17007f3002\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b171b1b37fa4899684005dd60f249e4cdc2ac58cb12658abe09d6d17007f3002\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0a45a325e96479b1bb5913e1a78221c3b88c62fc434441e03d6b2e9ed476700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0a45a325e96479b1bb5913e1a78221c3b88c62fc434441e03d6b2e9ed476700\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a25830d512e5d0dbbfa4775f85af51e80915adca718997b34e926220ff9377d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a25830d512e5d0dbbfa4775f85af51e80915adca718997b34e926220ff9377d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff3da1549bd61df7cc473a1a0a17f147b940c3f88f0b72828357ea7982406088\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff3da1549bd61df7cc473a1a0a17f147b940c3f88f0b72828357ea7982406088\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a1698d43c6b74e8ebb9ff13da372c4025ac2637c1f5a0376e4a998e7bf20359\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a1698d43c6b74e8ebb9ff13da372c4025ac2637c1f5a0376e4a998e7bf20359\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8af3494a09f6242fa8fab8cba8b3db5476625376c0f8f382e2140ca27ef329f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8af3494a09f6242fa8fab8cba8b3db5476625376c0f8f382e2140ca27ef329f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-b5qz9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:32Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:32 crc kubenswrapper[4820]: I0203 12:05:32.651541 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8bbpp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d8005fd9-8efc-4707-a3dd-60cd20607d42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9020e42fc6a5c6c681b1854e93cc1e88f44b7582e28fe1ee40900bb2d39b4008\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xg9sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0333755fe31b49cea4cbfef987450b5800b3330c65cf84070036eea4194c13a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xg9sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8bbpp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:32Z is after 2025-08-24T17:21:41Z" Feb 03 
12:05:32 crc kubenswrapper[4820]: I0203 12:05:32.661451 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7vz6k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6351e457-e601-4889-853c-560646bc4b43\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbjk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbjk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:18Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7vz6k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:32Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:32 crc kubenswrapper[4820]: I0203 12:05:32.672346 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:32Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:32 crc kubenswrapper[4820]: I0203 12:05:32.684735 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dac06e9d9db1822ef050113f787ff46db678f4916bc9817ac03f61a509a61b6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:32Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:32 crc kubenswrapper[4820]: I0203 12:05:32.698344 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:32 crc kubenswrapper[4820]: I0203 12:05:32.698377 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:32 crc kubenswrapper[4820]: I0203 12:05:32.698387 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:32 crc kubenswrapper[4820]: I0203 12:05:32.698400 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:32 crc kubenswrapper[4820]: I0203 12:05:32.698411 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:32Z","lastTransitionTime":"2026-02-03T12:05:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:05:32 crc kubenswrapper[4820]: I0203 12:05:32.800807 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:32 crc kubenswrapper[4820]: I0203 12:05:32.800839 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:32 crc kubenswrapper[4820]: I0203 12:05:32.800850 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:32 crc kubenswrapper[4820]: I0203 12:05:32.800866 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:32 crc kubenswrapper[4820]: I0203 12:05:32.800878 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:32Z","lastTransitionTime":"2026-02-03T12:05:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:32 crc kubenswrapper[4820]: I0203 12:05:32.904192 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:32 crc kubenswrapper[4820]: I0203 12:05:32.904247 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:32 crc kubenswrapper[4820]: I0203 12:05:32.904260 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:32 crc kubenswrapper[4820]: I0203 12:05:32.904279 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:32 crc kubenswrapper[4820]: I0203 12:05:32.904292 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:32Z","lastTransitionTime":"2026-02-03T12:05:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:33 crc kubenswrapper[4820]: I0203 12:05:33.007206 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:33 crc kubenswrapper[4820]: I0203 12:05:33.007263 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:33 crc kubenswrapper[4820]: I0203 12:05:33.007272 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:33 crc kubenswrapper[4820]: I0203 12:05:33.007284 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:33 crc kubenswrapper[4820]: I0203 12:05:33.007292 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:33Z","lastTransitionTime":"2026-02-03T12:05:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:05:33 crc kubenswrapper[4820]: I0203 12:05:33.109949 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:33 crc kubenswrapper[4820]: I0203 12:05:33.109992 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:33 crc kubenswrapper[4820]: I0203 12:05:33.110005 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:33 crc kubenswrapper[4820]: I0203 12:05:33.110023 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:33 crc kubenswrapper[4820]: I0203 12:05:33.110033 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:33Z","lastTransitionTime":"2026-02-03T12:05:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:33 crc kubenswrapper[4820]: I0203 12:05:33.118116 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 02:27:45.873726736 +0000 UTC Feb 03 12:05:33 crc kubenswrapper[4820]: I0203 12:05:33.141581 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 12:05:33 crc kubenswrapper[4820]: I0203 12:05:33.141706 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 12:05:33 crc kubenswrapper[4820]: I0203 12:05:33.141944 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 12:05:33 crc kubenswrapper[4820]: E0203 12:05:33.142002 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 12:05:33 crc kubenswrapper[4820]: E0203 12:05:33.142101 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 12:05:33 crc kubenswrapper[4820]: E0203 12:05:33.142198 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 12:05:33 crc kubenswrapper[4820]: I0203 12:05:33.161734 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a4e0669-c148-4af8-99eb-1de50da6d574\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d6f3a37b440a1db10c04386289033a0deab673f71efbdfba81597baf6ce5e86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fec48b5ddf675fb07150ebb88962534caead99f26403cf8ab51c17e2c6c09bf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a4c1209e1301b6e55cd4ea8bb06678ee4c7657c163a610f0cc2dd25f2cb3e61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"s
tartedAt\\\":\\\"2026-02-03T12:04:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75e1f7cf1c2120c222793819705a3b465a387c2aaf33fe146fd46d9c76520aea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12446f64d4189185cf6fe19d38637e090482ef8da5fb5e68600cc00ba37bc2a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bb0a6b382c36acc46be9258e93be71317c26777d770cde9098402054303947c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bb0a6b382c36acc46be9258e93be71317c26777d770cde9098402054303947c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:04:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19153b79978b8e8cc53a7ab9cb95a70c9870c6b0babac711bce13809609f6ac5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19153b79978b8e8cc
53a7ab9cb95a70c9870c6b0babac711bce13809609f6ac5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e75ec4662cf007f64da3ecaaff0cb4c45aa88746ff717403092583356953526\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e75ec4662cf007f64da3ecaaff0cb4c45aa88746ff717403092583356953526\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:04:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:04:43Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:33Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:33 crc kubenswrapper[4820]: I0203 12:05:33.175245 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7vz6k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6351e457-e601-4889-853c-560646bc4b43\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbjk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbjk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:18Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7vz6k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:33Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:33 crc kubenswrapper[4820]: I0203 12:05:33.186678 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cabb5864-1cc1-4b08-aed3-4eeee9e85bf9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d070e438f7f207c929ca6d110046fa8d320d1a7d97ef22a0e5e1372ff626729d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06e5365791987f1586b93dfe4db8bfe19415d8c400d6c0d75f312cc0f62f7661\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb5d088ef2ec1b90ef2d7af53d61c2789516bf4ef02e9ef06e1dc0c93326dcc9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2783e3045fb1f1e8da75e1a83f88bc0f24ce47eab79fb2b31b794c10f201588\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:04:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:33Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:33 crc kubenswrapper[4820]: I0203 12:05:33.200142 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bbd1daf12ccc77f2658cac173f36c9af6a1c64072fc6bf13302a460f71f05e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5236ae101bb18d9bd1b8d8ac09705f96e5fd78d742e5174e88072570a7448ee8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:33Z is after 2025-08-24T17:21:41Z"
Feb 03 12:05:33 crc kubenswrapper[4820]: I0203 12:05:33.211504 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 12:05:33 crc kubenswrapper[4820]: I0203 12:05:33.211560 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 12:05:33 crc kubenswrapper[4820]: I0203 12:05:33.211574 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 12:05:33 crc kubenswrapper[4820]: I0203 12:05:33.211595 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 12:05:33 crc kubenswrapper[4820]: I0203 12:05:33.211608 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:33Z","lastTransitionTime":"2026-02-03T12:05:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:05:33 crc kubenswrapper[4820]: I0203 12:05:33.222359 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-b5qz9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d93ec7bc-4029-44a4-894d-03eff1388683\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8c2ef0f942d970aadc1837e5ab6eac2bdee9ab81c7b852514fc142796328a62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b171b1b37fa4899684005dd60f249e4cdc2ac58cb12658abe09d6d17007f3002\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b171b1b37fa4899684005dd60f249e4cdc2ac58cb12658abe09d6d17007f3002\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0a45a325e96479b1bb5913e1a78221c3b88c62fc434441e03d6b2e9ed476700\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0a45a325e96479b1bb5913e1a78221c3b88c62fc434441e03d6b2e9ed476700\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a25830d512e5d0dbbfa4775f85af51e80915adca718997b34e926220ff9377d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a25830d512e5d0dbbfa4775f85af51e80915adca718997b34e926220ff9377d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff3da1549bd61df7cc473a1a0a17f147b940c3f88f0b72828357ea7982406088\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff3da1549bd61df7cc473a1a0a17f147b940c3f88f0b72828357ea7982406088\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a1698d43c6b74e8ebb9ff13da372c4025ac2637c1f5a0376e4a998e7bf20359\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a1698d43c6b74e8ebb9ff13da372c4025ac2637c1f5a0376e4a998e7bf20359\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8af3494a09f6242fa8fab8cba8b3db5476625376c0f8f382e2140ca27ef329f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8af3494a09f6242fa8fab8cba8b3db5476625376c0f8f382e2140ca27ef329f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-b5qz9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:33Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:33 crc kubenswrapper[4820]: I0203 12:05:33.234435 4820 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8bbpp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d8005fd9-8efc-4707-a3dd-60cd20607d42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9020e42fc6a5c6c681b1854e93cc1e88f44b7582e28fe1ee40900bb2d39b4008\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xg9sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0333755fe31b49cea4cbfef987450b5800b3330c65cf84070036eea4194c13a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xg9sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8bbpp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-03T12:05:33Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:33 crc kubenswrapper[4820]: I0203 12:05:33.248787 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:33Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:33 crc kubenswrapper[4820]: I0203 12:05:33.261419 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dac06e9d9db1822ef050113f787ff46db678f4916bc9817ac03f61a509a61b6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:33Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:33 crc kubenswrapper[4820]: I0203 12:05:33.274172 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p5mx8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe0bc53e-6abb-4194-ae3d-109a4fd80372\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9eca081cbf8145867bf5df6d6bacdb9746e546e54aa81874fbfc2927958bc1b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrchn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p5mx8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:33Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:33 crc kubenswrapper[4820]: I0203 12:05:33.285318 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c02def6-29f2-448e-80ec-0c8ee290f053\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1643e498a576d565172492302a641bc81eae658b1611df707da1d833ea2c84f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xjx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e3612cc50c75efc4721ade62e3caf8a6cbcb41c49183b50da3639d7fce97b9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xjx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qj7xr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:33Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:33 crc kubenswrapper[4820]: I0203 12:05:33.304772 4820 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-dkfwm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6da6dd5-2847-482b-adc1-d82ead0af3e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b50e93f1825fc82dc2e0fc70a417d2e4412db367319656bfcea8fa9daf9fe152\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpcc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:05Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-dkfwm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:33Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:33 crc kubenswrapper[4820]: I0203 12:05:33.313612 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:33 crc kubenswrapper[4820]: I0203 12:05:33.313665 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:33 crc kubenswrapper[4820]: I0203 12:05:33.313673 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:33 crc kubenswrapper[4820]: I0203 12:05:33.313688 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:33 crc kubenswrapper[4820]: I0203 12:05:33.313700 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:33Z","lastTransitionTime":"2026-02-03T12:05:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:33 crc kubenswrapper[4820]: I0203 12:05:33.323931 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d140e30-6304-49be-a1a3-2d6b23f9aef3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48852428653b274f524077be71e20c4271b365d821b8f3235df2d2b41a9e8af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09641bd2ac
341aeecdba4211412070d2ee33f42eb670fe75ab88b9a58931c289\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74f9a4c19a0ffe2d3fc0a77b57873721a29f0a1f70d578ba2f68bcceb68ccce2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://672f46bf03b8e74586edeb641cb45d9d7d8d4389c0b2cbb7a8eec43d6b4d801b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38554e01896341101d88815c0d8cd45a0114da19790e02a53b5ba3a91ee70e37\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"message\\\":\\\"le observer\\\\nW0203 12:05:02.811560 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0203 12:05:02.811703 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 12:05:02.814694 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3622958325/tls.crt::/tmp/serving-cert-3622958325/tls.key\\\\\\\"\\\\nI0203 12:05:03.097815 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0203 12:05:03.106823 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0203 12:05:03.106852 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0203 12:05:03.106874 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0203 12:05:03.106880 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0203 12:05:03.113994 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0203 12:05:03.114027 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0203 12:05:03.114019 1 
genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0203 12:05:03.114034 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 12:05:03.114043 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0203 12:05:03.114053 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0203 12:05:03.114057 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0203 12:05:03.114061 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0203 12:05:03.116858 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60992833ab89aebd2446f960ab6ae3565f9a17f4e86921d3b2bc68adc9704640\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b34fc810cf94c235cea364cd2441c73f56597b91b0a19b8e02498db5895a3f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b34fc810cf94c235cea364cd2441c73f56597b91b0a19b8e02498db5895a3f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:04:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:04:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:33Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:33 crc kubenswrapper[4820]: I0203 12:05:33.335455 4820 
status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"76aa9d47-b80a-4058-8f92-4cdf0c41df48\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47438e7dc8d1eec9a8a061c1b6141e39f4c805dfafd1175c07235f7b9425719b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ade7b4944a0da0f0d18c5dd5a92ef407d6beac17985f1429d57672ff7e9beade\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://555f489476174f52e5f5d252e3f3f86f65e1593e641204d863bebdaa80b49bbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\
"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f84ddcad54789a48e27d495da8bf422e2349c09b477421d0323aae520546826\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f84ddcad54789a48e27d495da8bf422e2349c09b477421d0323aae520546826\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:04:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:04:43Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:33Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:33 crc kubenswrapper[4820]: I0203 12:05:33.349301 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0a8e1f86d24e8f09d6250cc9003db189b705c0285fbaab40a8f1a4ac4793137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:33Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:33 crc kubenswrapper[4820]: I0203 12:05:33.361749 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:33Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:33 crc kubenswrapper[4820]: I0203 12:05:33.376021 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:33Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:33 crc kubenswrapper[4820]: I0203 12:05:33.393503 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-75mwm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf99e305-aa5b-4171-94f6-1e64f20414dd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c998566abb62f49bb7527ea2c4b023c87a9e3db89ff5fa25b68896424924ea76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5b0072b2eccb422316702a70fb54f21fcd020e1627a625d1a7c63d46b71e48c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e8e47b3a2a50b4eccff391ccb45bf979a5fe46983b014cb03f465db928172ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0defe98bc51b1f567900f2d90138ceec07b4cae414fd3b448df503dba9a9460f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a67f31bdc8d6710cfbca063e2a811d4fbace42a9d3c3a53fba0f47c30ee00e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8b54903e9478f26cfa1770a0e06298325e9b9f198a3aeaa8ed6dcc192c0cdb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebf188d2f9bb71f6217f5c2fbd0199f0d9539bbc
773152b66fb63c41cc52c2bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://555c29829adf28ac93542af1cb8ab28456a9414b416a9913602ee1dbd0bc27a6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T12:05:16Z\\\",\\\"message\\\":\\\" Retry successful for *v1.Pod openshift-machine-config-operator/machine-config-daemon-qj7xr after 0 failed attempt(s)\\\\nI0203 12:05:16.132830 6249 services_controller.go:360] Finished syncing service check-endpoints on namespace openshift-apiserver for network=default : 997.232µs\\\\nI0203 12:05:16.132706 6249 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0203 12:05:16.132841 6249 services_controller.go:356] Processing sync for service openshift-image-registry/image-registry for network=default\\\\nI0203 12:05:16.132752 6249 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-oauth-apiserver/api]} name:Service_openshift-oauth-apiserver/api_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.140:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {fe46cb89-4e54-4175-a112-1c5224cd299e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0203 12:05:16.132849 6249 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:15Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c9f050db727793c24f2f8bf3b387813e313f3b764be003c6849947f98a61d4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\
"containerID\\\":\\\"cri-o://707b3fd62e0f11f4f22464594158f7f1bbcbb9b37c1e49c70e2a0d1a6b5b07c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://707b3fd62e0f11f4f22464594158f7f1bbcbb9b37c1e49c70e2a0d1a6b5b07c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-75mwm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:33Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:33 crc kubenswrapper[4820]: I0203 12:05:33.410750 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-z8xrk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1cca669-281b-4756-8da8-3860684d3410\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09a38521ccb70b0b152fc51267f6d38cca610e2e8c9d12915482118f3a431b22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dxwqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:09Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-z8xrk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:33Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:33 crc kubenswrapper[4820]: I0203 12:05:33.416266 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:33 crc kubenswrapper[4820]: I0203 12:05:33.416307 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:33 crc kubenswrapper[4820]: I0203 12:05:33.416316 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:33 crc kubenswrapper[4820]: I0203 12:05:33.416330 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:33 crc kubenswrapper[4820]: I0203 12:05:33.416340 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:33Z","lastTransitionTime":"2026-02-03T12:05:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:33 crc kubenswrapper[4820]: I0203 12:05:33.453067 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-75mwm_cf99e305-aa5b-4171-94f6-1e64f20414dd/ovnkube-controller/2.log" Feb 03 12:05:33 crc kubenswrapper[4820]: I0203 12:05:33.453748 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-75mwm_cf99e305-aa5b-4171-94f6-1e64f20414dd/ovnkube-controller/1.log" Feb 03 12:05:33 crc kubenswrapper[4820]: I0203 12:05:33.456159 4820 generic.go:334] "Generic (PLEG): container finished" podID="cf99e305-aa5b-4171-94f6-1e64f20414dd" containerID="ebf188d2f9bb71f6217f5c2fbd0199f0d9539bbc773152b66fb63c41cc52c2bb" exitCode=1 Feb 03 12:05:33 crc kubenswrapper[4820]: I0203 12:05:33.456199 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-75mwm" event={"ID":"cf99e305-aa5b-4171-94f6-1e64f20414dd","Type":"ContainerDied","Data":"ebf188d2f9bb71f6217f5c2fbd0199f0d9539bbc773152b66fb63c41cc52c2bb"} Feb 03 12:05:33 crc kubenswrapper[4820]: I0203 12:05:33.456242 4820 scope.go:117] "RemoveContainer" containerID="555c29829adf28ac93542af1cb8ab28456a9414b416a9913602ee1dbd0bc27a6" Feb 03 12:05:33 crc kubenswrapper[4820]: I0203 12:05:33.456752 4820 scope.go:117] "RemoveContainer" containerID="ebf188d2f9bb71f6217f5c2fbd0199f0d9539bbc773152b66fb63c41cc52c2bb" Feb 03 12:05:33 crc kubenswrapper[4820]: E0203 12:05:33.456900 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-75mwm_openshift-ovn-kubernetes(cf99e305-aa5b-4171-94f6-1e64f20414dd)\"" pod="openshift-ovn-kubernetes/ovnkube-node-75mwm" podUID="cf99e305-aa5b-4171-94f6-1e64f20414dd" Feb 03 12:05:33 crc kubenswrapper[4820]: I0203 12:05:33.471293 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:33Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:33 crc kubenswrapper[4820]: I0203 12:05:33.481964 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:33Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:33 crc kubenswrapper[4820]: I0203 12:05:33.491220 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p5mx8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe0bc53e-6abb-4194-ae3d-109a4fd80372\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9eca081cbf8145867bf5df6d6bacdb9746e546e54aa81874fbfc2927958bc1b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrchn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p5mx8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:33Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:33 crc kubenswrapper[4820]: I0203 12:05:33.501077 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c02def6-29f2-448e-80ec-0c8ee290f053\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1643e498a576d565172492302a641bc81eae658b1611df707da1d833ea2c84f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xjx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e3612cc50c75efc4721ade62e3caf8a6cbcb41c49183b50da3639d7fce97b9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xjx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qj7xr\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:33Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:33 crc kubenswrapper[4820]: I0203 12:05:33.514743 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dkfwm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6da6dd5-2847-482b-adc1-d82ead0af3e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b50e93f1825fc82dc2e0fc70a417d2e4412db367319656bfcea8fa9daf9fe152\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/ser
viceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpcc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dkfwm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:33Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:33 crc kubenswrapper[4820]: I0203 12:05:33.517865 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:33 crc kubenswrapper[4820]: I0203 12:05:33.517947 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:33 crc kubenswrapper[4820]: I0203 12:05:33.517963 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:33 crc kubenswrapper[4820]: I0203 12:05:33.517987 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:33 crc kubenswrapper[4820]: I0203 12:05:33.518002 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:33Z","lastTransitionTime":"2026-02-03T12:05:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:05:33 crc kubenswrapper[4820]: I0203 12:05:33.527625 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d140e30-6304-49be-a1a3-2d6b23f9aef3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48852428653b274f524077be71e20c4271b365d821b8f3235df2d2b41a9e8af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09641bd2ac341aeecdba4211412070d2ee33f42eb670fe75ab88b9a58931c289\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74f9a4c19a0ffe2d3fc0a77b57873721a29f0a1f70d578ba2f68bcceb68ccce2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://672f46bf03b8e74586edeb641cb45d9d7d8d4389c0b2cbb7a8eec43d6b4d801b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38554e01896341101d88815c0d8cd45a0114da19790e02a53b5ba3a91ee70e37\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"message\\\":\\\"le observer\\\\nW0203 12:05:02.811560 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0203 12:05:02.811703 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 12:05:02.814694 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3622958325/tls.crt::/tmp/serving-cert-3622958325/tls.key\\\\\\\"\\\\nI0203 12:05:03.097815 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0203 12:05:03.106823 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0203 12:05:03.106852 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0203 12:05:03.106874 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0203 12:05:03.106880 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0203 12:05:03.113994 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0203 12:05:03.114027 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0203 12:05:03.114019 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0203 12:05:03.114034 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 12:05:03.114043 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0203 12:05:03.114053 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0203 12:05:03.114057 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0203 12:05:03.114061 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0203 12:05:03.116858 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60992833ab89aebd2446f960ab6ae3565f9a17f4e86921d3b2bc68adc9704640\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b34fc810cf94c235cea364cd2441c73f56597b91b0a19b8e02498db5895a3f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b34fc810cf94c235cea364cd2441c73f56597b91b0a19b8e02498db5895a3f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:04:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:04:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:33Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:33 crc kubenswrapper[4820]: I0203 12:05:33.539431 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"76aa9d47-b80a-4058-8f92-4cdf0c41df48\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47438e7dc8d1eec9a8a061c1b6141e39f4c805dfafd1175c07235f7b9425719b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ade7b4944a0da0f0d18c5dd5a92ef407d6beac17985f1429d57672ff7e9beade\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://555f489476174f52e5f5d252e3f3f86f65e1593e641204d863bebdaa80b49bbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f84ddcad54789a48e27d495da8bf422e2349c09b477421d0323aae520546826\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f84ddcad54789a48e27d495da8bf422e2349c09b477421d0323aae520546826\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:04:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:04:43Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:33Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:33 crc kubenswrapper[4820]: I0203 12:05:33.550787 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0a8e1f86d24e8f09d6250cc9003db189b705c0285fbaab40a8f1a4ac4793137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:33Z is after 
2025-08-24T17:21:41Z" Feb 03 12:05:33 crc kubenswrapper[4820]: I0203 12:05:33.569138 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-75mwm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf99e305-aa5b-4171-94f6-1e64f20414dd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c998566abb62f49bb7527ea2c4b023c87a9e3db89ff5fa25b68896424924ea76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5b0072b2eccb422316702a70fb54f21fcd020e1627a625d1a7c63d46b71e48c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4
e8e47b3a2a50b4eccff391ccb45bf979a5fe46983b014cb03f465db928172ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0defe98bc51b1f567900f2d90138ceec07b4cae414fd3b448df503dba9a9460f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a67f31bdc8d6710cfbca063e2a811d4fbace42a9d3c3a53fba0f47c30ee00e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8b54903e9478f26cfa1770a0e06298325e9b9f198a3aeaa8ed6dcc192c0cdb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebf188d2f9bb71f6217f5c2fbd0199f0d9539bbc773152b66fb63c41cc52c2bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://555c29829adf28ac93542af1cb8ab28456a9414b416a9913602ee1dbd0bc27a6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T12:05:16Z\\\",\\\"message\\\":\\\" Retry successful for *v1.Pod openshift-machine-config-operator/machine-config-daemon-qj7xr after 0 failed attempt(s)\\\\nI0203 12:05:16.132830 6249 services_controller.go:360] Finished syncing service check-endpoints on namespace openshift-apiserver for network=default : 997.232µs\\\\nI0203 12:05:16.132706 6249 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0203 12:05:16.132841 6249 services_controller.go:356] Processing sync for service openshift-image-registry/image-registry for network=default\\\\nI0203 12:05:16.132752 6249 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-oauth-apiserver/api]} name:Service_openshift-oauth-apiserver/api_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.140:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {fe46cb89-4e54-4175-a112-1c5224cd299e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0203 12:05:16.132849 6249 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:15Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ebf188d2f9bb71f6217f5c2fbd0199f0d9539bbc773152b66fb63c41cc52c2bb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T12:05:32Z\\\",\\\"message\\\":\\\"ig-daemon-qj7xr after 0 failed attempt(s)\\\\nI0203 12:05:32.735113 6493 default_network_controller.go:776] Recording success event on pod openshift-machine-config-operator/machine-config-daemon-qj7xr\\\\nI0203 12:05:32.734944 6493 ovn.go:134] Ensuring zone local for Pod openshift-dns/node-resolver-p5mx8 in node crc\\\\nI0203 12:05:32.735121 6493 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0203 12:05:32.735128 6493 obj_retry.go:386] Retry successful for *v1.Pod openshift-dns/node-resolver-p5mx8 after 0 failed attempt(s)\\\\nI0203 12:05:32.735133 6493 base_network_controller_pods.go:477] [default/openshift-network-diagnostics/network-check-source-55646444c4-trplf] creating logical port openshift-network-diagnostics_network-check-source-55646444c4-trplf for pod on switch crc\\\\nI0203 12:05:32.735160 6493 obj_retry.go:303] Retry object setup: *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8bbpp\\\\nF0203 12:05:32.735195 6493 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c9f050db727793c24f2f8bf3b387813e313f3b764be003c6849947f98a61d4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://707b3fd62e0f11f4f22464594158f7f1bbcbb9b37c1e49c70e2a0d1a6b5b07c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099
482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://707b3fd62e0f11f4f22464594158f7f1bbcbb9b37c1e49c70e2a0d1a6b5b07c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-75mwm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:33Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:33 crc kubenswrapper[4820]: I0203 12:05:33.579769 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-z8xrk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1cca669-281b-4756-8da8-3860684d3410\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09a38521ccb70b0b152fc51267f6d38cca610e2e8c9d12915482118f3a431b22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dxwqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.
11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:09Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-z8xrk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:33Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:33 crc kubenswrapper[4820]: I0203 12:05:33.600922 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a4e0669-c148-4af8-99eb-1de50da6d574\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d6f3a37b440a1db10c04386289033a0deab673f71efbdfba81597baf6ce5e86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fec48b5ddf675fb07150ebb88962534caead99f26403cf8ab51c17e2c6c09bf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a4c1209e1301b6e55cd4ea
8bb06678ee4c7657c163a610f0cc2dd25f2cb3e61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75e1f7cf1c2120c222793819705a3b465a387c2aaf33fe146fd46d9c76520aea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12446f64d4189185cf6fe19d38637e090482ef8da5fb5e68600cc00ba37bc2a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bb0a6b382c36acc46be9258e93be71317c26777d770cde9098402054303947c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bb0a6b382c36acc46be9258e93be71317c26777d770cde9098402054303947c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:04:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19153b79978b8e8cc53a7ab9cb95a70c9870c6b0babac711bce13809609f6ac5\\\",\\\"image\\
\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19153b79978b8e8cc53a7ab9cb95a70c9870c6b0babac711bce13809609f6ac5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e75ec4662cf007f64da3ecaaff0cb4c45aa88746ff717403092583356953526\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e75ec4662cf007f64da3ecaaff0cb4c45aa88746ff717403092583356953526\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:04:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:04:43Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:33Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:33 crc kubenswrapper[4820]: I0203 12:05:33.614284 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-b5qz9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d93ec7bc-4029-44a4-894d-03eff1388683\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8c2ef0f942d970aadc1837e5ab6eac2bdee9ab81c7b852514fc142796328a62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b171b1b37fa4899684005dd60f249e4cdc2ac58cb12658abe09d6d17007f3002\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b171b1b37fa4899684005dd60f249e4cdc2ac58cb12658abe09d6d17007f3002\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0a45a325e96479b1bb5913e1a78221c3b88c62fc434441e03d6b2e9ed476700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0a45a325e96479b1bb5913e1a78221c3b88c62fc434441e03d6b2e9ed476700\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a25830d512e5d0dbbfa4775f85af51e80915adca718997b34e926220ff9377d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a25830d512e5d0dbbfa4775f85af51e80915adca718997b34e926220ff9377d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff3da1549bd61df7cc473a1a0a17f147b940c3f88f0b72828357ea7982406088\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff3da1549bd61df7cc473a1a0a17f147b940c3f88f0b72828357ea7982406088\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a1698d43c6b74e8ebb9ff13da372c4025ac2637c1f5a0376e4a998e7bf20359\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a1698d43c6b74e8ebb9ff13da372c4025ac2637c1f5a0376e4a998e7bf20359\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8af3494a09f6242fa8fab8cba8b3db5476625376c0f8f382e2140ca27ef329f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8af3494a09f6242fa8fab8cba8b3db5476625376c0f8f382e2140ca27ef329f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-b5qz9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:33Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:33 crc kubenswrapper[4820]: I0203 12:05:33.620506 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:33 crc kubenswrapper[4820]: I0203 12:05:33.620542 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:33 crc 
kubenswrapper[4820]: I0203 12:05:33.620551 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:33 crc kubenswrapper[4820]: I0203 12:05:33.620565 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:33 crc kubenswrapper[4820]: I0203 12:05:33.620574 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:33Z","lastTransitionTime":"2026-02-03T12:05:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:33 crc kubenswrapper[4820]: I0203 12:05:33.624639 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8bbpp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d8005fd9-8efc-4707-a3dd-60cd20607d42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9020e42fc6a5c6c681b1854e93cc1e88f44b7582e28fe1ee40900bb2d39b4008\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xg9sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0333755fe31b49cea4cbfef987450b5800b3330c65cf84070036eea4194c13a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:0
5:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xg9sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8bbpp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:33Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:33 crc kubenswrapper[4820]: I0203 12:05:33.634436 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7vz6k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6351e457-e601-4889-853c-560646bc4b43\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbjk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbjk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:18Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7vz6k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:33Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:33 crc kubenswrapper[4820]: I0203 12:05:33.645425 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cabb5864-1cc1-4b08-aed3-4eeee9e85bf9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d070e438f7f207c929ca6d110046fa8d320d1a7d97ef22a0e5e1372ff626729d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06e5365791987f1586b93dfe4db8bfe19415d8c400d6c0d75f312cc0f62f7661\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb5d088ef2ec1b90ef2d7af53d61c2789516bf4ef02e9ef06e1dc0c93326dcc9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2783e3045fb1f1e8da75e1a83f88bc0f24ce47eab79fb2b31b794c10f201588\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:04:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:33Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:33 crc kubenswrapper[4820]: I0203 12:05:33.656099 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bbd1daf12ccc77f2658cac173f36c9af6a1c64072fc6bf13302a460f71f05e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5236ae101bb18d9bd1b8d8ac09705f96e5fd78d742e5174e88072570a7448ee8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:33Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:33 crc kubenswrapper[4820]: I0203 12:05:33.667875 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:33Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:33 crc kubenswrapper[4820]: I0203 12:05:33.680681 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dac06e9d9db1822ef050113f787ff46db678f4916bc9817ac03f61a509a61b6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:33Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:33 crc kubenswrapper[4820]: I0203 12:05:33.723317 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:33 crc kubenswrapper[4820]: I0203 12:05:33.723357 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 03 12:05:33 crc kubenswrapper[4820]: I0203 12:05:33.723366 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:33 crc kubenswrapper[4820]: I0203 12:05:33.723379 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:33 crc kubenswrapper[4820]: I0203 12:05:33.723391 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:33Z","lastTransitionTime":"2026-02-03T12:05:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:33 crc kubenswrapper[4820]: I0203 12:05:33.825752 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:33 crc kubenswrapper[4820]: I0203 12:05:33.825819 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:33 crc kubenswrapper[4820]: I0203 12:05:33.825830 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:33 crc kubenswrapper[4820]: I0203 12:05:33.825845 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:33 crc kubenswrapper[4820]: I0203 12:05:33.825857 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:33Z","lastTransitionTime":"2026-02-03T12:05:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:33 crc kubenswrapper[4820]: I0203 12:05:33.859395 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6351e457-e601-4889-853c-560646bc4b43-metrics-certs\") pod \"network-metrics-daemon-7vz6k\" (UID: \"6351e457-e601-4889-853c-560646bc4b43\") " pod="openshift-multus/network-metrics-daemon-7vz6k" Feb 03 12:05:33 crc kubenswrapper[4820]: E0203 12:05:33.859600 4820 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 03 12:05:33 crc kubenswrapper[4820]: E0203 12:05:33.859710 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6351e457-e601-4889-853c-560646bc4b43-metrics-certs podName:6351e457-e601-4889-853c-560646bc4b43 nodeName:}" failed. No retries permitted until 2026-02-03 12:05:49.859690045 +0000 UTC m=+67.382765979 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6351e457-e601-4889-853c-560646bc4b43-metrics-certs") pod "network-metrics-daemon-7vz6k" (UID: "6351e457-e601-4889-853c-560646bc4b43") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 03 12:05:33 crc kubenswrapper[4820]: I0203 12:05:33.927865 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:33 crc kubenswrapper[4820]: I0203 12:05:33.927933 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:33 crc kubenswrapper[4820]: I0203 12:05:33.927946 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:33 crc kubenswrapper[4820]: I0203 12:05:33.927963 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:33 crc kubenswrapper[4820]: I0203 12:05:33.927974 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:33Z","lastTransitionTime":"2026-02-03T12:05:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:34 crc kubenswrapper[4820]: I0203 12:05:34.030694 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:34 crc kubenswrapper[4820]: I0203 12:05:34.030732 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:34 crc kubenswrapper[4820]: I0203 12:05:34.030742 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:34 crc kubenswrapper[4820]: I0203 12:05:34.030755 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:34 crc kubenswrapper[4820]: I0203 12:05:34.030765 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:34Z","lastTransitionTime":"2026-02-03T12:05:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:05:34 crc kubenswrapper[4820]: I0203 12:05:34.118498 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 12:24:33.634324784 +0000 UTC Feb 03 12:05:34 crc kubenswrapper[4820]: I0203 12:05:34.133338 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:34 crc kubenswrapper[4820]: I0203 12:05:34.133380 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:34 crc kubenswrapper[4820]: I0203 12:05:34.133392 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:34 crc kubenswrapper[4820]: I0203 12:05:34.133408 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:34 crc kubenswrapper[4820]: I0203 12:05:34.133420 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:34Z","lastTransitionTime":"2026-02-03T12:05:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:34 crc kubenswrapper[4820]: I0203 12:05:34.141753 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7vz6k" Feb 03 12:05:34 crc kubenswrapper[4820]: E0203 12:05:34.141921 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7vz6k" podUID="6351e457-e601-4889-853c-560646bc4b43" Feb 03 12:05:34 crc kubenswrapper[4820]: I0203 12:05:34.236330 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:34 crc kubenswrapper[4820]: I0203 12:05:34.236390 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:34 crc kubenswrapper[4820]: I0203 12:05:34.236407 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:34 crc kubenswrapper[4820]: I0203 12:05:34.236429 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:34 crc kubenswrapper[4820]: I0203 12:05:34.236441 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:34Z","lastTransitionTime":"2026-02-03T12:05:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:05:34 crc kubenswrapper[4820]: I0203 12:05:34.338519 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:34 crc kubenswrapper[4820]: I0203 12:05:34.338558 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:34 crc kubenswrapper[4820]: I0203 12:05:34.338569 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:34 crc kubenswrapper[4820]: I0203 12:05:34.338581 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:34 crc kubenswrapper[4820]: I0203 12:05:34.338591 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:34Z","lastTransitionTime":"2026-02-03T12:05:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:34 crc kubenswrapper[4820]: I0203 12:05:34.441356 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:34 crc kubenswrapper[4820]: I0203 12:05:34.441409 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:34 crc kubenswrapper[4820]: I0203 12:05:34.441423 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:34 crc kubenswrapper[4820]: I0203 12:05:34.441440 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:34 crc kubenswrapper[4820]: I0203 12:05:34.441451 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:34Z","lastTransitionTime":"2026-02-03T12:05:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:05:34 crc kubenswrapper[4820]: I0203 12:05:34.461365 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-75mwm_cf99e305-aa5b-4171-94f6-1e64f20414dd/ovnkube-controller/2.log" Feb 03 12:05:34 crc kubenswrapper[4820]: I0203 12:05:34.465835 4820 scope.go:117] "RemoveContainer" containerID="ebf188d2f9bb71f6217f5c2fbd0199f0d9539bbc773152b66fb63c41cc52c2bb" Feb 03 12:05:34 crc kubenswrapper[4820]: E0203 12:05:34.466014 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-75mwm_openshift-ovn-kubernetes(cf99e305-aa5b-4171-94f6-1e64f20414dd)\"" pod="openshift-ovn-kubernetes/ovnkube-node-75mwm" podUID="cf99e305-aa5b-4171-94f6-1e64f20414dd" Feb 03 12:05:34 crc kubenswrapper[4820]: I0203 12:05:34.479132 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:34Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:34 crc kubenswrapper[4820]: I0203 12:05:34.491072 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dac06e9d9db1822ef050113f787ff46db678f4916bc9817ac03f61a509a61b6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:34Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:34 crc kubenswrapper[4820]: I0203 12:05:34.503229 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d140e30-6304-49be-a1a3-2d6b23f9aef3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48852428653b274f524077be71e20c4271b365d821b8f3235df2d2b41a9e8af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09641bd2ac341aeecdba4211412070d2ee33f42eb670fe75ab88b9a58931c289\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74f9a4c19a0ffe2d3fc0a77b57873721a29f0a1f70d578ba2f68bcceb68ccce2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://672f46bf03b8e74586edeb641cb45d9d7d8d4389c0b2cbb7a8eec43d6b4d801b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38554e01896341101d88815c0d8cd45a0114da19790e02a53b5ba3a91ee70e37\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"message\\\":\\\"le observer\\\\nW0203 12:05:02.811560 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0203 12:05:02.811703 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 12:05:02.814694 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3622958325/tls.crt::/tmp/serving-cert-3622958325/tls.key\\\\\\\"\\\\nI0203 12:05:03.097815 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0203 12:05:03.106823 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0203 12:05:03.106852 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0203 12:05:03.106874 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0203 12:05:03.106880 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0203 12:05:03.113994 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0203 12:05:03.114027 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0203 12:05:03.114019 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0203 12:05:03.114034 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 12:05:03.114043 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0203 12:05:03.114053 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0203 12:05:03.114057 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0203 12:05:03.114061 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0203 12:05:03.116858 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60992833ab89aebd2446f960ab6ae3565f9a17f4e86921d3b2bc68adc9704640\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b34fc810cf94c235cea364cd2441c73f56597b91b0a19b8e02498db5895a3f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b34fc810cf94c235cea364cd2441c73f56597b91b0a19b8e02498db5895a3f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:04:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:04:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:34Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:34 crc kubenswrapper[4820]: I0203 12:05:34.515611 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"76aa9d47-b80a-4058-8f92-4cdf0c41df48\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47438e7dc8d1eec9a8a061c1b6141e39f4c805dfafd1175c07235f7b9425719b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ade7b4944a0da0f0d18c5dd5a92ef407d6beac17985f1429d57672ff7e9beade\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://555f489476174f52e5f5d252e3f3f86f65e1593e641204d863bebdaa80b49bbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f84ddcad54789a48e27d495da8bf422e2349c09b477421d0323aae520546826\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f84ddcad54789a48e27d495da8bf422e2349c09b477421d0323aae520546826\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:04:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:04:43Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:34Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:34 crc kubenswrapper[4820]: I0203 12:05:34.527579 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0a8e1f86d24e8f09d6250cc9003db189b705c0285fbaab40a8f1a4ac4793137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:34Z is after 
2025-08-24T17:21:41Z" Feb 03 12:05:34 crc kubenswrapper[4820]: I0203 12:05:34.539355 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:34Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:34 crc kubenswrapper[4820]: I0203 12:05:34.544493 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:34 crc kubenswrapper[4820]: I0203 12:05:34.544567 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:34 crc kubenswrapper[4820]: I0203 12:05:34.544588 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:34 crc kubenswrapper[4820]: I0203 12:05:34.544615 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:34 crc kubenswrapper[4820]: I0203 12:05:34.544644 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:34Z","lastTransitionTime":"2026-02-03T12:05:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:05:34 crc kubenswrapper[4820]: I0203 12:05:34.552598 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:34Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:34 crc kubenswrapper[4820]: I0203 12:05:34.561911 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p5mx8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe0bc53e-6abb-4194-ae3d-109a4fd80372\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9eca081cbf8145867bf5df6d6bacdb9746e546e54aa81874fbfc2927958bc1b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrchn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p5mx8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:34Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:34 crc kubenswrapper[4820]: I0203 12:05:34.571819 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c02def6-29f2-448e-80ec-0c8ee290f053\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1643e498a576d565172492302a641bc81eae658b1611df707da1d833ea2c84f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xjx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e3612cc50c75efc4721ade62e3caf8a6cbcb41c49183b50da3639d7fce97b9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xjx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qj7xr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:34Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:34 crc kubenswrapper[4820]: I0203 12:05:34.585136 4820 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-dkfwm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6da6dd5-2847-482b-adc1-d82ead0af3e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b50e93f1825fc82dc2e0fc70a417d2e4412db367319656bfcea8fa9daf9fe152\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpcc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:05Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-dkfwm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:34Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:34 crc kubenswrapper[4820]: I0203 12:05:34.601558 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-75mwm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf99e305-aa5b-4171-94f6-1e64f20414dd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c998566abb62f49bb7527ea2c4b023c87a9e3db89ff5fa25b68896424924ea76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5b0072b2eccb422316702a70fb54f21fcd020e1627a625d1a7c63d46b71e48c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cer
t\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e8e47b3a2a50b4eccff391ccb45bf979a5fe46983b014cb03f465db928172ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0defe98bc51b1f567900f2d90138ceec07b4cae414fd3b448df503dba9a9460f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a67f31bdc8d6710cfbca063e2a811d4fbace42a9d3c3a53fba0f47c30ee00e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":tr
ue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8b54903e9478f26cfa1770a0e06298325e9b9f198a3aeaa8ed6dcc192c0cdb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebf188d2f9bb71f6217f5c2fbd0199f0d9539bbc773152b66fb63c41cc52c2bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ebf188d2f9bb71f6217f5c2fbd0199f0d9539bbc773152b66fb63c41cc52c2bb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T12:05:32Z\\\",\\\"message\\\":\\\"ig-daemon-qj7xr after 0 failed attempt(s)\\\\nI0203 12:05:32.735113 6493 default_network_controller.go:776] Recording success event on pod openshift-machine-config-operator/machine-config-daemon-qj7xr\\\\nI0203 12:05:32.734944 6493 ovn.go:134] Ensuring zone local for Pod openshift-dns/node-resolver-p5mx8 in node crc\\\\nI0203 12:05:32.735121 6493 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0203 12:05:32.735128 6493 obj_retry.go:386] Retry successful for *v1.Pod openshift-dns/node-resolver-p5mx8 after 0 failed attempt(s)\\\\nI0203 12:05:32.735133 6493 base_network_controller_pods.go:477] [default/openshift-network-diagnostics/network-check-source-55646444c4-trplf] creating logical port openshift-network-diagnostics_network-check-source-55646444c4-trplf for pod on switch crc\\\\nI0203 12:05:32.735160 6493 obj_retry.go:303] Retry object setup: *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8bbpp\\\\nF0203 12:05:32.735195 6493 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:31Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-75mwm_openshift-ovn-kubernetes(cf99e305-aa5b-4171-94f6-1e64f20414dd)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c9f050db727793c24f2f8bf3b387813e313f3b764be003c6849947f98a61d4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://707b3fd62e0f11f4f22464594158f7f1bbcbb9b37c1e49c70e2a0d1a6b5b07c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://707b3fd62e0f11f4f22464594158f7f1bbcbb9b37c1e49c70e2a0d1a6b5b07c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-75mwm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:34Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:34 crc kubenswrapper[4820]: I0203 12:05:34.609995 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-z8xrk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1cca669-281b-4756-8da8-3860684d3410\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09a38521ccb70b0b152fc51267f6d38cca610e2e8c9d12915482118f3a431b22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dxwqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:09Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-z8xrk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:34Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:34 crc kubenswrapper[4820]: I0203 12:05:34.625807 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a4e0669-c148-4af8-99eb-1de50da6d574\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d6f3a37b440a1db10c04386289033a0deab673f71efbdfba81597baf6ce5e86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fec48b5ddf675fb07150ebb88962534caead99f26403cf8ab51c17e2c6c09bf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a4c1209e1301b6e55cd4ea8bb06678ee4c7657c163a610f0cc2dd25f2cb3e61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75e1f7cf1c2120c222793819705a3b465a387c2
aaf33fe146fd46d9c76520aea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12446f64d4189185cf6fe19d38637e090482ef8da5fb5e68600cc00ba37bc2a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bb0a6b382c36acc46be9258e93be71317c26777d770cde9098402054303947c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bb0a6b382c36acc46be9258e93be71317c26777d770cde9098402054303947c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:04:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19153b79978b8e8cc53a7ab9cb95a70c9870c6b0babac711bce13809609f6ac5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19153b79978b8e8cc53a7ab9cb95a70c9870c6b0babac711bce13809609f6ac5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e75ec4662cf007f64da3ecaaff0cb4c45aa88746ff717403092583356953526\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e75ec4662cf007f64da3ecaaff0cb4c45aa88746ff717403092583356953526\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:04:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:04:43Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:34Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:34 crc kubenswrapper[4820]: I0203 12:05:34.635298 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cabb5864-1cc1-4b08-aed3-4eeee9e85bf9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d070e438f7f207c929ca6d110046fa8d320d1a7d97ef22a0e5e1372ff626729d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06e5365791987f1586b93dfe4db8bfe19415d8c400d6c0d75f312cc0f62f7661\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb5d088ef2ec1b90ef2d7af53d61c2789516bf4ef02e9ef06e1dc0c93326dcc9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2783e3045fb1f1e8da75e1a83f88bc0f24ce47eab79fb2b31b794c10f201588\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:04:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:34Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:34 crc kubenswrapper[4820]: I0203 12:05:34.646773 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:34 crc kubenswrapper[4820]: I0203 12:05:34.646802 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:34 crc kubenswrapper[4820]: I0203 12:05:34.646813 4820 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:34 crc kubenswrapper[4820]: I0203 12:05:34.646828 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:34 crc kubenswrapper[4820]: I0203 12:05:34.646839 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:34Z","lastTransitionTime":"2026-02-03T12:05:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:34 crc kubenswrapper[4820]: I0203 12:05:34.646940 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bbd1daf12ccc77f2658cac173f36c9af6a1c64072fc6bf13302a460f71f05e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5236ae101bb18d9bd1b8d8ac09705f96e5fd78d742e5174e88072570a7448ee8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\
"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:34Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:34 crc kubenswrapper[4820]: I0203 12:05:34.660839 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-b5qz9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d93ec7bc-4029-44a4-894d-03eff1388683\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8c2ef0f942d970aadc1837e5ab6eac2bdee9ab81c7b852514fc142796328a62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b171b1b37fa4899684005dd60f249e4cdc2ac58cb12658abe09d6d17007f3002\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b171b1b37fa4899684005dd60f249e4cdc2ac58cb12658abe09d6d17007f3002\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\
\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0a45a325e96479b1bb5913e1a78221c3b88c62fc434441e03d6b2e9ed476700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0a45a325e96479b1bb5913e1a78221c3b88c62fc434441e03d6b2e9ed476700\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a25830d512e5d0dbbfa4775f85af51e80915adca718997b34e926220ff9377d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a25830d512e5d0dbbfa4775f85af51e80915adca718997b34e926220ff9377d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff3da1549bd61df7cc473a1a0a17f147b940c3f88f0b72828357ea7982406088\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"rest
artCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff3da1549bd61df7cc473a1a0a17f147b940c3f88f0b72828357ea7982406088\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a1698d43c6b74e8ebb9ff13da372c4025ac2637c1f5a0376e4a998e7bf20359\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a1698d43c6b74e8ebb9ff13da372c4025ac2637c1f5a0376e4a998e7bf20359\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8af3494a09f6242fa8fab8cba8b3db5476625376c0f8f382e2140ca27ef329f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8af3494a09f6242fa8fab8cba8b3db5476625376c0f8f382e2140ca27ef329f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-b5qz9\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:34Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:34 crc kubenswrapper[4820]: I0203 12:05:34.670209 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8bbpp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d8005fd9-8efc-4707-a3dd-60cd20607d42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9020e42fc6a5c6c681b1854e93cc1e88f44b7582e28fe1ee40900bb2d39b4008\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xg9sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0333755fe31b49cea4cbfef987450b5800b3330c65cf84070036eea4194c13a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xg9sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\
\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8bbpp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:34Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:34 crc kubenswrapper[4820]: I0203 12:05:34.680227 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7vz6k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6351e457-e601-4889-853c-560646bc4b43\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbjk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbjk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:18Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7vz6k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:34Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:34 crc kubenswrapper[4820]: I0203 12:05:34.749840 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:34 crc kubenswrapper[4820]: I0203 12:05:34.749963 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:34 crc kubenswrapper[4820]: I0203 12:05:34.749983 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:34 crc kubenswrapper[4820]: I0203 12:05:34.750006 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:34 crc kubenswrapper[4820]: I0203 12:05:34.750023 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:34Z","lastTransitionTime":"2026-02-03T12:05:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:34 crc kubenswrapper[4820]: I0203 12:05:34.767695 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 12:05:34 crc kubenswrapper[4820]: E0203 12:05:34.767953 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 12:06:06.767849933 +0000 UTC m=+84.290925817 (durationBeforeRetry 32s). 
Feb 03 12:05:34 crc kubenswrapper[4820]: I0203 12:05:34.852787 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 12:05:34 crc kubenswrapper[4820]: I0203 12:05:34.852855 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 12:05:34 crc kubenswrapper[4820]: I0203 12:05:34.852872 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 12:05:34 crc kubenswrapper[4820]: I0203 12:05:34.852938 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 12:05:34 crc kubenswrapper[4820]: I0203 12:05:34.852976 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:34Z","lastTransitionTime":"2026-02-03T12:05:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 12:05:34 crc kubenswrapper[4820]: I0203 12:05:34.869170 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 03 12:05:34 crc kubenswrapper[4820]: I0203 12:05:34.869237 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 03 12:05:34 crc kubenswrapper[4820]: I0203 12:05:34.869299 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 03 12:05:34 crc kubenswrapper[4820]: E0203 12:05:34.869351 4820 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Feb 03 12:05:34 crc kubenswrapper[4820]: E0203 12:05:34.869390 4820 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Feb 03 12:05:34 crc kubenswrapper[4820]: E0203 12:05:34.869433 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-03 12:06:06.869412644 +0000 UTC m=+84.392488528 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
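The mount failures above all end in object "namespace"/"name" not registered, meaning the kubelet's volume plugins could not resolve the ConfigMap or Secret backing the volume from the kubelet's own object cache. One way to rule out the simplest cause, the object genuinely missing on the API server, is to query for it directly. A client-go sketch, with a placeholder kubeconfig path, checking the two ConfigMaps the failing projected volumes reference:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Placeholder kubeconfig path; substitute a real one.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// ConfigMaps named in the projected-volume errors in this log window.
	ns := "openshift-network-diagnostics"
	for _, name := range []string{"kube-root-ca.crt", "openshift-service-ca.crt"} {
		if _, err := cs.CoreV1().ConfigMaps(ns).Get(context.TODO(), name, metav1.GetOptions{}); err != nil {
			fmt.Printf("%s/%s: %v\n", ns, name, err)
			continue
		}
		fmt.Printf("%s/%s: present\n", ns, name)
	}
}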
"{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-03 12:06:06.869412644 +0000 UTC m=+84.392488528 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 03 12:05:34 crc kubenswrapper[4820]: E0203 12:05:34.869448 4820 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 03 12:05:34 crc kubenswrapper[4820]: E0203 12:05:34.869485 4820 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 03 12:05:34 crc kubenswrapper[4820]: E0203 12:05:34.869579 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-03 12:06:06.869547177 +0000 UTC m=+84.392623101 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 03 12:05:34 crc kubenswrapper[4820]: I0203 12:05:34.869359 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 12:05:34 crc kubenswrapper[4820]: E0203 12:05:34.869662 4820 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 03 12:05:34 crc kubenswrapper[4820]: E0203 12:05:34.869691 4820 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 03 12:05:34 crc kubenswrapper[4820]: E0203 12:05:34.869730 4820 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 03 12:05:34 crc kubenswrapper[4820]: E0203 12:05:34.869753 4820 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 03 12:05:34 crc 
Feb 03 12:05:34 crc kubenswrapper[4820]: I0203 12:05:34.956317 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 12:05:34 crc kubenswrapper[4820]: I0203 12:05:34.956390 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 12:05:34 crc kubenswrapper[4820]: I0203 12:05:34.956412 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 12:05:34 crc kubenswrapper[4820]: I0203 12:05:34.956443 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 12:05:34 crc kubenswrapper[4820]: I0203 12:05:34.956464 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:34Z","lastTransitionTime":"2026-02-03T12:05:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
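The recurring KubeletNotReady condition quotes the runtime's CNI readiness test: no network configuration file under /etc/kubernetes/cni/net.d/. A sketch of that test, scanning the directory named in the log; the accepted extensions mirror common libcni behavior and are an assumption here:

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	confDir := "/etc/kubernetes/cni/net.d" // directory named in the log
	entries, err := os.ReadDir(confDir)
	if err != nil {
		panic(err)
	}
	var confs []string
	for _, e := range entries {
		// Assumed extension set, mirroring common libcni behavior.
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			confs = append(confs, e.Name())
		}
	}
	if len(confs) == 0 {
		fmt.Println("no CNI configuration file found; network plugin not ready")
		return
	}
	fmt.Println("CNI configs present:", confs)
}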
Has your network provider started?"} Feb 03 12:05:35 crc kubenswrapper[4820]: I0203 12:05:35.059746 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:35 crc kubenswrapper[4820]: I0203 12:05:35.059802 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:35 crc kubenswrapper[4820]: I0203 12:05:35.059816 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:35 crc kubenswrapper[4820]: I0203 12:05:35.059839 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:35 crc kubenswrapper[4820]: I0203 12:05:35.059852 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:35Z","lastTransitionTime":"2026-02-03T12:05:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:35 crc kubenswrapper[4820]: I0203 12:05:35.119548 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 00:33:07.348699939 +0000 UTC Feb 03 12:05:35 crc kubenswrapper[4820]: I0203 12:05:35.142084 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 12:05:35 crc kubenswrapper[4820]: I0203 12:05:35.142167 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 12:05:35 crc kubenswrapper[4820]: I0203 12:05:35.142211 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 12:05:35 crc kubenswrapper[4820]: E0203 12:05:35.142264 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 12:05:35 crc kubenswrapper[4820]: E0203 12:05:35.142423 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 12:05:35 crc kubenswrapper[4820]: E0203 12:05:35.142553 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 12:05:35 crc kubenswrapper[4820]: I0203 12:05:35.162874 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:35 crc kubenswrapper[4820]: I0203 12:05:35.162945 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:35 crc kubenswrapper[4820]: I0203 12:05:35.162959 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:35 crc kubenswrapper[4820]: I0203 12:05:35.162979 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:35 crc kubenswrapper[4820]: I0203 12:05:35.162994 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:35Z","lastTransitionTime":"2026-02-03T12:05:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:35 crc kubenswrapper[4820]: I0203 12:05:35.265800 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:35 crc kubenswrapper[4820]: I0203 12:05:35.265841 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:35 crc kubenswrapper[4820]: I0203 12:05:35.265855 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:35 crc kubenswrapper[4820]: I0203 12:05:35.265872 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:35 crc kubenswrapper[4820]: I0203 12:05:35.265906 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:35Z","lastTransitionTime":"2026-02-03T12:05:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:05:35 crc kubenswrapper[4820]: I0203 12:05:35.368675 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:35 crc kubenswrapper[4820]: I0203 12:05:35.368720 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:35 crc kubenswrapper[4820]: I0203 12:05:35.368732 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:35 crc kubenswrapper[4820]: I0203 12:05:35.368749 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:35 crc kubenswrapper[4820]: I0203 12:05:35.368762 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:35Z","lastTransitionTime":"2026-02-03T12:05:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:35 crc kubenswrapper[4820]: I0203 12:05:35.471464 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:35 crc kubenswrapper[4820]: I0203 12:05:35.471534 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:35 crc kubenswrapper[4820]: I0203 12:05:35.471552 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:35 crc kubenswrapper[4820]: I0203 12:05:35.471577 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:35 crc kubenswrapper[4820]: I0203 12:05:35.471595 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:35Z","lastTransitionTime":"2026-02-03T12:05:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:35 crc kubenswrapper[4820]: I0203 12:05:35.575003 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:35 crc kubenswrapper[4820]: I0203 12:05:35.575076 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:35 crc kubenswrapper[4820]: I0203 12:05:35.575100 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:35 crc kubenswrapper[4820]: I0203 12:05:35.575129 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:35 crc kubenswrapper[4820]: I0203 12:05:35.575146 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:35Z","lastTransitionTime":"2026-02-03T12:05:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:05:35 crc kubenswrapper[4820]: I0203 12:05:35.677374 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:35 crc kubenswrapper[4820]: I0203 12:05:35.677422 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:35 crc kubenswrapper[4820]: I0203 12:05:35.677438 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:35 crc kubenswrapper[4820]: I0203 12:05:35.677455 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:35 crc kubenswrapper[4820]: I0203 12:05:35.677467 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:35Z","lastTransitionTime":"2026-02-03T12:05:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:35 crc kubenswrapper[4820]: I0203 12:05:35.780620 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:35 crc kubenswrapper[4820]: I0203 12:05:35.780673 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:35 crc kubenswrapper[4820]: I0203 12:05:35.780687 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:35 crc kubenswrapper[4820]: I0203 12:05:35.780709 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:35 crc kubenswrapper[4820]: I0203 12:05:35.780726 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:35Z","lastTransitionTime":"2026-02-03T12:05:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:35 crc kubenswrapper[4820]: I0203 12:05:35.883168 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:35 crc kubenswrapper[4820]: I0203 12:05:35.883197 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:35 crc kubenswrapper[4820]: I0203 12:05:35.883206 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:35 crc kubenswrapper[4820]: I0203 12:05:35.883218 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:35 crc kubenswrapper[4820]: I0203 12:05:35.883229 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:35Z","lastTransitionTime":"2026-02-03T12:05:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:05:35 crc kubenswrapper[4820]: I0203 12:05:35.984968 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:35 crc kubenswrapper[4820]: I0203 12:05:35.985003 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:35 crc kubenswrapper[4820]: I0203 12:05:35.985014 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:35 crc kubenswrapper[4820]: I0203 12:05:35.985027 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:35 crc kubenswrapper[4820]: I0203 12:05:35.985039 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:35Z","lastTransitionTime":"2026-02-03T12:05:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:36 crc kubenswrapper[4820]: I0203 12:05:36.087516 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:36 crc kubenswrapper[4820]: I0203 12:05:36.087573 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:36 crc kubenswrapper[4820]: I0203 12:05:36.087588 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:36 crc kubenswrapper[4820]: I0203 12:05:36.087610 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:36 crc kubenswrapper[4820]: I0203 12:05:36.087625 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:36Z","lastTransitionTime":"2026-02-03T12:05:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:36 crc kubenswrapper[4820]: I0203 12:05:36.120193 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 00:55:07.370424354 +0000 UTC Feb 03 12:05:36 crc kubenswrapper[4820]: I0203 12:05:36.141874 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7vz6k" Feb 03 12:05:36 crc kubenswrapper[4820]: E0203 12:05:36.142435 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-7vz6k" podUID="6351e457-e601-4889-853c-560646bc4b43" Feb 03 12:05:36 crc kubenswrapper[4820]: I0203 12:05:36.191104 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:36 crc kubenswrapper[4820]: I0203 12:05:36.191158 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:36 crc kubenswrapper[4820]: I0203 12:05:36.191166 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:36 crc kubenswrapper[4820]: I0203 12:05:36.191212 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:36 crc kubenswrapper[4820]: I0203 12:05:36.191231 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:36Z","lastTransitionTime":"2026-02-03T12:05:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:36 crc kubenswrapper[4820]: I0203 12:05:36.294537 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:36 crc kubenswrapper[4820]: I0203 12:05:36.294865 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:36 crc kubenswrapper[4820]: I0203 12:05:36.295002 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:36 crc kubenswrapper[4820]: I0203 12:05:36.295108 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:36 crc kubenswrapper[4820]: I0203 12:05:36.295204 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:36Z","lastTransitionTime":"2026-02-03T12:05:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:05:36 crc kubenswrapper[4820]: I0203 12:05:36.398168 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:36 crc kubenswrapper[4820]: I0203 12:05:36.398455 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:36 crc kubenswrapper[4820]: I0203 12:05:36.398541 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:36 crc kubenswrapper[4820]: I0203 12:05:36.398671 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:36 crc kubenswrapper[4820]: I0203 12:05:36.398761 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:36Z","lastTransitionTime":"2026-02-03T12:05:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:36 crc kubenswrapper[4820]: I0203 12:05:36.501763 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:36 crc kubenswrapper[4820]: I0203 12:05:36.501831 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:36 crc kubenswrapper[4820]: I0203 12:05:36.501854 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:36 crc kubenswrapper[4820]: I0203 12:05:36.501883 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:36 crc kubenswrapper[4820]: I0203 12:05:36.501939 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:36Z","lastTransitionTime":"2026-02-03T12:05:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:36 crc kubenswrapper[4820]: I0203 12:05:36.604526 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:36 crc kubenswrapper[4820]: I0203 12:05:36.604560 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:36 crc kubenswrapper[4820]: I0203 12:05:36.604570 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:36 crc kubenswrapper[4820]: I0203 12:05:36.604583 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:36 crc kubenswrapper[4820]: I0203 12:05:36.604592 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:36Z","lastTransitionTime":"2026-02-03T12:05:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:05:36 crc kubenswrapper[4820]: I0203 12:05:36.706606 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:36 crc kubenswrapper[4820]: I0203 12:05:36.706669 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:36 crc kubenswrapper[4820]: I0203 12:05:36.706685 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:36 crc kubenswrapper[4820]: I0203 12:05:36.706706 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:36 crc kubenswrapper[4820]: I0203 12:05:36.706720 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:36Z","lastTransitionTime":"2026-02-03T12:05:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:36 crc kubenswrapper[4820]: I0203 12:05:36.809529 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:36 crc kubenswrapper[4820]: I0203 12:05:36.809581 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:36 crc kubenswrapper[4820]: I0203 12:05:36.809594 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:36 crc kubenswrapper[4820]: I0203 12:05:36.809621 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:36 crc kubenswrapper[4820]: I0203 12:05:36.809632 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:36Z","lastTransitionTime":"2026-02-03T12:05:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:36 crc kubenswrapper[4820]: I0203 12:05:36.912213 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:36 crc kubenswrapper[4820]: I0203 12:05:36.912250 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:36 crc kubenswrapper[4820]: I0203 12:05:36.912265 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:36 crc kubenswrapper[4820]: I0203 12:05:36.912281 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:36 crc kubenswrapper[4820]: I0203 12:05:36.912293 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:36Z","lastTransitionTime":"2026-02-03T12:05:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:05:37 crc kubenswrapper[4820]: I0203 12:05:37.014860 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:37 crc kubenswrapper[4820]: I0203 12:05:37.014914 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:37 crc kubenswrapper[4820]: I0203 12:05:37.014922 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:37 crc kubenswrapper[4820]: I0203 12:05:37.014934 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:37 crc kubenswrapper[4820]: I0203 12:05:37.014943 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:37Z","lastTransitionTime":"2026-02-03T12:05:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:37 crc kubenswrapper[4820]: I0203 12:05:37.117635 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:37 crc kubenswrapper[4820]: I0203 12:05:37.117673 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:37 crc kubenswrapper[4820]: I0203 12:05:37.117683 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:37 crc kubenswrapper[4820]: I0203 12:05:37.117696 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:37 crc kubenswrapper[4820]: I0203 12:05:37.117706 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:37Z","lastTransitionTime":"2026-02-03T12:05:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:37 crc kubenswrapper[4820]: I0203 12:05:37.121006 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 10:25:43.612480003 +0000 UTC Feb 03 12:05:37 crc kubenswrapper[4820]: I0203 12:05:37.142270 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 12:05:37 crc kubenswrapper[4820]: E0203 12:05:37.142444 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 12:05:37 crc kubenswrapper[4820]: I0203 12:05:37.142964 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 12:05:37 crc kubenswrapper[4820]: E0203 12:05:37.143072 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 12:05:37 crc kubenswrapper[4820]: I0203 12:05:37.143181 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 12:05:37 crc kubenswrapper[4820]: E0203 12:05:37.143336 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 12:05:37 crc kubenswrapper[4820]: I0203 12:05:37.220642 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:37 crc kubenswrapper[4820]: I0203 12:05:37.220684 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:37 crc kubenswrapper[4820]: I0203 12:05:37.220693 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:37 crc kubenswrapper[4820]: I0203 12:05:37.220706 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:37 crc kubenswrapper[4820]: I0203 12:05:37.220746 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:37Z","lastTransitionTime":"2026-02-03T12:05:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:05:37 crc kubenswrapper[4820]: I0203 12:05:37.323876 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:37 crc kubenswrapper[4820]: I0203 12:05:37.324371 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:37 crc kubenswrapper[4820]: I0203 12:05:37.324450 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:37 crc kubenswrapper[4820]: I0203 12:05:37.324537 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:37 crc kubenswrapper[4820]: I0203 12:05:37.324955 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:37Z","lastTransitionTime":"2026-02-03T12:05:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:37 crc kubenswrapper[4820]: I0203 12:05:37.428715 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:37 crc kubenswrapper[4820]: I0203 12:05:37.428773 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:37 crc kubenswrapper[4820]: I0203 12:05:37.428787 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:37 crc kubenswrapper[4820]: I0203 12:05:37.428804 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:37 crc kubenswrapper[4820]: I0203 12:05:37.428816 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:37Z","lastTransitionTime":"2026-02-03T12:05:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:37 crc kubenswrapper[4820]: I0203 12:05:37.531574 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:37 crc kubenswrapper[4820]: I0203 12:05:37.531622 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:37 crc kubenswrapper[4820]: I0203 12:05:37.531633 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:37 crc kubenswrapper[4820]: I0203 12:05:37.531649 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:37 crc kubenswrapper[4820]: I0203 12:05:37.531658 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:37Z","lastTransitionTime":"2026-02-03T12:05:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:05:37 crc kubenswrapper[4820]: I0203 12:05:37.634602 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:37 crc kubenswrapper[4820]: I0203 12:05:37.634652 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:37 crc kubenswrapper[4820]: I0203 12:05:37.634664 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:37 crc kubenswrapper[4820]: I0203 12:05:37.634682 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:37 crc kubenswrapper[4820]: I0203 12:05:37.634695 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:37Z","lastTransitionTime":"2026-02-03T12:05:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:37 crc kubenswrapper[4820]: I0203 12:05:37.737555 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:37 crc kubenswrapper[4820]: I0203 12:05:37.737591 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:37 crc kubenswrapper[4820]: I0203 12:05:37.737603 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:37 crc kubenswrapper[4820]: I0203 12:05:37.737621 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:37 crc kubenswrapper[4820]: I0203 12:05:37.737633 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:37Z","lastTransitionTime":"2026-02-03T12:05:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:37 crc kubenswrapper[4820]: I0203 12:05:37.839345 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:37 crc kubenswrapper[4820]: I0203 12:05:37.839383 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:37 crc kubenswrapper[4820]: I0203 12:05:37.839392 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:37 crc kubenswrapper[4820]: I0203 12:05:37.839407 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:37 crc kubenswrapper[4820]: I0203 12:05:37.839417 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:37Z","lastTransitionTime":"2026-02-03T12:05:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:05:37 crc kubenswrapper[4820]: I0203 12:05:37.942049 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:37 crc kubenswrapper[4820]: I0203 12:05:37.942568 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:37 crc kubenswrapper[4820]: I0203 12:05:37.942636 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:37 crc kubenswrapper[4820]: I0203 12:05:37.942701 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:37 crc kubenswrapper[4820]: I0203 12:05:37.942778 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:37Z","lastTransitionTime":"2026-02-03T12:05:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:38 crc kubenswrapper[4820]: I0203 12:05:38.044753 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:38 crc kubenswrapper[4820]: I0203 12:05:38.045020 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:38 crc kubenswrapper[4820]: I0203 12:05:38.045098 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:38 crc kubenswrapper[4820]: I0203 12:05:38.045171 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:38 crc kubenswrapper[4820]: I0203 12:05:38.045237 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:38Z","lastTransitionTime":"2026-02-03T12:05:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:38 crc kubenswrapper[4820]: I0203 12:05:38.121691 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 22:43:08.084543587 +0000 UTC Feb 03 12:05:38 crc kubenswrapper[4820]: I0203 12:05:38.142112 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7vz6k" Feb 03 12:05:38 crc kubenswrapper[4820]: E0203 12:05:38.142243 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-7vz6k" podUID="6351e457-e601-4889-853c-560646bc4b43" Feb 03 12:05:38 crc kubenswrapper[4820]: I0203 12:05:38.147695 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:38 crc kubenswrapper[4820]: I0203 12:05:38.147980 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:38 crc kubenswrapper[4820]: I0203 12:05:38.148075 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:38 crc kubenswrapper[4820]: I0203 12:05:38.148171 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:38 crc kubenswrapper[4820]: I0203 12:05:38.148260 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:38Z","lastTransitionTime":"2026-02-03T12:05:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:38 crc kubenswrapper[4820]: I0203 12:05:38.251711 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:38 crc kubenswrapper[4820]: I0203 12:05:38.252109 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:38 crc kubenswrapper[4820]: I0203 12:05:38.252261 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:38 crc kubenswrapper[4820]: I0203 12:05:38.252402 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:38 crc kubenswrapper[4820]: I0203 12:05:38.252529 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:38Z","lastTransitionTime":"2026-02-03T12:05:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:05:38 crc kubenswrapper[4820]: I0203 12:05:38.355606 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:38 crc kubenswrapper[4820]: I0203 12:05:38.355708 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:38 crc kubenswrapper[4820]: I0203 12:05:38.355752 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:38 crc kubenswrapper[4820]: I0203 12:05:38.355775 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:38 crc kubenswrapper[4820]: I0203 12:05:38.355790 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:38Z","lastTransitionTime":"2026-02-03T12:05:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:38 crc kubenswrapper[4820]: I0203 12:05:38.458143 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:38 crc kubenswrapper[4820]: I0203 12:05:38.458191 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:38 crc kubenswrapper[4820]: I0203 12:05:38.458200 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:38 crc kubenswrapper[4820]: I0203 12:05:38.458215 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:38 crc kubenswrapper[4820]: I0203 12:05:38.458224 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:38Z","lastTransitionTime":"2026-02-03T12:05:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:38 crc kubenswrapper[4820]: I0203 12:05:38.560783 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:38 crc kubenswrapper[4820]: I0203 12:05:38.560817 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:38 crc kubenswrapper[4820]: I0203 12:05:38.560828 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:38 crc kubenswrapper[4820]: I0203 12:05:38.560843 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:38 crc kubenswrapper[4820]: I0203 12:05:38.560856 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:38Z","lastTransitionTime":"2026-02-03T12:05:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:05:38 crc kubenswrapper[4820]: I0203 12:05:38.662910 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:38 crc kubenswrapper[4820]: I0203 12:05:38.663178 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:38 crc kubenswrapper[4820]: I0203 12:05:38.663441 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:38 crc kubenswrapper[4820]: I0203 12:05:38.663572 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:38 crc kubenswrapper[4820]: I0203 12:05:38.663701 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:38Z","lastTransitionTime":"2026-02-03T12:05:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:38 crc kubenswrapper[4820]: I0203 12:05:38.766533 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:38 crc kubenswrapper[4820]: I0203 12:05:38.766588 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:38 crc kubenswrapper[4820]: I0203 12:05:38.766605 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:38 crc kubenswrapper[4820]: I0203 12:05:38.766624 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:38 crc kubenswrapper[4820]: I0203 12:05:38.766639 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:38Z","lastTransitionTime":"2026-02-03T12:05:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:38 crc kubenswrapper[4820]: I0203 12:05:38.869882 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:38 crc kubenswrapper[4820]: I0203 12:05:38.870304 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:38 crc kubenswrapper[4820]: I0203 12:05:38.870427 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:38 crc kubenswrapper[4820]: I0203 12:05:38.870555 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:38 crc kubenswrapper[4820]: I0203 12:05:38.870700 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:38Z","lastTransitionTime":"2026-02-03T12:05:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:05:38 crc kubenswrapper[4820]: I0203 12:05:38.973229 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:38 crc kubenswrapper[4820]: I0203 12:05:38.973299 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:38 crc kubenswrapper[4820]: I0203 12:05:38.973317 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:38 crc kubenswrapper[4820]: I0203 12:05:38.973339 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:38 crc kubenswrapper[4820]: I0203 12:05:38.973356 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:38Z","lastTransitionTime":"2026-02-03T12:05:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:39 crc kubenswrapper[4820]: I0203 12:05:39.076159 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:39 crc kubenswrapper[4820]: I0203 12:05:39.076194 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:39 crc kubenswrapper[4820]: I0203 12:05:39.076202 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:39 crc kubenswrapper[4820]: I0203 12:05:39.076215 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:39 crc kubenswrapper[4820]: I0203 12:05:39.076224 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:39Z","lastTransitionTime":"2026-02-03T12:05:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:39 crc kubenswrapper[4820]: I0203 12:05:39.122155 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 12:04:49.254637 +0000 UTC Feb 03 12:05:39 crc kubenswrapper[4820]: I0203 12:05:39.142487 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 12:05:39 crc kubenswrapper[4820]: I0203 12:05:39.142535 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 12:05:39 crc kubenswrapper[4820]: I0203 12:05:39.142667 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 12:05:39 crc kubenswrapper[4820]: E0203 12:05:39.142665 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 12:05:39 crc kubenswrapper[4820]: E0203 12:05:39.142918 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 12:05:39 crc kubenswrapper[4820]: E0203 12:05:39.143430 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 12:05:39 crc kubenswrapper[4820]: I0203 12:05:39.178700 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:39 crc kubenswrapper[4820]: I0203 12:05:39.178740 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:39 crc kubenswrapper[4820]: I0203 12:05:39.178749 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:39 crc kubenswrapper[4820]: I0203 12:05:39.178761 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:39 crc kubenswrapper[4820]: I0203 12:05:39.178771 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:39Z","lastTransitionTime":"2026-02-03T12:05:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:05:39 crc kubenswrapper[4820]: I0203 12:05:39.280610 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:39 crc kubenswrapper[4820]: I0203 12:05:39.280641 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:39 crc kubenswrapper[4820]: I0203 12:05:39.280649 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:39 crc kubenswrapper[4820]: I0203 12:05:39.280670 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:39 crc kubenswrapper[4820]: I0203 12:05:39.280687 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:39Z","lastTransitionTime":"2026-02-03T12:05:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:39 crc kubenswrapper[4820]: I0203 12:05:39.383323 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:39 crc kubenswrapper[4820]: I0203 12:05:39.383361 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:39 crc kubenswrapper[4820]: I0203 12:05:39.383370 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:39 crc kubenswrapper[4820]: I0203 12:05:39.383383 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:39 crc kubenswrapper[4820]: I0203 12:05:39.383392 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:39Z","lastTransitionTime":"2026-02-03T12:05:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:39 crc kubenswrapper[4820]: I0203 12:05:39.485709 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:39 crc kubenswrapper[4820]: I0203 12:05:39.486002 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:39 crc kubenswrapper[4820]: I0203 12:05:39.486135 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:39 crc kubenswrapper[4820]: I0203 12:05:39.486255 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:39 crc kubenswrapper[4820]: I0203 12:05:39.486337 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:39Z","lastTransitionTime":"2026-02-03T12:05:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:05:39 crc kubenswrapper[4820]: I0203 12:05:39.571676 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:39 crc kubenswrapper[4820]: I0203 12:05:39.571713 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:39 crc kubenswrapper[4820]: I0203 12:05:39.571723 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:39 crc kubenswrapper[4820]: I0203 12:05:39.571737 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:39 crc kubenswrapper[4820]: I0203 12:05:39.571748 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:39Z","lastTransitionTime":"2026-02-03T12:05:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:39 crc kubenswrapper[4820]: E0203 12:05:39.582750 4820 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:05:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:05:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:39Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:05:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:05:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"83c4bcff-fd36-4e8a-96f0-3320ea01106a\\\",\\\"systemUUID\\\":\\\"a4221fcb-5776-4539-8cb5-9da3bff4d7a8\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:39Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:39 crc kubenswrapper[4820]: I0203 12:05:39.586353 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:39 crc kubenswrapper[4820]: I0203 12:05:39.586379 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 03 12:05:39 crc kubenswrapper[4820]: I0203 12:05:39.586389 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:39 crc kubenswrapper[4820]: I0203 12:05:39.586404 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:39 crc kubenswrapper[4820]: I0203 12:05:39.586415 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:39Z","lastTransitionTime":"2026-02-03T12:05:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:39 crc kubenswrapper[4820]: E0203 12:05:39.598476 4820 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:05:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:05:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:39Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:05:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:05:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"83c4bcff-fd36-4e8a-96f0-3320ea01106a\\\",\\\"systemUUID\\\":\\\"a4221fcb-5776-4539-8cb5-9da3bff4d7a8\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:39Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:39 crc kubenswrapper[4820]: I0203 12:05:39.601294 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:39 crc kubenswrapper[4820]: I0203 12:05:39.601321 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 03 12:05:39 crc kubenswrapper[4820]: I0203 12:05:39.601330 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:39 crc kubenswrapper[4820]: I0203 12:05:39.601343 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:39 crc kubenswrapper[4820]: I0203 12:05:39.601352 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:39Z","lastTransitionTime":"2026-02-03T12:05:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:39 crc kubenswrapper[4820]: E0203 12:05:39.613252 4820 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:05:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:05:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:39Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:05:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:05:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"83c4bcff-fd36-4e8a-96f0-3320ea01106a\\\",\\\"systemUUID\\\":\\\"a4221fcb-5776-4539-8cb5-9da3bff4d7a8\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:39Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:39 crc kubenswrapper[4820]: I0203 12:05:39.616156 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:39 crc kubenswrapper[4820]: I0203 12:05:39.616195 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 03 12:05:39 crc kubenswrapper[4820]: I0203 12:05:39.616206 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:39 crc kubenswrapper[4820]: I0203 12:05:39.616224 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:39 crc kubenswrapper[4820]: I0203 12:05:39.616237 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:39Z","lastTransitionTime":"2026-02-03T12:05:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:39 crc kubenswrapper[4820]: E0203 12:05:39.627443 4820 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:05:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:05:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:39Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:05:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:05:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"83c4bcff-fd36-4e8a-96f0-3320ea01106a\\\",\\\"systemUUID\\\":\\\"a4221fcb-5776-4539-8cb5-9da3bff4d7a8\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:39Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:39 crc kubenswrapper[4820]: I0203 12:05:39.630560 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:39 crc kubenswrapper[4820]: I0203 12:05:39.630633 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 03 12:05:39 crc kubenswrapper[4820]: I0203 12:05:39.630649 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:39 crc kubenswrapper[4820]: I0203 12:05:39.630670 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:39 crc kubenswrapper[4820]: I0203 12:05:39.630688 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:39Z","lastTransitionTime":"2026-02-03T12:05:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:39 crc kubenswrapper[4820]: E0203 12:05:39.641101 4820 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:05:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:05:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:39Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:05:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:05:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"83c4bcff-fd36-4e8a-96f0-3320ea01106a\\\",\\\"systemUUID\\\":\\\"a4221fcb-5776-4539-8cb5-9da3bff4d7a8\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:39Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:39 crc kubenswrapper[4820]: E0203 12:05:39.641282 4820 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 03 12:05:39 crc kubenswrapper[4820]: I0203 12:05:39.642539 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 03 12:05:39 crc kubenswrapper[4820]: I0203 12:05:39.642575 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:39 crc kubenswrapper[4820]: I0203 12:05:39.642584 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:39 crc kubenswrapper[4820]: I0203 12:05:39.642600 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:39 crc kubenswrapper[4820]: I0203 12:05:39.642609 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:39Z","lastTransitionTime":"2026-02-03T12:05:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:39 crc kubenswrapper[4820]: I0203 12:05:39.744736 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:39 crc kubenswrapper[4820]: I0203 12:05:39.744770 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:39 crc kubenswrapper[4820]: I0203 12:05:39.744781 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:39 crc kubenswrapper[4820]: I0203 12:05:39.744796 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:39 crc kubenswrapper[4820]: I0203 12:05:39.744807 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:39Z","lastTransitionTime":"2026-02-03T12:05:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:39 crc kubenswrapper[4820]: I0203 12:05:39.847455 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:39 crc kubenswrapper[4820]: I0203 12:05:39.847501 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:39 crc kubenswrapper[4820]: I0203 12:05:39.847514 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:39 crc kubenswrapper[4820]: I0203 12:05:39.847533 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:39 crc kubenswrapper[4820]: I0203 12:05:39.847545 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:39Z","lastTransitionTime":"2026-02-03T12:05:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:05:39 crc kubenswrapper[4820]: I0203 12:05:39.950397 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:39 crc kubenswrapper[4820]: I0203 12:05:39.950458 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:39 crc kubenswrapper[4820]: I0203 12:05:39.950476 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:39 crc kubenswrapper[4820]: I0203 12:05:39.950501 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:39 crc kubenswrapper[4820]: I0203 12:05:39.950537 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:39Z","lastTransitionTime":"2026-02-03T12:05:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:40 crc kubenswrapper[4820]: I0203 12:05:40.052652 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:40 crc kubenswrapper[4820]: I0203 12:05:40.052686 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:40 crc kubenswrapper[4820]: I0203 12:05:40.052694 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:40 crc kubenswrapper[4820]: I0203 12:05:40.052707 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:40 crc kubenswrapper[4820]: I0203 12:05:40.052716 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:40Z","lastTransitionTime":"2026-02-03T12:05:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:40 crc kubenswrapper[4820]: I0203 12:05:40.123278 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 00:58:53.72504389 +0000 UTC Feb 03 12:05:40 crc kubenswrapper[4820]: I0203 12:05:40.141747 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7vz6k" Feb 03 12:05:40 crc kubenswrapper[4820]: E0203 12:05:40.141870 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
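
The NetworkReady=false condition repeated above is raised because the container runtime finds no CNI network configuration on disk. As a rough stdlib-only sketch (not the actual CRI-O or kubelet code), a check equivalent to the one being reported, scanning /etc/kubernetes/cni/net.d/ for .conf, .conflist, or .json files, could look like this:

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// confDir comes from the log message above; the extensions mirror the
// ones commonly accepted for CNI network configuration files.
const confDir = "/etc/kubernetes/cni/net.d"

func hasCNIConfig(dir string) (bool, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return false, err
	}
	for _, e := range entries {
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			return true, nil
		}
	}
	return false, nil
}

func main() {
	ok, err := hasCNIConfig(confDir)
	if err != nil || !ok {
		// This is the situation the kubelet keeps reporting above.
		fmt.Printf("no CNI configuration file in %s/ (err=%v)\n", confDir, err)
		return
	}
	fmt.Println("CNI configuration present; network plugin can initialize")
}
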
pod="openshift-multus/network-metrics-daemon-7vz6k" podUID="6351e457-e601-4889-853c-560646bc4b43" Feb 03 12:05:40 crc kubenswrapper[4820]: I0203 12:05:40.154409 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:40 crc kubenswrapper[4820]: I0203 12:05:40.154436 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:40 crc kubenswrapper[4820]: I0203 12:05:40.154468 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:40 crc kubenswrapper[4820]: I0203 12:05:40.154481 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:40 crc kubenswrapper[4820]: I0203 12:05:40.154489 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:40Z","lastTransitionTime":"2026-02-03T12:05:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:40 crc kubenswrapper[4820]: I0203 12:05:40.257203 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:40 crc kubenswrapper[4820]: I0203 12:05:40.257244 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:40 crc kubenswrapper[4820]: I0203 12:05:40.257255 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:40 crc kubenswrapper[4820]: I0203 12:05:40.257272 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:40 crc kubenswrapper[4820]: I0203 12:05:40.257284 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:40Z","lastTransitionTime":"2026-02-03T12:05:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:05:40 crc kubenswrapper[4820]: I0203 12:05:40.359434 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:40 crc kubenswrapper[4820]: I0203 12:05:40.359472 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:40 crc kubenswrapper[4820]: I0203 12:05:40.359483 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:40 crc kubenswrapper[4820]: I0203 12:05:40.359501 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:40 crc kubenswrapper[4820]: I0203 12:05:40.359514 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:40Z","lastTransitionTime":"2026-02-03T12:05:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:40 crc kubenswrapper[4820]: I0203 12:05:40.461653 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:40 crc kubenswrapper[4820]: I0203 12:05:40.461695 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:40 crc kubenswrapper[4820]: I0203 12:05:40.461706 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:40 crc kubenswrapper[4820]: I0203 12:05:40.461721 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:40 crc kubenswrapper[4820]: I0203 12:05:40.461732 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:40Z","lastTransitionTime":"2026-02-03T12:05:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:40 crc kubenswrapper[4820]: I0203 12:05:40.564607 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:40 crc kubenswrapper[4820]: I0203 12:05:40.564652 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:40 crc kubenswrapper[4820]: I0203 12:05:40.564663 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:40 crc kubenswrapper[4820]: I0203 12:05:40.564679 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:40 crc kubenswrapper[4820]: I0203 12:05:40.564688 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:40Z","lastTransitionTime":"2026-02-03T12:05:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:05:40 crc kubenswrapper[4820]: I0203 12:05:40.667671 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:40 crc kubenswrapper[4820]: I0203 12:05:40.668026 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:40 crc kubenswrapper[4820]: I0203 12:05:40.668040 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:40 crc kubenswrapper[4820]: I0203 12:05:40.668058 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:40 crc kubenswrapper[4820]: I0203 12:05:40.668069 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:40Z","lastTransitionTime":"2026-02-03T12:05:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:40 crc kubenswrapper[4820]: I0203 12:05:40.770297 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:40 crc kubenswrapper[4820]: I0203 12:05:40.770346 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:40 crc kubenswrapper[4820]: I0203 12:05:40.770361 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:40 crc kubenswrapper[4820]: I0203 12:05:40.770397 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:40 crc kubenswrapper[4820]: I0203 12:05:40.770411 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:40Z","lastTransitionTime":"2026-02-03T12:05:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:40 crc kubenswrapper[4820]: I0203 12:05:40.873239 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:40 crc kubenswrapper[4820]: I0203 12:05:40.873283 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:40 crc kubenswrapper[4820]: I0203 12:05:40.873295 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:40 crc kubenswrapper[4820]: I0203 12:05:40.873309 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:40 crc kubenswrapper[4820]: I0203 12:05:40.873319 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:40Z","lastTransitionTime":"2026-02-03T12:05:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:05:40 crc kubenswrapper[4820]: I0203 12:05:40.976156 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:40 crc kubenswrapper[4820]: I0203 12:05:40.976198 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:40 crc kubenswrapper[4820]: I0203 12:05:40.976209 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:40 crc kubenswrapper[4820]: I0203 12:05:40.976227 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:40 crc kubenswrapper[4820]: I0203 12:05:40.976238 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:40Z","lastTransitionTime":"2026-02-03T12:05:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:41 crc kubenswrapper[4820]: I0203 12:05:41.078940 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:41 crc kubenswrapper[4820]: I0203 12:05:41.079003 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:41 crc kubenswrapper[4820]: I0203 12:05:41.079016 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:41 crc kubenswrapper[4820]: I0203 12:05:41.079035 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:41 crc kubenswrapper[4820]: I0203 12:05:41.079048 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:41Z","lastTransitionTime":"2026-02-03T12:05:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:41 crc kubenswrapper[4820]: I0203 12:05:41.123927 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 17:28:28.350472047 +0000 UTC Feb 03 12:05:41 crc kubenswrapper[4820]: I0203 12:05:41.142616 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 12:05:41 crc kubenswrapper[4820]: I0203 12:05:41.142611 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 12:05:41 crc kubenswrapper[4820]: E0203 12:05:41.142764 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
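
The condition={...} objects logged by setters.go are NodeCondition values serialized to JSON. A simplified, self-contained sketch that reproduces the shape of the Ready=False condition above, using a local stand-in struct rather than the real k8s.io/api/core/v1 types:

package main

import (
	"encoding/json"
	"fmt"
	"time"
)

// nodeCondition is a stand-in for k8s.io/api/core/v1.NodeCondition,
// trimmed to the fields that appear in the log lines above.
type nodeCondition struct {
	Type               string `json:"type"`
	Status             string `json:"status"`
	LastHeartbeatTime  string `json:"lastHeartbeatTime"`
	LastTransitionTime string `json:"lastTransitionTime"`
	Reason             string `json:"reason"`
	Message            string `json:"message"`
}

func main() {
	now := time.Now().UTC().Format(time.RFC3339)
	cond := nodeCondition{
		Type:               "Ready",
		Status:             "False",
		LastHeartbeatTime:  now,
		LastTransitionTime: now,
		Reason:             "KubeletNotReady",
		Message: "container runtime network not ready: NetworkReady=false " +
			"reason:NetworkPluginNotReady message:Network plugin returns error: " +
			"no CNI configuration file in /etc/kubernetes/cni/net.d/. " +
			"Has your network provider started?",
	}
	b, _ := json.Marshal(cond)
	fmt.Println(string(b)) // same shape as the condition={...} entries above
}
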
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 12:05:41 crc kubenswrapper[4820]: E0203 12:05:41.142812 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 12:05:41 crc kubenswrapper[4820]: I0203 12:05:41.142654 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 12:05:41 crc kubenswrapper[4820]: E0203 12:05:41.142884 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 12:05:41 crc kubenswrapper[4820]: I0203 12:05:41.181142 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:41 crc kubenswrapper[4820]: I0203 12:05:41.181193 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:41 crc kubenswrapper[4820]: I0203 12:05:41.181204 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:41 crc kubenswrapper[4820]: I0203 12:05:41.181220 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:41 crc kubenswrapper[4820]: I0203 12:05:41.181234 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:41Z","lastTransitionTime":"2026-02-03T12:05:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:05:41 crc kubenswrapper[4820]: I0203 12:05:41.284165 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:41 crc kubenswrapper[4820]: I0203 12:05:41.284206 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:41 crc kubenswrapper[4820]: I0203 12:05:41.284219 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:41 crc kubenswrapper[4820]: I0203 12:05:41.284239 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:41 crc kubenswrapper[4820]: I0203 12:05:41.284254 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:41Z","lastTransitionTime":"2026-02-03T12:05:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:41 crc kubenswrapper[4820]: I0203 12:05:41.387951 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:41 crc kubenswrapper[4820]: I0203 12:05:41.388004 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:41 crc kubenswrapper[4820]: I0203 12:05:41.388018 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:41 crc kubenswrapper[4820]: I0203 12:05:41.388040 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:41 crc kubenswrapper[4820]: I0203 12:05:41.388058 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:41Z","lastTransitionTime":"2026-02-03T12:05:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:41 crc kubenswrapper[4820]: I0203 12:05:41.489873 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:41 crc kubenswrapper[4820]: I0203 12:05:41.489959 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:41 crc kubenswrapper[4820]: I0203 12:05:41.489977 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:41 crc kubenswrapper[4820]: I0203 12:05:41.489999 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:41 crc kubenswrapper[4820]: I0203 12:05:41.490016 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:41Z","lastTransitionTime":"2026-02-03T12:05:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:05:41 crc kubenswrapper[4820]: I0203 12:05:41.592195 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:41 crc kubenswrapper[4820]: I0203 12:05:41.592268 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:41 crc kubenswrapper[4820]: I0203 12:05:41.592292 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:41 crc kubenswrapper[4820]: I0203 12:05:41.592321 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:41 crc kubenswrapper[4820]: I0203 12:05:41.592343 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:41Z","lastTransitionTime":"2026-02-03T12:05:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:41 crc kubenswrapper[4820]: I0203 12:05:41.696201 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:41 crc kubenswrapper[4820]: I0203 12:05:41.696264 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:41 crc kubenswrapper[4820]: I0203 12:05:41.696283 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:41 crc kubenswrapper[4820]: I0203 12:05:41.696309 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:41 crc kubenswrapper[4820]: I0203 12:05:41.696333 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:41Z","lastTransitionTime":"2026-02-03T12:05:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:41 crc kubenswrapper[4820]: I0203 12:05:41.799360 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:41 crc kubenswrapper[4820]: I0203 12:05:41.799406 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:41 crc kubenswrapper[4820]: I0203 12:05:41.799417 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:41 crc kubenswrapper[4820]: I0203 12:05:41.799437 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:41 crc kubenswrapper[4820]: I0203 12:05:41.799453 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:41Z","lastTransitionTime":"2026-02-03T12:05:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:05:41 crc kubenswrapper[4820]: I0203 12:05:41.902429 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:41 crc kubenswrapper[4820]: I0203 12:05:41.902480 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:41 crc kubenswrapper[4820]: I0203 12:05:41.902497 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:41 crc kubenswrapper[4820]: I0203 12:05:41.902519 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:41 crc kubenswrapper[4820]: I0203 12:05:41.902536 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:41Z","lastTransitionTime":"2026-02-03T12:05:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:42 crc kubenswrapper[4820]: I0203 12:05:42.004775 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:42 crc kubenswrapper[4820]: I0203 12:05:42.004816 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:42 crc kubenswrapper[4820]: I0203 12:05:42.004827 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:42 crc kubenswrapper[4820]: I0203 12:05:42.004844 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:42 crc kubenswrapper[4820]: I0203 12:05:42.004857 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:42Z","lastTransitionTime":"2026-02-03T12:05:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:42 crc kubenswrapper[4820]: I0203 12:05:42.107919 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:42 crc kubenswrapper[4820]: I0203 12:05:42.108467 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:42 crc kubenswrapper[4820]: I0203 12:05:42.108578 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:42 crc kubenswrapper[4820]: I0203 12:05:42.108669 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:42 crc kubenswrapper[4820]: I0203 12:05:42.108765 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:42Z","lastTransitionTime":"2026-02-03T12:05:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:05:42 crc kubenswrapper[4820]: I0203 12:05:42.124351 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 22:54:25.863690953 +0000 UTC Feb 03 12:05:42 crc kubenswrapper[4820]: I0203 12:05:42.141524 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7vz6k" Feb 03 12:05:42 crc kubenswrapper[4820]: E0203 12:05:42.141848 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7vz6k" podUID="6351e457-e601-4889-853c-560646bc4b43" Feb 03 12:05:42 crc kubenswrapper[4820]: I0203 12:05:42.211704 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:42 crc kubenswrapper[4820]: I0203 12:05:42.211757 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:42 crc kubenswrapper[4820]: I0203 12:05:42.211773 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:42 crc kubenswrapper[4820]: I0203 12:05:42.211795 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:42 crc kubenswrapper[4820]: I0203 12:05:42.211811 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:42Z","lastTransitionTime":"2026-02-03T12:05:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:42 crc kubenswrapper[4820]: I0203 12:05:42.316282 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:42 crc kubenswrapper[4820]: I0203 12:05:42.316362 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:42 crc kubenswrapper[4820]: I0203 12:05:42.316381 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:42 crc kubenswrapper[4820]: I0203 12:05:42.316404 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:42 crc kubenswrapper[4820]: I0203 12:05:42.316424 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:42Z","lastTransitionTime":"2026-02-03T12:05:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:05:42 crc kubenswrapper[4820]: I0203 12:05:42.419720 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:42 crc kubenswrapper[4820]: I0203 12:05:42.419788 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:42 crc kubenswrapper[4820]: I0203 12:05:42.419805 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:42 crc kubenswrapper[4820]: I0203 12:05:42.419830 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:42 crc kubenswrapper[4820]: I0203 12:05:42.419847 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:42Z","lastTransitionTime":"2026-02-03T12:05:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:42 crc kubenswrapper[4820]: I0203 12:05:42.523095 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:42 crc kubenswrapper[4820]: I0203 12:05:42.523208 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:42 crc kubenswrapper[4820]: I0203 12:05:42.523270 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:42 crc kubenswrapper[4820]: I0203 12:05:42.523301 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:42 crc kubenswrapper[4820]: I0203 12:05:42.523319 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:42Z","lastTransitionTime":"2026-02-03T12:05:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:42 crc kubenswrapper[4820]: I0203 12:05:42.625841 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:42 crc kubenswrapper[4820]: I0203 12:05:42.625902 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:42 crc kubenswrapper[4820]: I0203 12:05:42.625912 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:42 crc kubenswrapper[4820]: I0203 12:05:42.625944 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:42 crc kubenswrapper[4820]: I0203 12:05:42.625954 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:42Z","lastTransitionTime":"2026-02-03T12:05:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:05:42 crc kubenswrapper[4820]: I0203 12:05:42.728148 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:42 crc kubenswrapper[4820]: I0203 12:05:42.728181 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:42 crc kubenswrapper[4820]: I0203 12:05:42.728189 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:42 crc kubenswrapper[4820]: I0203 12:05:42.728202 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:42 crc kubenswrapper[4820]: I0203 12:05:42.728211 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:42Z","lastTransitionTime":"2026-02-03T12:05:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:42 crc kubenswrapper[4820]: I0203 12:05:42.831958 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:42 crc kubenswrapper[4820]: I0203 12:05:42.832340 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:42 crc kubenswrapper[4820]: I0203 12:05:42.832489 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:42 crc kubenswrapper[4820]: I0203 12:05:42.832640 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:42 crc kubenswrapper[4820]: I0203 12:05:42.832784 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:42Z","lastTransitionTime":"2026-02-03T12:05:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:42 crc kubenswrapper[4820]: I0203 12:05:42.935945 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:42 crc kubenswrapper[4820]: I0203 12:05:42.936404 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:42 crc kubenswrapper[4820]: I0203 12:05:42.936622 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:42 crc kubenswrapper[4820]: I0203 12:05:42.936800 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:42 crc kubenswrapper[4820]: I0203 12:05:42.937003 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:42Z","lastTransitionTime":"2026-02-03T12:05:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:05:43 crc kubenswrapper[4820]: I0203 12:05:43.039823 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:43 crc kubenswrapper[4820]: I0203 12:05:43.040262 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:43 crc kubenswrapper[4820]: I0203 12:05:43.040488 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:43 crc kubenswrapper[4820]: I0203 12:05:43.040704 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:43 crc kubenswrapper[4820]: I0203 12:05:43.040854 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:43Z","lastTransitionTime":"2026-02-03T12:05:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:43 crc kubenswrapper[4820]: I0203 12:05:43.124614 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 23:54:21.236685728 +0000 UTC Feb 03 12:05:43 crc kubenswrapper[4820]: I0203 12:05:43.141568 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 12:05:43 crc kubenswrapper[4820]: I0203 12:05:43.141605 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 12:05:43 crc kubenswrapper[4820]: I0203 12:05:43.141605 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 12:05:43 crc kubenswrapper[4820]: E0203 12:05:43.141728 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 12:05:43 crc kubenswrapper[4820]: E0203 12:05:43.141915 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 12:05:43 crc kubenswrapper[4820]: E0203 12:05:43.141983 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
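
Note that the certificate_manager.go lines at 12:05:40, 12:05:41, and 12:05:43 report the same expiration (2026-02-24 05:53:03 UTC) but a different rotation deadline each time: the kubelet's certificate manager re-jitters the deadline on every computation, picking a point roughly 70 to 90 percent of the way through the certificate's validity window. A stdlib-only approximation of that behavior (the exact fractions and the assumed NotBefore are illustrative, not copied from client-go):

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// nextRotationDeadline approximates the kubelet certificate manager:
// pick a deadline uniformly in roughly [70%, 90%] of the certificate's
// validity window. Because it is re-randomized on each call, repeated
// log lines show the same expiration but different deadlines.
func nextRotationDeadline(notBefore, notAfter time.Time) time.Time {
	total := notAfter.Sub(notBefore)
	frac := 0.7 + 0.2*rand.Float64() // uniform in [0.7, 0.9)
	return notBefore.Add(time.Duration(float64(total) * frac))
}

func main() {
	// NotAfter is taken from the log; NotBefore is assumed for illustration.
	notAfter := time.Date(2026, 2, 24, 5, 53, 3, 0, time.UTC)
	notBefore := notAfter.Add(-365 * 24 * time.Hour)
	for i := 0; i < 3; i++ {
		fmt.Println("rotation deadline:", nextRotationDeadline(notBefore, notAfter))
	}
}

The jitter exists so that a fleet of kubelets does not request renewed certificates at the same instant; here the deadlines already lie in the past, which is why a new one is computed every second.
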
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 12:05:43 crc kubenswrapper[4820]: I0203 12:05:43.143222 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:43 crc kubenswrapper[4820]: I0203 12:05:43.143246 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:43 crc kubenswrapper[4820]: I0203 12:05:43.143286 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:43 crc kubenswrapper[4820]: I0203 12:05:43.143305 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:43 crc kubenswrapper[4820]: I0203 12:05:43.143316 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:43Z","lastTransitionTime":"2026-02-03T12:05:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:43 crc kubenswrapper[4820]: I0203 12:05:43.156544 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:43Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:43 crc kubenswrapper[4820]: I0203 12:05:43.168108 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dac06e9d9db1822ef050113f787ff46db678f4916bc9817ac03f61a509a61b6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:43Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:43 crc kubenswrapper[4820]: I0203 12:05:43.180107 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d140e30-6304-49be-a1a3-2d6b23f9aef3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48852428653b274f524077be71e20c4271b365d821b8f3235df2d2b41a9e8af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09641bd2ac341aeecdba4211412070d2ee33f42eb670fe75ab88b9a58931c289\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74f9a4c19a0ffe2d3fc0a77b57873721a29f0a1f70d578ba2f68bcceb68ccce2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://672f46bf03b8e74586edeb641cb45d9d7d8d4389c0b2cbb7a8eec43d6b4d801b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38554e01896341101d88815c0d8cd45a0114da19790e02a53b5ba3a91ee70e37\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"message\\\":\\\"le observer\\\\nW0203 12:05:02.811560 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0203 12:05:02.811703 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 12:05:02.814694 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3622958325/tls.crt::/tmp/serving-cert-3622958325/tls.key\\\\\\\"\\\\nI0203 12:05:03.097815 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0203 12:05:03.106823 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0203 12:05:03.106852 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0203 12:05:03.106874 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0203 12:05:03.106880 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0203 12:05:03.113994 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0203 12:05:03.114027 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0203 12:05:03.114019 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0203 12:05:03.114034 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 12:05:03.114043 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0203 12:05:03.114053 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0203 12:05:03.114057 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0203 12:05:03.114061 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0203 12:05:03.116858 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60992833ab89aebd2446f960ab6ae3565f9a17f4e86921d3b2bc68adc9704640\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b34fc810cf94c235cea364cd2441c73f56597b91b0a19b8e02498db5895a3f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b34fc810cf94c235cea364cd2441c73f56597b91b0a19b8e02498db5895a3f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:04:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:04:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:43Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:43 crc kubenswrapper[4820]: I0203 12:05:43.192028 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"76aa9d47-b80a-4058-8f92-4cdf0c41df48\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47438e7dc8d1eec9a8a061c1b6141e39f4c805dfafd1175c07235f7b9425719b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ade7b4944a0da0f0d18c5dd5a92ef407d6beac17985f1429d57672ff7e9beade\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://555f489476174f52e5f5d252e3f3f86f65e1593e641204d863bebdaa80b49bbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f84ddcad54789a48e27d495da8bf422e2349c09b477421d0323aae520546826\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f84ddcad54789a48e27d495da8bf422e2349c09b477421d0323aae520546826\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:04:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:04:43Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:43Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:43 crc kubenswrapper[4820]: I0203 12:05:43.204876 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0a8e1f86d24e8f09d6250cc9003db189b705c0285fbaab40a8f1a4ac4793137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:43Z is after 
2025-08-24T17:21:41Z" Feb 03 12:05:43 crc kubenswrapper[4820]: I0203 12:05:43.218576 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:43Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:43 crc kubenswrapper[4820]: I0203 12:05:43.230270 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:43Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:43 crc kubenswrapper[4820]: I0203 12:05:43.240805 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p5mx8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe0bc53e-6abb-4194-ae3d-109a4fd80372\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9eca081cbf8145867bf5df6d6bacdb9746e546e54aa81874fbfc2927958bc1b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrchn\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p5mx8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:43Z is after 2025-08-24T17:21:41Z"
Feb 03 12:05:43 crc kubenswrapper[4820]: I0203 12:05:43.245208 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 12:05:43 crc kubenswrapper[4820]: I0203 12:05:43.245240 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 12:05:43 crc kubenswrapper[4820]: I0203 12:05:43.245253 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 12:05:43 crc kubenswrapper[4820]: I0203 12:05:43.245269 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 12:05:43 crc kubenswrapper[4820]: I0203 12:05:43.245280 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:43Z","lastTransitionTime":"2026-02-03T12:05:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:05:43 crc kubenswrapper[4820]: I0203 12:05:43.251150 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c02def6-29f2-448e-80ec-0c8ee290f053\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1643e498a576d565172492302a641bc81eae658b1611df707da1d833ea2c84f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xjx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e3612cc50c75efc4721ade62e3caf8a6cbcb41c49183b50da3639d7fce97b9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xjx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qj7xr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:43Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:43 crc kubenswrapper[4820]: I0203 12:05:43.262671 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dkfwm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6da6dd5-2847-482b-adc1-d82ead0af3e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b50e93f1825fc82dc2e0fc70a417d2e4412db367319656bfcea8fa9daf9fe152\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpcc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\
\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dkfwm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:43Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:43 crc kubenswrapper[4820]: I0203 12:05:43.279297 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-75mwm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf99e305-aa5b-4171-94f6-1e64f20414dd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c998566abb62f49bb7527ea2c4b023c87a9e3db89ff5fa25b68896424924ea76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5b0072b2eccb422316702a70fb54f21fcd020e1627a625d1a7c63d46b71e48c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e8e47b3a2a50b4eccff391ccb45bf979a5fe46983b014cb03f465db928172ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0defe98bc51b1f567900f2d90138ceec07b4cae414fd3b448df503dba9a9460f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a67f31bdc8d6710cfbca063e2a811d4fbace42a9d3c3a53fba0f47c30ee00e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8b54903e9478f26cfa1770a0e06298325e9b9f198a3aeaa8ed6dcc192c0cdb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebf188d2f9bb71f6217f5c2fbd0199f0d9539bbc
773152b66fb63c41cc52c2bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ebf188d2f9bb71f6217f5c2fbd0199f0d9539bbc773152b66fb63c41cc52c2bb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T12:05:32Z\\\",\\\"message\\\":\\\"ig-daemon-qj7xr after 0 failed attempt(s)\\\\nI0203 12:05:32.735113 6493 default_network_controller.go:776] Recording success event on pod openshift-machine-config-operator/machine-config-daemon-qj7xr\\\\nI0203 12:05:32.734944 6493 ovn.go:134] Ensuring zone local for Pod openshift-dns/node-resolver-p5mx8 in node crc\\\\nI0203 12:05:32.735121 6493 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0203 12:05:32.735128 6493 obj_retry.go:386] Retry successful for *v1.Pod openshift-dns/node-resolver-p5mx8 after 0 failed attempt(s)\\\\nI0203 12:05:32.735133 6493 base_network_controller_pods.go:477] [default/openshift-network-diagnostics/network-check-source-55646444c4-trplf] creating logical port openshift-network-diagnostics_network-check-source-55646444c4-trplf for pod on switch crc\\\\nI0203 12:05:32.735160 6493 obj_retry.go:303] Retry object setup: *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8bbpp\\\\nF0203 12:05:32.735195 6493 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:31Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-75mwm_openshift-ovn-kubernetes(cf99e305-aa5b-4171-94f6-1e64f20414dd)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c9f050db727793c24f2f8bf3b387813e313f3b764be003c6849947f98a61d4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://707b3fd62e0f11f4f22464594158f7f1bbcbb9b37c1e49c70e2a0d1a6b5b07c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://707b3fd62e0f11f4f22464594158f7f1bbcbb9b37c1e49c70e2a0d1a6b5b07c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-75mwm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:43Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:43 crc kubenswrapper[4820]: I0203 12:05:43.288168 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-z8xrk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1cca669-281b-4756-8da8-3860684d3410\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09a38521ccb70b0b152fc51267f6d38cca610e2e8c9d12915482118f3a431b22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dxwqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:09Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-z8xrk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:43Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:43 crc kubenswrapper[4820]: I0203 12:05:43.304422 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a4e0669-c148-4af8-99eb-1de50da6d574\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d6f3a37b440a1db10c04386289033a0deab673f71efbdfba81597baf6ce5e86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fec48b5ddf675fb07150ebb88962534caead99f26403cf8ab51c17e2c6c09bf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"co
ntainerID\\\":\\\"cri-o://3a4c1209e1301b6e55cd4ea8bb06678ee4c7657c163a610f0cc2dd25f2cb3e61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75e1f7cf1c2120c222793819705a3b465a387c2aaf33fe146fd46d9c76520aea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12446f64d4189185cf6fe19d38637e090482ef8da5fb5e68600cc00ba37bc2a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bb0a6b382c36acc46be9258e93be71317c26777d770cde9098402054303947c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bb0a6b382c36acc46be9258e93be71317c26777d770cde9098402054303947c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:04:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19153b79978b8e8cc53a7ab9cb95a70
c9870c6b0babac711bce13809609f6ac5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19153b79978b8e8cc53a7ab9cb95a70c9870c6b0babac711bce13809609f6ac5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e75ec4662cf007f64da3ecaaff0cb4c45aa88746ff717403092583356953526\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e75ec4662cf007f64da3ecaaff0cb4c45aa88746ff717403092583356953526\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:04:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:04:43Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:43Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:43 crc kubenswrapper[4820]: I0203 12:05:43.318193 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cabb5864-1cc1-4b08-aed3-4eeee9e85bf9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d070e438f7f207c929ca6d110046fa8d320d1a7d97ef22a0e5e1372ff626729d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06e5365791987f1586b93dfe4db8bfe19415d8c400d6c0d75f312cc0f62f7661\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb5d088ef2ec1b90ef2d7af53d61c2789516bf4ef02e9ef06e1dc0c93326dcc9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2783e3045fb1f1e8da75e1a83f88bc0f24ce47eab79fb2b31b794c10f201588\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:04:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:43Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:43 crc kubenswrapper[4820]: I0203 12:05:43.329636 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bbd1daf12ccc77f2658cac173f36c9af6a1c64072fc6bf13302a460f71f05e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5236ae101bb18d9bd1b8d8ac09705f96e5fd78d742e5174e88072570a7448ee8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:43Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:43 crc kubenswrapper[4820]: I0203 12:05:43.342942 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-b5qz9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d93ec7bc-4029-44a4-894d-03eff1388683\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8c2ef0f942d970aadc1837e5ab6eac2bdee9ab81c7b852514fc142796328a62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b171b1b37fa4899684005dd60f249e4cdc2ac58cb12658abe09d6d17007f3002\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5
f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b171b1b37fa4899684005dd60f249e4cdc2ac58cb12658abe09d6d17007f3002\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0a45a325e96479b1bb5913e1a78221c3b88c62fc434441e03d6b2e9ed476700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0a45a325e96479b1bb5913e1a78221c3b88c62fc434441e03d6b2e9ed476700\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a25830d512e5d0dbbfa4775f85af51e80915adca718997b34e926220ff9377d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a25830d512e5d0dbbfa4775f85af51e80915adca718997b34e926220ff9377d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/va
r/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff3da1549bd61df7cc473a1a0a17f147b940c3f88f0b72828357ea7982406088\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff3da1549bd61df7cc473a1a0a17f147b940c3f88f0b72828357ea7982406088\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a1698d43c6b74e8ebb9ff13da372c4025ac2637c1f5a0376e4a998e7bf20359\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a1698d43c6b74e8ebb9ff13da372c4025ac2637c1f5a0376e4a998e7bf20359\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8af3494a09f6242fa8fab8cba8b3db5476625376c0f8f382e2140ca27ef329f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8af3494a09f6242fa8fab8cba8b3db5476625376c0f8f382e2140ca27ef329f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:11Z\\\"}},\\
\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-b5qz9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:43Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:43 crc kubenswrapper[4820]: I0203 12:05:43.350727 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:43 crc kubenswrapper[4820]: I0203 12:05:43.350771 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:43 crc kubenswrapper[4820]: I0203 12:05:43.350803 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:43 crc kubenswrapper[4820]: I0203 12:05:43.350818 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:43 crc kubenswrapper[4820]: I0203 12:05:43.350832 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:43Z","lastTransitionTime":"2026-02-03T12:05:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:05:43 crc kubenswrapper[4820]: I0203 12:05:43.352972 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8bbpp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d8005fd9-8efc-4707-a3dd-60cd20607d42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9020e42fc6a5c6c681b1854e93cc1e88f44b7582e28fe1ee40900bb2d39b4008\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xg9sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0333755fe31b49cea4cbfef987450b5800b3330c65cf84070036eea4194c13a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xg9sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8bbpp\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:43Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:43 crc kubenswrapper[4820]: I0203 12:05:43.362497 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7vz6k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6351e457-e601-4889-853c-560646bc4b43\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbjk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbjk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:18Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7vz6k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:43Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:43 crc 
kubenswrapper[4820]: I0203 12:05:43.453731 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:43 crc kubenswrapper[4820]: I0203 12:05:43.453764 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:43 crc kubenswrapper[4820]: I0203 12:05:43.453773 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:43 crc kubenswrapper[4820]: I0203 12:05:43.453784 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:43 crc kubenswrapper[4820]: I0203 12:05:43.453793 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:43Z","lastTransitionTime":"2026-02-03T12:05:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:43 crc kubenswrapper[4820]: I0203 12:05:43.556961 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:43 crc kubenswrapper[4820]: I0203 12:05:43.557008 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:43 crc kubenswrapper[4820]: I0203 12:05:43.557023 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:43 crc kubenswrapper[4820]: I0203 12:05:43.557046 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:43 crc kubenswrapper[4820]: I0203 12:05:43.557064 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:43Z","lastTransitionTime":"2026-02-03T12:05:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:43 crc kubenswrapper[4820]: I0203 12:05:43.659496 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:43 crc kubenswrapper[4820]: I0203 12:05:43.659583 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:43 crc kubenswrapper[4820]: I0203 12:05:43.659603 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:43 crc kubenswrapper[4820]: I0203 12:05:43.659631 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:43 crc kubenswrapper[4820]: I0203 12:05:43.659652 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:43Z","lastTransitionTime":"2026-02-03T12:05:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:05:43 crc kubenswrapper[4820]: I0203 12:05:43.762083 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:43 crc kubenswrapper[4820]: I0203 12:05:43.762145 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:43 crc kubenswrapper[4820]: I0203 12:05:43.762164 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:43 crc kubenswrapper[4820]: I0203 12:05:43.762185 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:43 crc kubenswrapper[4820]: I0203 12:05:43.762201 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:43Z","lastTransitionTime":"2026-02-03T12:05:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:43 crc kubenswrapper[4820]: I0203 12:05:43.864685 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:43 crc kubenswrapper[4820]: I0203 12:05:43.864738 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:43 crc kubenswrapper[4820]: I0203 12:05:43.864749 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:43 crc kubenswrapper[4820]: I0203 12:05:43.864765 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:43 crc kubenswrapper[4820]: I0203 12:05:43.864777 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:43Z","lastTransitionTime":"2026-02-03T12:05:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:43 crc kubenswrapper[4820]: I0203 12:05:43.971079 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:43 crc kubenswrapper[4820]: I0203 12:05:43.971121 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:43 crc kubenswrapper[4820]: I0203 12:05:43.971130 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:43 crc kubenswrapper[4820]: I0203 12:05:43.971146 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:43 crc kubenswrapper[4820]: I0203 12:05:43.971157 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:43Z","lastTransitionTime":"2026-02-03T12:05:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:05:44 crc kubenswrapper[4820]: I0203 12:05:44.073785 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:44 crc kubenswrapper[4820]: I0203 12:05:44.073838 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:44 crc kubenswrapper[4820]: I0203 12:05:44.073854 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:44 crc kubenswrapper[4820]: I0203 12:05:44.073876 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:44 crc kubenswrapper[4820]: I0203 12:05:44.073915 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:44Z","lastTransitionTime":"2026-02-03T12:05:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:44 crc kubenswrapper[4820]: I0203 12:05:44.126251 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 04:30:21.054678656 +0000 UTC Feb 03 12:05:44 crc kubenswrapper[4820]: I0203 12:05:44.141847 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7vz6k" Feb 03 12:05:44 crc kubenswrapper[4820]: E0203 12:05:44.142045 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7vz6k" podUID="6351e457-e601-4889-853c-560646bc4b43" Feb 03 12:05:44 crc kubenswrapper[4820]: I0203 12:05:44.176385 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:44 crc kubenswrapper[4820]: I0203 12:05:44.176432 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:44 crc kubenswrapper[4820]: I0203 12:05:44.176443 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:44 crc kubenswrapper[4820]: I0203 12:05:44.176459 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:44 crc kubenswrapper[4820]: I0203 12:05:44.176471 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:44Z","lastTransitionTime":"2026-02-03T12:05:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:05:44 crc kubenswrapper[4820]: I0203 12:05:44.279212 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:44 crc kubenswrapper[4820]: I0203 12:05:44.279261 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:44 crc kubenswrapper[4820]: I0203 12:05:44.279273 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:44 crc kubenswrapper[4820]: I0203 12:05:44.279292 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:44 crc kubenswrapper[4820]: I0203 12:05:44.279305 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:44Z","lastTransitionTime":"2026-02-03T12:05:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:44 crc kubenswrapper[4820]: I0203 12:05:44.382060 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:44 crc kubenswrapper[4820]: I0203 12:05:44.382145 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:44 crc kubenswrapper[4820]: I0203 12:05:44.382162 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:44 crc kubenswrapper[4820]: I0203 12:05:44.382186 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:44 crc kubenswrapper[4820]: I0203 12:05:44.382201 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:44Z","lastTransitionTime":"2026-02-03T12:05:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:44 crc kubenswrapper[4820]: I0203 12:05:44.484872 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:44 crc kubenswrapper[4820]: I0203 12:05:44.484983 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:44 crc kubenswrapper[4820]: I0203 12:05:44.485010 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:44 crc kubenswrapper[4820]: I0203 12:05:44.485037 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:44 crc kubenswrapper[4820]: I0203 12:05:44.485056 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:44Z","lastTransitionTime":"2026-02-03T12:05:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:05:44 crc kubenswrapper[4820]: I0203 12:05:44.587702 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:44 crc kubenswrapper[4820]: I0203 12:05:44.587753 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:44 crc kubenswrapper[4820]: I0203 12:05:44.587769 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:44 crc kubenswrapper[4820]: I0203 12:05:44.587788 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:44 crc kubenswrapper[4820]: I0203 12:05:44.587803 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:44Z","lastTransitionTime":"2026-02-03T12:05:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:44 crc kubenswrapper[4820]: I0203 12:05:44.690527 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:44 crc kubenswrapper[4820]: I0203 12:05:44.690590 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:44 crc kubenswrapper[4820]: I0203 12:05:44.690601 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:44 crc kubenswrapper[4820]: I0203 12:05:44.690615 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:44 crc kubenswrapper[4820]: I0203 12:05:44.690625 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:44Z","lastTransitionTime":"2026-02-03T12:05:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:44 crc kubenswrapper[4820]: I0203 12:05:44.795588 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:44 crc kubenswrapper[4820]: I0203 12:05:44.795659 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:44 crc kubenswrapper[4820]: I0203 12:05:44.795679 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:44 crc kubenswrapper[4820]: I0203 12:05:44.795704 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:44 crc kubenswrapper[4820]: I0203 12:05:44.795723 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:44Z","lastTransitionTime":"2026-02-03T12:05:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:05:44 crc kubenswrapper[4820]: I0203 12:05:44.899266 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:44 crc kubenswrapper[4820]: I0203 12:05:44.899341 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:44 crc kubenswrapper[4820]: I0203 12:05:44.899360 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:44 crc kubenswrapper[4820]: I0203 12:05:44.899386 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:44 crc kubenswrapper[4820]: I0203 12:05:44.899410 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:44Z","lastTransitionTime":"2026-02-03T12:05:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:45 crc kubenswrapper[4820]: I0203 12:05:45.001747 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:45 crc kubenswrapper[4820]: I0203 12:05:45.001792 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:45 crc kubenswrapper[4820]: I0203 12:05:45.001803 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:45 crc kubenswrapper[4820]: I0203 12:05:45.001818 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:45 crc kubenswrapper[4820]: I0203 12:05:45.001830 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:45Z","lastTransitionTime":"2026-02-03T12:05:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:45 crc kubenswrapper[4820]: I0203 12:05:45.104567 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:45 crc kubenswrapper[4820]: I0203 12:05:45.104648 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:45 crc kubenswrapper[4820]: I0203 12:05:45.104664 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:45 crc kubenswrapper[4820]: I0203 12:05:45.104684 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:45 crc kubenswrapper[4820]: I0203 12:05:45.104698 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:45Z","lastTransitionTime":"2026-02-03T12:05:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:05:45 crc kubenswrapper[4820]: I0203 12:05:45.126939 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 13:25:42.131519141 +0000 UTC Feb 03 12:05:45 crc kubenswrapper[4820]: I0203 12:05:45.142444 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 12:05:45 crc kubenswrapper[4820]: I0203 12:05:45.142554 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 12:05:45 crc kubenswrapper[4820]: I0203 12:05:45.142580 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 12:05:45 crc kubenswrapper[4820]: E0203 12:05:45.142654 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 12:05:45 crc kubenswrapper[4820]: E0203 12:05:45.142769 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 12:05:45 crc kubenswrapper[4820]: E0203 12:05:45.143042 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 12:05:45 crc kubenswrapper[4820]: I0203 12:05:45.208033 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:45 crc kubenswrapper[4820]: I0203 12:05:45.208095 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:45 crc kubenswrapper[4820]: I0203 12:05:45.208117 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:45 crc kubenswrapper[4820]: I0203 12:05:45.208142 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:45 crc kubenswrapper[4820]: I0203 12:05:45.208162 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:45Z","lastTransitionTime":"2026-02-03T12:05:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Feb 03 12:05:46 crc kubenswrapper[4820]: I0203 12:05:46.127233 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 01:03:26.85719514 +0000 UTC
Feb 03 12:05:46 crc kubenswrapper[4820]: I0203 12:05:46.143364 4820 scope.go:117] "RemoveContainer" containerID="ebf188d2f9bb71f6217f5c2fbd0199f0d9539bbc773152b66fb63c41cc52c2bb"
Feb 03 12:05:46 crc kubenswrapper[4820]: E0203 12:05:46.143596 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-75mwm_openshift-ovn-kubernetes(cf99e305-aa5b-4171-94f6-1e64f20414dd)\"" pod="openshift-ovn-kubernetes/ovnkube-node-75mwm" podUID="cf99e305-aa5b-4171-94f6-1e64f20414dd"
Feb 03 12:05:46 crc kubenswrapper[4820]: I0203 12:05:46.143838 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7vz6k"
Feb 03 12:05:46 crc kubenswrapper[4820]: E0203 12:05:46.143984 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7vz6k" podUID="6351e457-e601-4889-853c-560646bc4b43"
Feb 03 12:05:47 crc kubenswrapper[4820]: I0203 12:05:47.127766 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 10:23:03.993266828 +0000 UTC
Feb 03 12:05:47 crc kubenswrapper[4820]: I0203 12:05:47.142260 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 03 12:05:47 crc kubenswrapper[4820]: E0203 12:05:47.142430 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 03 12:05:47 crc kubenswrapper[4820]: I0203 12:05:47.142739 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 03 12:05:47 crc kubenswrapper[4820]: E0203 12:05:47.142847 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 03 12:05:47 crc kubenswrapper[4820]: I0203 12:05:47.142921 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 03 12:05:47 crc kubenswrapper[4820]: E0203 12:05:47.143063 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 03 12:05:48 crc kubenswrapper[4820]: I0203 12:05:48.128548 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 10:19:02.455956608 +0000 UTC
Feb 03 12:05:48 crc kubenswrapper[4820]: I0203 12:05:48.141965 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7vz6k"
Feb 03 12:05:48 crc kubenswrapper[4820]: E0203 12:05:48.142223 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7vz6k" podUID="6351e457-e601-4889-853c-560646bc4b43"
Feb 03 12:05:49 crc kubenswrapper[4820]: I0203 12:05:49.129048 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 07:15:19.64530259 +0000 UTC
Feb 03 12:05:49 crc kubenswrapper[4820]: I0203 12:05:49.141918 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 03 12:05:49 crc kubenswrapper[4820]: I0203 12:05:49.141948 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 03 12:05:49 crc kubenswrapper[4820]: I0203 12:05:49.142089 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 03 12:05:49 crc kubenswrapper[4820]: E0203 12:05:49.142214 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 03 12:05:49 crc kubenswrapper[4820]: E0203 12:05:49.142270 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 03 12:05:49 crc kubenswrapper[4820]: E0203 12:05:49.142332 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 03 12:05:49 crc kubenswrapper[4820]: I0203 12:05:49.810084 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 12:05:49 crc kubenswrapper[4820]: I0203 12:05:49.810129 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 12:05:49 crc kubenswrapper[4820]: I0203 12:05:49.810139 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 12:05:49 crc kubenswrapper[4820]: I0203 12:05:49.810154 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 12:05:49 crc kubenswrapper[4820]: I0203 12:05:49.810164 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:49Z","lastTransitionTime":"2026-02-03T12:05:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:05:49 crc kubenswrapper[4820]: E0203 12:05:49.823469 4820 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"83c4bcff-fd36-4e8a-96f0-3320ea01106a\\\",\\\"systemUUID\\\":\\\"a4221fcb-5776-4539-8cb5-9da3bff4d7a8\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:49Z is after 
2025-08-24T17:21:41Z" Feb 03 12:05:49 crc kubenswrapper[4820]: I0203 12:05:49.826721 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:49 crc kubenswrapper[4820]: I0203 12:05:49.826753 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:49 crc kubenswrapper[4820]: I0203 12:05:49.826761 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:49 crc kubenswrapper[4820]: I0203 12:05:49.826792 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:49 crc kubenswrapper[4820]: I0203 12:05:49.826803 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:49Z","lastTransitionTime":"2026-02-03T12:05:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:49 crc kubenswrapper[4820]: E0203 12:05:49.837528 4820 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"83c4bcff-fd36-4e8a-96f0-3320ea01106a\\\",\\\"systemUUID\\\":\\\"a4221fcb-5776-4539-8cb5-9da3bff4d7a8\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:49Z is after 
2025-08-24T17:21:41Z" Feb 03 12:05:49 crc kubenswrapper[4820]: I0203 12:05:49.840366 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:49 crc kubenswrapper[4820]: I0203 12:05:49.840406 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:49 crc kubenswrapper[4820]: I0203 12:05:49.840418 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:49 crc kubenswrapper[4820]: I0203 12:05:49.840453 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:49 crc kubenswrapper[4820]: I0203 12:05:49.840495 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:49Z","lastTransitionTime":"2026-02-03T12:05:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:49 crc kubenswrapper[4820]: E0203 12:05:49.852334 4820 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"83c4bcff-fd36-4e8a-96f0-3320ea01106a\\\",\\\"systemUUID\\\":\\\"a4221fcb-5776-4539-8cb5-9da3bff4d7a8\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:49Z is after 
2025-08-24T17:21:41Z" Feb 03 12:05:49 crc kubenswrapper[4820]: I0203 12:05:49.856275 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:49 crc kubenswrapper[4820]: I0203 12:05:49.856310 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:49 crc kubenswrapper[4820]: I0203 12:05:49.856320 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:49 crc kubenswrapper[4820]: I0203 12:05:49.856335 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:49 crc kubenswrapper[4820]: I0203 12:05:49.856346 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:49Z","lastTransitionTime":"2026-02-03T12:05:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:49 crc kubenswrapper[4820]: E0203 12:05:49.868555 4820 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"83c4bcff-fd36-4e8a-96f0-3320ea01106a\\\",\\\"systemUUID\\\":\\\"a4221fcb-5776-4539-8cb5-9da3bff4d7a8\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:49Z is after 
2025-08-24T17:21:41Z" Feb 03 12:05:49 crc kubenswrapper[4820]: I0203 12:05:49.871990 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:49 crc kubenswrapper[4820]: I0203 12:05:49.872035 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:49 crc kubenswrapper[4820]: I0203 12:05:49.872046 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:49 crc kubenswrapper[4820]: I0203 12:05:49.872062 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:49 crc kubenswrapper[4820]: I0203 12:05:49.872073 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:49Z","lastTransitionTime":"2026-02-03T12:05:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:49 crc kubenswrapper[4820]: E0203 12:05:49.884291 4820 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"83c4bcff-fd36-4e8a-96f0-3320ea01106a\\\",\\\"systemUUID\\\":\\\"a4221fcb-5776-4539-8cb5-9da3bff4d7a8\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:49Z is after 
2025-08-24T17:21:41Z" Feb 03 12:05:49 crc kubenswrapper[4820]: E0203 12:05:49.884452 4820 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 03 12:05:49 crc kubenswrapper[4820]: I0203 12:05:49.888235 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:49 crc kubenswrapper[4820]: I0203 12:05:49.888314 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:49 crc kubenswrapper[4820]: I0203 12:05:49.888331 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:49 crc kubenswrapper[4820]: I0203 12:05:49.888351 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:49 crc kubenswrapper[4820]: I0203 12:05:49.888369 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:49Z","lastTransitionTime":"2026-02-03T12:05:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:49 crc kubenswrapper[4820]: I0203 12:05:49.931742 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6351e457-e601-4889-853c-560646bc4b43-metrics-certs\") pod \"network-metrics-daemon-7vz6k\" (UID: \"6351e457-e601-4889-853c-560646bc4b43\") " pod="openshift-multus/network-metrics-daemon-7vz6k" Feb 03 12:05:49 crc kubenswrapper[4820]: E0203 12:05:49.932040 4820 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 03 12:05:49 crc kubenswrapper[4820]: E0203 12:05:49.932150 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6351e457-e601-4889-853c-560646bc4b43-metrics-certs podName:6351e457-e601-4889-853c-560646bc4b43 nodeName:}" failed. No retries permitted until 2026-02-03 12:06:21.932122066 +0000 UTC m=+99.455198010 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6351e457-e601-4889-853c-560646bc4b43-metrics-certs") pod "network-metrics-daemon-7vz6k" (UID: "6351e457-e601-4889-853c-560646bc4b43") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 03 12:05:49 crc kubenswrapper[4820]: I0203 12:05:49.990738 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:49 crc kubenswrapper[4820]: I0203 12:05:49.990802 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:49 crc kubenswrapper[4820]: I0203 12:05:49.990814 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:49 crc kubenswrapper[4820]: I0203 12:05:49.990836 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:49 crc kubenswrapper[4820]: I0203 12:05:49.990847 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:49Z","lastTransitionTime":"2026-02-03T12:05:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:50 crc kubenswrapper[4820]: I0203 12:05:50.092446 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:50 crc kubenswrapper[4820]: I0203 12:05:50.092477 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:50 crc kubenswrapper[4820]: I0203 12:05:50.092485 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:50 crc kubenswrapper[4820]: I0203 12:05:50.092497 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:50 crc kubenswrapper[4820]: I0203 12:05:50.092505 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:50Z","lastTransitionTime":"2026-02-03T12:05:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:50 crc kubenswrapper[4820]: I0203 12:05:50.129665 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 02:39:37.690241 +0000 UTC Feb 03 12:05:50 crc kubenswrapper[4820]: I0203 12:05:50.142177 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7vz6k" Feb 03 12:05:50 crc kubenswrapper[4820]: E0203 12:05:50.142325 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-7vz6k" podUID="6351e457-e601-4889-853c-560646bc4b43" Feb 03 12:05:50 crc kubenswrapper[4820]: I0203 12:05:50.194971 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:50 crc kubenswrapper[4820]: I0203 12:05:50.195000 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:50 crc kubenswrapper[4820]: I0203 12:05:50.195008 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:50 crc kubenswrapper[4820]: I0203 12:05:50.195022 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:50 crc kubenswrapper[4820]: I0203 12:05:50.195031 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:50Z","lastTransitionTime":"2026-02-03T12:05:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:50 crc kubenswrapper[4820]: I0203 12:05:50.297580 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:50 crc kubenswrapper[4820]: I0203 12:05:50.297618 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:50 crc kubenswrapper[4820]: I0203 12:05:50.297630 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:50 crc kubenswrapper[4820]: I0203 12:05:50.297644 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:50 crc kubenswrapper[4820]: I0203 12:05:50.297658 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:50Z","lastTransitionTime":"2026-02-03T12:05:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:05:50 crc kubenswrapper[4820]: I0203 12:05:50.401015 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:50 crc kubenswrapper[4820]: I0203 12:05:50.401077 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:50 crc kubenswrapper[4820]: I0203 12:05:50.401089 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:50 crc kubenswrapper[4820]: I0203 12:05:50.401109 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:50 crc kubenswrapper[4820]: I0203 12:05:50.401122 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:50Z","lastTransitionTime":"2026-02-03T12:05:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:50 crc kubenswrapper[4820]: I0203 12:05:50.503369 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:50 crc kubenswrapper[4820]: I0203 12:05:50.503408 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:50 crc kubenswrapper[4820]: I0203 12:05:50.503449 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:50 crc kubenswrapper[4820]: I0203 12:05:50.503468 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:50 crc kubenswrapper[4820]: I0203 12:05:50.503480 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:50Z","lastTransitionTime":"2026-02-03T12:05:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:50 crc kubenswrapper[4820]: I0203 12:05:50.605513 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:50 crc kubenswrapper[4820]: I0203 12:05:50.605562 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:50 crc kubenswrapper[4820]: I0203 12:05:50.605578 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:50 crc kubenswrapper[4820]: I0203 12:05:50.605600 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:50 crc kubenswrapper[4820]: I0203 12:05:50.605616 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:50Z","lastTransitionTime":"2026-02-03T12:05:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:05:50 crc kubenswrapper[4820]: I0203 12:05:50.708224 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:50 crc kubenswrapper[4820]: I0203 12:05:50.708286 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:50 crc kubenswrapper[4820]: I0203 12:05:50.708294 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:50 crc kubenswrapper[4820]: I0203 12:05:50.708307 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:50 crc kubenswrapper[4820]: I0203 12:05:50.708315 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:50Z","lastTransitionTime":"2026-02-03T12:05:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:50 crc kubenswrapper[4820]: I0203 12:05:50.810734 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:50 crc kubenswrapper[4820]: I0203 12:05:50.810771 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:50 crc kubenswrapper[4820]: I0203 12:05:50.810781 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:50 crc kubenswrapper[4820]: I0203 12:05:50.810796 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:50 crc kubenswrapper[4820]: I0203 12:05:50.810805 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:50Z","lastTransitionTime":"2026-02-03T12:05:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:50 crc kubenswrapper[4820]: I0203 12:05:50.912954 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:50 crc kubenswrapper[4820]: I0203 12:05:50.913004 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:50 crc kubenswrapper[4820]: I0203 12:05:50.913020 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:50 crc kubenswrapper[4820]: I0203 12:05:50.913047 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:50 crc kubenswrapper[4820]: I0203 12:05:50.913066 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:50Z","lastTransitionTime":"2026-02-03T12:05:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:05:51 crc kubenswrapper[4820]: I0203 12:05:51.015436 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:51 crc kubenswrapper[4820]: I0203 12:05:51.015477 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:51 crc kubenswrapper[4820]: I0203 12:05:51.015486 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:51 crc kubenswrapper[4820]: I0203 12:05:51.015502 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:51 crc kubenswrapper[4820]: I0203 12:05:51.015511 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:51Z","lastTransitionTime":"2026-02-03T12:05:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:51 crc kubenswrapper[4820]: I0203 12:05:51.118036 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:51 crc kubenswrapper[4820]: I0203 12:05:51.118077 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:51 crc kubenswrapper[4820]: I0203 12:05:51.118086 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:51 crc kubenswrapper[4820]: I0203 12:05:51.118100 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:51 crc kubenswrapper[4820]: I0203 12:05:51.118109 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:51Z","lastTransitionTime":"2026-02-03T12:05:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:51 crc kubenswrapper[4820]: I0203 12:05:51.130263 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 01:36:23.903522172 +0000 UTC Feb 03 12:05:51 crc kubenswrapper[4820]: I0203 12:05:51.142591 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 12:05:51 crc kubenswrapper[4820]: E0203 12:05:51.142722 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 12:05:51 crc kubenswrapper[4820]: I0203 12:05:51.142593 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 12:05:51 crc kubenswrapper[4820]: E0203 12:05:51.142791 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 12:05:51 crc kubenswrapper[4820]: I0203 12:05:51.142593 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 12:05:51 crc kubenswrapper[4820]: E0203 12:05:51.142954 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 12:05:51 crc kubenswrapper[4820]: I0203 12:05:51.220854 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:51 crc kubenswrapper[4820]: I0203 12:05:51.220914 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:51 crc kubenswrapper[4820]: I0203 12:05:51.220923 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:51 crc kubenswrapper[4820]: I0203 12:05:51.220936 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:51 crc kubenswrapper[4820]: I0203 12:05:51.220946 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:51Z","lastTransitionTime":"2026-02-03T12:05:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:05:51 crc kubenswrapper[4820]: I0203 12:05:51.323568 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:51 crc kubenswrapper[4820]: I0203 12:05:51.323602 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:51 crc kubenswrapper[4820]: I0203 12:05:51.323610 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:51 crc kubenswrapper[4820]: I0203 12:05:51.323624 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:51 crc kubenswrapper[4820]: I0203 12:05:51.323637 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:51Z","lastTransitionTime":"2026-02-03T12:05:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:51 crc kubenswrapper[4820]: I0203 12:05:51.425978 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:51 crc kubenswrapper[4820]: I0203 12:05:51.426021 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:51 crc kubenswrapper[4820]: I0203 12:05:51.426030 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:51 crc kubenswrapper[4820]: I0203 12:05:51.426042 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:51 crc kubenswrapper[4820]: I0203 12:05:51.426051 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:51Z","lastTransitionTime":"2026-02-03T12:05:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:51 crc kubenswrapper[4820]: I0203 12:05:51.527937 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:51 crc kubenswrapper[4820]: I0203 12:05:51.527976 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:51 crc kubenswrapper[4820]: I0203 12:05:51.527988 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:51 crc kubenswrapper[4820]: I0203 12:05:51.528003 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:51 crc kubenswrapper[4820]: I0203 12:05:51.528014 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:51Z","lastTransitionTime":"2026-02-03T12:05:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:05:51 crc kubenswrapper[4820]: I0203 12:05:51.630603 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:51 crc kubenswrapper[4820]: I0203 12:05:51.630635 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:51 crc kubenswrapper[4820]: I0203 12:05:51.630646 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:51 crc kubenswrapper[4820]: I0203 12:05:51.630661 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:51 crc kubenswrapper[4820]: I0203 12:05:51.630672 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:51Z","lastTransitionTime":"2026-02-03T12:05:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:51 crc kubenswrapper[4820]: I0203 12:05:51.732523 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:51 crc kubenswrapper[4820]: I0203 12:05:51.732565 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:51 crc kubenswrapper[4820]: I0203 12:05:51.732579 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:51 crc kubenswrapper[4820]: I0203 12:05:51.732595 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:51 crc kubenswrapper[4820]: I0203 12:05:51.732607 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:51Z","lastTransitionTime":"2026-02-03T12:05:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:51 crc kubenswrapper[4820]: I0203 12:05:51.834991 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:51 crc kubenswrapper[4820]: I0203 12:05:51.835031 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:51 crc kubenswrapper[4820]: I0203 12:05:51.835044 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:51 crc kubenswrapper[4820]: I0203 12:05:51.835060 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:51 crc kubenswrapper[4820]: I0203 12:05:51.835071 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:51Z","lastTransitionTime":"2026-02-03T12:05:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:05:51 crc kubenswrapper[4820]: I0203 12:05:51.937299 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:51 crc kubenswrapper[4820]: I0203 12:05:51.937351 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:51 crc kubenswrapper[4820]: I0203 12:05:51.937363 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:51 crc kubenswrapper[4820]: I0203 12:05:51.937380 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:51 crc kubenswrapper[4820]: I0203 12:05:51.937393 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:51Z","lastTransitionTime":"2026-02-03T12:05:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:52 crc kubenswrapper[4820]: I0203 12:05:52.039692 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:52 crc kubenswrapper[4820]: I0203 12:05:52.039737 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:52 crc kubenswrapper[4820]: I0203 12:05:52.039748 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:52 crc kubenswrapper[4820]: I0203 12:05:52.039766 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:52 crc kubenswrapper[4820]: I0203 12:05:52.039778 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:52Z","lastTransitionTime":"2026-02-03T12:05:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:52 crc kubenswrapper[4820]: I0203 12:05:52.130781 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 14:18:20.471584911 +0000 UTC Feb 03 12:05:52 crc kubenswrapper[4820]: I0203 12:05:52.141517 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7vz6k" Feb 03 12:05:52 crc kubenswrapper[4820]: E0203 12:05:52.141653 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-7vz6k" podUID="6351e457-e601-4889-853c-560646bc4b43" Feb 03 12:05:52 crc kubenswrapper[4820]: I0203 12:05:52.142738 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:52 crc kubenswrapper[4820]: I0203 12:05:52.142762 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:52 crc kubenswrapper[4820]: I0203 12:05:52.142770 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:52 crc kubenswrapper[4820]: I0203 12:05:52.142780 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:52 crc kubenswrapper[4820]: I0203 12:05:52.142789 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:52Z","lastTransitionTime":"2026-02-03T12:05:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:52 crc kubenswrapper[4820]: I0203 12:05:52.245592 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:52 crc kubenswrapper[4820]: I0203 12:05:52.245633 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:52 crc kubenswrapper[4820]: I0203 12:05:52.245648 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:52 crc kubenswrapper[4820]: I0203 12:05:52.245666 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:52 crc kubenswrapper[4820]: I0203 12:05:52.245681 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:52Z","lastTransitionTime":"2026-02-03T12:05:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:05:52 crc kubenswrapper[4820]: I0203 12:05:52.347678 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:52 crc kubenswrapper[4820]: I0203 12:05:52.347924 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:52 crc kubenswrapper[4820]: I0203 12:05:52.347936 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:52 crc kubenswrapper[4820]: I0203 12:05:52.347950 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:52 crc kubenswrapper[4820]: I0203 12:05:52.347960 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:52Z","lastTransitionTime":"2026-02-03T12:05:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:52 crc kubenswrapper[4820]: I0203 12:05:52.450087 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:52 crc kubenswrapper[4820]: I0203 12:05:52.450144 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:52 crc kubenswrapper[4820]: I0203 12:05:52.450155 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:52 crc kubenswrapper[4820]: I0203 12:05:52.450169 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:52 crc kubenswrapper[4820]: I0203 12:05:52.450179 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:52Z","lastTransitionTime":"2026-02-03T12:05:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:05:52 crc kubenswrapper[4820]: I0203 12:05:52.517918 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-dkfwm_c6da6dd5-2847-482b-adc1-d82ead0af3e9/kube-multus/0.log" Feb 03 12:05:52 crc kubenswrapper[4820]: I0203 12:05:52.517964 4820 generic.go:334] "Generic (PLEG): container finished" podID="c6da6dd5-2847-482b-adc1-d82ead0af3e9" containerID="b50e93f1825fc82dc2e0fc70a417d2e4412db367319656bfcea8fa9daf9fe152" exitCode=1 Feb 03 12:05:52 crc kubenswrapper[4820]: I0203 12:05:52.518009 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-dkfwm" event={"ID":"c6da6dd5-2847-482b-adc1-d82ead0af3e9","Type":"ContainerDied","Data":"b50e93f1825fc82dc2e0fc70a417d2e4412db367319656bfcea8fa9daf9fe152"} Feb 03 12:05:52 crc kubenswrapper[4820]: I0203 12:05:52.518508 4820 scope.go:117] "RemoveContainer" containerID="b50e93f1825fc82dc2e0fc70a417d2e4412db367319656bfcea8fa9daf9fe152" Feb 03 12:05:52 crc kubenswrapper[4820]: I0203 12:05:52.552213 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a4e0669-c148-4af8-99eb-1de50da6d574\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d6f3a37b440a1db10c04386289033a0deab673f71efbdfba81597baf6ce5e86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fec48b5ddf675fb07150ebb88962534caead99f26403cf8ab51c17e2c6c09bf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"star
tedAt\\\":\\\"2026-02-03T12:04:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a4c1209e1301b6e55cd4ea8bb06678ee4c7657c163a610f0cc2dd25f2cb3e61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75e1f7cf1c2120c222793819705a3b465a387c2aaf33fe146fd46d9c76520aea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12446f64d4189185cf6fe19d38637e090482ef8da5fb5e68600cc00ba37bc2a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bb0a6b382c36acc46be9258e93be71317c26777d770cde9098402054303947c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bb0a6b382c36acc46be9258e93be71317c26777d7
70cde9098402054303947c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:04:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19153b79978b8e8cc53a7ab9cb95a70c9870c6b0babac711bce13809609f6ac5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19153b79978b8e8cc53a7ab9cb95a70c9870c6b0babac711bce13809609f6ac5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e75ec4662cf007f64da3ecaaff0cb4c45aa88746ff717403092583356953526\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e75ec4662cf007f64da3ecaaff0cb4c45aa88746ff717403092583356953526\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:04:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:04:43Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:52Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:52 crc kubenswrapper[4820]: I0203 12:05:52.553310 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:52 crc kubenswrapper[4820]: I0203 12:05:52.553343 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:52 crc kubenswrapper[4820]: I0203 12:05:52.553354 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:52 crc kubenswrapper[4820]: I0203 12:05:52.553371 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:52 crc kubenswrapper[4820]: I0203 12:05:52.553383 4820 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:52Z","lastTransitionTime":"2026-02-03T12:05:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:52 crc kubenswrapper[4820]: I0203 12:05:52.566663 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8bbpp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d8005fd9-8efc-4707-a3dd-60cd20607d42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9020e42fc6a5c6c681b1854e93cc1e88f44b7582e28fe1ee40900bb2d39b4008\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xg9sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0333755fe31b49cea4cbfef987450b5800b3330c65cf84070036eea4194c13a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xg9sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"i
p\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8bbpp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:52Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:52 crc kubenswrapper[4820]: I0203 12:05:52.578410 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7vz6k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6351e457-e601-4889-853c-560646bc4b43\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbjk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbjk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:18Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7vz6k\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:52Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:52 crc kubenswrapper[4820]: I0203 12:05:52.591400 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cabb5864-1cc1-4b08-aed3-4eeee9e85bf9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d070e438f7f207c929ca6d110046fa8d320d1a7d97ef22a0e5e1372ff626729d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06e5365791987f1586b93dfe4db8bfe19415d8c400d6c0d75f312cc0f62f7661\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb5d088ef2ec1b90ef2d7af53d61c2789516bf4ef02e9ef06e1dc0c93326dcc9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",
\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2783e3045fb1f1e8da75e1a83f88bc0f24ce47eab79fb2b31b794c10f201588\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:04:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:52Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:52 crc kubenswrapper[4820]: I0203 12:05:52.602384 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bbd1daf12ccc77f2658cac173f36c9af6a1c64072fc6bf13302a460f71f05e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5236ae101bb18d9bd1b8d8ac09705f96e5fd78d742e5174e88072570a7448ee8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:52Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:52 crc kubenswrapper[4820]: I0203 12:05:52.617758 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-b5qz9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d93ec7bc-4029-44a4-894d-03eff1388683\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8c2ef0f942d970aadc1837e5ab6eac2bdee9ab81c7b852514fc142796328a62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b171b1b37fa4899684005dd60f249e4cdc2ac58cb12658abe09d6d17007f3002\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b171b1b37fa4899684005dd60f249e4cdc2ac58cb12658abe09d6d17007f3002\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0a45a325e96479b1bb5913e1a78221c3b88c62fc434441e03d6b2e9ed476700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0a45a325e96479b1bb5913e1a78221c3b88c62fc434441e03d6b2e9ed476700\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a25830d512e5d0dbbfa4775f85af51e80915adca718997b34e926220ff9377d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a25830d512e5d0dbbfa4775f85af51e80915adca718997b34e926220ff9377d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff3da1549bd61df7cc473a1a0a17f147b940c3f88f0b72828357ea7982406088\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff3da1549bd61df7cc473a1a0a17f147b940c3f88f0b72828357ea7982406088\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a1698d43c6b74e8ebb9ff13da372c4025ac2637c1f5a0376e4a998e7bf20359\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a1698d43c6b74e8ebb9ff13da372c4025ac2637c1f5a0376e4a998e7bf20359\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8af3494a09f6242fa8fab8cba8b3db5476625376c0f8f382e2140ca27ef329f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8af3494a09f6242fa8fab8cba8b3db5476625376c0f8f382e2140ca27ef329f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-b5qz9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:52Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:52 crc kubenswrapper[4820]: I0203 12:05:52.630329 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:52Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:52 crc kubenswrapper[4820]: I0203 12:05:52.641937 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dac06e9d9db1822ef050113f787ff46db678f4916bc9817ac03f61a509a61b6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:52Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:52 crc kubenswrapper[4820]: I0203 12:05:52.652627 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:52Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:52 crc kubenswrapper[4820]: I0203 12:05:52.655098 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:52 crc kubenswrapper[4820]: I0203 12:05:52.655151 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:52 crc kubenswrapper[4820]: I0203 12:05:52.655169 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:52 crc kubenswrapper[4820]: I0203 12:05:52.655192 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:52 crc kubenswrapper[4820]: I0203 12:05:52.655211 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:52Z","lastTransitionTime":"2026-02-03T12:05:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:05:52 crc kubenswrapper[4820]: I0203 12:05:52.662766 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p5mx8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe0bc53e-6abb-4194-ae3d-109a4fd80372\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9eca081cbf8145867bf5df6d6bacdb9746e546e54aa81874fbfc2927958bc1b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrchn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p5mx8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:52Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:52 crc kubenswrapper[4820]: I0203 12:05:52.674470 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c02def6-29f2-448e-80ec-0c8ee290f053\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1643e498a576d565172492302a641bc81eae658b1611df707da1d833ea2c84f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xjx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e3612cc50c75efc4721ade62e3caf8a6cbcb41c49183b50da3639d7fce97b9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xjx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qj7xr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:52Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:52 crc kubenswrapper[4820]: I0203 12:05:52.686708 4820 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-dkfwm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6da6dd5-2847-482b-adc1-d82ead0af3e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b50e93f1825fc82dc2e0fc70a417d2e4412db367319656bfcea8fa9daf9fe152\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b50e93f1825fc82dc2e0fc70a417d2e4412db367319656bfcea8fa9daf9fe152\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T12:05:52Z\\\",\\\"message\\\":\\\"2026-02-03T12:05:06+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_496e5650-f16a-459b-af6d-b3ce3817a4a3\\\\n2026-02-03T12:05:06+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_496e5650-f16a-459b-af6d-b3ce3817a4a3 to /host/opt/cni/bin/\\\\n2026-02-03T12:05:07Z [verbose] multus-daemon started\\\\n2026-02-03T12:05:07Z [verbose] Readiness Indicator file check\\\\n2026-02-03T12:05:52Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpcc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dkfwm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:52Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:52 crc kubenswrapper[4820]: I0203 12:05:52.700724 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d140e30-6304-49be-a1a3-2d6b23f9aef3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48852428653b274f524077be71e20c4271b365d821b8f3235df2d2b41a9e8af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09641bd2ac341aeecdba4211412070d2ee33f42eb670fe75ab88b9a58931c289\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74f9a4c19a0ffe2d3fc0a77b57873721a29f0a1f70d578ba2f68bcceb68ccce2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://672f46bf03b8e74586edeb641cb45d9d7d8d4389c0b2cbb7a8eec43d6b4d801b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38554e01896341101d88815c0d8cd45a0114da19790e02a53b5ba3a91ee70e37\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"message\\\":\\\"le observer\\\\nW0203 12:05:02.811560 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0203 12:05:02.811703 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 12:05:02.814694 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3622958325/tls.crt::/tmp/serving-cert-3622958325/tls.key\\\\\\\"\\\\nI0203 12:05:03.097815 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0203 12:05:03.106823 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0203 12:05:03.106852 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0203 12:05:03.106874 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0203 12:05:03.106880 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0203 12:05:03.113994 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0203 12:05:03.114027 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0203 12:05:03.114019 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0203 12:05:03.114034 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 12:05:03.114043 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0203 12:05:03.114053 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0203 12:05:03.114057 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0203 12:05:03.114061 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0203 12:05:03.116858 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60992833ab89aebd2446f960ab6ae3565f9a17f4e86921d3b2bc68adc9704640\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b34fc810cf94c235cea364cd2441c73f56597b91b0a19b8e02498db5895a3f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b34fc810cf94c235cea364cd2441c73f56597b91b0a19b8e02498db5895a3f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:04:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:04:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:52Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:52 crc kubenswrapper[4820]: I0203 12:05:52.713144 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"76aa9d47-b80a-4058-8f92-4cdf0c41df48\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47438e7dc8d1eec9a8a061c1b6141e39f4c805dfafd1175c07235f7b9425719b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ade7b4944a0da0f0d18c5dd5a92ef407d6beac17985f1429d57672ff7e9beade\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://555f489476174f52e5f5d252e3f3f86f65e1593e641204d863bebdaa80b49bbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f84ddcad54789a48e27d495da8bf422e2349c09b477421d0323aae520546826\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f84ddcad54789a48e27d495da8bf422e2349c09b477421d0323aae520546826\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:04:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:04:43Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:52Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:52 crc kubenswrapper[4820]: I0203 12:05:52.724462 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0a8e1f86d24e8f09d6250cc9003db189b705c0285fbaab40a8f1a4ac4793137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:52Z is after 
2025-08-24T17:21:41Z" Feb 03 12:05:52 crc kubenswrapper[4820]: I0203 12:05:52.737707 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:52Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:52 crc kubenswrapper[4820]: I0203 12:05:52.755324 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-75mwm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf99e305-aa5b-4171-94f6-1e64f20414dd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c998566abb62f49bb7527ea2c4b023c87a9e3db89ff5fa25b68896424924ea76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5b0072b2eccb422316702a70fb54f21fcd020e1627a625d1a7c63d46b71e48c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e8e47b3a2a50b4eccff391ccb45bf979a5fe46983b014cb03f465db928172ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0defe98bc51b1f567900f2d90138ceec07b4cae414fd3b448df503dba9a9460f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a67f31bdc8d6710cfbca063e2a811d4fbace42a9d3c3a53fba0f47c30ee00e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8b54903e9478f26cfa1770a0e06298325e9b9f198a3aeaa8ed6dcc192c0cdb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebf188d2f9bb71f6217f5c2fbd0199f0d9539bbc773152b66fb63c41cc52c2bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ebf188d2f9bb71f6217f5c2fbd0199f0d9539bbc773152b66fb63c41cc52c2bb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T12:05:32Z\\\",\\\"message\\\":\\\"ig-daemon-qj7xr after 0 failed attempt(s)\\\\nI0203 12:05:32.735113 6493 default_network_controller.go:776] Recording success event on pod openshift-machine-config-operator/machine-config-daemon-qj7xr\\\\nI0203 12:05:32.734944 6493 ovn.go:134] Ensuring zone local for Pod openshift-dns/node-resolver-p5mx8 in node crc\\\\nI0203 12:05:32.735121 6493 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0203 12:05:32.735128 6493 obj_retry.go:386] Retry successful for *v1.Pod openshift-dns/node-resolver-p5mx8 after 0 failed attempt(s)\\\\nI0203 12:05:32.735133 6493 base_network_controller_pods.go:477] [default/openshift-network-diagnostics/network-check-source-55646444c4-trplf] creating logical port openshift-network-diagnostics_network-check-source-55646444c4-trplf for pod on switch crc\\\\nI0203 12:05:32.735160 6493 obj_retry.go:303] Retry object setup: *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8bbpp\\\\nF0203 12:05:32.735195 6493 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:31Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-75mwm_openshift-ovn-kubernetes(cf99e305-aa5b-4171-94f6-1e64f20414dd)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c9f050db727793c24f2f8bf3b387813e313f3b764be003c6849947f98a61d4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://707b3fd62e0f11f4f22464594158f7f1bbcbb9b37c1e49c70e2a0d1a6b5b07c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://707b3fd62e0f11f4f22464594158f7f1bbcbb9b37c1e49c70e2a0d1a6b5b07c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-75mwm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:52Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:52 crc kubenswrapper[4820]: I0203 12:05:52.757393 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:52 crc kubenswrapper[4820]: I0203 12:05:52.757429 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:52 crc kubenswrapper[4820]: I0203 12:05:52.757438 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:52 crc kubenswrapper[4820]: I0203 12:05:52.757453 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:52 crc kubenswrapper[4820]: I0203 12:05:52.757475 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:52Z","lastTransitionTime":"2026-02-03T12:05:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:05:52 crc kubenswrapper[4820]: I0203 12:05:52.766175 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-z8xrk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1cca669-281b-4756-8da8-3860684d3410\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09a38521ccb70b0b152fc51267f6d38cca610e2e8c9d12915482118f3a431b22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dxwqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:09Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-z8xrk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:52Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:52 crc kubenswrapper[4820]: I0203 12:05:52.859857 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:52 crc kubenswrapper[4820]: I0203 12:05:52.859917 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:52 crc kubenswrapper[4820]: I0203 12:05:52.859930 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:52 crc kubenswrapper[4820]: I0203 12:05:52.859945 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:52 crc kubenswrapper[4820]: I0203 12:05:52.859955 4820 setters.go:603] "Node became not 
ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:52Z","lastTransitionTime":"2026-02-03T12:05:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:52 crc kubenswrapper[4820]: I0203 12:05:52.963073 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:52 crc kubenswrapper[4820]: I0203 12:05:52.963135 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:52 crc kubenswrapper[4820]: I0203 12:05:52.963152 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:52 crc kubenswrapper[4820]: I0203 12:05:52.963177 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:52 crc kubenswrapper[4820]: I0203 12:05:52.963194 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:52Z","lastTransitionTime":"2026-02-03T12:05:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:53 crc kubenswrapper[4820]: I0203 12:05:53.065480 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:53 crc kubenswrapper[4820]: I0203 12:05:53.065520 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:53 crc kubenswrapper[4820]: I0203 12:05:53.065528 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:53 crc kubenswrapper[4820]: I0203 12:05:53.065544 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:53 crc kubenswrapper[4820]: I0203 12:05:53.065554 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:53Z","lastTransitionTime":"2026-02-03T12:05:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:53 crc kubenswrapper[4820]: I0203 12:05:53.131144 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 16:52:33.312843147 +0000 UTC Feb 03 12:05:53 crc kubenswrapper[4820]: I0203 12:05:53.141535 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 12:05:53 crc kubenswrapper[4820]: I0203 12:05:53.141605 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 12:05:53 crc kubenswrapper[4820]: E0203 12:05:53.141677 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 12:05:53 crc kubenswrapper[4820]: I0203 12:05:53.141730 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 12:05:53 crc kubenswrapper[4820]: E0203 12:05:53.141794 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 12:05:53 crc kubenswrapper[4820]: E0203 12:05:53.141918 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 12:05:53 crc kubenswrapper[4820]: I0203 12:05:53.155813 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dkfwm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6da6dd5-2847-482b-adc1-d82ead0af3e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b50e93f1825fc82dc2e0fc70a417d2e4412db367319656bfcea8fa9daf9fe152\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b50e93f1825fc82dc2e0fc70a417d2e4412db367319656bfcea8fa9daf9fe152\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T12:05:52Z\\\",\\\"message\\\":\\\"2026-02-03T12:05:06+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_496e5650-f16a-459b-af6d-b3ce3817a4a3\\\\n2026-02-03T12:05:06+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_496e5650-f16a-459b-af6d-b3ce3817a4a3 to /host/opt/cni/bin/\\\\n2026-02-03T12:05:07Z [verbose] multus-daemon started\\\\n2026-02-03T12:05:07Z [verbose] Readiness Indicator file check\\\\n2026-02-03T12:05:52Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpcc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:05Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-dkfwm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:53Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:53 crc kubenswrapper[4820]: I0203 12:05:53.168921 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:53 crc kubenswrapper[4820]: I0203 12:05:53.168946 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:53 crc kubenswrapper[4820]: I0203 12:05:53.168954 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:53 crc kubenswrapper[4820]: I0203 12:05:53.168968 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:53 crc kubenswrapper[4820]: I0203 12:05:53.168977 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:53Z","lastTransitionTime":"2026-02-03T12:05:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:53 crc kubenswrapper[4820]: I0203 12:05:53.169509 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d140e30-6304-49be-a1a3-2d6b23f9aef3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48852428653b274f524077be71e20c4271b365d821b8f3235df2d2b41a9e8af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09641bd2ac
341aeecdba4211412070d2ee33f42eb670fe75ab88b9a58931c289\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74f9a4c19a0ffe2d3fc0a77b57873721a29f0a1f70d578ba2f68bcceb68ccce2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://672f46bf03b8e74586edeb641cb45d9d7d8d4389c0b2cbb7a8eec43d6b4d801b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38554e01896341101d88815c0d8cd45a0114da19790e02a53b5ba3a91ee70e37\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"message\\\":\\\"le observer\\\\nW0203 12:05:02.811560 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0203 12:05:02.811703 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 12:05:02.814694 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3622958325/tls.crt::/tmp/serving-cert-3622958325/tls.key\\\\\\\"\\\\nI0203 12:05:03.097815 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0203 12:05:03.106823 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0203 12:05:03.106852 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0203 12:05:03.106874 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0203 12:05:03.106880 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0203 12:05:03.113994 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0203 12:05:03.114027 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0203 12:05:03.114019 1 
genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0203 12:05:03.114034 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 12:05:03.114043 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0203 12:05:03.114053 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0203 12:05:03.114057 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0203 12:05:03.114061 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0203 12:05:03.116858 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60992833ab89aebd2446f960ab6ae3565f9a17f4e86921d3b2bc68adc9704640\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b34fc810cf94c235cea364cd2441c73f56597b91b0a19b8e02498db5895a3f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b34fc810cf94c235cea364cd2441c73f56597b91b0a19b8e02498db5895a3f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:04:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:04:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:53Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:53 crc kubenswrapper[4820]: I0203 12:05:53.182143 4820 
status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"76aa9d47-b80a-4058-8f92-4cdf0c41df48\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47438e7dc8d1eec9a8a061c1b6141e39f4c805dfafd1175c07235f7b9425719b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ade7b4944a0da0f0d18c5dd5a92ef407d6beac17985f1429d57672ff7e9beade\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://555f489476174f52e5f5d252e3f3f86f65e1593e641204d863bebdaa80b49bbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\
"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f84ddcad54789a48e27d495da8bf422e2349c09b477421d0323aae520546826\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f84ddcad54789a48e27d495da8bf422e2349c09b477421d0323aae520546826\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:04:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:04:43Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:53Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:53 crc kubenswrapper[4820]: I0203 12:05:53.196290 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0a8e1f86d24e8f09d6250cc9003db189b705c0285fbaab40a8f1a4ac4793137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:53Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:53 crc kubenswrapper[4820]: I0203 12:05:53.207988 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:53Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:53 crc kubenswrapper[4820]: I0203 12:05:53.220210 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:53Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:53 crc kubenswrapper[4820]: I0203 12:05:53.231207 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p5mx8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe0bc53e-6abb-4194-ae3d-109a4fd80372\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9eca081cbf8145867bf5df6d6bacdb9746e546e54aa81874fbfc2927958bc1b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrchn\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p5mx8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:53Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:53 crc kubenswrapper[4820]: I0203 12:05:53.243564 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c02def6-29f2-448e-80ec-0c8ee290f053\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1643e498a576d565172492302a641bc81eae658b1611df707da1d833ea2c84f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xjx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e3612cc50c75efc4721ade62e3caf8a6cbcb41c49183b50da3639d7fce97b9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serv
iceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xjx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qj7xr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:53Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:53 crc kubenswrapper[4820]: I0203 12:05:53.259351 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-75mwm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf99e305-aa5b-4171-94f6-1e64f20414dd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c998566abb62f49bb7527ea2c4b023c87a9e3db89ff5fa25b68896424924ea76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5b0072b2eccb422316702a70fb54f21fcd020e1627a625d1a7c63d46b71e48c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e8e47b3a2a50b4eccff391ccb45bf979a5fe46983b014cb03f465db928172ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0defe98bc51b1f567900f2d90138ceec07b4cae414fd3b448df503dba9a9460f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a67f31bdc8d6710cfbca063e2a811d4fbace42a9d3c3a53fba0f47c30ee00e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8b54903e9478f26cfa1770a0e06298325e9b9f198a3aeaa8ed6dcc192c0cdb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebf188d2f9bb71f6217f5c2fbd0199f0d9539bbc
773152b66fb63c41cc52c2bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ebf188d2f9bb71f6217f5c2fbd0199f0d9539bbc773152b66fb63c41cc52c2bb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T12:05:32Z\\\",\\\"message\\\":\\\"ig-daemon-qj7xr after 0 failed attempt(s)\\\\nI0203 12:05:32.735113 6493 default_network_controller.go:776] Recording success event on pod openshift-machine-config-operator/machine-config-daemon-qj7xr\\\\nI0203 12:05:32.734944 6493 ovn.go:134] Ensuring zone local for Pod openshift-dns/node-resolver-p5mx8 in node crc\\\\nI0203 12:05:32.735121 6493 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0203 12:05:32.735128 6493 obj_retry.go:386] Retry successful for *v1.Pod openshift-dns/node-resolver-p5mx8 after 0 failed attempt(s)\\\\nI0203 12:05:32.735133 6493 base_network_controller_pods.go:477] [default/openshift-network-diagnostics/network-check-source-55646444c4-trplf] creating logical port openshift-network-diagnostics_network-check-source-55646444c4-trplf for pod on switch crc\\\\nI0203 12:05:32.735160 6493 obj_retry.go:303] Retry object setup: *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8bbpp\\\\nF0203 12:05:32.735195 6493 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:31Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-75mwm_openshift-ovn-kubernetes(cf99e305-aa5b-4171-94f6-1e64f20414dd)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c9f050db727793c24f2f8bf3b387813e313f3b764be003c6849947f98a61d4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://707b3fd62e0f11f4f22464594158f7f1bbcbb9b37c1e49c70e2a0d1a6b5b07c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://707b3fd62e0f11f4f22464594158f7f1bbcbb9b37c1e49c70e2a0d1a6b5b07c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-75mwm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:53Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:53 crc kubenswrapper[4820]: I0203 12:05:53.267647 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-z8xrk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1cca669-281b-4756-8da8-3860684d3410\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09a38521ccb70b0b152fc51267f6d38cca610e2e8c9d12915482118f3a431b22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dxwqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:09Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-z8xrk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:53Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:53 crc kubenswrapper[4820]: I0203 12:05:53.271022 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:53 crc kubenswrapper[4820]: I0203 12:05:53.271147 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:53 crc kubenswrapper[4820]: I0203 12:05:53.271166 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:53 crc kubenswrapper[4820]: I0203 12:05:53.271269 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:53 crc kubenswrapper[4820]: I0203 12:05:53.271285 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:53Z","lastTransitionTime":"2026-02-03T12:05:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:05:53 crc kubenswrapper[4820]: I0203 12:05:53.284157 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a4e0669-c148-4af8-99eb-1de50da6d574\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d6f3a37b440a1db10c04386289033a0deab673f71efbdfba81597baf6ce5e86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fec48b5ddf675fb07150ebb88962534caead99f26403cf8ab51c17e2c6c09bf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a4c1209e1301b6e55cd4ea8bb06678ee4c7657c163a610f0cc2dd25f2cb3e61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75e1f7cf1c2120c222793819705a3b465a387c2aaf33fe146fd46d9c76520aea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12446f64d4189185cf6fe19d38637e090482ef8da5fb5e68600cc00ba37bc2a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bb0a6b382c36acc46be9258e93be71317c26777d770cde9098402054303947c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bb0a6b382c36acc46be9258e93be71317c26777d770cde9098402054303947c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:04:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19153b79978b8e8cc53a7ab9cb95a70c9870c6b0babac711bce13809609f6ac5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19153b79978b8e8cc53a7ab9cb95a70c9870c6b0babac711bce13809609f6ac5\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e75ec4662cf007f64da3ecaaff0cb4c45aa88746ff717403092583356953526\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e75ec4662cf007f64da3ecaaff0cb4c45aa88746ff717403092583356953526\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:04:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:04:43Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:53Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:53 crc kubenswrapper[4820]: I0203 12:05:53.295682 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cabb5864-1cc1-4b08-aed3-4eeee9e85bf9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d070e438f7f207c929ca6d110046fa8d320d1a7d97ef22a0e5e1372ff626729d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06e5365791987f1586b93dfe4db8bfe19415d8c400d6c0d75f312cc0f62f7661\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb5d088ef2ec1b90ef2d7af53d61c2789516bf4ef02e9ef06e1dc0c93326dcc9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2783e3045fb1f1e8da75e1a83f88bc0f24ce47eab79fb2b31b794c10f201588\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:04:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:53Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:53 crc kubenswrapper[4820]: I0203 12:05:53.308342 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bbd1daf12ccc77f2658cac173f36c9af6a1c64072fc6bf13302a460f71f05e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5236ae101bb18d9bd1b8d8ac09705f96e5fd78d742e5174e88072570a7448ee8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:53Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:53 crc kubenswrapper[4820]: I0203 12:05:53.326487 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-b5qz9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d93ec7bc-4029-44a4-894d-03eff1388683\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8c2ef0f942d970aadc1837e5ab6eac2bdee9ab81c7b852514fc142796328a62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b171b1b37fa4899684005dd60f249e4cdc2ac58cb12658abe09d6d17007f3002\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5
f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b171b1b37fa4899684005dd60f249e4cdc2ac58cb12658abe09d6d17007f3002\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0a45a325e96479b1bb5913e1a78221c3b88c62fc434441e03d6b2e9ed476700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0a45a325e96479b1bb5913e1a78221c3b88c62fc434441e03d6b2e9ed476700\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a25830d512e5d0dbbfa4775f85af51e80915adca718997b34e926220ff9377d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a25830d512e5d0dbbfa4775f85af51e80915adca718997b34e926220ff9377d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/va
r/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff3da1549bd61df7cc473a1a0a17f147b940c3f88f0b72828357ea7982406088\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff3da1549bd61df7cc473a1a0a17f147b940c3f88f0b72828357ea7982406088\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a1698d43c6b74e8ebb9ff13da372c4025ac2637c1f5a0376e4a998e7bf20359\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a1698d43c6b74e8ebb9ff13da372c4025ac2637c1f5a0376e4a998e7bf20359\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8af3494a09f6242fa8fab8cba8b3db5476625376c0f8f382e2140ca27ef329f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8af3494a09f6242fa8fab8cba8b3db5476625376c0f8f382e2140ca27ef329f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:11Z\\\"}},\\
\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-b5qz9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:53Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:53 crc kubenswrapper[4820]: I0203 12:05:53.338977 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8bbpp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d8005fd9-8efc-4707-a3dd-60cd20607d42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9020e42fc6a5c6c681b1854e93cc1e88f44b7582e28fe1ee40900bb2d39b4008\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xg9sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0333755fe31b49cea4cbfef987450b5800b3330c65cf84070036eea4194c13a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restart
Count\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xg9sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8bbpp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:53Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:53 crc kubenswrapper[4820]: I0203 12:05:53.348732 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7vz6k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6351e457-e601-4889-853c-560646bc4b43\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbjk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbjk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:18Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7vz6k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:53Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:53 crc kubenswrapper[4820]: I0203 12:05:53.360395 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:53Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:53 crc kubenswrapper[4820]: I0203 12:05:53.371206 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dac06e9d9db1822ef050113f787ff46db678f4916bc9817ac03f61a509a61b6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:53Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:53 crc kubenswrapper[4820]: I0203 12:05:53.373647 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:53 crc kubenswrapper[4820]: I0203 12:05:53.373683 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 03 12:05:53 crc kubenswrapper[4820]: I0203 12:05:53.373695 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:53 crc kubenswrapper[4820]: I0203 12:05:53.373712 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:53 crc kubenswrapper[4820]: I0203 12:05:53.373724 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:53Z","lastTransitionTime":"2026-02-03T12:05:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:53 crc kubenswrapper[4820]: I0203 12:05:53.475488 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:53 crc kubenswrapper[4820]: I0203 12:05:53.475528 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:53 crc kubenswrapper[4820]: I0203 12:05:53.475540 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:53 crc kubenswrapper[4820]: I0203 12:05:53.475556 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:53 crc kubenswrapper[4820]: I0203 12:05:53.475567 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:53Z","lastTransitionTime":"2026-02-03T12:05:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:05:53 crc kubenswrapper[4820]: I0203 12:05:53.522208 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-dkfwm_c6da6dd5-2847-482b-adc1-d82ead0af3e9/kube-multus/0.log" Feb 03 12:05:53 crc kubenswrapper[4820]: I0203 12:05:53.522267 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-dkfwm" event={"ID":"c6da6dd5-2847-482b-adc1-d82ead0af3e9","Type":"ContainerStarted","Data":"7340b2bec2b0d79865501f1917315f355972bcfc92d098978899c57df3f93454"} Feb 03 12:05:53 crc kubenswrapper[4820]: I0203 12:05:53.538764 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:53Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:53 crc kubenswrapper[4820]: I0203 12:05:53.550450 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dac06e9d9db1822ef050113f787ff46db678f4916bc9817ac03f61a509a61b6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:53Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:53 crc kubenswrapper[4820]: I0203 12:05:53.563091 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0a8e1f86d24e8f09d6250cc9003db189b705c0285fbaab40a8f1a4ac4793137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:53Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:53 crc kubenswrapper[4820]: I0203 12:05:53.577403 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:53 crc kubenswrapper[4820]: I0203 12:05:53.577444 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:53 crc kubenswrapper[4820]: I0203 12:05:53.577456 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:53 crc kubenswrapper[4820]: I0203 12:05:53.577472 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:53 crc kubenswrapper[4820]: I0203 12:05:53.577483 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:53Z","lastTransitionTime":"2026-02-03T12:05:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:05:53 crc kubenswrapper[4820]: I0203 12:05:53.581819 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:53Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:53 crc kubenswrapper[4820]: I0203 12:05:53.594298 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:53Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:53 crc kubenswrapper[4820]: I0203 12:05:53.603495 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p5mx8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe0bc53e-6abb-4194-ae3d-109a4fd80372\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9eca081cbf8145867bf5df6d6bacdb9746e546e54aa81874fbfc2927958bc1b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrchn\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p5mx8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:53Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:53 crc kubenswrapper[4820]: I0203 12:05:53.614561 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c02def6-29f2-448e-80ec-0c8ee290f053\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1643e498a576d565172492302a641bc81eae658b1611df707da1d833ea2c84f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xjx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e3612cc50c75efc4721ade62e3caf8a6cbcb41c49183b50da3639d7fce97b9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serv
iceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xjx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qj7xr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:53Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:53 crc kubenswrapper[4820]: I0203 12:05:53.627871 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dkfwm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6da6dd5-2847-482b-adc1-d82ead0af3e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7340b2bec2b0d79865501f1917315f355972bcfc92d098978899c57df3f93454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b50e93f1825fc82dc2e0fc70a417d2e4412db367319656bfcea8fa9daf9fe152\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T12:05:52Z\\\",\\\"message\\\":\\\"2026-02-03T12:05:06+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_496e5650-f16a-459b-af6d-b3ce3817a4a3\\\\n2026-02-03T12:05:06+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_496e5650-f16a-459b-af6d-b3ce3817a4a3 to /host/opt/cni/bin/\\\\n2026-02-03T12:05:07Z [verbose] multus-daemon started\\\\n2026-02-03T12:05:07Z [verbose] Readiness Indicator file check\\\\n2026-02-03T12:05:52Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:05Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpcc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dkfwm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:53Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:53 crc kubenswrapper[4820]: I0203 12:05:53.640159 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d140e30-6304-49be-a1a3-2d6b23f9aef3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48852428653b274f524077be71e20c4271b365d821b8f3235df2d2b41a9e8af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09641bd2ac341aeecdba4211412070d2ee33f42eb670fe75ab88b9a58931c289\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74f9a4c19a0ffe2d3fc0a77b57873721a29f0a1f70d578ba2f68bcceb68ccce2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://672f46bf03b8e74586edeb641cb45d9d7d8d4389c0b2cbb7a8eec43d6b4d801b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38554e01896341101d88815c0d8cd45a0114da19790e02a53b5ba3a91ee70e37\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"message\\\":\\\"le observer\\\\nW0203 12:05:02.811560 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0203 12:05:02.811703 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 12:05:02.814694 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3622958325/tls.crt::/tmp/serving-cert-3622958325/tls.key\\\\\\\"\\\\nI0203 12:05:03.097815 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0203 12:05:03.106823 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0203 12:05:03.106852 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0203 12:05:03.106874 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0203 12:05:03.106880 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0203 12:05:03.113994 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0203 12:05:03.114027 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0203 12:05:03.114019 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0203 12:05:03.114034 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 12:05:03.114043 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0203 12:05:03.114053 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0203 12:05:03.114057 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0203 12:05:03.114061 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0203 12:05:03.116858 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60992833ab89aebd2446f960ab6ae3565f9a17f4e86921d3b2bc68adc9704640\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b34fc810cf94c235cea364cd2441c73f56597b91b0a19b8e02498db5895a3f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b34fc810cf94c235cea364cd2441c73f56597b91b0a19b8e02498db5895a3f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:04:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:04:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:53Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:53 crc kubenswrapper[4820]: I0203 12:05:53.651650 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"76aa9d47-b80a-4058-8f92-4cdf0c41df48\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47438e7dc8d1eec9a8a061c1b6141e39f4c805dfafd1175c07235f7b9425719b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ade7b4944a0da0f0d18c5dd5a92ef407d6beac17985f1429d57672ff7e9beade\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://555f489476174f52e5f5d252e3f3f86f65e1593e641204d863bebdaa80b49bbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f84ddcad54789a48e27d495da8bf422e2349c09b477421d0323aae520546826\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f84ddcad54789a48e27d495da8bf422e2349c09b477421d0323aae520546826\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:04:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:04:43Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:53Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:53 crc kubenswrapper[4820]: I0203 12:05:53.669753 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-75mwm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf99e305-aa5b-4171-94f6-1e64f20414dd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c998566abb62f49bb7527ea2c4b023c87a9e3db89ff5fa25b68896424924ea76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5b0072b2eccb422316702a70fb54f21fcd020e1627a625d1a7c63d46b71e48c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e8e47b3a2a50b4eccff391ccb45bf979a5fe46983b014cb03f465db928172ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0defe98bc51b1f567900f2d90138ceec07b4cae414fd3b448df503dba9a9460f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a67f31bdc8d6710cfbca063e2a811d4fbace42a9d3c3a53fba0f47c30ee00e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8b54903e9478f26cfa1770a0e06298325e9b9f198a3aeaa8ed6dcc192c0cdb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebf188d2f9bb71f6217f5c2fbd0199f0d9539bbc
773152b66fb63c41cc52c2bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ebf188d2f9bb71f6217f5c2fbd0199f0d9539bbc773152b66fb63c41cc52c2bb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T12:05:32Z\\\",\\\"message\\\":\\\"ig-daemon-qj7xr after 0 failed attempt(s)\\\\nI0203 12:05:32.735113 6493 default_network_controller.go:776] Recording success event on pod openshift-machine-config-operator/machine-config-daemon-qj7xr\\\\nI0203 12:05:32.734944 6493 ovn.go:134] Ensuring zone local for Pod openshift-dns/node-resolver-p5mx8 in node crc\\\\nI0203 12:05:32.735121 6493 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0203 12:05:32.735128 6493 obj_retry.go:386] Retry successful for *v1.Pod openshift-dns/node-resolver-p5mx8 after 0 failed attempt(s)\\\\nI0203 12:05:32.735133 6493 base_network_controller_pods.go:477] [default/openshift-network-diagnostics/network-check-source-55646444c4-trplf] creating logical port openshift-network-diagnostics_network-check-source-55646444c4-trplf for pod on switch crc\\\\nI0203 12:05:32.735160 6493 obj_retry.go:303] Retry object setup: *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8bbpp\\\\nF0203 12:05:32.735195 6493 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:31Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-75mwm_openshift-ovn-kubernetes(cf99e305-aa5b-4171-94f6-1e64f20414dd)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c9f050db727793c24f2f8bf3b387813e313f3b764be003c6849947f98a61d4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://707b3fd62e0f11f4f22464594158f7f1bbcbb9b37c1e49c70e2a0d1a6b5b07c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://707b3fd62e0f11f4f22464594158f7f1bbcbb9b37c1e49c70e2a0d1a6b5b07c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-75mwm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:53Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:53 crc kubenswrapper[4820]: I0203 12:05:53.680163 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:53 crc kubenswrapper[4820]: I0203 12:05:53.680260 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:53 crc kubenswrapper[4820]: I0203 12:05:53.680277 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:53 crc kubenswrapper[4820]: I0203 12:05:53.680299 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:53 crc kubenswrapper[4820]: I0203 12:05:53.680312 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:53Z","lastTransitionTime":"2026-02-03T12:05:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:05:53 crc kubenswrapper[4820]: I0203 12:05:53.683227 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-z8xrk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1cca669-281b-4756-8da8-3860684d3410\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09a38521ccb70b0b152fc51267f6d38cca610e2e8c9d12915482118f3a431b22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dxwqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:09Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-z8xrk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:53Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:53 crc kubenswrapper[4820]: I0203 12:05:53.704993 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a4e0669-c148-4af8-99eb-1de50da6d574\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d6f3a37b440a1db10c04386289033a0deab673f71efbdfba81597baf6ce5e86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fec48b5ddf675fb07150ebb88962534caead99f26403cf8ab51c17e2c6c09bf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a4c1209e1301b6e55cd4ea8bb06678ee4c7657c163a610f0cc2dd25f2cb3e61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75e1f7cf1c2120c222793819705a3b465a387c2
aaf33fe146fd46d9c76520aea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12446f64d4189185cf6fe19d38637e090482ef8da5fb5e68600cc00ba37bc2a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bb0a6b382c36acc46be9258e93be71317c26777d770cde9098402054303947c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bb0a6b382c36acc46be9258e93be71317c26777d770cde9098402054303947c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:04:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19153b79978b8e8cc53a7ab9cb95a70c9870c6b0babac711bce13809609f6ac5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19153b79978b8e8cc53a7ab9cb95a70c9870c6b0babac711bce13809609f6ac5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e75ec4662cf007f64da3ecaaff0cb4c45aa88746ff717403092583356953526\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e75ec4662cf007f64da3ecaaff0cb4c45aa88746ff717403092583356953526\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:04:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:04:43Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:53Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:53 crc kubenswrapper[4820]: I0203 12:05:53.718509 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bbd1daf12ccc77f2658cac173f36c9af6a1c64072fc6bf13302a460f71f05e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5236ae101bb18d9bd1b8d8ac09705f96e5fd78d742e5174e88072570a7448ee8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:53Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:53 crc kubenswrapper[4820]: I0203 12:05:53.733506 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-b5qz9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d93ec7bc-4029-44a4-894d-03eff1388683\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8c2ef0f942d970aadc1837e5ab6eac2bdee9ab81c7b852514fc142796328a62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b171b1b37fa4899684005dd60f249e4cdc2ac58cb12658abe09d6d17007f3002\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b16
2f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b171b1b37fa4899684005dd60f249e4cdc2ac58cb12658abe09d6d17007f3002\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0a45a325e96479b1bb5913e1a78221c3b88c62fc434441e03d6b2e9ed476700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0a45a325e96479b1bb5913e1a78221c3b88c62fc434441e03d6b2e9ed476700\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a25830d512e5d0dbbfa4775f85af51e80915adca718997b34e926220ff9377d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a25830d512e5d0dbbfa4775f85af51e80915adca718997b34e926220ff9377d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountP
ath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff3da1549bd61df7cc473a1a0a17f147b940c3f88f0b72828357ea7982406088\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff3da1549bd61df7cc473a1a0a17f147b940c3f88f0b72828357ea7982406088\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a1698d43c6b74e8ebb9ff13da372c4025ac2637c1f5a0376e4a998e7bf20359\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a1698d43c6b74e8ebb9ff13da372c4025ac2637c1f5a0376e4a998e7bf20359\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8af3494a09f6242fa8fab8cba8b3db5476625376c0f8f382e2140ca27ef329f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8af3494a09f6242fa8fab8cba8b3db5476625376c0f8f382e2140ca27ef
329f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-b5qz9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:53Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:53 crc kubenswrapper[4820]: I0203 12:05:53.743106 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8bbpp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d8005fd9-8efc-4707-a3dd-60cd20607d42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9020e42fc6a5c6c681b1854e93cc1e88f44b7582e28fe1ee40900bb2d39b4008\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xg9sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0333755fe31b49cea4cbfef987450b5800b3330c65cf84070036eea4194c13a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f
36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xg9sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8bbpp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:53Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:53 crc kubenswrapper[4820]: I0203 12:05:53.782309 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:53 crc kubenswrapper[4820]: I0203 12:05:53.782348 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:53 crc kubenswrapper[4820]: I0203 12:05:53.782395 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:53 crc kubenswrapper[4820]: I0203 12:05:53.782411 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:53 crc kubenswrapper[4820]: I0203 12:05:53.782423 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:53Z","lastTransitionTime":"2026-02-03T12:05:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:05:53 crc kubenswrapper[4820]: I0203 12:05:53.788953 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7vz6k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6351e457-e601-4889-853c-560646bc4b43\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbjk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbjk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:18Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7vz6k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:53Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:53 crc kubenswrapper[4820]: I0203 12:05:53.804164 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cabb5864-1cc1-4b08-aed3-4eeee9e85bf9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d070e438f7f207c929ca6d110046fa8d320d1a7d97ef22a0e5e1372ff626729d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06e5365791987f1586b93dfe4db8bfe19415d8c400d6c0d75f312cc0f62f7661\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb5d088ef2ec1b90ef2d7af53d61c2789516bf4ef02e9ef06e1dc0c93326dcc9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2783e3045fb1f1e8da75e1a83f88bc0f24ce47eab79fb2b31b794c10f201588\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:04:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:53Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:53 crc kubenswrapper[4820]: I0203 12:05:53.884268 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:53 crc kubenswrapper[4820]: I0203 12:05:53.884293 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:53 crc kubenswrapper[4820]: I0203 12:05:53.884301 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:53 crc kubenswrapper[4820]: I0203 12:05:53.884312 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:53 crc kubenswrapper[4820]: I0203 12:05:53.884322 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:53Z","lastTransitionTime":"2026-02-03T12:05:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 03 12:05:53 crc kubenswrapper[4820]: I0203 12:05:53.986642 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 12:05:53 crc kubenswrapper[4820]: I0203 12:05:53.986718 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 12:05:53 crc kubenswrapper[4820]: I0203 12:05:53.986732 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 12:05:53 crc kubenswrapper[4820]: I0203 12:05:53.986747 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 12:05:53 crc kubenswrapper[4820]: I0203 12:05:53.986760 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:53Z","lastTransitionTime":"2026-02-03T12:05:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 12:05:54 crc kubenswrapper[4820]: I0203 12:05:54.088781 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 12:05:54 crc kubenswrapper[4820]: I0203 12:05:54.088851 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 12:05:54 crc kubenswrapper[4820]: I0203 12:05:54.088872 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 12:05:54 crc kubenswrapper[4820]: I0203 12:05:54.089226 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 12:05:54 crc kubenswrapper[4820]: I0203 12:05:54.089264 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:54Z","lastTransitionTime":"2026-02-03T12:05:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 12:05:54 crc kubenswrapper[4820]: I0203 12:05:54.132312 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 17:03:25.957855736 +0000 UTC
Feb 03 12:05:54 crc kubenswrapper[4820]: I0203 12:05:54.141880 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7vz6k"
Feb 03 12:05:54 crc kubenswrapper[4820]: E0203 12:05:54.142048 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7vz6k" podUID="6351e457-e601-4889-853c-560646bc4b43"
Feb 03 12:05:54 crc kubenswrapper[4820]: I0203 12:05:54.192584 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 12:05:54 crc kubenswrapper[4820]: I0203 12:05:54.192633 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 12:05:54 crc kubenswrapper[4820]: I0203 12:05:54.192644 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 12:05:54 crc kubenswrapper[4820]: I0203 12:05:54.192661 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 12:05:54 crc kubenswrapper[4820]: I0203 12:05:54.192673 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:54Z","lastTransitionTime":"2026-02-03T12:05:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 12:05:54 crc kubenswrapper[4820]: I0203 12:05:54.294704 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 12:05:54 crc kubenswrapper[4820]: I0203 12:05:54.294776 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 12:05:54 crc kubenswrapper[4820]: I0203 12:05:54.294798 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 12:05:54 crc kubenswrapper[4820]: I0203 12:05:54.294830 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 12:05:54 crc kubenswrapper[4820]: I0203 12:05:54.294852 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:54Z","lastTransitionTime":"2026-02-03T12:05:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 12:05:54 crc kubenswrapper[4820]: I0203 12:05:54.396986 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 12:05:54 crc kubenswrapper[4820]: I0203 12:05:54.397024 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 12:05:54 crc kubenswrapper[4820]: I0203 12:05:54.397033 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 12:05:54 crc kubenswrapper[4820]: I0203 12:05:54.397046 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 12:05:54 crc kubenswrapper[4820]: I0203 12:05:54.397055 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:54Z","lastTransitionTime":"2026-02-03T12:05:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 12:05:54 crc kubenswrapper[4820]: I0203 12:05:54.500388 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 12:05:54 crc kubenswrapper[4820]: I0203 12:05:54.500420 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 12:05:54 crc kubenswrapper[4820]: I0203 12:05:54.500430 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 12:05:54 crc kubenswrapper[4820]: I0203 12:05:54.500450 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 12:05:54 crc kubenswrapper[4820]: I0203 12:05:54.500670 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:54Z","lastTransitionTime":"2026-02-03T12:05:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 12:05:54 crc kubenswrapper[4820]: I0203 12:05:54.605630 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 12:05:54 crc kubenswrapper[4820]: I0203 12:05:54.605677 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 12:05:54 crc kubenswrapper[4820]: I0203 12:05:54.605689 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 12:05:54 crc kubenswrapper[4820]: I0203 12:05:54.605705 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 12:05:54 crc kubenswrapper[4820]: I0203 12:05:54.605718 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:54Z","lastTransitionTime":"2026-02-03T12:05:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 12:05:54 crc kubenswrapper[4820]: I0203 12:05:54.707969 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 12:05:54 crc kubenswrapper[4820]: I0203 12:05:54.708030 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 12:05:54 crc kubenswrapper[4820]: I0203 12:05:54.708038 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 12:05:54 crc kubenswrapper[4820]: I0203 12:05:54.708052 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 12:05:54 crc kubenswrapper[4820]: I0203 12:05:54.708061 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:54Z","lastTransitionTime":"2026-02-03T12:05:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 12:05:54 crc kubenswrapper[4820]: I0203 12:05:54.810557 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 12:05:54 crc kubenswrapper[4820]: I0203 12:05:54.810595 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 12:05:54 crc kubenswrapper[4820]: I0203 12:05:54.810604 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 12:05:54 crc kubenswrapper[4820]: I0203 12:05:54.810618 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 12:05:54 crc kubenswrapper[4820]: I0203 12:05:54.810627 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:54Z","lastTransitionTime":"2026-02-03T12:05:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 12:05:54 crc kubenswrapper[4820]: I0203 12:05:54.913175 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 12:05:54 crc kubenswrapper[4820]: I0203 12:05:54.913228 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 12:05:54 crc kubenswrapper[4820]: I0203 12:05:54.913239 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 12:05:54 crc kubenswrapper[4820]: I0203 12:05:54.913256 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 12:05:54 crc kubenswrapper[4820]: I0203 12:05:54.913267 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:54Z","lastTransitionTime":"2026-02-03T12:05:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 12:05:55 crc kubenswrapper[4820]: I0203 12:05:55.015817 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 12:05:55 crc kubenswrapper[4820]: I0203 12:05:55.015854 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 12:05:55 crc kubenswrapper[4820]: I0203 12:05:55.015863 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 12:05:55 crc kubenswrapper[4820]: I0203 12:05:55.015879 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 12:05:55 crc kubenswrapper[4820]: I0203 12:05:55.015901 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:55Z","lastTransitionTime":"2026-02-03T12:05:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 12:05:55 crc kubenswrapper[4820]: I0203 12:05:55.118186 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 12:05:55 crc kubenswrapper[4820]: I0203 12:05:55.118222 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 12:05:55 crc kubenswrapper[4820]: I0203 12:05:55.118233 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 12:05:55 crc kubenswrapper[4820]: I0203 12:05:55.118248 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 12:05:55 crc kubenswrapper[4820]: I0203 12:05:55.118294 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:55Z","lastTransitionTime":"2026-02-03T12:05:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 12:05:55 crc kubenswrapper[4820]: I0203 12:05:55.132675 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 07:19:22.656386912 +0000 UTC
Feb 03 12:05:55 crc kubenswrapper[4820]: I0203 12:05:55.142022 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 03 12:05:55 crc kubenswrapper[4820]: I0203 12:05:55.142069 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 03 12:05:55 crc kubenswrapper[4820]: I0203 12:05:55.142094 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 03 12:05:55 crc kubenswrapper[4820]: E0203 12:05:55.142145 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 03 12:05:55 crc kubenswrapper[4820]: E0203 12:05:55.142240 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 03 12:05:55 crc kubenswrapper[4820]: E0203 12:05:55.142335 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 03 12:05:55 crc kubenswrapper[4820]: I0203 12:05:55.221617 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 12:05:55 crc kubenswrapper[4820]: I0203 12:05:55.221665 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 12:05:55 crc kubenswrapper[4820]: I0203 12:05:55.221676 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 12:05:55 crc kubenswrapper[4820]: I0203 12:05:55.221693 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 12:05:55 crc kubenswrapper[4820]: I0203 12:05:55.221705 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:55Z","lastTransitionTime":"2026-02-03T12:05:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 12:05:55 crc kubenswrapper[4820]: I0203 12:05:55.323656 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 12:05:55 crc kubenswrapper[4820]: I0203 12:05:55.323688 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 12:05:55 crc kubenswrapper[4820]: I0203 12:05:55.323696 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 12:05:55 crc kubenswrapper[4820]: I0203 12:05:55.323709 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 12:05:55 crc kubenswrapper[4820]: I0203 12:05:55.323717 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:55Z","lastTransitionTime":"2026-02-03T12:05:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 12:05:55 crc kubenswrapper[4820]: I0203 12:05:55.425578 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 12:05:55 crc kubenswrapper[4820]: I0203 12:05:55.425618 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 12:05:55 crc kubenswrapper[4820]: I0203 12:05:55.425626 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 12:05:55 crc kubenswrapper[4820]: I0203 12:05:55.425641 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 12:05:55 crc kubenswrapper[4820]: I0203 12:05:55.425650 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:55Z","lastTransitionTime":"2026-02-03T12:05:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 12:05:55 crc kubenswrapper[4820]: I0203 12:05:55.528174 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 12:05:55 crc kubenswrapper[4820]: I0203 12:05:55.528224 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 12:05:55 crc kubenswrapper[4820]: I0203 12:05:55.528236 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 12:05:55 crc kubenswrapper[4820]: I0203 12:05:55.528253 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 12:05:55 crc kubenswrapper[4820]: I0203 12:05:55.528264 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:55Z","lastTransitionTime":"2026-02-03T12:05:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:05:55 crc kubenswrapper[4820]: I0203 12:05:55.630925 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:55 crc kubenswrapper[4820]: I0203 12:05:55.630965 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:55 crc kubenswrapper[4820]: I0203 12:05:55.630977 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:55 crc kubenswrapper[4820]: I0203 12:05:55.630993 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:55 crc kubenswrapper[4820]: I0203 12:05:55.631003 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:55Z","lastTransitionTime":"2026-02-03T12:05:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:55 crc kubenswrapper[4820]: I0203 12:05:55.733545 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:55 crc kubenswrapper[4820]: I0203 12:05:55.733599 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:55 crc kubenswrapper[4820]: I0203 12:05:55.733611 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:55 crc kubenswrapper[4820]: I0203 12:05:55.733632 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:55 crc kubenswrapper[4820]: I0203 12:05:55.733648 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:55Z","lastTransitionTime":"2026-02-03T12:05:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:55 crc kubenswrapper[4820]: I0203 12:05:55.835969 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:55 crc kubenswrapper[4820]: I0203 12:05:55.836035 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:55 crc kubenswrapper[4820]: I0203 12:05:55.836048 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:55 crc kubenswrapper[4820]: I0203 12:05:55.836065 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:55 crc kubenswrapper[4820]: I0203 12:05:55.836074 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:55Z","lastTransitionTime":"2026-02-03T12:05:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:05:55 crc kubenswrapper[4820]: I0203 12:05:55.937965 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:55 crc kubenswrapper[4820]: I0203 12:05:55.938005 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:55 crc kubenswrapper[4820]: I0203 12:05:55.938017 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:55 crc kubenswrapper[4820]: I0203 12:05:55.938030 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:55 crc kubenswrapper[4820]: I0203 12:05:55.938039 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:55Z","lastTransitionTime":"2026-02-03T12:05:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:56 crc kubenswrapper[4820]: I0203 12:05:56.039958 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:56 crc kubenswrapper[4820]: I0203 12:05:56.039992 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:56 crc kubenswrapper[4820]: I0203 12:05:56.040002 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:56 crc kubenswrapper[4820]: I0203 12:05:56.040017 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:56 crc kubenswrapper[4820]: I0203 12:05:56.040027 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:56Z","lastTransitionTime":"2026-02-03T12:05:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:56 crc kubenswrapper[4820]: I0203 12:05:56.132799 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 21:42:13.996300095 +0000 UTC Feb 03 12:05:56 crc kubenswrapper[4820]: I0203 12:05:56.141576 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7vz6k" Feb 03 12:05:56 crc kubenswrapper[4820]: E0203 12:05:56.141737 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-7vz6k" podUID="6351e457-e601-4889-853c-560646bc4b43" Feb 03 12:05:56 crc kubenswrapper[4820]: I0203 12:05:56.142597 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:56 crc kubenswrapper[4820]: I0203 12:05:56.142639 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:56 crc kubenswrapper[4820]: I0203 12:05:56.142652 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:56 crc kubenswrapper[4820]: I0203 12:05:56.142670 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:56 crc kubenswrapper[4820]: I0203 12:05:56.142683 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:56Z","lastTransitionTime":"2026-02-03T12:05:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:56 crc kubenswrapper[4820]: I0203 12:05:56.245358 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:56 crc kubenswrapper[4820]: I0203 12:05:56.245401 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:56 crc kubenswrapper[4820]: I0203 12:05:56.245413 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:56 crc kubenswrapper[4820]: I0203 12:05:56.245430 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:56 crc kubenswrapper[4820]: I0203 12:05:56.245441 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:56Z","lastTransitionTime":"2026-02-03T12:05:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:05:56 crc kubenswrapper[4820]: I0203 12:05:56.348392 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:56 crc kubenswrapper[4820]: I0203 12:05:56.348652 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:56 crc kubenswrapper[4820]: I0203 12:05:56.348742 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:56 crc kubenswrapper[4820]: I0203 12:05:56.348852 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:56 crc kubenswrapper[4820]: I0203 12:05:56.348970 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:56Z","lastTransitionTime":"2026-02-03T12:05:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:56 crc kubenswrapper[4820]: I0203 12:05:56.451448 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:56 crc kubenswrapper[4820]: I0203 12:05:56.451510 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:56 crc kubenswrapper[4820]: I0203 12:05:56.451526 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:56 crc kubenswrapper[4820]: I0203 12:05:56.451546 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:56 crc kubenswrapper[4820]: I0203 12:05:56.451558 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:56Z","lastTransitionTime":"2026-02-03T12:05:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:56 crc kubenswrapper[4820]: I0203 12:05:56.554697 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:56 crc kubenswrapper[4820]: I0203 12:05:56.555027 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:56 crc kubenswrapper[4820]: I0203 12:05:56.555145 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:56 crc kubenswrapper[4820]: I0203 12:05:56.555242 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:56 crc kubenswrapper[4820]: I0203 12:05:56.555334 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:56Z","lastTransitionTime":"2026-02-03T12:05:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:05:56 crc kubenswrapper[4820]: I0203 12:05:56.658549 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:56 crc kubenswrapper[4820]: I0203 12:05:56.658596 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:56 crc kubenswrapper[4820]: I0203 12:05:56.658607 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:56 crc kubenswrapper[4820]: I0203 12:05:56.658622 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:56 crc kubenswrapper[4820]: I0203 12:05:56.658634 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:56Z","lastTransitionTime":"2026-02-03T12:05:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:56 crc kubenswrapper[4820]: I0203 12:05:56.760468 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:56 crc kubenswrapper[4820]: I0203 12:05:56.760527 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:56 crc kubenswrapper[4820]: I0203 12:05:56.760538 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:56 crc kubenswrapper[4820]: I0203 12:05:56.760554 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:56 crc kubenswrapper[4820]: I0203 12:05:56.760567 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:56Z","lastTransitionTime":"2026-02-03T12:05:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:56 crc kubenswrapper[4820]: I0203 12:05:56.863358 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:56 crc kubenswrapper[4820]: I0203 12:05:56.863393 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:56 crc kubenswrapper[4820]: I0203 12:05:56.863403 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:56 crc kubenswrapper[4820]: I0203 12:05:56.863418 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:56 crc kubenswrapper[4820]: I0203 12:05:56.863428 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:56Z","lastTransitionTime":"2026-02-03T12:05:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:05:56 crc kubenswrapper[4820]: I0203 12:05:56.966313 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:56 crc kubenswrapper[4820]: I0203 12:05:56.966408 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:56 crc kubenswrapper[4820]: I0203 12:05:56.966431 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:56 crc kubenswrapper[4820]: I0203 12:05:56.966461 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:56 crc kubenswrapper[4820]: I0203 12:05:56.966485 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:56Z","lastTransitionTime":"2026-02-03T12:05:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:57 crc kubenswrapper[4820]: I0203 12:05:57.069680 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:57 crc kubenswrapper[4820]: I0203 12:05:57.069744 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:57 crc kubenswrapper[4820]: I0203 12:05:57.069759 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:57 crc kubenswrapper[4820]: I0203 12:05:57.069781 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:57 crc kubenswrapper[4820]: I0203 12:05:57.069796 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:57Z","lastTransitionTime":"2026-02-03T12:05:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:57 crc kubenswrapper[4820]: I0203 12:05:57.133445 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 14:11:33.586291512 +0000 UTC Feb 03 12:05:57 crc kubenswrapper[4820]: I0203 12:05:57.141916 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 12:05:57 crc kubenswrapper[4820]: I0203 12:05:57.141916 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 12:05:57 crc kubenswrapper[4820]: E0203 12:05:57.142064 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 12:05:57 crc kubenswrapper[4820]: I0203 12:05:57.142085 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 12:05:57 crc kubenswrapper[4820]: E0203 12:05:57.142215 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 12:05:57 crc kubenswrapper[4820]: E0203 12:05:57.142266 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 12:05:57 crc kubenswrapper[4820]: I0203 12:05:57.172257 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:57 crc kubenswrapper[4820]: I0203 12:05:57.172620 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:57 crc kubenswrapper[4820]: I0203 12:05:57.172804 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:57 crc kubenswrapper[4820]: I0203 12:05:57.173006 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:57 crc kubenswrapper[4820]: I0203 12:05:57.173139 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:57Z","lastTransitionTime":"2026-02-03T12:05:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:05:57 crc kubenswrapper[4820]: I0203 12:05:57.275719 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:57 crc kubenswrapper[4820]: I0203 12:05:57.275812 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:57 crc kubenswrapper[4820]: I0203 12:05:57.275836 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:57 crc kubenswrapper[4820]: I0203 12:05:57.275863 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:57 crc kubenswrapper[4820]: I0203 12:05:57.275885 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:57Z","lastTransitionTime":"2026-02-03T12:05:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:57 crc kubenswrapper[4820]: I0203 12:05:57.378688 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:57 crc kubenswrapper[4820]: I0203 12:05:57.378757 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:57 crc kubenswrapper[4820]: I0203 12:05:57.378774 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:57 crc kubenswrapper[4820]: I0203 12:05:57.378804 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:57 crc kubenswrapper[4820]: I0203 12:05:57.378818 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:57Z","lastTransitionTime":"2026-02-03T12:05:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:57 crc kubenswrapper[4820]: I0203 12:05:57.481007 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:57 crc kubenswrapper[4820]: I0203 12:05:57.481507 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:57 crc kubenswrapper[4820]: I0203 12:05:57.481663 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:57 crc kubenswrapper[4820]: I0203 12:05:57.481800 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:57 crc kubenswrapper[4820]: I0203 12:05:57.481871 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:57Z","lastTransitionTime":"2026-02-03T12:05:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:05:57 crc kubenswrapper[4820]: I0203 12:05:57.584737 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:57 crc kubenswrapper[4820]: I0203 12:05:57.584767 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:57 crc kubenswrapper[4820]: I0203 12:05:57.584776 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:57 crc kubenswrapper[4820]: I0203 12:05:57.584789 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:57 crc kubenswrapper[4820]: I0203 12:05:57.584797 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:57Z","lastTransitionTime":"2026-02-03T12:05:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:57 crc kubenswrapper[4820]: I0203 12:05:57.687591 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:57 crc kubenswrapper[4820]: I0203 12:05:57.687657 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:57 crc kubenswrapper[4820]: I0203 12:05:57.687670 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:57 crc kubenswrapper[4820]: I0203 12:05:57.687686 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:57 crc kubenswrapper[4820]: I0203 12:05:57.687698 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:57Z","lastTransitionTime":"2026-02-03T12:05:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:57 crc kubenswrapper[4820]: I0203 12:05:57.789971 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:57 crc kubenswrapper[4820]: I0203 12:05:57.790018 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:57 crc kubenswrapper[4820]: I0203 12:05:57.790033 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:57 crc kubenswrapper[4820]: I0203 12:05:57.790050 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:57 crc kubenswrapper[4820]: I0203 12:05:57.790062 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:57Z","lastTransitionTime":"2026-02-03T12:05:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:05:57 crc kubenswrapper[4820]: I0203 12:05:57.893544 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:57 crc kubenswrapper[4820]: I0203 12:05:57.893603 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:57 crc kubenswrapper[4820]: I0203 12:05:57.893669 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:57 crc kubenswrapper[4820]: I0203 12:05:57.893699 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:57 crc kubenswrapper[4820]: I0203 12:05:57.893719 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:57Z","lastTransitionTime":"2026-02-03T12:05:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:57 crc kubenswrapper[4820]: I0203 12:05:57.997060 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:57 crc kubenswrapper[4820]: I0203 12:05:57.997419 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:57 crc kubenswrapper[4820]: I0203 12:05:57.997558 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:57 crc kubenswrapper[4820]: I0203 12:05:57.997695 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:57 crc kubenswrapper[4820]: I0203 12:05:57.997809 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:57Z","lastTransitionTime":"2026-02-03T12:05:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:58 crc kubenswrapper[4820]: I0203 12:05:58.101224 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:58 crc kubenswrapper[4820]: I0203 12:05:58.101295 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:58 crc kubenswrapper[4820]: I0203 12:05:58.101309 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:58 crc kubenswrapper[4820]: I0203 12:05:58.101326 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:58 crc kubenswrapper[4820]: I0203 12:05:58.101339 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:58Z","lastTransitionTime":"2026-02-03T12:05:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:05:58 crc kubenswrapper[4820]: I0203 12:05:58.134398 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 23:55:19.518005231 +0000 UTC Feb 03 12:05:58 crc kubenswrapper[4820]: I0203 12:05:58.141791 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7vz6k" Feb 03 12:05:58 crc kubenswrapper[4820]: E0203 12:05:58.142006 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7vz6k" podUID="6351e457-e601-4889-853c-560646bc4b43" Feb 03 12:05:58 crc kubenswrapper[4820]: I0203 12:05:58.143609 4820 scope.go:117] "RemoveContainer" containerID="ebf188d2f9bb71f6217f5c2fbd0199f0d9539bbc773152b66fb63c41cc52c2bb" Feb 03 12:05:58 crc kubenswrapper[4820]: I0203 12:05:58.203184 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:58 crc kubenswrapper[4820]: I0203 12:05:58.203215 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:58 crc kubenswrapper[4820]: I0203 12:05:58.203222 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:58 crc kubenswrapper[4820]: I0203 12:05:58.203255 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:58 crc kubenswrapper[4820]: I0203 12:05:58.203265 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:58Z","lastTransitionTime":"2026-02-03T12:05:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:58 crc kubenswrapper[4820]: I0203 12:05:58.306241 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:58 crc kubenswrapper[4820]: I0203 12:05:58.306346 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:58 crc kubenswrapper[4820]: I0203 12:05:58.306359 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:58 crc kubenswrapper[4820]: I0203 12:05:58.306375 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:58 crc kubenswrapper[4820]: I0203 12:05:58.306386 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:58Z","lastTransitionTime":"2026-02-03T12:05:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:05:58 crc kubenswrapper[4820]: I0203 12:05:58.408504 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:58 crc kubenswrapper[4820]: I0203 12:05:58.408537 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:58 crc kubenswrapper[4820]: I0203 12:05:58.408548 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:58 crc kubenswrapper[4820]: I0203 12:05:58.408563 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:58 crc kubenswrapper[4820]: I0203 12:05:58.408576 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:58Z","lastTransitionTime":"2026-02-03T12:05:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:58 crc kubenswrapper[4820]: I0203 12:05:58.510389 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:58 crc kubenswrapper[4820]: I0203 12:05:58.510430 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:58 crc kubenswrapper[4820]: I0203 12:05:58.510441 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:58 crc kubenswrapper[4820]: I0203 12:05:58.510457 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:58 crc kubenswrapper[4820]: I0203 12:05:58.510469 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:58Z","lastTransitionTime":"2026-02-03T12:05:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:05:58 crc kubenswrapper[4820]: I0203 12:05:58.539283 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-75mwm_cf99e305-aa5b-4171-94f6-1e64f20414dd/ovnkube-controller/2.log" Feb 03 12:05:58 crc kubenswrapper[4820]: I0203 12:05:58.541921 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-75mwm" event={"ID":"cf99e305-aa5b-4171-94f6-1e64f20414dd","Type":"ContainerStarted","Data":"9298001c19eb268d0ade02c0ee6d5f802cef36d79656754fcf76427fde0706fe"} Feb 03 12:05:58 crc kubenswrapper[4820]: I0203 12:05:58.542344 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-75mwm" Feb 03 12:05:58 crc kubenswrapper[4820]: I0203 12:05:58.556754 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:58Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:58 crc kubenswrapper[4820]: I0203 12:05:58.567831 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:58Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:58 crc kubenswrapper[4820]: I0203 12:05:58.578044 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p5mx8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe0bc53e-6abb-4194-ae3d-109a4fd80372\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9eca081cbf8145867bf5df6d6bacdb9746e546e54aa81874fbfc2927958bc1b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrchn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p5mx8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:58Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:58 crc kubenswrapper[4820]: I0203 12:05:58.591136 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c02def6-29f2-448e-80ec-0c8ee290f053\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1643e498a576d565172492302a641bc81eae658b1611df707da1d833ea2c84f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xjx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e3612cc50c75efc4721ade62e3caf8a6cbcb41c49183b50da3639d7fce97b9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xjx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qj7xr\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:58Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:58 crc kubenswrapper[4820]: I0203 12:05:58.604661 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dkfwm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6da6dd5-2847-482b-adc1-d82ead0af3e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7340b2bec2b0d79865501f1917315f355972bcfc92d098978899c57df3f93454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b50e93f1825fc82dc2e0fc70a417d2e4412db367319656bfcea8fa9daf9fe152\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T12:05:52Z\\\",\\\"message\\\":\\\"2026-02-03T12:05:06+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_496e5650-f16a-459b-af6d-b3ce3817a4a3\\\\n2026-02-03T12:05:06+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_496e5650-f16a-459b-af6d-b3ce3817a4a3 to /host/opt/cni/bin/\\\\n2026-02-03T12:05:07Z [verbose] multus-daemon started\\\\n2026-02-03T12:05:07Z [verbose] Readiness Indicator file check\\\\n2026-02-03T12:05:52Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:05Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpcc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dkfwm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:58Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:58 crc kubenswrapper[4820]: I0203 12:05:58.612364 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:58 crc kubenswrapper[4820]: I0203 12:05:58.612402 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:58 crc kubenswrapper[4820]: I0203 12:05:58.612411 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:58 crc kubenswrapper[4820]: I0203 12:05:58.612426 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:58 crc kubenswrapper[4820]: I0203 12:05:58.612435 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:58Z","lastTransitionTime":"2026-02-03T12:05:58Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:58 crc kubenswrapper[4820]: I0203 12:05:58.618229 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d140e30-6304-49be-a1a3-2d6b23f9aef3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48852428653b274f524077be71e20c4271b365d821b8f3235df2d2b41a9e8af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09641bd2ac341aeecdba4211412070d2ee33f42eb670fe75ab88b9a58931c289\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74f9a4c19a0ffe2d3fc0a77b57873721a29f0a1f70d578ba2f68bcceb68ccce2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-
03T12:04:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://672f46bf03b8e74586edeb641cb45d9d7d8d4389c0b2cbb7a8eec43d6b4d801b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38554e01896341101d88815c0d8cd45a0114da19790e02a53b5ba3a91ee70e37\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"message\\\":\\\"le observer\\\\nW0203 12:05:02.811560 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0203 12:05:02.811703 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 12:05:02.814694 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3622958325/tls.crt::/tmp/serving-cert-3622958325/tls.key\\\\\\\"\\\\nI0203 12:05:03.097815 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0203 12:05:03.106823 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0203 12:05:03.106852 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0203 12:05:03.106874 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0203 12:05:03.106880 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0203 12:05:03.113994 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0203 12:05:03.114027 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0203 12:05:03.114019 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0203 12:05:03.114034 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 12:05:03.114043 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0203 12:05:03.114053 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0203 12:05:03.114057 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0203 12:05:03.114061 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0203 12:05:03.116858 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60992833ab89aebd2446f960ab6ae3565f9a17f4e86921d3b2bc68adc9704640\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b34fc810cf94c235cea364cd2441c73f56597b91b0a19b8e02498db5895a3f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b34fc810cf94c235cea364cd2441c73f56597b91b0a19b8e02498db5895a3f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:04:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:04:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:58Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:58 crc kubenswrapper[4820]: I0203 12:05:58.633939 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"76aa9d47-b80a-4058-8f92-4cdf0c41df48\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47438e7dc8d1eec9a8a061c1b6141e39f4c805dfafd1175c07235f7b9425719b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ade7b4944a0da0f0d18c5dd5a92ef407d6beac17985f1429d57672ff7e9beade\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://555f489476174f52e5f5d252e3f3f86f65e1593e641204d863bebdaa80b49bbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f84ddcad54789a48e27d495da8bf422e2349c09b477421d0323aae520546826\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f84ddcad54789a48e27d495da8bf422e2349c09b477421d0323aae520546826\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:04:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:04:43Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:58Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:58 crc kubenswrapper[4820]: I0203 12:05:58.653098 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0a8e1f86d24e8f09d6250cc9003db189b705c0285fbaab40a8f1a4ac4793137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:58Z is after 
2025-08-24T17:21:41Z" Feb 03 12:05:58 crc kubenswrapper[4820]: I0203 12:05:58.686128 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-75mwm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf99e305-aa5b-4171-94f6-1e64f20414dd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c998566abb62f49bb7527ea2c4b023c87a9e3db89ff5fa25b68896424924ea76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5b0072b2eccb422316702a70fb54f21fcd020e1627a625d1a7c63d46b71e48c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4
e8e47b3a2a50b4eccff391ccb45bf979a5fe46983b014cb03f465db928172ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0defe98bc51b1f567900f2d90138ceec07b4cae414fd3b448df503dba9a9460f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a67f31bdc8d6710cfbca063e2a811d4fbace42a9d3c3a53fba0f47c30ee00e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8b54903e9478f26cfa1770a0e06298325e9b9f198a3aeaa8ed6dcc192c0cdb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9298001c19eb268d0ade02c0ee6d5f802cef36d79656754fcf76427fde0706fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ebf188d2f9bb71f6217f5c2fbd0199f0d9539bbc773152b66fb63c41cc52c2bb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T12:05:32Z\\\",\\\"message\\\":\\\"ig-daemon-qj7xr after 0 failed attempt(s)\\\\nI0203 12:05:32.735113 6493 default_network_controller.go:776] Recording success event on pod openshift-machine-config-operator/machine-config-daemon-qj7xr\\\\nI0203 12:05:32.734944 6493 ovn.go:134] Ensuring zone local for Pod openshift-dns/node-resolver-p5mx8 in node crc\\\\nI0203 12:05:32.735121 6493 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0203 12:05:32.735128 6493 obj_retry.go:386] Retry successful for *v1.Pod openshift-dns/node-resolver-p5mx8 after 0 failed attempt(s)\\\\nI0203 12:05:32.735133 6493 base_network_controller_pods.go:477] [default/openshift-network-diagnostics/network-check-source-55646444c4-trplf] creating logical port openshift-network-diagnostics_network-check-source-55646444c4-trplf for pod on switch crc\\\\nI0203 12:05:32.735160 6493 obj_retry.go:303] Retry object setup: *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8bbpp\\\\nF0203 12:05:32.735195 6493 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:31Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c9f050db727793c24f2f8bf3b387813e313f3b764be003c6849947f98a61d4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"conta
inerID\\\":\\\"cri-o://707b3fd62e0f11f4f22464594158f7f1bbcbb9b37c1e49c70e2a0d1a6b5b07c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://707b3fd62e0f11f4f22464594158f7f1bbcbb9b37c1e49c70e2a0d1a6b5b07c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-75mwm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:58Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:58 crc kubenswrapper[4820]: I0203 12:05:58.696539 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-z8xrk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1cca669-281b-4756-8da8-3860684d3410\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09a38521ccb70b0b152fc51267f6d38cca610e2e8c9d12915482118f3a431b22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dxwqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:09Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-z8xrk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:58Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:58 crc kubenswrapper[4820]: I0203 12:05:58.714710 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:58 crc kubenswrapper[4820]: I0203 12:05:58.714748 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:58 crc kubenswrapper[4820]: I0203 12:05:58.714757 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:58 crc kubenswrapper[4820]: I0203 12:05:58.714771 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:58 crc kubenswrapper[4820]: I0203 12:05:58.714781 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:58Z","lastTransitionTime":"2026-02-03T12:05:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:58 crc kubenswrapper[4820]: I0203 12:05:58.715202 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a4e0669-c148-4af8-99eb-1de50da6d574\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d6f3a37b440a1db10c04386289033a0deab673f71efbdfba81597baf6ce5e86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fec48b5ddf675fb07150ebb88962534caead99f26403cf8ab51c17e2c6c09bf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a4c1209e1301b6e55cd4ea8bb06678ee4c7657c163a610f0cc2dd25f2cb3e61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":
0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75e1f7cf1c2120c222793819705a3b465a387c2aaf33fe146fd46d9c76520aea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12446f64d4189185cf6fe19d38637e090482ef8da5fb5e68600cc00ba37bc2a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bb0a6b382c36acc46be9258e93be71317c26777d770cde9098402054303947c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bb0a6b382c36acc46be9258e93be71317c26777d770cde9098402054303947c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:04:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19153b79978b8e8cc53a7ab9cb95a70c9870c6b0babac711bce13809609f6ac5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"termi
nated\\\":{\\\"containerID\\\":\\\"cri-o://19153b79978b8e8cc53a7ab9cb95a70c9870c6b0babac711bce13809609f6ac5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e75ec4662cf007f64da3ecaaff0cb4c45aa88746ff717403092583356953526\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e75ec4662cf007f64da3ecaaff0cb4c45aa88746ff717403092583356953526\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:04:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:04:43Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:58Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:58 crc kubenswrapper[4820]: I0203 12:05:58.731202 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-b5qz9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d93ec7bc-4029-44a4-894d-03eff1388683\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8c2ef0f942d970aadc1837e5ab6eac2bdee9ab81c7b852514fc142796328a62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b171b1b37fa4899684005dd60f249e4cdc2ac58cb12658abe09d6d17007f3002\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b171b1b37fa4899684005dd60f249e4cdc2ac58cb12658abe09d6d17007f3002\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0a45a325e96479b1bb5913e1a78221c3b88c62fc434441e03d6b2e9ed476700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0a45a325e96479b1bb5913e1a78221c3b88c62fc434441e03d6b2e9ed476700\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a25830d512e5d0dbbfa4775f85af51e80915adca718997b34e926220ff9377d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a25830d512e5d0dbbfa4775f85af51e80915adca718997b34e926220ff9377d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff3da1549bd61df7cc473a1a0a17f147b940c3f88f0b72828357ea7982406088\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff3da1549bd61df7cc473a1a0a17f147b940c3f88f0b72828357ea7982406088\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a1698d43c6b74e8ebb9ff13da372c4025ac2637c1f5a0376e4a998e7bf20359\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a1698d43c6b74e8ebb9ff13da372c4025ac2637c1f5a0376e4a998e7bf20359\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8af3494a09f6242fa8fab8cba8b3db5476625376c0f8f382e2140ca27ef329f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8af3494a09f6242fa8fab8cba8b3db5476625376c0f8f382e2140ca27ef329f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-b5qz9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:58Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:58 crc kubenswrapper[4820]: I0203 12:05:58.741446 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8bbpp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d8005fd9-8efc-4707-a3dd-60cd20607d42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9020e42fc6a5c6c681b1854e93cc1e88f44b7582e28fe1ee40900bb2d39b4008\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xg9sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0333755fe31b49cea4cbfef987450b5800b3330c65cf84070036eea4194c13a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xg9sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8bbpp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:58Z is after 2025-08-24T17:21:41Z" Feb 03 
12:05:58 crc kubenswrapper[4820]: I0203 12:05:58.752320 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7vz6k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6351e457-e601-4889-853c-560646bc4b43\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbjk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbjk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:18Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7vz6k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:58Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:58 crc kubenswrapper[4820]: I0203 12:05:58.766751 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cabb5864-1cc1-4b08-aed3-4eeee9e85bf9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d070e438f7f207c929ca6d110046fa8d320d1a7d97ef22a0e5e1372ff626729d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06e5365791987f1586b93dfe4db8bfe19415d8c400d6c0d75f312cc0f62f7661\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb5d088ef2ec1b90ef2d7af53d61c2789516bf4ef02e9ef06e1dc0c93326dcc9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2783e3045fb1f1e8da75e1a83f88bc0f24ce47eab79fb2b31b794c10f201588\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:04:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:58Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:58 crc kubenswrapper[4820]: I0203 12:05:58.786263 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bbd1daf12ccc77f2658cac173f36c9af6a1c64072fc6bf13302a460f71f05e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5236ae101bb18d9bd1b8d8ac09705f96e5fd78d742e5174e88072570a7448ee8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:58Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:58 crc kubenswrapper[4820]: I0203 12:05:58.797678 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:58Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:58 crc kubenswrapper[4820]: I0203 12:05:58.816972 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:58 crc kubenswrapper[4820]: I0203 12:05:58.817027 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:58 crc kubenswrapper[4820]: I0203 12:05:58.817039 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:58 crc kubenswrapper[4820]: I0203 12:05:58.817057 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:58 crc kubenswrapper[4820]: I0203 12:05:58.817069 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:58Z","lastTransitionTime":"2026-02-03T12:05:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:05:58 crc kubenswrapper[4820]: I0203 12:05:58.817655 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dac06e9d9db1822ef050113f787ff46db678f4916bc9817ac03f61a509a61b6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:58Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:58 crc kubenswrapper[4820]: I0203 12:05:58.919170 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:58 crc kubenswrapper[4820]: I0203 12:05:58.919207 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:58 crc kubenswrapper[4820]: I0203 12:05:58.919218 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:58 crc kubenswrapper[4820]: I0203 12:05:58.919233 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:58 crc kubenswrapper[4820]: I0203 12:05:58.919245 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:58Z","lastTransitionTime":"2026-02-03T12:05:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:05:59 crc kubenswrapper[4820]: I0203 12:05:59.022075 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:59 crc kubenswrapper[4820]: I0203 12:05:59.022118 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:59 crc kubenswrapper[4820]: I0203 12:05:59.022131 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:59 crc kubenswrapper[4820]: I0203 12:05:59.022145 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:59 crc kubenswrapper[4820]: I0203 12:05:59.022154 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:59Z","lastTransitionTime":"2026-02-03T12:05:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:59 crc kubenswrapper[4820]: I0203 12:05:59.123869 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:59 crc kubenswrapper[4820]: I0203 12:05:59.123925 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:59 crc kubenswrapper[4820]: I0203 12:05:59.123939 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:59 crc kubenswrapper[4820]: I0203 12:05:59.123954 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:59 crc kubenswrapper[4820]: I0203 12:05:59.123965 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:59Z","lastTransitionTime":"2026-02-03T12:05:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:59 crc kubenswrapper[4820]: I0203 12:05:59.135241 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 09:27:03.205071131 +0000 UTC Feb 03 12:05:59 crc kubenswrapper[4820]: I0203 12:05:59.141708 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 12:05:59 crc kubenswrapper[4820]: E0203 12:05:59.141856 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 12:05:59 crc kubenswrapper[4820]: I0203 12:05:59.141963 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 12:05:59 crc kubenswrapper[4820]: I0203 12:05:59.141716 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 12:05:59 crc kubenswrapper[4820]: E0203 12:05:59.142100 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 12:05:59 crc kubenswrapper[4820]: E0203 12:05:59.142185 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 12:05:59 crc kubenswrapper[4820]: I0203 12:05:59.225753 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:59 crc kubenswrapper[4820]: I0203 12:05:59.225786 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:59 crc kubenswrapper[4820]: I0203 12:05:59.225795 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:59 crc kubenswrapper[4820]: I0203 12:05:59.225807 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:59 crc kubenswrapper[4820]: I0203 12:05:59.225816 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:59Z","lastTransitionTime":"2026-02-03T12:05:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:05:59 crc kubenswrapper[4820]: I0203 12:05:59.328418 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:59 crc kubenswrapper[4820]: I0203 12:05:59.328461 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:59 crc kubenswrapper[4820]: I0203 12:05:59.328473 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:59 crc kubenswrapper[4820]: I0203 12:05:59.328492 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:59 crc kubenswrapper[4820]: I0203 12:05:59.328504 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:59Z","lastTransitionTime":"2026-02-03T12:05:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:59 crc kubenswrapper[4820]: I0203 12:05:59.431646 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:59 crc kubenswrapper[4820]: I0203 12:05:59.431693 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:59 crc kubenswrapper[4820]: I0203 12:05:59.431707 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:59 crc kubenswrapper[4820]: I0203 12:05:59.431726 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:59 crc kubenswrapper[4820]: I0203 12:05:59.431739 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:59Z","lastTransitionTime":"2026-02-03T12:05:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:59 crc kubenswrapper[4820]: I0203 12:05:59.534697 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:59 crc kubenswrapper[4820]: I0203 12:05:59.534760 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:59 crc kubenswrapper[4820]: I0203 12:05:59.534782 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:59 crc kubenswrapper[4820]: I0203 12:05:59.534844 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:59 crc kubenswrapper[4820]: I0203 12:05:59.534878 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:59Z","lastTransitionTime":"2026-02-03T12:05:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:05:59 crc kubenswrapper[4820]: I0203 12:05:59.546358 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-75mwm_cf99e305-aa5b-4171-94f6-1e64f20414dd/ovnkube-controller/3.log" Feb 03 12:05:59 crc kubenswrapper[4820]: I0203 12:05:59.547038 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-75mwm_cf99e305-aa5b-4171-94f6-1e64f20414dd/ovnkube-controller/2.log" Feb 03 12:05:59 crc kubenswrapper[4820]: I0203 12:05:59.549942 4820 generic.go:334] "Generic (PLEG): container finished" podID="cf99e305-aa5b-4171-94f6-1e64f20414dd" containerID="9298001c19eb268d0ade02c0ee6d5f802cef36d79656754fcf76427fde0706fe" exitCode=1 Feb 03 12:05:59 crc kubenswrapper[4820]: I0203 12:05:59.549976 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-75mwm" event={"ID":"cf99e305-aa5b-4171-94f6-1e64f20414dd","Type":"ContainerDied","Data":"9298001c19eb268d0ade02c0ee6d5f802cef36d79656754fcf76427fde0706fe"} Feb 03 12:05:59 crc kubenswrapper[4820]: I0203 12:05:59.550007 4820 scope.go:117] "RemoveContainer" containerID="ebf188d2f9bb71f6217f5c2fbd0199f0d9539bbc773152b66fb63c41cc52c2bb" Feb 03 12:05:59 crc kubenswrapper[4820]: I0203 12:05:59.550603 4820 scope.go:117] "RemoveContainer" containerID="9298001c19eb268d0ade02c0ee6d5f802cef36d79656754fcf76427fde0706fe" Feb 03 12:05:59 crc kubenswrapper[4820]: E0203 12:05:59.550780 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-75mwm_openshift-ovn-kubernetes(cf99e305-aa5b-4171-94f6-1e64f20414dd)\"" pod="openshift-ovn-kubernetes/ovnkube-node-75mwm" podUID="cf99e305-aa5b-4171-94f6-1e64f20414dd" Feb 03 12:05:59 crc kubenswrapper[4820]: I0203 12:05:59.565590 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:59Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:59 crc kubenswrapper[4820]: I0203 12:05:59.595090 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dac06e9d9db1822ef050113f787ff46db678f4916bc9817ac03f61a509a61b6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:59Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:59 crc kubenswrapper[4820]: I0203 12:05:59.614947 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:59Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:59 crc kubenswrapper[4820]: I0203 12:05:59.627955 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:59Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:59 crc kubenswrapper[4820]: I0203 12:05:59.637929 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:59 crc kubenswrapper[4820]: I0203 12:05:59.637969 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:59 crc kubenswrapper[4820]: I0203 12:05:59.637980 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:59 crc kubenswrapper[4820]: I0203 12:05:59.637996 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:59 crc kubenswrapper[4820]: I0203 12:05:59.638008 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:59Z","lastTransitionTime":"2026-02-03T12:05:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:05:59 crc kubenswrapper[4820]: I0203 12:05:59.639024 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p5mx8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe0bc53e-6abb-4194-ae3d-109a4fd80372\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9eca081cbf8145867bf5df6d6bacdb9746e546e54aa81874fbfc2927958bc1b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrchn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p5mx8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:59Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:59 crc kubenswrapper[4820]: I0203 12:05:59.649517 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c02def6-29f2-448e-80ec-0c8ee290f053\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1643e498a576d565172492302a641bc81eae658b1611df707da1d833ea2c84f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xjx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e3612cc50c75efc4721ade62e3caf8a6cbcb41c49183b50da3639d7fce97b9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xjx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qj7xr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:59Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:59 crc kubenswrapper[4820]: I0203 12:05:59.662386 4820 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-dkfwm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6da6dd5-2847-482b-adc1-d82ead0af3e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7340b2bec2b0d79865501f1917315f355972bcfc92d098978899c57df3f93454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b50e93f1825fc82dc2e0fc70a417d2e4412db367319656bfcea8fa9daf9fe152\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T12:05:52Z\\\",\\\"message\\\":\\\"2026-02-03T12:05:06+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_496e5650-f16a-459b-af6d-b3ce3817a4a3\\\\n2026-02-03T12:05:06+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_496e5650-f16a-459b-af6d-b3ce3817a4a3 to /host/opt/cni/bin/\\\\n2026-02-03T12:05:07Z [verbose] multus-daemon started\\\\n2026-02-03T12:05:07Z [verbose] Readiness Indicator file check\\\\n2026-02-03T12:05:52Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:05Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpcc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dkfwm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:59Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:59 crc kubenswrapper[4820]: I0203 12:05:59.674429 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d140e30-6304-49be-a1a3-2d6b23f9aef3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48852428653b274f524077be71e20c4271b365d821b8f3235df2d2b41a9e8af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09641bd2ac341aeecdba4211412070d2ee33f42eb670fe75ab88b9a58931c289\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74f9a4c19a0ffe2d3fc0a77b57873721a29f0a1f70d578ba2f68bcceb68ccce2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://672f46bf03b8e74586edeb641cb45d9d7d8d4389c0b2cbb7a8eec43d6b4d801b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38554e01896341101d88815c0d8cd45a0114da19790e02a53b5ba3a91ee70e37\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"message\\\":\\\"le observer\\\\nW0203 12:05:02.811560 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0203 12:05:02.811703 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 12:05:02.814694 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3622958325/tls.crt::/tmp/serving-cert-3622958325/tls.key\\\\\\\"\\\\nI0203 12:05:03.097815 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0203 12:05:03.106823 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0203 12:05:03.106852 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0203 12:05:03.106874 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0203 12:05:03.106880 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0203 12:05:03.113994 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0203 12:05:03.114027 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0203 12:05:03.114019 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0203 12:05:03.114034 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 12:05:03.114043 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0203 12:05:03.114053 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0203 12:05:03.114057 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0203 12:05:03.114061 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0203 12:05:03.116858 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60992833ab89aebd2446f960ab6ae3565f9a17f4e86921d3b2bc68adc9704640\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b34fc810cf94c235cea364cd2441c73f56597b91b0a19b8e02498db5895a3f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b34fc810cf94c235cea364cd2441c73f56597b91b0a19b8e02498db5895a3f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:04:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:04:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:59Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:59 crc kubenswrapper[4820]: I0203 12:05:59.684432 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"76aa9d47-b80a-4058-8f92-4cdf0c41df48\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47438e7dc8d1eec9a8a061c1b6141e39f4c805dfafd1175c07235f7b9425719b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ade7b4944a0da0f0d18c5dd5a92ef407d6beac17985f1429d57672ff7e9beade\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://555f489476174f52e5f5d252e3f3f86f65e1593e641204d863bebdaa80b49bbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f84ddcad54789a48e27d495da8bf422e2349c09b477421d0323aae520546826\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f84ddcad54789a48e27d495da8bf422e2349c09b477421d0323aae520546826\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:04:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:04:43Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:59Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:59 crc kubenswrapper[4820]: I0203 12:05:59.695316 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0a8e1f86d24e8f09d6250cc9003db189b705c0285fbaab40a8f1a4ac4793137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:59Z is after 
2025-08-24T17:21:41Z" Feb 03 12:05:59 crc kubenswrapper[4820]: I0203 12:05:59.714717 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-75mwm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf99e305-aa5b-4171-94f6-1e64f20414dd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c998566abb62f49bb7527ea2c4b023c87a9e3db89ff5fa25b68896424924ea76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5b0072b2eccb422316702a70fb54f21fcd020e1627a625d1a7c63d46b71e48c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4
e8e47b3a2a50b4eccff391ccb45bf979a5fe46983b014cb03f465db928172ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0defe98bc51b1f567900f2d90138ceec07b4cae414fd3b448df503dba9a9460f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a67f31bdc8d6710cfbca063e2a811d4fbace42a9d3c3a53fba0f47c30ee00e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8b54903e9478f26cfa1770a0e06298325e9b9f198a3aeaa8ed6dcc192c0cdb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9298001c19eb268d0ade02c0ee6d5f802cef36d79656754fcf76427fde0706fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ebf188d2f9bb71f6217f5c2fbd0199f0d9539bbc773152b66fb63c41cc52c2bb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T12:05:32Z\\\",\\\"message\\\":\\\"ig-daemon-qj7xr after 0 failed attempt(s)\\\\nI0203 12:05:32.735113 6493 default_network_controller.go:776] Recording success event on pod openshift-machine-config-operator/machine-config-daemon-qj7xr\\\\nI0203 12:05:32.734944 6493 ovn.go:134] Ensuring zone local for Pod openshift-dns/node-resolver-p5mx8 in node crc\\\\nI0203 12:05:32.735121 6493 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0203 12:05:32.735128 6493 obj_retry.go:386] Retry successful for *v1.Pod openshift-dns/node-resolver-p5mx8 after 0 failed attempt(s)\\\\nI0203 12:05:32.735133 6493 base_network_controller_pods.go:477] [default/openshift-network-diagnostics/network-check-source-55646444c4-trplf] creating logical port openshift-network-diagnostics_network-check-source-55646444c4-trplf for pod on switch crc\\\\nI0203 12:05:32.735160 6493 obj_retry.go:303] Retry object setup: *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8bbpp\\\\nF0203 12:05:32.735195 6493 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:31Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9298001c19eb268d0ade02c0ee6d5f802cef36d79656754fcf76427fde0706fe\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T12:05:59Z\\\",\\\"message\\\":\\\"ntroller-manager/route-controller-manager for network=default has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nI0203 12:05:59.050731 6902 services_controller.go:444] Built service openshift-operator-lifecycle-manager/olm-operator-metrics LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0203 12:05:59.050738 6902 services_controller.go:445] Built service openshift-operator-lifecycle-manager/olm-operator-metrics LB template configs for network=default: []services.lbConfig(nil)\\\\nI0203 12:05:59.050751 6902 services_controller.go:451] Built service openshift-operator-lifecycle-manager/olm-operator-metrics cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-operator-lifecycle-manager/olm-operator-metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-operator-lifecycle-manager/olm-operator-metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.168\\\\\\\", Port:8443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c9f050db727793c24f2f8bf3b387813e313f3b764be003c6849947f98a61d4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://707b3fd62e0f11f4f22464594158f7f1bbcbb9b37c1e49c70e2a0d1a6b5b07c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099
482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://707b3fd62e0f11f4f22464594158f7f1bbcbb9b37c1e49c70e2a0d1a6b5b07c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-75mwm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:59Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:59 crc kubenswrapper[4820]: I0203 12:05:59.724734 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-z8xrk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1cca669-281b-4756-8da8-3860684d3410\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09a38521ccb70b0b152fc51267f6d38cca610e2e8c9d12915482118f3a431b22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dxwqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.
11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:09Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-z8xrk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:59Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:59 crc kubenswrapper[4820]: I0203 12:05:59.740199 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:59 crc kubenswrapper[4820]: I0203 12:05:59.740247 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:59 crc kubenswrapper[4820]: I0203 12:05:59.740258 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:59 crc kubenswrapper[4820]: I0203 12:05:59.740276 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:59 crc kubenswrapper[4820]: I0203 12:05:59.740287 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:59Z","lastTransitionTime":"2026-02-03T12:05:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:59 crc kubenswrapper[4820]: I0203 12:05:59.753362 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a4e0669-c148-4af8-99eb-1de50da6d574\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d6f3a37b440a1db10c04386289033a0deab673f71efbdfba81597baf6ce5e86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fec48b5ddf675fb07150ebb88962534caead99f26403cf8ab51c17e2c6c09bf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a4c1209e1301b6e55cd4ea8bb06678ee4c7657c163a610f0cc2dd25f2cb3e61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75e1f7cf1c2120c222793819705a3b465a387c2
aaf33fe146fd46d9c76520aea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12446f64d4189185cf6fe19d38637e090482ef8da5fb5e68600cc00ba37bc2a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bb0a6b382c36acc46be9258e93be71317c26777d770cde9098402054303947c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bb0a6b382c36acc46be9258e93be71317c26777d770cde9098402054303947c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:04:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19153b79978b8e8cc53a7ab9cb95a70c9870c6b0babac711bce13809609f6ac5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19153b79978b8e8cc53a7ab9cb95a70c9870c6b0babac711bce13809609f6ac5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e75ec4662cf007f64da3ecaaff0cb4c45aa88746ff717403092583356953526\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e75ec4662cf007f64da3ecaaff0cb4c45aa88746ff717403092583356953526\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:04:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:04:43Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:59Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:59 crc kubenswrapper[4820]: I0203 12:05:59.766577 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-b5qz9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d93ec7bc-4029-44a4-894d-03eff1388683\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8c2ef0f942d970aadc1837e5ab6eac2bdee9ab81c7b852514fc142796328a62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b171b1b37fa4899684005dd60f249e4cdc2ac58cb12658abe09d6d17007f3002\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b171b1b37fa4899684005dd60f249e4cdc2ac58cb12658abe09d6d17007f3002\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0a45a325e96479b1bb5913e1a78221c3b88c62fc434441e03d6b2e9ed476700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0a45a325e96479b1bb5913e1a78221c3b88c62fc434441e03d6b2e9ed476700\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a25830d512e5d0dbbfa4775f85af51e80915adca718997b34e926220ff9377d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a25830d512e5d0dbbfa4775f85af51e80915adca718997b34e926220ff9377d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:08Z\\\",\\\"
reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff3da1549bd61df7cc473a1a0a17f147b940c3f88f0b72828357ea7982406088\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff3da1549bd61df7cc473a1a0a17f147b940c3f88f0b72828357ea7982406088\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a1698d43c6b74e8ebb9ff13da372c4025ac2637c1f5a0376e4a998e7bf20359\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a1698d43c6b74e8ebb9ff13da372c4025ac2637c1f5a0376e4a998e7bf20359\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8af3494a09f6242fa8fab8cba8b3db5476625376c0f8f382e2140ca27ef329f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473
a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8af3494a09f6242fa8fab8cba8b3db5476625376c0f8f382e2140ca27ef329f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-b5qz9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:59Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:59 crc kubenswrapper[4820]: I0203 12:05:59.777218 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8bbpp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d8005fd9-8efc-4707-a3dd-60cd20607d42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9020e42fc6a5c6c681b1854e93cc1e88f44b7582e28fe1ee40900bb2d39b4008\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xg9sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0333755fe31b49cea4cbfef98
7450b5800b3330c65cf84070036eea4194c13a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xg9sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8bbpp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:59Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:59 crc kubenswrapper[4820]: I0203 12:05:59.791407 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7vz6k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6351e457-e601-4889-853c-560646bc4b43\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbjk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbjk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:18Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7vz6k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:59Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:59 crc kubenswrapper[4820]: I0203 12:05:59.802269 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cabb5864-1cc1-4b08-aed3-4eeee9e85bf9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d070e438f7f207c929ca6d110046fa8d320d1a7d97ef22a0e5e1372ff626729d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06e5365791987f1586b93dfe4db8bfe19415d8c400d6c0d75f312cc0f62f7661\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb5d088ef2ec1b90ef2d7af53d61c2789516bf4ef02e9ef06e1dc0c93326dcc9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2783e3045fb1f1e8da75e1a83f88bc0f24ce47eab79fb2b31b794c10f201588\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:04:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:59Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:59 crc kubenswrapper[4820]: I0203 12:05:59.813592 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bbd1daf12ccc77f2658cac173f36c9af6a1c64072fc6bf13302a460f71f05e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5236ae101bb18d9bd1b8d8ac09705f96e5fd78d742e5174e88072570a7448ee8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:59Z is after 2025-08-24T17:21:41Z"
Feb 03 12:05:59 crc kubenswrapper[4820]: I0203 12:05:59.843152 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 12:05:59 crc kubenswrapper[4820]: I0203 12:05:59.843227 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 12:05:59 crc kubenswrapper[4820]: I0203 12:05:59.843326 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 12:05:59 crc kubenswrapper[4820]: I0203 12:05:59.843379 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 12:05:59 crc kubenswrapper[4820]: I0203 12:05:59.843403 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:59Z","lastTransitionTime":"2026-02-03T12:05:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 12:05:59 crc kubenswrapper[4820]: I0203 12:05:59.946022 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 12:05:59 crc kubenswrapper[4820]: I0203 12:05:59.946286 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 12:05:59 crc kubenswrapper[4820]: I0203 12:05:59.946307 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 12:05:59 crc kubenswrapper[4820]: I0203 12:05:59.946327 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 12:05:59 crc kubenswrapper[4820]: I0203 12:05:59.946340 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:59Z","lastTransitionTime":"2026-02-03T12:05:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 12:06:00 crc kubenswrapper[4820]: I0203 12:06:00.048817 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 12:06:00 crc kubenswrapper[4820]: I0203 12:06:00.048861 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 12:06:00 crc kubenswrapper[4820]: I0203 12:06:00.048870 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 12:06:00 crc kubenswrapper[4820]: I0203 12:06:00.048914 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 12:06:00 crc kubenswrapper[4820]: I0203 12:06:00.048927 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:00Z","lastTransitionTime":"2026-02-03T12:06:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 12:06:00 crc kubenswrapper[4820]: I0203 12:06:00.099985 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 12:06:00 crc kubenswrapper[4820]: I0203 12:06:00.100023 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 12:06:00 crc kubenswrapper[4820]: I0203 12:06:00.100033 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 12:06:00 crc kubenswrapper[4820]: I0203 12:06:00.100047 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 12:06:00 crc kubenswrapper[4820]: I0203 12:06:00.100057 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:00Z","lastTransitionTime":"2026-02-03T12:06:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 12:06:00 crc kubenswrapper[4820]: E0203 12:06:00.111876 4820 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:06:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:06:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:06:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:06:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"83c4bcff-fd36-4e8a-96f0-3320ea01106a\\\",\\\"systemUUID\\\":\\\"a4221fcb-5776-4539-8cb5-9da3bff4d7a8\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:00Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:00 crc kubenswrapper[4820]: I0203 12:06:00.115998 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:00 crc kubenswrapper[4820]: I0203 12:06:00.116043 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 03 12:06:00 crc kubenswrapper[4820]: I0203 12:06:00.116062 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:00 crc kubenswrapper[4820]: I0203 12:06:00.116084 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:00 crc kubenswrapper[4820]: I0203 12:06:00.116100 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:00Z","lastTransitionTime":"2026-02-03T12:06:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:00 crc kubenswrapper[4820]: E0203 12:06:00.131519 4820 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:06:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:06:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:06:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:06:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"83c4bcff-fd36-4e8a-96f0-3320ea01106a\\\",\\\"systemUUID\\\":\\\"a4221fcb-5776-4539-8cb5-9da3bff4d7a8\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:00Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:00 crc kubenswrapper[4820]: I0203 12:06:00.135452 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 07:28:36.127987067 +0000 UTC Feb 03 12:06:00 crc kubenswrapper[4820]: I0203 12:06:00.135528 4820 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 12:06:00 crc kubenswrapper[4820]: I0203 12:06:00.135563 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 12:06:00 crc kubenswrapper[4820]: I0203 12:06:00.135573 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 12:06:00 crc kubenswrapper[4820]: I0203 12:06:00.135586 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 12:06:00 crc kubenswrapper[4820]: I0203 12:06:00.135594 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:00Z","lastTransitionTime":"2026-02-03T12:06:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 12:06:00 crc kubenswrapper[4820]: I0203 12:06:00.142473 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7vz6k"
Feb 03 12:06:00 crc kubenswrapper[4820]: E0203 12:06:00.142605 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7vz6k" podUID="6351e457-e601-4889-853c-560646bc4b43"
Feb 03 12:06:00 crc kubenswrapper[4820]: E0203 12:06:00.148929 4820 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:06:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:06:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:06:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:06:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeByt
es\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"83c4bcff-fd36-4e8a-96f0-3320ea01106a\\\",\\\"systemUUID\\\":\\\"a4221fcb-5776-4539-8cb5-9da3bff4d7a8\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:00Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:00 crc kubenswrapper[4820]: I0203 12:06:00.152425 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:00 crc kubenswrapper[4820]: I0203 12:06:00.152458 4820 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:00 crc kubenswrapper[4820]: I0203 12:06:00.152467 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:00 crc kubenswrapper[4820]: I0203 12:06:00.152480 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:00 crc kubenswrapper[4820]: I0203 12:06:00.152489 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:00Z","lastTransitionTime":"2026-02-03T12:06:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:00 crc kubenswrapper[4820]: E0203 12:06:00.162852 4820 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:06:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:06:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:06:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:06:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"83c4bcff-fd36-4e8a-96f0-3320ea01106a\\\",\\\"systemUUID\\\":\\\"a4221fcb-5776-4539-8cb5-9da3bff4d7a8\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:00Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:00 crc kubenswrapper[4820]: I0203 12:06:00.166701 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:00 crc kubenswrapper[4820]: I0203 12:06:00.166764 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 03 12:06:00 crc kubenswrapper[4820]: I0203 12:06:00.166783 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:00 crc kubenswrapper[4820]: I0203 12:06:00.166804 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:00 crc kubenswrapper[4820]: I0203 12:06:00.166818 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:00Z","lastTransitionTime":"2026-02-03T12:06:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:00 crc kubenswrapper[4820]: E0203 12:06:00.179378 4820 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:06:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:06:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:06:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:06:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"83c4bcff-fd36-4e8a-96f0-3320ea01106a\\\",\\\"systemUUID\\\":\\\"a4221fcb-5776-4539-8cb5-9da3bff4d7a8\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:00Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:00 crc kubenswrapper[4820]: E0203 12:06:00.179559 4820 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 03 12:06:00 crc kubenswrapper[4820]: I0203 12:06:00.181299 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 03 12:06:00 crc kubenswrapper[4820]: I0203 12:06:00.181358 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:00 crc kubenswrapper[4820]: I0203 12:06:00.181375 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:00 crc kubenswrapper[4820]: I0203 12:06:00.181400 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:00 crc kubenswrapper[4820]: I0203 12:06:00.181416 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:00Z","lastTransitionTime":"2026-02-03T12:06:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:00 crc kubenswrapper[4820]: I0203 12:06:00.283368 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:00 crc kubenswrapper[4820]: I0203 12:06:00.283452 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:00 crc kubenswrapper[4820]: I0203 12:06:00.283478 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:00 crc kubenswrapper[4820]: I0203 12:06:00.283510 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:00 crc kubenswrapper[4820]: I0203 12:06:00.283533 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:00Z","lastTransitionTime":"2026-02-03T12:06:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:00 crc kubenswrapper[4820]: I0203 12:06:00.386368 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:00 crc kubenswrapper[4820]: I0203 12:06:00.386414 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:00 crc kubenswrapper[4820]: I0203 12:06:00.386429 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:00 crc kubenswrapper[4820]: I0203 12:06:00.386449 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:00 crc kubenswrapper[4820]: I0203 12:06:00.386464 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:00Z","lastTransitionTime":"2026-02-03T12:06:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Feb 03 12:06:00 crc kubenswrapper[4820]: I0203 12:06:00.489057 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 12:06:00 crc kubenswrapper[4820]: I0203 12:06:00.489134 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 12:06:00 crc kubenswrapper[4820]: I0203 12:06:00.489150 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 12:06:00 crc kubenswrapper[4820]: I0203 12:06:00.489172 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 12:06:00 crc kubenswrapper[4820]: I0203 12:06:00.489188 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:00Z","lastTransitionTime":"2026-02-03T12:06:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 12:06:00 crc kubenswrapper[4820]: I0203 12:06:00.554912 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-75mwm_cf99e305-aa5b-4171-94f6-1e64f20414dd/ovnkube-controller/3.log"
Feb 03 12:06:00 crc kubenswrapper[4820]: I0203 12:06:00.557918 4820 scope.go:117] "RemoveContainer" containerID="9298001c19eb268d0ade02c0ee6d5f802cef36d79656754fcf76427fde0706fe"
Feb 03 12:06:00 crc kubenswrapper[4820]: E0203 12:06:00.558052 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-75mwm_openshift-ovn-kubernetes(cf99e305-aa5b-4171-94f6-1e64f20414dd)\"" pod="openshift-ovn-kubernetes/ovnkube-node-75mwm" podUID="cf99e305-aa5b-4171-94f6-1e64f20414dd"
Feb 03 12:06:00 crc kubenswrapper[4820]: I0203 12:06:00.571143 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted.
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:00Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:00 crc kubenswrapper[4820]: I0203 12:06:00.583665 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dac06e9d9db1822ef050113f787ff46db678f4916bc9817ac03f61a509a61b6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:00Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:00 crc kubenswrapper[4820]: I0203 12:06:00.590956 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:00 crc kubenswrapper[4820]: I0203 12:06:00.590992 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 03 12:06:00 crc kubenswrapper[4820]: I0203 12:06:00.591001 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:00 crc kubenswrapper[4820]: I0203 12:06:00.591017 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:00 crc kubenswrapper[4820]: I0203 12:06:00.591031 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:00Z","lastTransitionTime":"2026-02-03T12:06:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:00 crc kubenswrapper[4820]: I0203 12:06:00.598266 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dkfwm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6da6dd5-2847-482b-adc1-d82ead0af3e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7340b2bec2b0d79865501f1917315f355972bcfc92d098978899c57df3f93454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b50e93f1825fc82dc2e0fc70a417d2e4412db367319656bfcea8fa9daf9fe152\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T12:05:52Z\\\",\\\"message\\\":\\\"2026-02-03T12:05:06+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_496e5650-f16a-459b-af6d-b3ce3817a4a3\\\\n2026-02-03T12:05:06+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_496e5650-f16a-459b-af6d-b3ce3817a4a3 to /host/opt/cni/bin/\\\\n2026-02-03T12:05:07Z [verbose] multus-daemon started\\\\n2026-02-03T12:05:07Z [verbose] Readiness Indicator file check\\\\n2026-02-03T12:05:52Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:05Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpcc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dkfwm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:00Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:00 crc kubenswrapper[4820]: I0203 12:06:00.610642 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d140e30-6304-49be-a1a3-2d6b23f9aef3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48852428653b274f524077be71e20c4271b365d821b8f3235df2d2b41a9e8af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09641bd2ac341aeecdba4211412070d2ee33f42eb670fe75ab88b9a58931c289\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74f9a4c19a0ffe2d3fc0a77b57873721a29f0a1f70d578ba2f68bcceb68ccce2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://672f46bf03b8e74586edeb641cb45d9d7d8d4389c0b2cbb7a8eec43d6b4d801b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38554e01896341101d88815c0d8cd45a0114da19790e02a53b5ba3a91ee70e37\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"message\\\":\\\"le observer\\\\nW0203 12:05:02.811560 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0203 12:05:02.811703 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 12:05:02.814694 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3622958325/tls.crt::/tmp/serving-cert-3622958325/tls.key\\\\\\\"\\\\nI0203 12:05:03.097815 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0203 12:05:03.106823 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0203 12:05:03.106852 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0203 12:05:03.106874 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0203 12:05:03.106880 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0203 12:05:03.113994 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0203 12:05:03.114027 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0203 12:05:03.114019 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0203 12:05:03.114034 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 12:05:03.114043 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0203 12:05:03.114053 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0203 12:05:03.114057 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0203 12:05:03.114061 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0203 12:05:03.116858 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60992833ab89aebd2446f960ab6ae3565f9a17f4e86921d3b2bc68adc9704640\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b34fc810cf94c235cea364cd2441c73f56597b91b0a19b8e02498db5895a3f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b34fc810cf94c235cea364cd2441c73f56597b91b0a19b8e02498db5895a3f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:04:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:04:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:00Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:00 crc kubenswrapper[4820]: I0203 12:06:00.623542 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"76aa9d47-b80a-4058-8f92-4cdf0c41df48\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47438e7dc8d1eec9a8a061c1b6141e39f4c805dfafd1175c07235f7b9425719b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ade7b4944a0da0f0d18c5dd5a92ef407d6beac17985f1429d57672ff7e9beade\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://555f489476174f52e5f5d252e3f3f86f65e1593e641204d863bebdaa80b49bbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f84ddcad54789a48e27d495da8bf422e2349c09b477421d0323aae520546826\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f84ddcad54789a48e27d495da8bf422e2349c09b477421d0323aae520546826\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:04:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:04:43Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:00Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:00 crc kubenswrapper[4820]: I0203 12:06:00.637202 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0a8e1f86d24e8f09d6250cc9003db189b705c0285fbaab40a8f1a4ac4793137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:00Z is after 
2025-08-24T17:21:41Z" Feb 03 12:06:00 crc kubenswrapper[4820]: I0203 12:06:00.650145 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:00Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:00 crc kubenswrapper[4820]: I0203 12:06:00.690751 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:00Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:00 crc kubenswrapper[4820]: I0203 12:06:00.693832 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:00 crc kubenswrapper[4820]: I0203 12:06:00.693855 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:00 crc kubenswrapper[4820]: I0203 12:06:00.693863 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:00 crc kubenswrapper[4820]: I0203 12:06:00.693875 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:00 crc kubenswrapper[4820]: I0203 12:06:00.693899 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:00Z","lastTransitionTime":"2026-02-03T12:06:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:06:00 crc kubenswrapper[4820]: I0203 12:06:00.701993 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p5mx8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe0bc53e-6abb-4194-ae3d-109a4fd80372\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9eca081cbf8145867bf5df6d6bacdb9746e546e54aa81874fbfc2927958bc1b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrchn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p5mx8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:00Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:00 crc kubenswrapper[4820]: I0203 12:06:00.712768 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c02def6-29f2-448e-80ec-0c8ee290f053\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1643e498a576d565172492302a641bc81eae658b1611df707da1d833ea2c84f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xjx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e3612cc50c75efc4721ade62e3caf8a6cbcb41c49183b50da3639d7fce97b9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xjx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qj7xr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:00Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:00 crc kubenswrapper[4820]: I0203 12:06:00.729985 4820 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-75mwm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf99e305-aa5b-4171-94f6-1e64f20414dd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c998566abb62f49bb7527ea2c4b023c87a9e3db89ff5fa25b68896424924ea76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5b0072b2eccb422316702a70fb54f21fcd020e1627a625d1a7c63d46b71e48c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e8e47b3a2a50b4eccff391ccb45bf979a5fe46983b014cb03f465db928172ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0defe98bc51b1f567900f2d90138ceec07b4cae414fd3b448df503dba9a9460f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a67f31bdc8d6710cfbca063e2a811d4fbace42a9d3c3a53fba0f47c30ee00e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8b54903e9478f26cfa1770a0e06298325e9b9f198a3aeaa8ed6dcc192c0cdb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9298001c19eb268d0ade02c0ee6d5f802cef36d79656754fcf76427fde0706fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9298001c19eb268d0ade02c0ee6d5f802cef36d79656754fcf76427fde0706fe\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T12:05:59Z\\\",\\\"message\\\":\\\"ntroller-manager/route-controller-manager for network=default has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nI0203 12:05:59.050731 6902 services_controller.go:444] Built service openshift-operator-lifecycle-manager/olm-operator-metrics LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0203 12:05:59.050738 6902 services_controller.go:445] Built service openshift-operator-lifecycle-manager/olm-operator-metrics LB template configs for network=default: []services.lbConfig(nil)\\\\nI0203 12:05:59.050751 6902 services_controller.go:451] Built service openshift-operator-lifecycle-manager/olm-operator-metrics cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-operator-lifecycle-manager/olm-operator-metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-operator-lifecycle-manager/olm-operator-metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.168\\\\\\\", Port:8443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:58Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-75mwm_openshift-ovn-kubernetes(cf99e305-aa5b-4171-94f6-1e64f20414dd)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c9f050db727793c24f2f8bf3b387813e313f3b764be003c6849947f98a61d4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://707b3fd62e0f11f4f22464594158f7f1bbcbb9b37c1e49c70e2a0d1a6b5b07c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://707b3fd62e0f11f4f22464594158f7f1bbcbb9b37c1e49c70e2a0d1a6b5b07c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-75mwm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:00Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:00 crc kubenswrapper[4820]: I0203 12:06:00.740383 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-z8xrk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1cca669-281b-4756-8da8-3860684d3410\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09a38521ccb70b0b152fc51267f6d38cca610e2e8c9d12915482118f3a431b22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dxwqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:09Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-z8xrk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:00Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:00 crc kubenswrapper[4820]: I0203 12:06:00.758942 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a4e0669-c148-4af8-99eb-1de50da6d574\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d6f3a37b440a1db10c04386289033a0deab673f71efbdfba81597baf6ce5e86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fec48b5ddf675fb07150ebb88962534caead99f26403cf8ab51c17e2c6c09bf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"co
ntainerID\\\":\\\"cri-o://3a4c1209e1301b6e55cd4ea8bb06678ee4c7657c163a610f0cc2dd25f2cb3e61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75e1f7cf1c2120c222793819705a3b465a387c2aaf33fe146fd46d9c76520aea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12446f64d4189185cf6fe19d38637e090482ef8da5fb5e68600cc00ba37bc2a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bb0a6b382c36acc46be9258e93be71317c26777d770cde9098402054303947c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bb0a6b382c36acc46be9258e93be71317c26777d770cde9098402054303947c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:04:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19153b79978b8e8cc53a7ab9cb95a70
c9870c6b0babac711bce13809609f6ac5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19153b79978b8e8cc53a7ab9cb95a70c9870c6b0babac711bce13809609f6ac5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e75ec4662cf007f64da3ecaaff0cb4c45aa88746ff717403092583356953526\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e75ec4662cf007f64da3ecaaff0cb4c45aa88746ff717403092583356953526\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:04:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:04:43Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:00Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:00 crc kubenswrapper[4820]: I0203 12:06:00.771412 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cabb5864-1cc1-4b08-aed3-4eeee9e85bf9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d070e438f7f207c929ca6d110046fa8d320d1a7d97ef22a0e5e1372ff626729d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06e5365791987f1586b93dfe4db8bfe19415d8c400d6c0d75f312cc0f62f7661\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb5d088ef2ec1b90ef2d7af53d61c2789516bf4ef02e9ef06e1dc0c93326dcc9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2783e3045fb1f1e8da75e1a83f88bc0f24ce47eab79fb2b31b794c10f201588\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:04:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:00Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:00 crc kubenswrapper[4820]: I0203 12:06:00.783114 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bbd1daf12ccc77f2658cac173f36c9af6a1c64072fc6bf13302a460f71f05e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5236ae101bb18d9bd1b8d8ac09705f96e5fd78d742e5174e88072570a7448ee8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:00Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:00 crc kubenswrapper[4820]: I0203 12:06:00.795947 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:00 crc kubenswrapper[4820]: I0203 12:06:00.795982 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:00 crc kubenswrapper[4820]: I0203 12:06:00.795990 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:00 crc kubenswrapper[4820]: I0203 12:06:00.796003 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:00 crc kubenswrapper[4820]: I0203 12:06:00.796012 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:00Z","lastTransitionTime":"2026-02-03T12:06:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:06:00 crc kubenswrapper[4820]: I0203 12:06:00.798187 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-b5qz9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d93ec7bc-4029-44a4-894d-03eff1388683\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8c2ef0f942d970aadc1837e5ab6eac2bdee9ab81c7b852514fc142796328a62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b171b1b37fa4899684005dd60f249e4cdc2ac58cb12658abe09d6d17007f3002\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b171b1b37fa4899684005dd60f249e4cdc2ac58cb12658abe09d6d17007f3002\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0a45a325e96479b1bb5913e1a78221c3b88c62fc434441e03d6b2e9ed476700\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0a45a325e96479b1bb5913e1a78221c3b88c62fc434441e03d6b2e9ed476700\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a25830d512e5d0dbbfa4775f85af51e80915adca718997b34e926220ff9377d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a25830d512e5d0dbbfa4775f85af51e80915adca718997b34e926220ff9377d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff3da1549bd61df7cc473a1a0a17f147b940c3f88f0b72828357ea7982406088\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff3da1549bd61df7cc473a1a0a17f147b940c3f88f0b72828357ea7982406088\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a1698d43c6b74e8ebb9ff13da372c4025ac2637c1f5a0376e4a998e7bf20359\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a1698d43c6b74e8ebb9ff13da372c4025ac2637c1f5a0376e4a998e7bf20359\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8af3494a09f6242fa8fab8cba8b3db5476625376c0f8f382e2140ca27ef329f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8af3494a09f6242fa8fab8cba8b3db5476625376c0f8f382e2140ca27ef329f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-b5qz9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:00Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:00 crc kubenswrapper[4820]: I0203 12:06:00.808041 4820 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8bbpp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d8005fd9-8efc-4707-a3dd-60cd20607d42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9020e42fc6a5c6c681b1854e93cc1e88f44b7582e28fe1ee40900bb2d39b4008\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xg9sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0333755fe31b49cea4cbfef987450b5800b3330c65cf84070036eea4194c13a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xg9sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8bbpp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-03T12:06:00Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:00 crc kubenswrapper[4820]: I0203 12:06:00.817534 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7vz6k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6351e457-e601-4889-853c-560646bc4b43\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbjk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbjk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:18Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7vz6k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:00Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:00 crc kubenswrapper[4820]: I0203 12:06:00.898109 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:00 crc 
Feb 03 12:06:00 crc kubenswrapper[4820]: I0203 12:06:00.898139 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 12:06:00 crc kubenswrapper[4820]: I0203 12:06:00.898147 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 12:06:00 crc kubenswrapper[4820]: I0203 12:06:00.898159 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 12:06:00 crc kubenswrapper[4820]: I0203 12:06:00.898168 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:00Z","lastTransitionTime":"2026-02-03T12:06:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 12:06:01 crc kubenswrapper[4820]: I0203 12:06:01.000648 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 12:06:01 crc kubenswrapper[4820]: I0203 12:06:01.000687 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 12:06:01 crc kubenswrapper[4820]: I0203 12:06:01.000698 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 12:06:01 crc kubenswrapper[4820]: I0203 12:06:01.000713 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 12:06:01 crc kubenswrapper[4820]: I0203 12:06:01.000724 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:01Z","lastTransitionTime":"2026-02-03T12:06:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 12:06:01 crc kubenswrapper[4820]: I0203 12:06:01.104050 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 12:06:01 crc kubenswrapper[4820]: I0203 12:06:01.104098 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 12:06:01 crc kubenswrapper[4820]: I0203 12:06:01.104110 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 12:06:01 crc kubenswrapper[4820]: I0203 12:06:01.104127 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 12:06:01 crc kubenswrapper[4820]: I0203 12:06:01.104139 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:01Z","lastTransitionTime":"2026-02-03T12:06:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 12:06:01 crc kubenswrapper[4820]: I0203 12:06:01.135553 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 16:13:02.63534409 +0000 UTC
Feb 03 12:06:01 crc kubenswrapper[4820]: I0203 12:06:01.142660 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 03 12:06:01 crc kubenswrapper[4820]: I0203 12:06:01.142818 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 03 12:06:01 crc kubenswrapper[4820]: E0203 12:06:01.142962 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 03 12:06:01 crc kubenswrapper[4820]: E0203 12:06:01.142813 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 03 12:06:01 crc kubenswrapper[4820]: I0203 12:06:01.142671 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 03 12:06:01 crc kubenswrapper[4820]: E0203 12:06:01.143040 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 03 12:06:01 crc kubenswrapper[4820]: I0203 12:06:01.205977 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 12:06:01 crc kubenswrapper[4820]: I0203 12:06:01.206013 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 12:06:01 crc kubenswrapper[4820]: I0203 12:06:01.206023 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 12:06:01 crc kubenswrapper[4820]: I0203 12:06:01.206039 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 12:06:01 crc kubenswrapper[4820]: I0203 12:06:01.206050 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:01Z","lastTransitionTime":"2026-02-03T12:06:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 12:06:01 crc kubenswrapper[4820]: I0203 12:06:01.308697 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 12:06:01 crc kubenswrapper[4820]: I0203 12:06:01.308735 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 12:06:01 crc kubenswrapper[4820]: I0203 12:06:01.308743 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 12:06:01 crc kubenswrapper[4820]: I0203 12:06:01.308756 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 12:06:01 crc kubenswrapper[4820]: I0203 12:06:01.308765 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:01Z","lastTransitionTime":"2026-02-03T12:06:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 12:06:01 crc kubenswrapper[4820]: I0203 12:06:01.414552 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 12:06:01 crc kubenswrapper[4820]: I0203 12:06:01.414589 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 12:06:01 crc kubenswrapper[4820]: I0203 12:06:01.414598 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 12:06:01 crc kubenswrapper[4820]: I0203 12:06:01.414611 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 12:06:01 crc kubenswrapper[4820]: I0203 12:06:01.414624 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:01Z","lastTransitionTime":"2026-02-03T12:06:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 12:06:01 crc kubenswrapper[4820]: I0203 12:06:01.517517 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 12:06:01 crc kubenswrapper[4820]: I0203 12:06:01.517590 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 12:06:01 crc kubenswrapper[4820]: I0203 12:06:01.517614 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 12:06:01 crc kubenswrapper[4820]: I0203 12:06:01.517642 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 12:06:01 crc kubenswrapper[4820]: I0203 12:06:01.517665 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:01Z","lastTransitionTime":"2026-02-03T12:06:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 12:06:01 crc kubenswrapper[4820]: I0203 12:06:01.620033 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 12:06:01 crc kubenswrapper[4820]: I0203 12:06:01.620070 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 12:06:01 crc kubenswrapper[4820]: I0203 12:06:01.620078 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 12:06:01 crc kubenswrapper[4820]: I0203 12:06:01.620094 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 12:06:01 crc kubenswrapper[4820]: I0203 12:06:01.620111 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:01Z","lastTransitionTime":"2026-02-03T12:06:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 12:06:01 crc kubenswrapper[4820]: I0203 12:06:01.722850 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 12:06:01 crc kubenswrapper[4820]: I0203 12:06:01.722968 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 12:06:01 crc kubenswrapper[4820]: I0203 12:06:01.722987 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 12:06:01 crc kubenswrapper[4820]: I0203 12:06:01.723004 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 12:06:01 crc kubenswrapper[4820]: I0203 12:06:01.723016 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:01Z","lastTransitionTime":"2026-02-03T12:06:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 12:06:01 crc kubenswrapper[4820]: I0203 12:06:01.825339 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 12:06:01 crc kubenswrapper[4820]: I0203 12:06:01.825408 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 12:06:01 crc kubenswrapper[4820]: I0203 12:06:01.825420 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 12:06:01 crc kubenswrapper[4820]: I0203 12:06:01.825435 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 12:06:01 crc kubenswrapper[4820]: I0203 12:06:01.825448 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:01Z","lastTransitionTime":"2026-02-03T12:06:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 12:06:01 crc kubenswrapper[4820]: I0203 12:06:01.927943 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 12:06:01 crc kubenswrapper[4820]: I0203 12:06:01.927971 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 12:06:01 crc kubenswrapper[4820]: I0203 12:06:01.927980 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 12:06:01 crc kubenswrapper[4820]: I0203 12:06:01.927995 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 12:06:01 crc kubenswrapper[4820]: I0203 12:06:01.928007 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:01Z","lastTransitionTime":"2026-02-03T12:06:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 12:06:02 crc kubenswrapper[4820]: I0203 12:06:02.030664 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 12:06:02 crc kubenswrapper[4820]: I0203 12:06:02.030727 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 12:06:02 crc kubenswrapper[4820]: I0203 12:06:02.030739 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 12:06:02 crc kubenswrapper[4820]: I0203 12:06:02.030760 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 12:06:02 crc kubenswrapper[4820]: I0203 12:06:02.030774 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:02Z","lastTransitionTime":"2026-02-03T12:06:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 12:06:02 crc kubenswrapper[4820]: I0203 12:06:02.133854 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 12:06:02 crc kubenswrapper[4820]: I0203 12:06:02.133905 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 12:06:02 crc kubenswrapper[4820]: I0203 12:06:02.133916 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 12:06:02 crc kubenswrapper[4820]: I0203 12:06:02.133930 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 12:06:02 crc kubenswrapper[4820]: I0203 12:06:02.133941 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:02Z","lastTransitionTime":"2026-02-03T12:06:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 12:06:02 crc kubenswrapper[4820]: I0203 12:06:02.136292 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 13:31:12.829575103 +0000 UTC
Feb 03 12:06:02 crc kubenswrapper[4820]: I0203 12:06:02.141585 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7vz6k"
Feb 03 12:06:02 crc kubenswrapper[4820]: E0203 12:06:02.141860 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7vz6k" podUID="6351e457-e601-4889-853c-560646bc4b43"
Feb 03 12:06:02 crc kubenswrapper[4820]: I0203 12:06:02.155232 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"]
Feb 03 12:06:02 crc kubenswrapper[4820]: I0203 12:06:02.236917 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 12:06:02 crc kubenswrapper[4820]: I0203 12:06:02.236941 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 12:06:02 crc kubenswrapper[4820]: I0203 12:06:02.236949 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 12:06:02 crc kubenswrapper[4820]: I0203 12:06:02.236962 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 12:06:02 crc kubenswrapper[4820]: I0203 12:06:02.236971 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:02Z","lastTransitionTime":"2026-02-03T12:06:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 12:06:02 crc kubenswrapper[4820]: I0203 12:06:02.339514 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 12:06:02 crc kubenswrapper[4820]: I0203 12:06:02.339562 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 12:06:02 crc kubenswrapper[4820]: I0203 12:06:02.339573 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 12:06:02 crc kubenswrapper[4820]: I0203 12:06:02.339590 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 12:06:02 crc kubenswrapper[4820]: I0203 12:06:02.339601 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:02Z","lastTransitionTime":"2026-02-03T12:06:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 12:06:02 crc kubenswrapper[4820]: I0203 12:06:02.442084 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 12:06:02 crc kubenswrapper[4820]: I0203 12:06:02.442147 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 12:06:02 crc kubenswrapper[4820]: I0203 12:06:02.442158 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 12:06:02 crc kubenswrapper[4820]: I0203 12:06:02.442172 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 12:06:02 crc kubenswrapper[4820]: I0203 12:06:02.442180 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:02Z","lastTransitionTime":"2026-02-03T12:06:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 12:06:02 crc kubenswrapper[4820]: I0203 12:06:02.544475 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 12:06:02 crc kubenswrapper[4820]: I0203 12:06:02.544543 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 12:06:02 crc kubenswrapper[4820]: I0203 12:06:02.544551 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 12:06:02 crc kubenswrapper[4820]: I0203 12:06:02.544564 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 12:06:02 crc kubenswrapper[4820]: I0203 12:06:02.544573 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:02Z","lastTransitionTime":"2026-02-03T12:06:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 12:06:02 crc kubenswrapper[4820]: I0203 12:06:02.646778 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 12:06:02 crc kubenswrapper[4820]: I0203 12:06:02.646826 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 12:06:02 crc kubenswrapper[4820]: I0203 12:06:02.646834 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 12:06:02 crc kubenswrapper[4820]: I0203 12:06:02.646847 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 12:06:02 crc kubenswrapper[4820]: I0203 12:06:02.646857 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:02Z","lastTransitionTime":"2026-02-03T12:06:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 12:06:02 crc kubenswrapper[4820]: I0203 12:06:02.749300 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 12:06:02 crc kubenswrapper[4820]: I0203 12:06:02.749336 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 12:06:02 crc kubenswrapper[4820]: I0203 12:06:02.749344 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 12:06:02 crc kubenswrapper[4820]: I0203 12:06:02.749358 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 12:06:02 crc kubenswrapper[4820]: I0203 12:06:02.749367 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:02Z","lastTransitionTime":"2026-02-03T12:06:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 12:06:02 crc kubenswrapper[4820]: I0203 12:06:02.852443 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 12:06:02 crc kubenswrapper[4820]: I0203 12:06:02.852481 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 12:06:02 crc kubenswrapper[4820]: I0203 12:06:02.852509 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 12:06:02 crc kubenswrapper[4820]: I0203 12:06:02.852526 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 12:06:02 crc kubenswrapper[4820]: I0203 12:06:02.852538 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:02Z","lastTransitionTime":"2026-02-03T12:06:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 12:06:02 crc kubenswrapper[4820]: I0203 12:06:02.955394 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 12:06:02 crc kubenswrapper[4820]: I0203 12:06:02.955457 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 12:06:02 crc kubenswrapper[4820]: I0203 12:06:02.955473 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 12:06:02 crc kubenswrapper[4820]: I0203 12:06:02.955498 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 12:06:02 crc kubenswrapper[4820]: I0203 12:06:02.955514 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:02Z","lastTransitionTime":"2026-02-03T12:06:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 12:06:03 crc kubenswrapper[4820]: I0203 12:06:03.057533 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 12:06:03 crc kubenswrapper[4820]: I0203 12:06:03.057598 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 12:06:03 crc kubenswrapper[4820]: I0203 12:06:03.057615 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 12:06:03 crc kubenswrapper[4820]: I0203 12:06:03.057637 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 12:06:03 crc kubenswrapper[4820]: I0203 12:06:03.057654 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:03Z","lastTransitionTime":"2026-02-03T12:06:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 12:06:03 crc kubenswrapper[4820]: I0203 12:06:03.136682 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 11:18:41.762860403 +0000 UTC
Feb 03 12:06:03 crc kubenswrapper[4820]: I0203 12:06:03.142312 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 03 12:06:03 crc kubenswrapper[4820]: I0203 12:06:03.142366 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 03 12:06:03 crc kubenswrapper[4820]: I0203 12:06:03.142483 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 03 12:06:03 crc kubenswrapper[4820]: E0203 12:06:03.142473 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 03 12:06:03 crc kubenswrapper[4820]: E0203 12:06:03.142601 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 03 12:06:03 crc kubenswrapper[4820]: E0203 12:06:03.142697 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 12:06:03 crc kubenswrapper[4820]: I0203 12:06:03.156299 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:03Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:03 crc kubenswrapper[4820]: I0203 12:06:03.159985 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:03 crc kubenswrapper[4820]: I0203 12:06:03.160031 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:03 crc kubenswrapper[4820]: I0203 12:06:03.160041 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:03 crc kubenswrapper[4820]: I0203 12:06:03.160056 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:03 crc kubenswrapper[4820]: I0203 12:06:03.160068 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:03Z","lastTransitionTime":"2026-02-03T12:06:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:03 crc kubenswrapper[4820]: I0203 12:06:03.172581 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dac06e9d9db1822ef050113f787ff46db678f4916bc9817ac03f61a509a61b6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:03Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:03 crc kubenswrapper[4820]: I0203 12:06:03.185743 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p5mx8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe0bc53e-6abb-4194-ae3d-109a4fd80372\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9eca081cbf8145867bf5df6d6bacdb9746e546e54aa81874fbfc2927958bc1b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrchn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p5mx8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:03Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:03 crc kubenswrapper[4820]: I0203 12:06:03.198956 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c02def6-29f2-448e-80ec-0c8ee290f053\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1643e498a576d565172492302a641bc81eae658b1611df707da1d833ea2c84f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xjx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e3612cc50c75efc4721ade62e3caf8a6cbcb41c49183b50da3639d7fce97b9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xjx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qj7xr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:03Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:03 crc kubenswrapper[4820]: I0203 12:06:03.215008 4820 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-dkfwm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6da6dd5-2847-482b-adc1-d82ead0af3e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7340b2bec2b0d79865501f1917315f355972bcfc92d098978899c57df3f93454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b50e93f1825fc82dc2e0fc70a417d2e4412db367319656bfcea8fa9daf9fe152\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T12:05:52Z\\\",\\\"message\\\":\\\"2026-02-03T12:05:06+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_496e5650-f16a-459b-af6d-b3ce3817a4a3\\\\n2026-02-03T12:05:06+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_496e5650-f16a-459b-af6d-b3ce3817a4a3 to /host/opt/cni/bin/\\\\n2026-02-03T12:05:07Z [verbose] multus-daemon started\\\\n2026-02-03T12:05:07Z [verbose] Readiness Indicator file check\\\\n2026-02-03T12:05:52Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:05Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpcc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dkfwm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:03Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:03 crc kubenswrapper[4820]: I0203 12:06:03.229451 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d140e30-6304-49be-a1a3-2d6b23f9aef3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48852428653b274f524077be71e20c4271b365d821b8f3235df2d2b41a9e8af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09641bd2ac341aeecdba4211412070d2ee33f42eb670fe75ab88b9a58931c289\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74f9a4c19a0ffe2d3fc0a77b57873721a29f0a1f70d578ba2f68bcceb68ccce2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://672f46bf03b8e74586edeb641cb45d9d7d8d4389c0b2cbb7a8eec43d6b4d801b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38554e01896341101d88815c0d8cd45a0114da19790e02a53b5ba3a91ee70e37\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"message\\\":\\\"le observer\\\\nW0203 12:05:02.811560 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0203 12:05:02.811703 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 12:05:02.814694 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3622958325/tls.crt::/tmp/serving-cert-3622958325/tls.key\\\\\\\"\\\\nI0203 12:05:03.097815 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0203 12:05:03.106823 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0203 12:05:03.106852 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0203 12:05:03.106874 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0203 12:05:03.106880 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0203 12:05:03.113994 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0203 12:05:03.114027 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0203 12:05:03.114019 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0203 12:05:03.114034 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 12:05:03.114043 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0203 12:05:03.114053 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0203 12:05:03.114057 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0203 12:05:03.114061 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0203 12:05:03.116858 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60992833ab89aebd2446f960ab6ae3565f9a17f4e86921d3b2bc68adc9704640\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b34fc810cf94c235cea364cd2441c73f56597b91b0a19b8e02498db5895a3f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b34fc810cf94c235cea364cd2441c73f56597b91b0a19b8e02498db5895a3f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:04:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:04:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:03Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:03 crc kubenswrapper[4820]: I0203 12:06:03.243282 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"76aa9d47-b80a-4058-8f92-4cdf0c41df48\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47438e7dc8d1eec9a8a061c1b6141e39f4c805dfafd1175c07235f7b9425719b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ade7b4944a0da0f0d18c5dd5a92ef407d6beac17985f1429d57672ff7e9beade\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://555f489476174f52e5f5d252e3f3f86f65e1593e641204d863bebdaa80b49bbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f84ddcad54789a48e27d495da8bf422e2349c09b477421d0323aae520546826\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f84ddcad54789a48e27d495da8bf422e2349c09b477421d0323aae520546826\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:04:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:04:43Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:03Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:03 crc kubenswrapper[4820]: I0203 12:06:03.257228 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0a8e1f86d24e8f09d6250cc9003db189b705c0285fbaab40a8f1a4ac4793137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:03Z is after 
2025-08-24T17:21:41Z" Feb 03 12:06:03 crc kubenswrapper[4820]: I0203 12:06:03.262395 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:03 crc kubenswrapper[4820]: I0203 12:06:03.262429 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:03 crc kubenswrapper[4820]: I0203 12:06:03.262437 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:03 crc kubenswrapper[4820]: I0203 12:06:03.262452 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:03 crc kubenswrapper[4820]: I0203 12:06:03.262461 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:03Z","lastTransitionTime":"2026-02-03T12:06:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:03 crc kubenswrapper[4820]: I0203 12:06:03.270985 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:03Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:03 crc kubenswrapper[4820]: I0203 12:06:03.284409 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:03Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:03 crc kubenswrapper[4820]: I0203 12:06:03.306412 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-75mwm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf99e305-aa5b-4171-94f6-1e64f20414dd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c998566abb62f49bb7527ea2c4b023c87a9e3db89ff5fa25b68896424924ea76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5b0072b2eccb422316702a70fb54f21fcd020e1627a625d1a7c63d46b71e48c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e8e47b3a2a50b4eccff391ccb45bf979a5fe46983b014cb03f465db928172ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0defe98bc51b1f567900f2d90138ceec07b4cae414fd3b448df503dba9a9460f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a67f31bdc8d6710cfbca063e2a811d4fbace42a9d3c3a53fba0f47c30ee00e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8b54903e9478f26cfa1770a0e06298325e9b9f198a3aeaa8ed6dcc192c0cdb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9298001c19eb268d0ade02c0ee6d5f802cef36d7
9656754fcf76427fde0706fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9298001c19eb268d0ade02c0ee6d5f802cef36d79656754fcf76427fde0706fe\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T12:05:59Z\\\",\\\"message\\\":\\\"ntroller-manager/route-controller-manager for network=default has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nI0203 12:05:59.050731 6902 services_controller.go:444] Built service openshift-operator-lifecycle-manager/olm-operator-metrics LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0203 12:05:59.050738 6902 services_controller.go:445] Built service openshift-operator-lifecycle-manager/olm-operator-metrics LB template configs for network=default: []services.lbConfig(nil)\\\\nI0203 12:05:59.050751 6902 services_controller.go:451] Built service openshift-operator-lifecycle-manager/olm-operator-metrics cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-operator-lifecycle-manager/olm-operator-metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-operator-lifecycle-manager/olm-operator-metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.168\\\\\\\", Port:8443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:58Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-75mwm_openshift-ovn-kubernetes(cf99e305-aa5b-4171-94f6-1e64f20414dd)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c9f050db727793c24f2f8bf3b387813e313f3b764be003c6849947f98a61d4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://707b3fd62e0f11f4f22464594158f7f1bbcbb9b37c1e49c70e2a0d1a6b5b07c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://707b3fd62e0f11f4f22464594158f7f1bbcbb9b37c1e49c70e2a0d1a6b5b07c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-75mwm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:03Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:03 crc kubenswrapper[4820]: I0203 12:06:03.317124 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-z8xrk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1cca669-281b-4756-8da8-3860684d3410\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09a38521ccb70b0b152fc51267f6d38cca610e2e8c9d12915482118f3a431b22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dxwqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:09Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-z8xrk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:03Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:03 crc kubenswrapper[4820]: I0203 12:06:03.338194 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a4e0669-c148-4af8-99eb-1de50da6d574\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d6f3a37b440a1db10c04386289033a0deab673f71efbdfba81597baf6ce5e86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fec48b5ddf675fb07150ebb88962534caead99f26403cf8ab51c17e2c6c09bf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"co
ntainerID\\\":\\\"cri-o://3a4c1209e1301b6e55cd4ea8bb06678ee4c7657c163a610f0cc2dd25f2cb3e61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75e1f7cf1c2120c222793819705a3b465a387c2aaf33fe146fd46d9c76520aea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12446f64d4189185cf6fe19d38637e090482ef8da5fb5e68600cc00ba37bc2a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bb0a6b382c36acc46be9258e93be71317c26777d770cde9098402054303947c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bb0a6b382c36acc46be9258e93be71317c26777d770cde9098402054303947c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:04:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19153b79978b8e8cc53a7ab9cb95a70
c9870c6b0babac711bce13809609f6ac5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19153b79978b8e8cc53a7ab9cb95a70c9870c6b0babac711bce13809609f6ac5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e75ec4662cf007f64da3ecaaff0cb4c45aa88746ff717403092583356953526\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e75ec4662cf007f64da3ecaaff0cb4c45aa88746ff717403092583356953526\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:04:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:04:43Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:03Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:03 crc kubenswrapper[4820]: I0203 12:06:03.348470 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7vz6k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6351e457-e601-4889-853c-560646bc4b43\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbjk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbjk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:18Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7vz6k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:03Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:03 crc kubenswrapper[4820]: I0203 12:06:03.358865 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d239622d-70ab-4a6c-ba26-e3a78cd0e963\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6033384e8e6a693d5a19d999075b790954463fc62cc0367026e5d2d9f6eb0919\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1fce1459bb4834de28fc3f237906647ea2cbfd0f5dfa72fcdbe5eaadf8d8260a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1fce1459bb4834de28fc3f237906647ea2cbfd0f5dfa72fcdbe5eaadf8d8260a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:04:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:04:43Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:03Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:03 crc kubenswrapper[4820]: I0203 12:06:03.364038 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:03 crc kubenswrapper[4820]: I0203 12:06:03.364075 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 03 12:06:03 crc kubenswrapper[4820]: I0203 12:06:03.364084 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:03 crc kubenswrapper[4820]: I0203 12:06:03.364098 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:03 crc kubenswrapper[4820]: I0203 12:06:03.364109 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:03Z","lastTransitionTime":"2026-02-03T12:06:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:03 crc kubenswrapper[4820]: I0203 12:06:03.371755 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cabb5864-1cc1-4b08-aed3-4eeee9e85bf9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d070e438f7f207c929ca6d110046fa8d320d1a7d97ef22a0e5e1372ff626729d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06e5365791987f1586b93dfe4db8bfe19415d8c400d6c0d75f312cc0f62f7661\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-c
erts\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb5d088ef2ec1b90ef2d7af53d61c2789516bf4ef02e9ef06e1dc0c93326dcc9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2783e3045fb1f1e8da75e1a83f88bc0f24ce47eab79fb2b31b794c10f201588\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:04:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:04:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:03Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:03 crc kubenswrapper[4820]: I0203 12:06:03.382440 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bbd1daf12ccc77f2658cac173f36c9af6a1c64072fc6bf13302a460f71f05e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5236ae101bb18d9bd1b8d8ac09705f96e5fd78d742e5174e88072570a7448ee8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:03Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:03 crc kubenswrapper[4820]: I0203 12:06:03.398034 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-b5qz9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d93ec7bc-4029-44a4-894d-03eff1388683\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8c2ef0f942d970aadc1837e5ab6eac2bdee9ab81c7b852514fc142796328a62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b171b1b37fa4899684005dd60f249e4cdc2ac58cb12658abe09d6d17007f3002\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b171b1b37fa4899684005dd60f249e4cdc2ac58cb12658abe09d6d17007f3002\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0a45a325e96479b1bb5913e1a78221c3b88c62fc434441e03d6b2e9ed476700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0a45a325e96479b1bb5913e1a78221c3b88c62fc434441e03d6b2e9ed476700\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a25830d512e5d0dbbfa4775f85af51e80915adca718997b34e926220ff9377d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a25830d512e5d0dbbfa4775f85af51e80915adca718997b34e926220ff9377d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff3da1549bd61df7cc473a1a0a17f147b940c3f88f0b72828357ea7982406088\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff3da1549bd61df7cc473a1a0a17f147b940c3f88f0b72828357ea7982406088\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a1698d43c6b74e8ebb9ff13da372c4025ac2637c1f5a0376e4a998e7bf20359\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a1698d43c6b74e8ebb9ff13da372c4025ac2637c1f5a0376e4a998e7bf20359\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8af3494a09f6242fa8fab8cba8b3db5476625376c0f8f382e2140ca27ef329f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8af3494a09f6242fa8fab8cba8b3db5476625376c0f8f382e2140ca27ef329f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6z8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-b5qz9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:03Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:03 crc kubenswrapper[4820]: I0203 12:06:03.410684 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8bbpp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d8005fd9-8efc-4707-a3dd-60cd20607d42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9020e42fc6a5c6c681b1854e93cc1e88f44b7582e28fe1ee40900bb2d39b4008\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xg9sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0333755fe31b49cea4cbfef987450b5800b3330c65cf84070036eea4194c13a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xg9sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8bbpp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:03Z is after 2025-08-24T17:21:41Z" Feb 03 
12:06:03 crc kubenswrapper[4820]: I0203 12:06:03.465784 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:03 crc kubenswrapper[4820]: I0203 12:06:03.465838 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:03 crc kubenswrapper[4820]: I0203 12:06:03.465849 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:03 crc kubenswrapper[4820]: I0203 12:06:03.465862 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:03 crc kubenswrapper[4820]: I0203 12:06:03.465871 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:03Z","lastTransitionTime":"2026-02-03T12:06:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:03 crc kubenswrapper[4820]: I0203 12:06:03.568517 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:03 crc kubenswrapper[4820]: I0203 12:06:03.568554 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:03 crc kubenswrapper[4820]: I0203 12:06:03.568567 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:03 crc kubenswrapper[4820]: I0203 12:06:03.568584 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:03 crc kubenswrapper[4820]: I0203 12:06:03.568595 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:03Z","lastTransitionTime":"2026-02-03T12:06:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:03 crc kubenswrapper[4820]: I0203 12:06:03.670592 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:03 crc kubenswrapper[4820]: I0203 12:06:03.670639 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:03 crc kubenswrapper[4820]: I0203 12:06:03.670655 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:03 crc kubenswrapper[4820]: I0203 12:06:03.670674 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:03 crc kubenswrapper[4820]: I0203 12:06:03.670686 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:03Z","lastTransitionTime":"2026-02-03T12:06:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:06:03 crc kubenswrapper[4820]: I0203 12:06:03.773072 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:03 crc kubenswrapper[4820]: I0203 12:06:03.773123 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:03 crc kubenswrapper[4820]: I0203 12:06:03.773141 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:03 crc kubenswrapper[4820]: I0203 12:06:03.773197 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:03 crc kubenswrapper[4820]: I0203 12:06:03.773211 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:03Z","lastTransitionTime":"2026-02-03T12:06:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:03 crc kubenswrapper[4820]: I0203 12:06:03.876499 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:03 crc kubenswrapper[4820]: I0203 12:06:03.876537 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:03 crc kubenswrapper[4820]: I0203 12:06:03.876548 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:03 crc kubenswrapper[4820]: I0203 12:06:03.876564 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:03 crc kubenswrapper[4820]: I0203 12:06:03.876577 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:03Z","lastTransitionTime":"2026-02-03T12:06:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:03 crc kubenswrapper[4820]: I0203 12:06:03.979081 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:03 crc kubenswrapper[4820]: I0203 12:06:03.979131 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:03 crc kubenswrapper[4820]: I0203 12:06:03.979153 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:03 crc kubenswrapper[4820]: I0203 12:06:03.979174 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:03 crc kubenswrapper[4820]: I0203 12:06:03.979190 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:03Z","lastTransitionTime":"2026-02-03T12:06:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:06:04 crc kubenswrapper[4820]: I0203 12:06:04.081113 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:04 crc kubenswrapper[4820]: I0203 12:06:04.081178 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:04 crc kubenswrapper[4820]: I0203 12:06:04.081190 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:04 crc kubenswrapper[4820]: I0203 12:06:04.081205 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:04 crc kubenswrapper[4820]: I0203 12:06:04.081222 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:04Z","lastTransitionTime":"2026-02-03T12:06:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:04 crc kubenswrapper[4820]: I0203 12:06:04.137614 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 14:04:19.90220162 +0000 UTC Feb 03 12:06:04 crc kubenswrapper[4820]: I0203 12:06:04.142001 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7vz6k" Feb 03 12:06:04 crc kubenswrapper[4820]: E0203 12:06:04.142321 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7vz6k" podUID="6351e457-e601-4889-853c-560646bc4b43" Feb 03 12:06:04 crc kubenswrapper[4820]: I0203 12:06:04.183399 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:04 crc kubenswrapper[4820]: I0203 12:06:04.183472 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:04 crc kubenswrapper[4820]: I0203 12:06:04.183495 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:04 crc kubenswrapper[4820]: I0203 12:06:04.183524 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:04 crc kubenswrapper[4820]: I0203 12:06:04.183548 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:04Z","lastTransitionTime":"2026-02-03T12:06:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:06:04 crc kubenswrapper[4820]: I0203 12:06:04.286835 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:04 crc kubenswrapper[4820]: I0203 12:06:04.286956 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:04 crc kubenswrapper[4820]: I0203 12:06:04.286990 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:04 crc kubenswrapper[4820]: I0203 12:06:04.287015 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:04 crc kubenswrapper[4820]: I0203 12:06:04.287032 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:04Z","lastTransitionTime":"2026-02-03T12:06:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:04 crc kubenswrapper[4820]: I0203 12:06:04.388970 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:04 crc kubenswrapper[4820]: I0203 12:06:04.389386 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:04 crc kubenswrapper[4820]: I0203 12:06:04.389537 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:04 crc kubenswrapper[4820]: I0203 12:06:04.389699 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:04 crc kubenswrapper[4820]: I0203 12:06:04.389827 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:04Z","lastTransitionTime":"2026-02-03T12:06:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:04 crc kubenswrapper[4820]: I0203 12:06:04.492423 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:04 crc kubenswrapper[4820]: I0203 12:06:04.492460 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:04 crc kubenswrapper[4820]: I0203 12:06:04.492485 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:04 crc kubenswrapper[4820]: I0203 12:06:04.492508 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:04 crc kubenswrapper[4820]: I0203 12:06:04.492523 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:04Z","lastTransitionTime":"2026-02-03T12:06:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:06:04 crc kubenswrapper[4820]: I0203 12:06:04.595673 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:04 crc kubenswrapper[4820]: I0203 12:06:04.595713 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:04 crc kubenswrapper[4820]: I0203 12:06:04.595749 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:04 crc kubenswrapper[4820]: I0203 12:06:04.595767 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:04 crc kubenswrapper[4820]: I0203 12:06:04.595779 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:04Z","lastTransitionTime":"2026-02-03T12:06:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:04 crc kubenswrapper[4820]: I0203 12:06:04.698860 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:04 crc kubenswrapper[4820]: I0203 12:06:04.698913 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:04 crc kubenswrapper[4820]: I0203 12:06:04.698925 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:04 crc kubenswrapper[4820]: I0203 12:06:04.698938 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:04 crc kubenswrapper[4820]: I0203 12:06:04.698947 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:04Z","lastTransitionTime":"2026-02-03T12:06:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:04 crc kubenswrapper[4820]: I0203 12:06:04.801116 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:04 crc kubenswrapper[4820]: I0203 12:06:04.801326 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:04 crc kubenswrapper[4820]: I0203 12:06:04.801433 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:04 crc kubenswrapper[4820]: I0203 12:06:04.801531 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:04 crc kubenswrapper[4820]: I0203 12:06:04.801622 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:04Z","lastTransitionTime":"2026-02-03T12:06:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:06:04 crc kubenswrapper[4820]: I0203 12:06:04.904629 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:04 crc kubenswrapper[4820]: I0203 12:06:04.904702 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:04 crc kubenswrapper[4820]: I0203 12:06:04.904719 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:04 crc kubenswrapper[4820]: I0203 12:06:04.904744 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:04 crc kubenswrapper[4820]: I0203 12:06:04.904761 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:04Z","lastTransitionTime":"2026-02-03T12:06:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:05 crc kubenswrapper[4820]: I0203 12:06:05.008786 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:05 crc kubenswrapper[4820]: I0203 12:06:05.008842 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:05 crc kubenswrapper[4820]: I0203 12:06:05.008855 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:05 crc kubenswrapper[4820]: I0203 12:06:05.008872 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:05 crc kubenswrapper[4820]: I0203 12:06:05.008959 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:05Z","lastTransitionTime":"2026-02-03T12:06:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:05 crc kubenswrapper[4820]: I0203 12:06:05.112346 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:05 crc kubenswrapper[4820]: I0203 12:06:05.112386 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:05 crc kubenswrapper[4820]: I0203 12:06:05.112398 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:05 crc kubenswrapper[4820]: I0203 12:06:05.112414 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:05 crc kubenswrapper[4820]: I0203 12:06:05.112426 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:05Z","lastTransitionTime":"2026-02-03T12:06:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:06:05 crc kubenswrapper[4820]: I0203 12:06:05.138225 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 23:38:20.931443612 +0000 UTC Feb 03 12:06:05 crc kubenswrapper[4820]: I0203 12:06:05.141702 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 12:06:05 crc kubenswrapper[4820]: I0203 12:06:05.141787 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 12:06:05 crc kubenswrapper[4820]: E0203 12:06:05.141813 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 12:06:05 crc kubenswrapper[4820]: I0203 12:06:05.141954 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 12:06:05 crc kubenswrapper[4820]: E0203 12:06:05.142047 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 12:06:05 crc kubenswrapper[4820]: E0203 12:06:05.142268 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 12:06:05 crc kubenswrapper[4820]: I0203 12:06:05.215599 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:05 crc kubenswrapper[4820]: I0203 12:06:05.215678 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:05 crc kubenswrapper[4820]: I0203 12:06:05.215700 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:05 crc kubenswrapper[4820]: I0203 12:06:05.215730 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:05 crc kubenswrapper[4820]: I0203 12:06:05.215751 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:05Z","lastTransitionTime":"2026-02-03T12:06:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:06:05 crc kubenswrapper[4820]: I0203 12:06:05.318697 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:05 crc kubenswrapper[4820]: I0203 12:06:05.318745 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:05 crc kubenswrapper[4820]: I0203 12:06:05.318758 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:05 crc kubenswrapper[4820]: I0203 12:06:05.318775 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:05 crc kubenswrapper[4820]: I0203 12:06:05.318788 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:05Z","lastTransitionTime":"2026-02-03T12:06:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:05 crc kubenswrapper[4820]: I0203 12:06:05.421320 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:05 crc kubenswrapper[4820]: I0203 12:06:05.421372 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:05 crc kubenswrapper[4820]: I0203 12:06:05.421385 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:05 crc kubenswrapper[4820]: I0203 12:06:05.421405 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:05 crc kubenswrapper[4820]: I0203 12:06:05.421421 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:05Z","lastTransitionTime":"2026-02-03T12:06:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:05 crc kubenswrapper[4820]: I0203 12:06:05.557834 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:05 crc kubenswrapper[4820]: I0203 12:06:05.557879 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:05 crc kubenswrapper[4820]: I0203 12:06:05.557903 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:05 crc kubenswrapper[4820]: I0203 12:06:05.557920 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:05 crc kubenswrapper[4820]: I0203 12:06:05.557930 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:05Z","lastTransitionTime":"2026-02-03T12:06:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:06:05 crc kubenswrapper[4820]: I0203 12:06:05.660579 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:05 crc kubenswrapper[4820]: I0203 12:06:05.660627 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:05 crc kubenswrapper[4820]: I0203 12:06:05.660640 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:05 crc kubenswrapper[4820]: I0203 12:06:05.660660 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:05 crc kubenswrapper[4820]: I0203 12:06:05.660677 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:05Z","lastTransitionTime":"2026-02-03T12:06:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:05 crc kubenswrapper[4820]: I0203 12:06:05.763159 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:05 crc kubenswrapper[4820]: I0203 12:06:05.763274 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:05 crc kubenswrapper[4820]: I0203 12:06:05.763285 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:05 crc kubenswrapper[4820]: I0203 12:06:05.763301 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:05 crc kubenswrapper[4820]: I0203 12:06:05.763312 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:05Z","lastTransitionTime":"2026-02-03T12:06:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:05 crc kubenswrapper[4820]: I0203 12:06:05.865636 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:05 crc kubenswrapper[4820]: I0203 12:06:05.865665 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:05 crc kubenswrapper[4820]: I0203 12:06:05.865672 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:05 crc kubenswrapper[4820]: I0203 12:06:05.865684 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:05 crc kubenswrapper[4820]: I0203 12:06:05.865692 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:05Z","lastTransitionTime":"2026-02-03T12:06:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:06:05 crc kubenswrapper[4820]: I0203 12:06:05.968520 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:05 crc kubenswrapper[4820]: I0203 12:06:05.968565 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:05 crc kubenswrapper[4820]: I0203 12:06:05.968581 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:05 crc kubenswrapper[4820]: I0203 12:06:05.968600 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:05 crc kubenswrapper[4820]: I0203 12:06:05.968613 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:05Z","lastTransitionTime":"2026-02-03T12:06:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:06 crc kubenswrapper[4820]: I0203 12:06:06.073645 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:06 crc kubenswrapper[4820]: I0203 12:06:06.073681 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:06 crc kubenswrapper[4820]: I0203 12:06:06.073691 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:06 crc kubenswrapper[4820]: I0203 12:06:06.073708 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:06 crc kubenswrapper[4820]: I0203 12:06:06.073722 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:06Z","lastTransitionTime":"2026-02-03T12:06:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:06 crc kubenswrapper[4820]: I0203 12:06:06.138938 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 10:55:03.121375973 +0000 UTC Feb 03 12:06:06 crc kubenswrapper[4820]: I0203 12:06:06.142414 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7vz6k" Feb 03 12:06:06 crc kubenswrapper[4820]: E0203 12:06:06.142607 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-7vz6k" podUID="6351e457-e601-4889-853c-560646bc4b43" Feb 03 12:06:06 crc kubenswrapper[4820]: I0203 12:06:06.176696 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:06 crc kubenswrapper[4820]: I0203 12:06:06.176749 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:06 crc kubenswrapper[4820]: I0203 12:06:06.176761 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:06 crc kubenswrapper[4820]: I0203 12:06:06.176779 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:06 crc kubenswrapper[4820]: I0203 12:06:06.176790 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:06Z","lastTransitionTime":"2026-02-03T12:06:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:06 crc kubenswrapper[4820]: I0203 12:06:06.279867 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:06 crc kubenswrapper[4820]: I0203 12:06:06.279971 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:06 crc kubenswrapper[4820]: I0203 12:06:06.279994 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:06 crc kubenswrapper[4820]: I0203 12:06:06.280024 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:06 crc kubenswrapper[4820]: I0203 12:06:06.280046 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:06Z","lastTransitionTime":"2026-02-03T12:06:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:06:06 crc kubenswrapper[4820]: I0203 12:06:06.383396 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:06 crc kubenswrapper[4820]: I0203 12:06:06.383449 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:06 crc kubenswrapper[4820]: I0203 12:06:06.383506 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:06 crc kubenswrapper[4820]: I0203 12:06:06.383533 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:06 crc kubenswrapper[4820]: I0203 12:06:06.383556 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:06Z","lastTransitionTime":"2026-02-03T12:06:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:06 crc kubenswrapper[4820]: I0203 12:06:06.485980 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:06 crc kubenswrapper[4820]: I0203 12:06:06.486033 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:06 crc kubenswrapper[4820]: I0203 12:06:06.486046 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:06 crc kubenswrapper[4820]: I0203 12:06:06.486062 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:06 crc kubenswrapper[4820]: I0203 12:06:06.486076 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:06Z","lastTransitionTime":"2026-02-03T12:06:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:06 crc kubenswrapper[4820]: I0203 12:06:06.588477 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:06 crc kubenswrapper[4820]: I0203 12:06:06.588517 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:06 crc kubenswrapper[4820]: I0203 12:06:06.588526 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:06 crc kubenswrapper[4820]: I0203 12:06:06.588540 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:06 crc kubenswrapper[4820]: I0203 12:06:06.588550 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:06Z","lastTransitionTime":"2026-02-03T12:06:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:06:06 crc kubenswrapper[4820]: I0203 12:06:06.691243 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:06 crc kubenswrapper[4820]: I0203 12:06:06.691320 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:06 crc kubenswrapper[4820]: I0203 12:06:06.691339 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:06 crc kubenswrapper[4820]: I0203 12:06:06.691369 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:06 crc kubenswrapper[4820]: I0203 12:06:06.691387 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:06Z","lastTransitionTime":"2026-02-03T12:06:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:06 crc kubenswrapper[4820]: I0203 12:06:06.793375 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:06 crc kubenswrapper[4820]: I0203 12:06:06.793411 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:06 crc kubenswrapper[4820]: I0203 12:06:06.793421 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:06 crc kubenswrapper[4820]: I0203 12:06:06.793434 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:06 crc kubenswrapper[4820]: I0203 12:06:06.793444 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:06Z","lastTransitionTime":"2026-02-03T12:06:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:06 crc kubenswrapper[4820]: I0203 12:06:06.808921 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 12:06:06 crc kubenswrapper[4820]: E0203 12:06:06.809081 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 12:07:10.809056588 +0000 UTC m=+148.332132462 (durationBeforeRetry 1m4s). 
Feb 03 12:06:06 crc kubenswrapper[4820]: I0203 12:06:06.896624 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 12:06:06 crc kubenswrapper[4820]: I0203 12:06:06.896678 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 12:06:06 crc kubenswrapper[4820]: I0203 12:06:06.896688 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 12:06:06 crc kubenswrapper[4820]: I0203 12:06:06.896711 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 12:06:06 crc kubenswrapper[4820]: I0203 12:06:06.896723 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:06Z","lastTransitionTime":"2026-02-03T12:06:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 12:06:06 crc kubenswrapper[4820]: I0203 12:06:06.910832 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 03 12:06:06 crc kubenswrapper[4820]: I0203 12:06:06.910951 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 03 12:06:06 crc kubenswrapper[4820]: I0203 12:06:06.911001 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 03 12:06:06 crc kubenswrapper[4820]: I0203 12:06:06.911057 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 03 12:06:06 crc kubenswrapper[4820]: E0203 12:06:06.911189 4820 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Feb 03 12:06:06 crc kubenswrapper[4820]: E0203 12:06:06.911230 4820 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Feb 03 12:06:06 crc kubenswrapper[4820]: E0203 12:06:06.911186 4820 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Feb 03 12:06:06 crc kubenswrapper[4820]: E0203 12:06:06.911312 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-03 12:07:10.911287428 +0000 UTC m=+148.434363302 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Feb 03 12:06:06 crc kubenswrapper[4820]: E0203 12:06:06.911362 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-03 12:07:10.91134989 +0000 UTC m=+148.434425764 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Feb 03 12:06:06 crc kubenswrapper[4820]: E0203 12:06:06.911339 4820 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Feb 03 12:06:06 crc kubenswrapper[4820]: E0203 12:06:06.911417 4820 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 03 12:06:06 crc kubenswrapper[4820]: E0203 12:06:06.911457 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-03 12:07:10.911446413 +0000 UTC m=+148.434522277 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 03 12:06:06 crc kubenswrapper[4820]: E0203 12:06:06.911186 4820 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Feb 03 12:06:06 crc kubenswrapper[4820]: E0203 12:06:06.911485 4820 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Feb 03 12:06:06 crc kubenswrapper[4820]: E0203 12:06:06.911505 4820 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 03 12:06:06 crc kubenswrapper[4820]: E0203 12:06:06.911562 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-03 12:07:10.911538835 +0000 UTC m=+148.434614699 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 03 12:06:07 crc kubenswrapper[4820]: I0203 12:06:07.000009 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 12:06:07 crc kubenswrapper[4820]: I0203 12:06:07.000042 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 12:06:07 crc kubenswrapper[4820]: I0203 12:06:07.000052 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 12:06:07 crc kubenswrapper[4820]: I0203 12:06:07.000069 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 12:06:07 crc kubenswrapper[4820]: I0203 12:06:07.000082 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:07Z","lastTransitionTime":"2026-02-03T12:06:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 12:06:07 crc kubenswrapper[4820]: I0203 12:06:07.102202 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 12:06:07 crc kubenswrapper[4820]: I0203 12:06:07.102247 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 12:06:07 crc kubenswrapper[4820]: I0203 12:06:07.102255 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 12:06:07 crc kubenswrapper[4820]: I0203 12:06:07.102268 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 12:06:07 crc kubenswrapper[4820]: I0203 12:06:07.102277 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:07Z","lastTransitionTime":"2026-02-03T12:06:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 12:06:07 crc kubenswrapper[4820]: I0203 12:06:07.139434 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 04:25:33.097487821 +0000 UTC
Feb 03 12:06:07 crc kubenswrapper[4820]: I0203 12:06:07.141750 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 03 12:06:07 crc kubenswrapper[4820]: I0203 12:06:07.141822 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 03 12:06:07 crc kubenswrapper[4820]: E0203 12:06:07.141879 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 03 12:06:07 crc kubenswrapper[4820]: I0203 12:06:07.141919 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 03 12:06:07 crc kubenswrapper[4820]: E0203 12:06:07.141961 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 03 12:06:07 crc kubenswrapper[4820]: E0203 12:06:07.142020 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
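The three "Error syncing pod, skipping" entries above show the other side of the NotReady condition: sandbox creation for pod-network pods is held back until a CNI configuration appears in /etc/kubernetes/cni/net.d/. A sketch to list the distinct pods stuck behind network readiness (again assuming the log is available as kubelet.log; line-wrapped captures may need re-joining first):

    import re

    blocked = {}
    with open('kubelet.log', encoding='utf-8', errors='replace') as f:
        for line in f:
            if 'Error syncing pod, skipping' in line:
                m = re.search(r'pod="([^"]+)" podUID="([^"]+)"', line)
                if m:
                    blocked[m.group(2)] = m.group(1)   # de-dupe by UID

    for uid, pod in sorted(blocked.items(), key=lambda kv: kv[1]):
        print(pod, uid)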
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 12:06:07 crc kubenswrapper[4820]: I0203 12:06:07.204541 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:07 crc kubenswrapper[4820]: I0203 12:06:07.204578 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:07 crc kubenswrapper[4820]: I0203 12:06:07.204590 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:07 crc kubenswrapper[4820]: I0203 12:06:07.204607 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:07 crc kubenswrapper[4820]: I0203 12:06:07.204618 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:07Z","lastTransitionTime":"2026-02-03T12:06:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:07 crc kubenswrapper[4820]: I0203 12:06:07.307065 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:07 crc kubenswrapper[4820]: I0203 12:06:07.307113 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:07 crc kubenswrapper[4820]: I0203 12:06:07.307124 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:07 crc kubenswrapper[4820]: I0203 12:06:07.307141 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:07 crc kubenswrapper[4820]: I0203 12:06:07.307153 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:07Z","lastTransitionTime":"2026-02-03T12:06:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:06:07 crc kubenswrapper[4820]: I0203 12:06:07.409357 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:07 crc kubenswrapper[4820]: I0203 12:06:07.409419 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:07 crc kubenswrapper[4820]: I0203 12:06:07.409436 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:07 crc kubenswrapper[4820]: I0203 12:06:07.409458 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:07 crc kubenswrapper[4820]: I0203 12:06:07.409473 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:07Z","lastTransitionTime":"2026-02-03T12:06:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:07 crc kubenswrapper[4820]: I0203 12:06:07.512383 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:07 crc kubenswrapper[4820]: I0203 12:06:07.512478 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:07 crc kubenswrapper[4820]: I0203 12:06:07.512487 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:07 crc kubenswrapper[4820]: I0203 12:06:07.512501 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:07 crc kubenswrapper[4820]: I0203 12:06:07.512510 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:07Z","lastTransitionTime":"2026-02-03T12:06:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:07 crc kubenswrapper[4820]: I0203 12:06:07.615663 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:07 crc kubenswrapper[4820]: I0203 12:06:07.615699 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:07 crc kubenswrapper[4820]: I0203 12:06:07.615710 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:07 crc kubenswrapper[4820]: I0203 12:06:07.615725 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:07 crc kubenswrapper[4820]: I0203 12:06:07.615733 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:07Z","lastTransitionTime":"2026-02-03T12:06:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:06:07 crc kubenswrapper[4820]: I0203 12:06:07.718442 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:07 crc kubenswrapper[4820]: I0203 12:06:07.718501 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:07 crc kubenswrapper[4820]: I0203 12:06:07.718515 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:07 crc kubenswrapper[4820]: I0203 12:06:07.718535 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:07 crc kubenswrapper[4820]: I0203 12:06:07.718551 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:07Z","lastTransitionTime":"2026-02-03T12:06:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:07 crc kubenswrapper[4820]: I0203 12:06:07.822178 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:07 crc kubenswrapper[4820]: I0203 12:06:07.822218 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:07 crc kubenswrapper[4820]: I0203 12:06:07.822227 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:07 crc kubenswrapper[4820]: I0203 12:06:07.822240 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:07 crc kubenswrapper[4820]: I0203 12:06:07.822250 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:07Z","lastTransitionTime":"2026-02-03T12:06:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:07 crc kubenswrapper[4820]: I0203 12:06:07.924339 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:07 crc kubenswrapper[4820]: I0203 12:06:07.924382 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:07 crc kubenswrapper[4820]: I0203 12:06:07.924399 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:07 crc kubenswrapper[4820]: I0203 12:06:07.924414 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:07 crc kubenswrapper[4820]: I0203 12:06:07.924424 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:07Z","lastTransitionTime":"2026-02-03T12:06:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
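The NodeNotReady blocks above repeat roughly every 100 ms, which is consistent with the short status-sync loop the kubelet runs at startup while it waits for the node to become Ready, rather than the normal ten-second heartbeat; that reading is an inference from the timestamps, not something the log states. The inter-arrival deltas can be checked directly:

    import re
    from datetime import datetime

    stamps = []
    with open('kubelet.log', encoding='utf-8', errors='replace') as f:
        for line in f:
            m = re.search(r'I0203 (\d\d:\d\d:\d\d\.\d{6}) \d+ setters\.go', line)
            if m and 'Node became not ready' in line:
                stamps.append(datetime.strptime(m.group(1), '%H:%M:%S.%f'))

    deltas = [(b - a).total_seconds() for a, b in zip(stamps, stamps[1:])]
    print(f'{len(deltas)} intervals, mean {sum(deltas) / len(deltas):.3f}s')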
Feb 03 12:06:08 crc kubenswrapper[4820]: I0203 12:06:08.026561 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 12:06:08 crc kubenswrapper[4820]: I0203 12:06:08.026632 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 12:06:08 crc kubenswrapper[4820]: I0203 12:06:08.026655 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 12:06:08 crc kubenswrapper[4820]: I0203 12:06:08.026687 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 12:06:08 crc kubenswrapper[4820]: I0203 12:06:08.026708 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:08Z","lastTransitionTime":"2026-02-03T12:06:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 12:06:08 crc kubenswrapper[4820]: I0203 12:06:08.128881 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 12:06:08 crc kubenswrapper[4820]: I0203 12:06:08.129166 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 12:06:08 crc kubenswrapper[4820]: I0203 12:06:08.129175 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 12:06:08 crc kubenswrapper[4820]: I0203 12:06:08.129188 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 12:06:08 crc kubenswrapper[4820]: I0203 12:06:08.129198 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:08Z","lastTransitionTime":"2026-02-03T12:06:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 12:06:08 crc kubenswrapper[4820]: I0203 12:06:08.139593 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 17:06:24.552312946 +0000 UTC
Feb 03 12:06:08 crc kubenswrapper[4820]: I0203 12:06:08.141522 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7vz6k"
Feb 03 12:06:08 crc kubenswrapper[4820]: E0203 12:06:08.141674 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7vz6k" podUID="6351e457-e601-4889-853c-560646bc4b43"
Feb 03 12:06:08 crc kubenswrapper[4820]: I0203 12:06:08.231959 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 12:06:08 crc kubenswrapper[4820]: I0203 12:06:08.232002 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 12:06:08 crc kubenswrapper[4820]: I0203 12:06:08.232014 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 12:06:08 crc kubenswrapper[4820]: I0203 12:06:08.232033 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 12:06:08 crc kubenswrapper[4820]: I0203 12:06:08.232045 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:08Z","lastTransitionTime":"2026-02-03T12:06:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 12:06:08 crc kubenswrapper[4820]: I0203 12:06:08.334215 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 12:06:08 crc kubenswrapper[4820]: I0203 12:06:08.334290 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 12:06:08 crc kubenswrapper[4820]: I0203 12:06:08.334314 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 12:06:08 crc kubenswrapper[4820]: I0203 12:06:08.334343 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 12:06:08 crc kubenswrapper[4820]: I0203 12:06:08.334369 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:08Z","lastTransitionTime":"2026-02-03T12:06:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 12:06:08 crc kubenswrapper[4820]: I0203 12:06:08.436806 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 12:06:08 crc kubenswrapper[4820]: I0203 12:06:08.436863 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 12:06:08 crc kubenswrapper[4820]: I0203 12:06:08.436880 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 12:06:08 crc kubenswrapper[4820]: I0203 12:06:08.436945 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 12:06:08 crc kubenswrapper[4820]: I0203 12:06:08.436961 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:08Z","lastTransitionTime":"2026-02-03T12:06:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 12:06:08 crc kubenswrapper[4820]: I0203 12:06:08.540942 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 12:06:08 crc kubenswrapper[4820]: I0203 12:06:08.541002 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 12:06:08 crc kubenswrapper[4820]: I0203 12:06:08.541025 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 12:06:08 crc kubenswrapper[4820]: I0203 12:06:08.541048 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 12:06:08 crc kubenswrapper[4820]: I0203 12:06:08.541066 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:08Z","lastTransitionTime":"2026-02-03T12:06:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 12:06:08 crc kubenswrapper[4820]: I0203 12:06:08.645613 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 12:06:08 crc kubenswrapper[4820]: I0203 12:06:08.645670 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 12:06:08 crc kubenswrapper[4820]: I0203 12:06:08.645687 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 12:06:08 crc kubenswrapper[4820]: I0203 12:06:08.645711 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 12:06:08 crc kubenswrapper[4820]: I0203 12:06:08.645728 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:08Z","lastTransitionTime":"2026-02-03T12:06:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 12:06:08 crc kubenswrapper[4820]: I0203 12:06:08.752399 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 12:06:08 crc kubenswrapper[4820]: I0203 12:06:08.752478 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 12:06:08 crc kubenswrapper[4820]: I0203 12:06:08.752503 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 12:06:08 crc kubenswrapper[4820]: I0203 12:06:08.752535 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 12:06:08 crc kubenswrapper[4820]: I0203 12:06:08.752598 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:08Z","lastTransitionTime":"2026-02-03T12:06:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 12:06:08 crc kubenswrapper[4820]: I0203 12:06:08.855494 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 12:06:08 crc kubenswrapper[4820]: I0203 12:06:08.855557 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 12:06:08 crc kubenswrapper[4820]: I0203 12:06:08.855572 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 12:06:08 crc kubenswrapper[4820]: I0203 12:06:08.855597 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 12:06:08 crc kubenswrapper[4820]: I0203 12:06:08.855616 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:08Z","lastTransitionTime":"2026-02-03T12:06:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 12:06:08 crc kubenswrapper[4820]: I0203 12:06:08.959216 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 12:06:08 crc kubenswrapper[4820]: I0203 12:06:08.959295 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 12:06:08 crc kubenswrapper[4820]: I0203 12:06:08.959318 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 12:06:08 crc kubenswrapper[4820]: I0203 12:06:08.959345 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 12:06:08 crc kubenswrapper[4820]: I0203 12:06:08.959366 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:08Z","lastTransitionTime":"2026-02-03T12:06:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 12:06:09 crc kubenswrapper[4820]: I0203 12:06:09.061689 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 12:06:09 crc kubenswrapper[4820]: I0203 12:06:09.061735 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 12:06:09 crc kubenswrapper[4820]: I0203 12:06:09.061752 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 12:06:09 crc kubenswrapper[4820]: I0203 12:06:09.061774 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 12:06:09 crc kubenswrapper[4820]: I0203 12:06:09.061790 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:09Z","lastTransitionTime":"2026-02-03T12:06:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 12:06:09 crc kubenswrapper[4820]: I0203 12:06:09.139799 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 04:00:43.460039209 +0000 UTC
Feb 03 12:06:09 crc kubenswrapper[4820]: I0203 12:06:09.142107 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 03 12:06:09 crc kubenswrapper[4820]: E0203 12:06:09.142329 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 03 12:06:09 crc kubenswrapper[4820]: I0203 12:06:09.142380 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 03 12:06:09 crc kubenswrapper[4820]: I0203 12:06:09.142128 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 03 12:06:09 crc kubenswrapper[4820]: E0203 12:06:09.142960 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 03 12:06:09 crc kubenswrapper[4820]: E0203 12:06:09.143041 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 03 12:06:09 crc kubenswrapper[4820]: I0203 12:06:09.164101 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 12:06:09 crc kubenswrapper[4820]: I0203 12:06:09.164142 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 12:06:09 crc kubenswrapper[4820]: I0203 12:06:09.164163 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 12:06:09 crc kubenswrapper[4820]: I0203 12:06:09.164191 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 12:06:09 crc kubenswrapper[4820]: I0203 12:06:09.164211 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:09Z","lastTransitionTime":"2026-02-03T12:06:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 12:06:09 crc kubenswrapper[4820]: I0203 12:06:09.268072 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 12:06:09 crc kubenswrapper[4820]: I0203 12:06:09.268129 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 12:06:09 crc kubenswrapper[4820]: I0203 12:06:09.268148 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 12:06:09 crc kubenswrapper[4820]: I0203 12:06:09.268184 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 12:06:09 crc kubenswrapper[4820]: I0203 12:06:09.268221 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:09Z","lastTransitionTime":"2026-02-03T12:06:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 12:06:09 crc kubenswrapper[4820]: I0203 12:06:09.372000 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 12:06:09 crc kubenswrapper[4820]: I0203 12:06:09.372066 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 12:06:09 crc kubenswrapper[4820]: I0203 12:06:09.372091 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 12:06:09 crc kubenswrapper[4820]: I0203 12:06:09.372121 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 12:06:09 crc kubenswrapper[4820]: I0203 12:06:09.372139 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:09Z","lastTransitionTime":"2026-02-03T12:06:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 12:06:09 crc kubenswrapper[4820]: I0203 12:06:09.474249 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 12:06:09 crc kubenswrapper[4820]: I0203 12:06:09.474733 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 12:06:09 crc kubenswrapper[4820]: I0203 12:06:09.474870 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 12:06:09 crc kubenswrapper[4820]: I0203 12:06:09.475073 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 12:06:09 crc kubenswrapper[4820]: I0203 12:06:09.475203 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:09Z","lastTransitionTime":"2026-02-03T12:06:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 12:06:09 crc kubenswrapper[4820]: I0203 12:06:09.578287 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 12:06:09 crc kubenswrapper[4820]: I0203 12:06:09.578351 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 12:06:09 crc kubenswrapper[4820]: I0203 12:06:09.578374 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 12:06:09 crc kubenswrapper[4820]: I0203 12:06:09.578398 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 12:06:09 crc kubenswrapper[4820]: I0203 12:06:09.578416 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:09Z","lastTransitionTime":"2026-02-03T12:06:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 12:06:09 crc kubenswrapper[4820]: I0203 12:06:09.681403 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 12:06:09 crc kubenswrapper[4820]: I0203 12:06:09.681663 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 12:06:09 crc kubenswrapper[4820]: I0203 12:06:09.681739 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 12:06:09 crc kubenswrapper[4820]: I0203 12:06:09.681869 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 12:06:09 crc kubenswrapper[4820]: I0203 12:06:09.681952 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:09Z","lastTransitionTime":"2026-02-03T12:06:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 12:06:09 crc kubenswrapper[4820]: I0203 12:06:09.784499 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 12:06:09 crc kubenswrapper[4820]: I0203 12:06:09.784550 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 12:06:09 crc kubenswrapper[4820]: I0203 12:06:09.784564 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 12:06:09 crc kubenswrapper[4820]: I0203 12:06:09.784582 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 12:06:09 crc kubenswrapper[4820]: I0203 12:06:09.784594 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:09Z","lastTransitionTime":"2026-02-03T12:06:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 12:06:09 crc kubenswrapper[4820]: I0203 12:06:09.886838 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 12:06:09 crc kubenswrapper[4820]: I0203 12:06:09.886878 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 12:06:09 crc kubenswrapper[4820]: I0203 12:06:09.886906 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 12:06:09 crc kubenswrapper[4820]: I0203 12:06:09.886921 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 12:06:09 crc kubenswrapper[4820]: I0203 12:06:09.886929 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:09Z","lastTransitionTime":"2026-02-03T12:06:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 12:06:09 crc kubenswrapper[4820]: I0203 12:06:09.989691 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 12:06:09 crc kubenswrapper[4820]: I0203 12:06:09.989744 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 12:06:09 crc kubenswrapper[4820]: I0203 12:06:09.989756 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 12:06:09 crc kubenswrapper[4820]: I0203 12:06:09.989772 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 12:06:09 crc kubenswrapper[4820]: I0203 12:06:09.989783 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:09Z","lastTransitionTime":"2026-02-03T12:06:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 12:06:10 crc kubenswrapper[4820]: I0203 12:06:10.092309 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 12:06:10 crc kubenswrapper[4820]: I0203 12:06:10.092367 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 12:06:10 crc kubenswrapper[4820]: I0203 12:06:10.092383 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 12:06:10 crc kubenswrapper[4820]: I0203 12:06:10.092403 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 12:06:10 crc kubenswrapper[4820]: I0203 12:06:10.092420 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:10Z","lastTransitionTime":"2026-02-03T12:06:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 12:06:10 crc kubenswrapper[4820]: I0203 12:06:10.140990 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 10:00:35.865732196 +0000 UTC
Feb 03 12:06:10 crc kubenswrapper[4820]: I0203 12:06:10.142181 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7vz6k"
Feb 03 12:06:10 crc kubenswrapper[4820]: E0203 12:06:10.142296 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7vz6k" podUID="6351e457-e601-4889-853c-560646bc4b43"
Feb 03 12:06:10 crc kubenswrapper[4820]: I0203 12:06:10.194880 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 12:06:10 crc kubenswrapper[4820]: I0203 12:06:10.194935 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 12:06:10 crc kubenswrapper[4820]: I0203 12:06:10.194963 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 12:06:10 crc kubenswrapper[4820]: I0203 12:06:10.194978 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 12:06:10 crc kubenswrapper[4820]: I0203 12:06:10.194989 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:10Z","lastTransitionTime":"2026-02-03T12:06:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 12:06:10 crc kubenswrapper[4820]: I0203 12:06:10.213504 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 12:06:10 crc kubenswrapper[4820]: I0203 12:06:10.213561 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 12:06:10 crc kubenswrapper[4820]: I0203 12:06:10.213575 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 12:06:10 crc kubenswrapper[4820]: I0203 12:06:10.213593 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 12:06:10 crc kubenswrapper[4820]: I0203 12:06:10.213609 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:10Z","lastTransitionTime":"2026-02-03T12:06:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 12:06:10 crc kubenswrapper[4820]: E0203 12:06:10.226530 4820 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:06:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:06:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:06:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:06:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"83c4bcff-fd36-4e8a-96f0-3320ea01106a\\\",\\\"systemUUID\\\":\\\"a4221fcb-5776-4539-8cb5-9da3bff4d7a8\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:10Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:10 crc kubenswrapper[4820]: I0203 12:06:10.231221 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:10 crc kubenswrapper[4820]: I0203 12:06:10.231349 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 03 12:06:10 crc kubenswrapper[4820]: I0203 12:06:10.231388 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:10 crc kubenswrapper[4820]: I0203 12:06:10.231419 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:10 crc kubenswrapper[4820]: I0203 12:06:10.231441 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:10Z","lastTransitionTime":"2026-02-03T12:06:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:10 crc kubenswrapper[4820]: E0203 12:06:10.243960 4820 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:06:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:06:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:06:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:06:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"83c4bcff-fd36-4e8a-96f0-3320ea01106a\\\",\\\"systemUUID\\\":\\\"a4221fcb-5776-4539-8cb5-9da3bff4d7a8\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:10Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:10 crc kubenswrapper[4820]: I0203 12:06:10.247774 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:10 crc kubenswrapper[4820]: I0203 12:06:10.247815 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 03 12:06:10 crc kubenswrapper[4820]: I0203 12:06:10.247832 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:10 crc kubenswrapper[4820]: I0203 12:06:10.247856 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:10 crc kubenswrapper[4820]: I0203 12:06:10.247870 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:10Z","lastTransitionTime":"2026-02-03T12:06:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:10 crc kubenswrapper[4820]: E0203 12:06:10.263369 4820 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:06:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:06:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:06:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:06:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"83c4bcff-fd36-4e8a-96f0-3320ea01106a\\\",\\\"systemUUID\\\":\\\"a4221fcb-5776-4539-8cb5-9da3bff4d7a8\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:10Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:10 crc kubenswrapper[4820]: I0203 12:06:10.267754 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:10 crc kubenswrapper[4820]: I0203 12:06:10.267802 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 03 12:06:10 crc kubenswrapper[4820]: I0203 12:06:10.267814 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:10 crc kubenswrapper[4820]: I0203 12:06:10.267831 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:10 crc kubenswrapper[4820]: I0203 12:06:10.267845 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:10Z","lastTransitionTime":"2026-02-03T12:06:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:10 crc kubenswrapper[4820]: E0203 12:06:10.282290 4820 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:06:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:06:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:06:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:06:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"83c4bcff-fd36-4e8a-96f0-3320ea01106a\\\",\\\"systemUUID\\\":\\\"a4221fcb-5776-4539-8cb5-9da3bff4d7a8\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:10Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:10 crc kubenswrapper[4820]: I0203 12:06:10.286199 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:10 crc kubenswrapper[4820]: I0203 12:06:10.286263 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 03 12:06:10 crc kubenswrapper[4820]: I0203 12:06:10.286283 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:10 crc kubenswrapper[4820]: I0203 12:06:10.286305 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:10 crc kubenswrapper[4820]: I0203 12:06:10.286322 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:10Z","lastTransitionTime":"2026-02-03T12:06:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:10 crc kubenswrapper[4820]: E0203 12:06:10.299425 4820 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:06:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:06:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:06:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:06:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"83c4bcff-fd36-4e8a-96f0-3320ea01106a\\\",\\\"systemUUID\\\":\\\"a4221fcb-5776-4539-8cb5-9da3bff4d7a8\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:10Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:10 crc kubenswrapper[4820]: E0203 12:06:10.299572 4820 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 03 12:06:10 crc kubenswrapper[4820]: I0203 12:06:10.301362 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 03 12:06:10 crc kubenswrapper[4820]: I0203 12:06:10.301410 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:10 crc kubenswrapper[4820]: I0203 12:06:10.301421 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:10 crc kubenswrapper[4820]: I0203 12:06:10.301439 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:10 crc kubenswrapper[4820]: I0203 12:06:10.301451 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:10Z","lastTransitionTime":"2026-02-03T12:06:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:10 crc kubenswrapper[4820]: I0203 12:06:10.404009 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:10 crc kubenswrapper[4820]: I0203 12:06:10.404074 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:10 crc kubenswrapper[4820]: I0203 12:06:10.404097 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:10 crc kubenswrapper[4820]: I0203 12:06:10.404128 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:10 crc kubenswrapper[4820]: I0203 12:06:10.404152 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:10Z","lastTransitionTime":"2026-02-03T12:06:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:10 crc kubenswrapper[4820]: I0203 12:06:10.507086 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:10 crc kubenswrapper[4820]: I0203 12:06:10.507323 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:10 crc kubenswrapper[4820]: I0203 12:06:10.507403 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:10 crc kubenswrapper[4820]: I0203 12:06:10.507506 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:10 crc kubenswrapper[4820]: I0203 12:06:10.507671 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:10Z","lastTransitionTime":"2026-02-03T12:06:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:06:10 crc kubenswrapper[4820]: I0203 12:06:10.611212 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:10 crc kubenswrapper[4820]: I0203 12:06:10.611243 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:10 crc kubenswrapper[4820]: I0203 12:06:10.611250 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:10 crc kubenswrapper[4820]: I0203 12:06:10.611262 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:10 crc kubenswrapper[4820]: I0203 12:06:10.611272 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:10Z","lastTransitionTime":"2026-02-03T12:06:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:10 crc kubenswrapper[4820]: I0203 12:06:10.713717 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:10 crc kubenswrapper[4820]: I0203 12:06:10.713761 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:10 crc kubenswrapper[4820]: I0203 12:06:10.713771 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:10 crc kubenswrapper[4820]: I0203 12:06:10.713787 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:10 crc kubenswrapper[4820]: I0203 12:06:10.713798 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:10Z","lastTransitionTime":"2026-02-03T12:06:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:10 crc kubenswrapper[4820]: I0203 12:06:10.816000 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:10 crc kubenswrapper[4820]: I0203 12:06:10.816042 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:10 crc kubenswrapper[4820]: I0203 12:06:10.816150 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:10 crc kubenswrapper[4820]: I0203 12:06:10.816183 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:10 crc kubenswrapper[4820]: I0203 12:06:10.816199 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:10Z","lastTransitionTime":"2026-02-03T12:06:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:06:10 crc kubenswrapper[4820]: I0203 12:06:10.918766 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:10 crc kubenswrapper[4820]: I0203 12:06:10.918839 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:10 crc kubenswrapper[4820]: I0203 12:06:10.918863 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:10 crc kubenswrapper[4820]: I0203 12:06:10.918927 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:10 crc kubenswrapper[4820]: I0203 12:06:10.918967 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:10Z","lastTransitionTime":"2026-02-03T12:06:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:11 crc kubenswrapper[4820]: I0203 12:06:11.020931 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:11 crc kubenswrapper[4820]: I0203 12:06:11.020967 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:11 crc kubenswrapper[4820]: I0203 12:06:11.021000 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:11 crc kubenswrapper[4820]: I0203 12:06:11.021012 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:11 crc kubenswrapper[4820]: I0203 12:06:11.021021 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:11Z","lastTransitionTime":"2026-02-03T12:06:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:11 crc kubenswrapper[4820]: I0203 12:06:11.124279 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:11 crc kubenswrapper[4820]: I0203 12:06:11.124326 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:11 crc kubenswrapper[4820]: I0203 12:06:11.124338 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:11 crc kubenswrapper[4820]: I0203 12:06:11.124358 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:11 crc kubenswrapper[4820]: I0203 12:06:11.124389 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:11Z","lastTransitionTime":"2026-02-03T12:06:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:06:11 crc kubenswrapper[4820]: I0203 12:06:11.141711 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 11:00:44.515487163 +0000 UTC Feb 03 12:06:11 crc kubenswrapper[4820]: I0203 12:06:11.141886 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 12:06:11 crc kubenswrapper[4820]: I0203 12:06:11.141973 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 12:06:11 crc kubenswrapper[4820]: I0203 12:06:11.142210 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 12:06:11 crc kubenswrapper[4820]: E0203 12:06:11.142315 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 12:06:11 crc kubenswrapper[4820]: E0203 12:06:11.142420 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 12:06:11 crc kubenswrapper[4820]: E0203 12:06:11.142521 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 12:06:11 crc kubenswrapper[4820]: I0203 12:06:11.227472 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:11 crc kubenswrapper[4820]: I0203 12:06:11.227513 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:11 crc kubenswrapper[4820]: I0203 12:06:11.227525 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:11 crc kubenswrapper[4820]: I0203 12:06:11.227540 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:11 crc kubenswrapper[4820]: I0203 12:06:11.227551 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:11Z","lastTransitionTime":"2026-02-03T12:06:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:06:11 crc kubenswrapper[4820]: I0203 12:06:11.331051 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:11 crc kubenswrapper[4820]: I0203 12:06:11.331119 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:11 crc kubenswrapper[4820]: I0203 12:06:11.331142 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:11 crc kubenswrapper[4820]: I0203 12:06:11.331169 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:11 crc kubenswrapper[4820]: I0203 12:06:11.331189 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:11Z","lastTransitionTime":"2026-02-03T12:06:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:11 crc kubenswrapper[4820]: I0203 12:06:11.433755 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:11 crc kubenswrapper[4820]: I0203 12:06:11.433834 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:11 crc kubenswrapper[4820]: I0203 12:06:11.433845 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:11 crc kubenswrapper[4820]: I0203 12:06:11.433859 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:11 crc kubenswrapper[4820]: I0203 12:06:11.433870 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:11Z","lastTransitionTime":"2026-02-03T12:06:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:11 crc kubenswrapper[4820]: I0203 12:06:11.536082 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:11 crc kubenswrapper[4820]: I0203 12:06:11.536123 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:11 crc kubenswrapper[4820]: I0203 12:06:11.536133 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:11 crc kubenswrapper[4820]: I0203 12:06:11.536147 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:11 crc kubenswrapper[4820]: I0203 12:06:11.536162 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:11Z","lastTransitionTime":"2026-02-03T12:06:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:06:11 crc kubenswrapper[4820]: I0203 12:06:11.639210 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:11 crc kubenswrapper[4820]: I0203 12:06:11.639251 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:11 crc kubenswrapper[4820]: I0203 12:06:11.639266 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:11 crc kubenswrapper[4820]: I0203 12:06:11.639287 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:11 crc kubenswrapper[4820]: I0203 12:06:11.639302 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:11Z","lastTransitionTime":"2026-02-03T12:06:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:11 crc kubenswrapper[4820]: I0203 12:06:11.742780 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:11 crc kubenswrapper[4820]: I0203 12:06:11.742825 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:11 crc kubenswrapper[4820]: I0203 12:06:11.742834 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:11 crc kubenswrapper[4820]: I0203 12:06:11.742848 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:11 crc kubenswrapper[4820]: I0203 12:06:11.742858 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:11Z","lastTransitionTime":"2026-02-03T12:06:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:11 crc kubenswrapper[4820]: I0203 12:06:11.844990 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:11 crc kubenswrapper[4820]: I0203 12:06:11.845036 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:11 crc kubenswrapper[4820]: I0203 12:06:11.845052 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:11 crc kubenswrapper[4820]: I0203 12:06:11.845071 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:11 crc kubenswrapper[4820]: I0203 12:06:11.845084 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:11Z","lastTransitionTime":"2026-02-03T12:06:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:06:11 crc kubenswrapper[4820]: I0203 12:06:11.947091 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:11 crc kubenswrapper[4820]: I0203 12:06:11.947653 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:11 crc kubenswrapper[4820]: I0203 12:06:11.947781 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:11 crc kubenswrapper[4820]: I0203 12:06:11.947904 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:11 crc kubenswrapper[4820]: I0203 12:06:11.948003 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:11Z","lastTransitionTime":"2026-02-03T12:06:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:12 crc kubenswrapper[4820]: I0203 12:06:12.050615 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:12 crc kubenswrapper[4820]: I0203 12:06:12.050663 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:12 crc kubenswrapper[4820]: I0203 12:06:12.050675 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:12 crc kubenswrapper[4820]: I0203 12:06:12.050693 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:12 crc kubenswrapper[4820]: I0203 12:06:12.050707 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:12Z","lastTransitionTime":"2026-02-03T12:06:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:12 crc kubenswrapper[4820]: I0203 12:06:12.142088 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7vz6k" Feb 03 12:06:12 crc kubenswrapper[4820]: I0203 12:06:12.142075 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 10:45:27.04564437 +0000 UTC Feb 03 12:06:12 crc kubenswrapper[4820]: E0203 12:06:12.142296 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-7vz6k" podUID="6351e457-e601-4889-853c-560646bc4b43" Feb 03 12:06:12 crc kubenswrapper[4820]: I0203 12:06:12.153166 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:12 crc kubenswrapper[4820]: I0203 12:06:12.153206 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:12 crc kubenswrapper[4820]: I0203 12:06:12.153216 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:12 crc kubenswrapper[4820]: I0203 12:06:12.153233 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:12 crc kubenswrapper[4820]: I0203 12:06:12.153243 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:12Z","lastTransitionTime":"2026-02-03T12:06:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:12 crc kubenswrapper[4820]: I0203 12:06:12.256048 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:12 crc kubenswrapper[4820]: I0203 12:06:12.256120 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:12 crc kubenswrapper[4820]: I0203 12:06:12.256215 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:12 crc kubenswrapper[4820]: I0203 12:06:12.256248 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:12 crc kubenswrapper[4820]: I0203 12:06:12.256267 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:12Z","lastTransitionTime":"2026-02-03T12:06:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:06:12 crc kubenswrapper[4820]: I0203 12:06:12.359929 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:12 crc kubenswrapper[4820]: I0203 12:06:12.359984 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:12 crc kubenswrapper[4820]: I0203 12:06:12.359999 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:12 crc kubenswrapper[4820]: I0203 12:06:12.360017 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:12 crc kubenswrapper[4820]: I0203 12:06:12.360032 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:12Z","lastTransitionTime":"2026-02-03T12:06:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:12 crc kubenswrapper[4820]: I0203 12:06:12.463384 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:12 crc kubenswrapper[4820]: I0203 12:06:12.463458 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:12 crc kubenswrapper[4820]: I0203 12:06:12.463481 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:12 crc kubenswrapper[4820]: I0203 12:06:12.463510 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:12 crc kubenswrapper[4820]: I0203 12:06:12.463532 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:12Z","lastTransitionTime":"2026-02-03T12:06:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:12 crc kubenswrapper[4820]: I0203 12:06:12.566188 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:12 crc kubenswrapper[4820]: I0203 12:06:12.566228 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:12 crc kubenswrapper[4820]: I0203 12:06:12.566240 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:12 crc kubenswrapper[4820]: I0203 12:06:12.566257 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:12 crc kubenswrapper[4820]: I0203 12:06:12.566270 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:12Z","lastTransitionTime":"2026-02-03T12:06:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:06:12 crc kubenswrapper[4820]: I0203 12:06:12.669047 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:12 crc kubenswrapper[4820]: I0203 12:06:12.669113 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:12 crc kubenswrapper[4820]: I0203 12:06:12.669135 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:12 crc kubenswrapper[4820]: I0203 12:06:12.669164 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:12 crc kubenswrapper[4820]: I0203 12:06:12.669187 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:12Z","lastTransitionTime":"2026-02-03T12:06:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:12 crc kubenswrapper[4820]: I0203 12:06:12.772364 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:12 crc kubenswrapper[4820]: I0203 12:06:12.772404 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:12 crc kubenswrapper[4820]: I0203 12:06:12.772416 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:12 crc kubenswrapper[4820]: I0203 12:06:12.772430 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:12 crc kubenswrapper[4820]: I0203 12:06:12.772440 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:12Z","lastTransitionTime":"2026-02-03T12:06:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:12 crc kubenswrapper[4820]: I0203 12:06:12.875433 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:12 crc kubenswrapper[4820]: I0203 12:06:12.875494 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:12 crc kubenswrapper[4820]: I0203 12:06:12.875517 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:12 crc kubenswrapper[4820]: I0203 12:06:12.875543 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:12 crc kubenswrapper[4820]: I0203 12:06:12.875561 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:12Z","lastTransitionTime":"2026-02-03T12:06:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:06:12 crc kubenswrapper[4820]: I0203 12:06:12.978788 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:12 crc kubenswrapper[4820]: I0203 12:06:12.978867 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:12 crc kubenswrapper[4820]: I0203 12:06:12.978925 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:12 crc kubenswrapper[4820]: I0203 12:06:12.979008 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:12 crc kubenswrapper[4820]: I0203 12:06:12.979026 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:12Z","lastTransitionTime":"2026-02-03T12:06:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:13 crc kubenswrapper[4820]: I0203 12:06:13.081842 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:13 crc kubenswrapper[4820]: I0203 12:06:13.081946 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:13 crc kubenswrapper[4820]: I0203 12:06:13.081961 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:13 crc kubenswrapper[4820]: I0203 12:06:13.082002 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:13 crc kubenswrapper[4820]: I0203 12:06:13.082016 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:13Z","lastTransitionTime":"2026-02-03T12:06:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:13 crc kubenswrapper[4820]: I0203 12:06:13.141849 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 12:06:13 crc kubenswrapper[4820]: I0203 12:06:13.141944 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 12:06:13 crc kubenswrapper[4820]: E0203 12:06:13.142014 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 12:06:13 crc kubenswrapper[4820]: I0203 12:06:13.141849 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 12:06:13 crc kubenswrapper[4820]: I0203 12:06:13.142438 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 04:21:33.85693252 +0000 UTC Feb 03 12:06:13 crc kubenswrapper[4820]: I0203 12:06:13.142665 4820 scope.go:117] "RemoveContainer" containerID="9298001c19eb268d0ade02c0ee6d5f802cef36d79656754fcf76427fde0706fe" Feb 03 12:06:13 crc kubenswrapper[4820]: E0203 12:06:13.142672 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 12:06:13 crc kubenswrapper[4820]: E0203 12:06:13.142824 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-75mwm_openshift-ovn-kubernetes(cf99e305-aa5b-4171-94f6-1e64f20414dd)\"" pod="openshift-ovn-kubernetes/ovnkube-node-75mwm" podUID="cf99e305-aa5b-4171-94f6-1e64f20414dd" Feb 03 12:06:13 crc kubenswrapper[4820]: E0203 12:06:13.143074 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 12:06:13 crc kubenswrapper[4820]: I0203 12:06:13.181571 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=71.181552754 podStartE2EDuration="1m11.181552754s" podCreationTimestamp="2026-02-03 12:05:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:06:13.180107004 +0000 UTC m=+90.703182888" watchObservedRunningTime="2026-02-03 12:06:13.181552754 +0000 UTC m=+90.704628618" Feb 03 12:06:13 crc kubenswrapper[4820]: I0203 12:06:13.184639 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:13 crc kubenswrapper[4820]: I0203 12:06:13.184695 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:13 crc kubenswrapper[4820]: I0203 12:06:13.184705 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:13 crc kubenswrapper[4820]: I0203 12:06:13.184746 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:13 crc kubenswrapper[4820]: I0203 12:06:13.184758 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:13Z","lastTransitionTime":"2026-02-03T12:06:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:06:13 crc kubenswrapper[4820]: I0203 12:06:13.197865 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=11.197846395 podStartE2EDuration="11.197846395s" podCreationTimestamp="2026-02-03 12:06:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:06:13.197822145 +0000 UTC m=+90.720898019" watchObservedRunningTime="2026-02-03 12:06:13.197846395 +0000 UTC m=+90.720922259" Feb 03 12:06:13 crc kubenswrapper[4820]: I0203 12:06:13.211845 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=67.211824632 podStartE2EDuration="1m7.211824632s" podCreationTimestamp="2026-02-03 12:05:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:06:13.21172949 +0000 UTC m=+90.734805374" watchObservedRunningTime="2026-02-03 12:06:13.211824632 +0000 UTC m=+90.734900496" Feb 03 12:06:13 crc kubenswrapper[4820]: I0203 12:06:13.250619 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-b5qz9" podStartSLOduration=69.250600656 podStartE2EDuration="1m9.250600656s" podCreationTimestamp="2026-02-03 12:05:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:06:13.249243418 +0000 UTC m=+90.772319292" watchObservedRunningTime="2026-02-03 12:06:13.250600656 +0000 UTC m=+90.773676510" Feb 03 12:06:13 crc kubenswrapper[4820]: I0203 12:06:13.275545 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8bbpp" podStartSLOduration=69.275523146 podStartE2EDuration="1m9.275523146s" podCreationTimestamp="2026-02-03 12:05:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:06:13.262727221 +0000 UTC m=+90.785803085" watchObservedRunningTime="2026-02-03 12:06:13.275523146 +0000 UTC m=+90.798599020" Feb 03 12:06:13 crc kubenswrapper[4820]: I0203 12:06:13.288471 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:13 crc kubenswrapper[4820]: I0203 12:06:13.288519 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:13 crc kubenswrapper[4820]: I0203 12:06:13.288528 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:13 crc kubenswrapper[4820]: I0203 12:06:13.288545 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:13 crc kubenswrapper[4820]: I0203 12:06:13.288557 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:13Z","lastTransitionTime":"2026-02-03T12:06:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:06:13 crc kubenswrapper[4820]: I0203 12:06:13.338047 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=70.338023686 podStartE2EDuration="1m10.338023686s" podCreationTimestamp="2026-02-03 12:05:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:06:13.324333146 +0000 UTC m=+90.847409010" watchObservedRunningTime="2026-02-03 12:06:13.338023686 +0000 UTC m=+90.861099550" Feb 03 12:06:13 crc kubenswrapper[4820]: I0203 12:06:13.338269 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=43.338263592 podStartE2EDuration="43.338263592s" podCreationTimestamp="2026-02-03 12:05:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:06:13.337771899 +0000 UTC m=+90.860847783" watchObservedRunningTime="2026-02-03 12:06:13.338263592 +0000 UTC m=+90.861339466" Feb 03 12:06:13 crc kubenswrapper[4820]: I0203 12:06:13.390366 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:13 crc kubenswrapper[4820]: I0203 12:06:13.390427 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:13 crc kubenswrapper[4820]: I0203 12:06:13.390439 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:13 crc kubenswrapper[4820]: I0203 12:06:13.390454 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:13 crc kubenswrapper[4820]: I0203 12:06:13.390482 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:13Z","lastTransitionTime":"2026-02-03T12:06:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:06:13 crc kubenswrapper[4820]: I0203 12:06:13.393817 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-p5mx8" podStartSLOduration=70.3937986 podStartE2EDuration="1m10.3937986s" podCreationTimestamp="2026-02-03 12:05:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:06:13.393330527 +0000 UTC m=+90.916406391" watchObservedRunningTime="2026-02-03 12:06:13.3937986 +0000 UTC m=+90.916874464" Feb 03 12:06:13 crc kubenswrapper[4820]: I0203 12:06:13.422342 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podStartSLOduration=70.422323599 podStartE2EDuration="1m10.422323599s" podCreationTimestamp="2026-02-03 12:05:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:06:13.410228235 +0000 UTC m=+90.933304099" watchObservedRunningTime="2026-02-03 12:06:13.422323599 +0000 UTC m=+90.945399463" Feb 03 12:06:13 crc kubenswrapper[4820]: I0203 12:06:13.422439 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-dkfwm" podStartSLOduration=69.422436442 podStartE2EDuration="1m9.422436442s" podCreationTimestamp="2026-02-03 12:05:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:06:13.422021811 +0000 UTC m=+90.945097675" watchObservedRunningTime="2026-02-03 12:06:13.422436442 +0000 UTC m=+90.945512306" Feb 03 12:06:13 crc kubenswrapper[4820]: I0203 12:06:13.457069 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-z8xrk" podStartSLOduration=70.45703775 podStartE2EDuration="1m10.45703775s" podCreationTimestamp="2026-02-03 12:05:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:06:13.456449324 +0000 UTC m=+90.979525188" watchObservedRunningTime="2026-02-03 12:06:13.45703775 +0000 UTC m=+90.980113614" Feb 03 12:06:13 crc kubenswrapper[4820]: I0203 12:06:13.493090 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:13 crc kubenswrapper[4820]: I0203 12:06:13.493136 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:13 crc kubenswrapper[4820]: I0203 12:06:13.493144 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:13 crc kubenswrapper[4820]: I0203 12:06:13.493159 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:13 crc kubenswrapper[4820]: I0203 12:06:13.493169 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:13Z","lastTransitionTime":"2026-02-03T12:06:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 03 12:06:13 crc kubenswrapper[4820]: I0203 12:06:13.595817 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 12:06:13 crc kubenswrapper[4820]: I0203 12:06:13.595879 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 12:06:13 crc kubenswrapper[4820]: I0203 12:06:13.595924 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 12:06:13 crc kubenswrapper[4820]: I0203 12:06:13.595948 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 12:06:13 crc kubenswrapper[4820]: I0203 12:06:13.595965 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:13Z","lastTransitionTime":"2026-02-03T12:06:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 12:06:14 crc kubenswrapper[4820]: I0203 12:06:14.010043 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 12:06:14 crc kubenswrapper[4820]: I0203 12:06:14.010083 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 12:06:14 crc kubenswrapper[4820]: I0203 12:06:14.010092 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 12:06:14 crc kubenswrapper[4820]: I0203 12:06:14.010104 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 12:06:14 crc kubenswrapper[4820]: I0203 12:06:14.010114 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:14Z","lastTransitionTime":"2026-02-03T12:06:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 12:06:14 crc kubenswrapper[4820]: I0203 12:06:14.141806 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7vz6k"
Feb 03 12:06:14 crc kubenswrapper[4820]: E0203 12:06:14.141965 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7vz6k" podUID="6351e457-e601-4889-853c-560646bc4b43"
Feb 03 12:06:14 crc kubenswrapper[4820]: I0203 12:06:14.142900 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 12:44:53.563105005 +0000 UTC
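All of the entries above trace back to a single condition: the container runtime reports NetworkReady=false because /etc/kubernetes/cni/net.d/ contains no CNI network configuration yet, so the kubelet holds the node's Ready condition at False and refuses to create pod sandboxes for pods that need cluster networking. As a minimal illustration of the check involved (a sketch, not the runtime's actual implementation; only the directory path is taken from the log), a self-contained Go program that reports whether any CNI config files are present:

    // cnicheck.go - illustrative only: reports whether the CNI config
    // directory named in the log contains any configuration files.
    package main

    import (
    	"fmt"
    	"os"
    	"path/filepath"
    	"strings"
    )

    func main() {
    	dir := "/etc/kubernetes/cni/net.d" // path taken from the log above
    	entries, err := os.ReadDir(dir)
    	if err != nil {
    		fmt.Printf("cannot read %s: %v\n", dir, err)
    		return
    	}
    	var configs []string
    	for _, e := range entries {
    		// Extensions commonly accepted by CNI config loaders.
    		switch strings.ToLower(filepath.Ext(e.Name())) {
    		case ".conf", ".conflist", ".json":
    			configs = append(configs, e.Name())
    		}
    	}
    	if len(configs) == 0 {
    		// This is the state the log reports: NetworkReady=false.
    		fmt.Println("no CNI configuration file found; network plugin not ready")
    		return
    	}
    	fmt.Println("CNI configs:", configs)
    }

On this node the directory stays empty until the cluster's network operator (here, multus/OVN components) writes its configuration, at which point the runtime flips NetworkReady to true.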
Feb 03 12:06:15 crc kubenswrapper[4820]: I0203 12:06:15.041752 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 12:06:15 crc kubenswrapper[4820]: I0203 12:06:15.041786 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 12:06:15 crc kubenswrapper[4820]: I0203 12:06:15.041797 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 12:06:15 crc kubenswrapper[4820]: I0203 12:06:15.041813 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 12:06:15 crc kubenswrapper[4820]: I0203 12:06:15.041824 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:15Z","lastTransitionTime":"2026-02-03T12:06:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 12:06:15 crc kubenswrapper[4820]: I0203 12:06:15.142203 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 03 12:06:15 crc kubenswrapper[4820]: I0203 12:06:15.142254 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 03 12:06:15 crc kubenswrapper[4820]: E0203 12:06:15.142325 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 03 12:06:15 crc kubenswrapper[4820]: I0203 12:06:15.142369 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 03 12:06:15 crc kubenswrapper[4820]: E0203 12:06:15.142393 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 03 12:06:15 crc kubenswrapper[4820]: E0203 12:06:15.142509 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 03 12:06:15 crc kubenswrapper[4820]: I0203 12:06:15.143059 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 04:08:53.210339948 +0000 UTC
Feb 03 12:06:16 crc kubenswrapper[4820]: I0203 12:06:16.068093 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 12:06:16 crc kubenswrapper[4820]: I0203 12:06:16.068151 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 12:06:16 crc kubenswrapper[4820]: I0203 12:06:16.068174 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 12:06:16 crc kubenswrapper[4820]: I0203 12:06:16.068219 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 12:06:16 crc kubenswrapper[4820]: I0203 12:06:16.068245 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:16Z","lastTransitionTime":"2026-02-03T12:06:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 12:06:16 crc kubenswrapper[4820]: I0203 12:06:16.142546 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7vz6k"
Feb 03 12:06:16 crc kubenswrapper[4820]: E0203 12:06:16.142734 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7vz6k" podUID="6351e457-e601-4889-853c-560646bc4b43"
Feb 03 12:06:16 crc kubenswrapper[4820]: I0203 12:06:16.143494 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 18:47:56.137172384 +0000 UTC
Feb 03 12:06:17 crc kubenswrapper[4820]: I0203 12:06:17.101877 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 12:06:17 crc kubenswrapper[4820]: I0203 12:06:17.101942 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 12:06:17 crc kubenswrapper[4820]: I0203 12:06:17.101950 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 12:06:17 crc kubenswrapper[4820]: I0203 12:06:17.101964 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 12:06:17 crc kubenswrapper[4820]: I0203 12:06:17.101980 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:17Z","lastTransitionTime":"2026-02-03T12:06:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 12:06:17 crc kubenswrapper[4820]: I0203 12:06:17.142100 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 03 12:06:17 crc kubenswrapper[4820]: I0203 12:06:17.142176 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 03 12:06:17 crc kubenswrapper[4820]: E0203 12:06:17.142235 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 03 12:06:17 crc kubenswrapper[4820]: E0203 12:06:17.142310 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 03 12:06:17 crc kubenswrapper[4820]: I0203 12:06:17.142362 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 03 12:06:17 crc kubenswrapper[4820]: E0203 12:06:17.142416 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 03 12:06:17 crc kubenswrapper[4820]: I0203 12:06:17.143756 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 15:13:56.190574497 +0000 UTC
Feb 03 12:06:18 crc kubenswrapper[4820]: I0203 12:06:18.026615 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 12:06:18 crc kubenswrapper[4820]: I0203 12:06:18.026679 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 12:06:18 crc kubenswrapper[4820]: I0203 12:06:18.026691 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 12:06:18 crc kubenswrapper[4820]: I0203 12:06:18.026709 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 12:06:18 crc kubenswrapper[4820]: I0203 12:06:18.026721 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:18Z","lastTransitionTime":"2026-02-03T12:06:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 12:06:18 crc kubenswrapper[4820]: I0203 12:06:18.142299 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7vz6k"
Feb 03 12:06:18 crc kubenswrapper[4820]: E0203 12:06:18.142578 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7vz6k" podUID="6351e457-e601-4889-853c-560646bc4b43"
Feb 03 12:06:18 crc kubenswrapper[4820]: I0203 12:06:18.144160 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 16:00:30.426285719 +0000 UTC
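Throughout this window the same three workloads (network-metrics-daemon-7vz6k, network-check-source/-target, networking-console-plugin) fail sandbox creation roughly once per second; the kubelet's pod workers log "Error syncing pod, skipping" and requeue each pod until the network plugin comes up, while the node-status heartbeat repeats about every 100 ms. To quantify that churn when reading a log like this one, a small Go sketch (a reader-side utility, not part of the kubelet; the regexp targets the pod="..." field format shown above) that tallies sync errors per pod from stdin:

    // synccount.go - counts "Error syncing pod" occurrences per pod in a
    // kubelet log fed on stdin, e.g.: go run synccount.go < kubelet.log
    package main

    import (
    	"bufio"
    	"fmt"
    	"os"
    	"regexp"
    	"strings"
    )

    func main() {
    	podRE := regexp.MustCompile(`pod="([^"]+)"`)
    	counts := map[string]int{}
    	sc := bufio.NewScanner(os.Stdin)
    	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // allow long log lines
    	for sc.Scan() {
    		line := sc.Text()
    		if !strings.Contains(line, "Error syncing pod") {
    			continue
    		}
    		if m := podRE.FindStringSubmatch(line); m != nil {
    			counts[m[1]]++
    		}
    	}
    	for pod, n := range counts {
    		fmt.Printf("%6d  %s\n", n, pod)
    	}
    }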
Has your network provider started?"} Feb 03 12:06:19 crc kubenswrapper[4820]: I0203 12:06:19.053665 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:19 crc kubenswrapper[4820]: I0203 12:06:19.053709 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:19 crc kubenswrapper[4820]: I0203 12:06:19.053721 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:19 crc kubenswrapper[4820]: I0203 12:06:19.053740 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:19 crc kubenswrapper[4820]: I0203 12:06:19.053752 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:19Z","lastTransitionTime":"2026-02-03T12:06:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:19 crc kubenswrapper[4820]: I0203 12:06:19.142401 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 12:06:19 crc kubenswrapper[4820]: I0203 12:06:19.142483 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 12:06:19 crc kubenswrapper[4820]: E0203 12:06:19.142543 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 12:06:19 crc kubenswrapper[4820]: I0203 12:06:19.142577 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 12:06:19 crc kubenswrapper[4820]: E0203 12:06:19.142622 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 12:06:19 crc kubenswrapper[4820]: E0203 12:06:19.142712 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 12:06:19 crc kubenswrapper[4820]: I0203 12:06:19.144278 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 21:12:36.925181468 +0000 UTC Feb 03 12:06:19 crc kubenswrapper[4820]: I0203 12:06:19.155777 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:19 crc kubenswrapper[4820]: I0203 12:06:19.155798 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:19 crc kubenswrapper[4820]: I0203 12:06:19.155806 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:19 crc kubenswrapper[4820]: I0203 12:06:19.155843 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:19 crc kubenswrapper[4820]: I0203 12:06:19.155854 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:19Z","lastTransitionTime":"2026-02-03T12:06:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:19 crc kubenswrapper[4820]: I0203 12:06:19.259065 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:19 crc kubenswrapper[4820]: I0203 12:06:19.259369 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:19 crc kubenswrapper[4820]: I0203 12:06:19.259453 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:19 crc kubenswrapper[4820]: I0203 12:06:19.259555 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:19 crc kubenswrapper[4820]: I0203 12:06:19.259656 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:19Z","lastTransitionTime":"2026-02-03T12:06:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:06:19 crc kubenswrapper[4820]: I0203 12:06:19.362609 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:19 crc kubenswrapper[4820]: I0203 12:06:19.362651 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:19 crc kubenswrapper[4820]: I0203 12:06:19.362661 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:19 crc kubenswrapper[4820]: I0203 12:06:19.362674 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:19 crc kubenswrapper[4820]: I0203 12:06:19.362684 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:19Z","lastTransitionTime":"2026-02-03T12:06:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:19 crc kubenswrapper[4820]: I0203 12:06:19.465865 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:19 crc kubenswrapper[4820]: I0203 12:06:19.465918 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:19 crc kubenswrapper[4820]: I0203 12:06:19.465929 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:19 crc kubenswrapper[4820]: I0203 12:06:19.465943 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:19 crc kubenswrapper[4820]: I0203 12:06:19.465952 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:19Z","lastTransitionTime":"2026-02-03T12:06:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:19 crc kubenswrapper[4820]: I0203 12:06:19.568405 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:19 crc kubenswrapper[4820]: I0203 12:06:19.568448 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:19 crc kubenswrapper[4820]: I0203 12:06:19.568460 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:19 crc kubenswrapper[4820]: I0203 12:06:19.568475 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:19 crc kubenswrapper[4820]: I0203 12:06:19.568486 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:19Z","lastTransitionTime":"2026-02-03T12:06:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:06:19 crc kubenswrapper[4820]: I0203 12:06:19.670944 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:19 crc kubenswrapper[4820]: I0203 12:06:19.671262 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:19 crc kubenswrapper[4820]: I0203 12:06:19.671411 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:19 crc kubenswrapper[4820]: I0203 12:06:19.671553 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:19 crc kubenswrapper[4820]: I0203 12:06:19.671639 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:19Z","lastTransitionTime":"2026-02-03T12:06:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:19 crc kubenswrapper[4820]: I0203 12:06:19.774268 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:19 crc kubenswrapper[4820]: I0203 12:06:19.774319 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:19 crc kubenswrapper[4820]: I0203 12:06:19.774333 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:19 crc kubenswrapper[4820]: I0203 12:06:19.774351 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:19 crc kubenswrapper[4820]: I0203 12:06:19.774366 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:19Z","lastTransitionTime":"2026-02-03T12:06:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:19 crc kubenswrapper[4820]: I0203 12:06:19.876855 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:19 crc kubenswrapper[4820]: I0203 12:06:19.876921 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:19 crc kubenswrapper[4820]: I0203 12:06:19.876941 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:19 crc kubenswrapper[4820]: I0203 12:06:19.876957 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:19 crc kubenswrapper[4820]: I0203 12:06:19.876967 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:19Z","lastTransitionTime":"2026-02-03T12:06:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:06:19 crc kubenswrapper[4820]: I0203 12:06:19.979383 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:19 crc kubenswrapper[4820]: I0203 12:06:19.979425 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:19 crc kubenswrapper[4820]: I0203 12:06:19.979437 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:19 crc kubenswrapper[4820]: I0203 12:06:19.979452 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:19 crc kubenswrapper[4820]: I0203 12:06:19.979464 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:19Z","lastTransitionTime":"2026-02-03T12:06:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:20 crc kubenswrapper[4820]: I0203 12:06:20.081406 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:20 crc kubenswrapper[4820]: I0203 12:06:20.081442 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:20 crc kubenswrapper[4820]: I0203 12:06:20.081454 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:20 crc kubenswrapper[4820]: I0203 12:06:20.081468 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:20 crc kubenswrapper[4820]: I0203 12:06:20.081478 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:20Z","lastTransitionTime":"2026-02-03T12:06:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:20 crc kubenswrapper[4820]: I0203 12:06:20.142271 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7vz6k" Feb 03 12:06:20 crc kubenswrapper[4820]: E0203 12:06:20.142435 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-7vz6k" podUID="6351e457-e601-4889-853c-560646bc4b43" Feb 03 12:06:20 crc kubenswrapper[4820]: I0203 12:06:20.144436 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 06:41:54.41010637 +0000 UTC Feb 03 12:06:20 crc kubenswrapper[4820]: I0203 12:06:20.183340 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:20 crc kubenswrapper[4820]: I0203 12:06:20.183403 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:20 crc kubenswrapper[4820]: I0203 12:06:20.183414 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:20 crc kubenswrapper[4820]: I0203 12:06:20.183430 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:20 crc kubenswrapper[4820]: I0203 12:06:20.183441 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:20Z","lastTransitionTime":"2026-02-03T12:06:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:20 crc kubenswrapper[4820]: I0203 12:06:20.286102 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:20 crc kubenswrapper[4820]: I0203 12:06:20.286134 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:20 crc kubenswrapper[4820]: I0203 12:06:20.286142 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:20 crc kubenswrapper[4820]: I0203 12:06:20.286155 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:20 crc kubenswrapper[4820]: I0203 12:06:20.286163 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:20Z","lastTransitionTime":"2026-02-03T12:06:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:06:20 crc kubenswrapper[4820]: I0203 12:06:20.389350 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:20 crc kubenswrapper[4820]: I0203 12:06:20.389426 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:20 crc kubenswrapper[4820]: I0203 12:06:20.389455 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:20 crc kubenswrapper[4820]: I0203 12:06:20.389481 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:20 crc kubenswrapper[4820]: I0203 12:06:20.389498 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:20Z","lastTransitionTime":"2026-02-03T12:06:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:20 crc kubenswrapper[4820]: I0203 12:06:20.492747 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:20 crc kubenswrapper[4820]: I0203 12:06:20.492794 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:20 crc kubenswrapper[4820]: I0203 12:06:20.492808 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:20 crc kubenswrapper[4820]: I0203 12:06:20.492824 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:20 crc kubenswrapper[4820]: I0203 12:06:20.492835 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:20Z","lastTransitionTime":"2026-02-03T12:06:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:20 crc kubenswrapper[4820]: I0203 12:06:20.548123 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:20 crc kubenswrapper[4820]: I0203 12:06:20.548165 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:20 crc kubenswrapper[4820]: I0203 12:06:20.548179 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:20 crc kubenswrapper[4820]: I0203 12:06:20.548198 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:20 crc kubenswrapper[4820]: I0203 12:06:20.548229 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:20Z","lastTransitionTime":"2026-02-03T12:06:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:06:20 crc kubenswrapper[4820]: I0203 12:06:20.587206 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-srtw5"] Feb 03 12:06:20 crc kubenswrapper[4820]: I0203 12:06:20.587640 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-srtw5" Feb 03 12:06:20 crc kubenswrapper[4820]: I0203 12:06:20.589756 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Feb 03 12:06:20 crc kubenswrapper[4820]: I0203 12:06:20.590002 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 03 12:06:20 crc kubenswrapper[4820]: I0203 12:06:20.590320 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 03 12:06:20 crc kubenswrapper[4820]: I0203 12:06:20.590538 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 03 12:06:20 crc kubenswrapper[4820]: I0203 12:06:20.651999 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/e02a2ee3-c171-44cf-897e-d7c694954d90-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-srtw5\" (UID: \"e02a2ee3-c171-44cf-897e-d7c694954d90\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-srtw5" Feb 03 12:06:20 crc kubenswrapper[4820]: I0203 12:06:20.652059 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e02a2ee3-c171-44cf-897e-d7c694954d90-service-ca\") pod \"cluster-version-operator-5c965bbfc6-srtw5\" (UID: \"e02a2ee3-c171-44cf-897e-d7c694954d90\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-srtw5" Feb 03 12:06:20 crc kubenswrapper[4820]: I0203 12:06:20.652091 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e02a2ee3-c171-44cf-897e-d7c694954d90-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-srtw5\" (UID: \"e02a2ee3-c171-44cf-897e-d7c694954d90\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-srtw5" Feb 03 12:06:20 crc kubenswrapper[4820]: I0203 12:06:20.652126 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/e02a2ee3-c171-44cf-897e-d7c694954d90-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-srtw5\" (UID: \"e02a2ee3-c171-44cf-897e-d7c694954d90\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-srtw5" Feb 03 12:06:20 crc kubenswrapper[4820]: I0203 12:06:20.652182 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e02a2ee3-c171-44cf-897e-d7c694954d90-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-srtw5\" (UID: \"e02a2ee3-c171-44cf-897e-d7c694954d90\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-srtw5" Feb 03 12:06:20 crc kubenswrapper[4820]: I0203 12:06:20.753297 4820 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e02a2ee3-c171-44cf-897e-d7c694954d90-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-srtw5\" (UID: \"e02a2ee3-c171-44cf-897e-d7c694954d90\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-srtw5" Feb 03 12:06:20 crc kubenswrapper[4820]: I0203 12:06:20.753390 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/e02a2ee3-c171-44cf-897e-d7c694954d90-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-srtw5\" (UID: \"e02a2ee3-c171-44cf-897e-d7c694954d90\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-srtw5" Feb 03 12:06:20 crc kubenswrapper[4820]: I0203 12:06:20.753417 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e02a2ee3-c171-44cf-897e-d7c694954d90-service-ca\") pod \"cluster-version-operator-5c965bbfc6-srtw5\" (UID: \"e02a2ee3-c171-44cf-897e-d7c694954d90\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-srtw5" Feb 03 12:06:20 crc kubenswrapper[4820]: I0203 12:06:20.753447 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e02a2ee3-c171-44cf-897e-d7c694954d90-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-srtw5\" (UID: \"e02a2ee3-c171-44cf-897e-d7c694954d90\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-srtw5" Feb 03 12:06:20 crc kubenswrapper[4820]: I0203 12:06:20.753468 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/e02a2ee3-c171-44cf-897e-d7c694954d90-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-srtw5\" (UID: \"e02a2ee3-c171-44cf-897e-d7c694954d90\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-srtw5" Feb 03 12:06:20 crc kubenswrapper[4820]: I0203 12:06:20.753499 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/e02a2ee3-c171-44cf-897e-d7c694954d90-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-srtw5\" (UID: \"e02a2ee3-c171-44cf-897e-d7c694954d90\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-srtw5" Feb 03 12:06:20 crc kubenswrapper[4820]: I0203 12:06:20.753593 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/e02a2ee3-c171-44cf-897e-d7c694954d90-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-srtw5\" (UID: \"e02a2ee3-c171-44cf-897e-d7c694954d90\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-srtw5" Feb 03 12:06:20 crc kubenswrapper[4820]: I0203 12:06:20.755140 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e02a2ee3-c171-44cf-897e-d7c694954d90-service-ca\") pod \"cluster-version-operator-5c965bbfc6-srtw5\" (UID: \"e02a2ee3-c171-44cf-897e-d7c694954d90\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-srtw5" Feb 03 12:06:20 crc kubenswrapper[4820]: I0203 12:06:20.760611 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/e02a2ee3-c171-44cf-897e-d7c694954d90-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-srtw5\" (UID: \"e02a2ee3-c171-44cf-897e-d7c694954d90\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-srtw5" Feb 03 12:06:20 crc kubenswrapper[4820]: I0203 12:06:20.770796 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e02a2ee3-c171-44cf-897e-d7c694954d90-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-srtw5\" (UID: \"e02a2ee3-c171-44cf-897e-d7c694954d90\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-srtw5" Feb 03 12:06:20 crc kubenswrapper[4820]: I0203 12:06:20.904722 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-srtw5" Feb 03 12:06:21 crc kubenswrapper[4820]: I0203 12:06:21.142453 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 12:06:21 crc kubenswrapper[4820]: E0203 12:06:21.142863 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 12:06:21 crc kubenswrapper[4820]: I0203 12:06:21.142563 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 12:06:21 crc kubenswrapper[4820]: I0203 12:06:21.142492 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 12:06:21 crc kubenswrapper[4820]: E0203 12:06:21.142980 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 12:06:21 crc kubenswrapper[4820]: E0203 12:06:21.143038 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 12:06:21 crc kubenswrapper[4820]: I0203 12:06:21.144712 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 06:01:12.857456542 +0000 UTC Feb 03 12:06:21 crc kubenswrapper[4820]: I0203 12:06:21.144764 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Feb 03 12:06:21 crc kubenswrapper[4820]: I0203 12:06:21.151882 4820 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Feb 03 12:06:21 crc kubenswrapper[4820]: I0203 12:06:21.629880 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-srtw5" event={"ID":"e02a2ee3-c171-44cf-897e-d7c694954d90","Type":"ContainerStarted","Data":"b1656a6f3aa55a4b45ce494844db260419f4ca4729581119cf240d52e805e3fd"} Feb 03 12:06:21 crc kubenswrapper[4820]: I0203 12:06:21.629944 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-srtw5" event={"ID":"e02a2ee3-c171-44cf-897e-d7c694954d90","Type":"ContainerStarted","Data":"b5674daf767eaa73e79bf758548efc557dd4a6c6bf80a2ec38f90418232b8768"} Feb 03 12:06:21 crc kubenswrapper[4820]: I0203 12:06:21.642242 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-srtw5" podStartSLOduration=78.642227425 podStartE2EDuration="1m18.642227425s" podCreationTimestamp="2026-02-03 12:05:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:06:21.641711371 +0000 UTC m=+99.164787245" watchObservedRunningTime="2026-02-03 12:06:21.642227425 +0000 UTC m=+99.165303289" Feb 03 12:06:21 crc kubenswrapper[4820]: I0203 12:06:21.966687 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6351e457-e601-4889-853c-560646bc4b43-metrics-certs\") pod \"network-metrics-daemon-7vz6k\" (UID: \"6351e457-e601-4889-853c-560646bc4b43\") " pod="openshift-multus/network-metrics-daemon-7vz6k" Feb 03 12:06:21 crc kubenswrapper[4820]: E0203 12:06:21.966857 4820 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 03 12:06:21 crc kubenswrapper[4820]: E0203 12:06:21.966931 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6351e457-e601-4889-853c-560646bc4b43-metrics-certs podName:6351e457-e601-4889-853c-560646bc4b43 nodeName:}" failed. No retries permitted until 2026-02-03 12:07:25.966916543 +0000 UTC m=+163.489992397 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6351e457-e601-4889-853c-560646bc4b43-metrics-certs") pod "network-metrics-daemon-7vz6k" (UID: "6351e457-e601-4889-853c-560646bc4b43") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 03 12:06:22 crc kubenswrapper[4820]: I0203 12:06:22.142109 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-7vz6k" Feb 03 12:06:22 crc kubenswrapper[4820]: E0203 12:06:22.142248 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7vz6k" podUID="6351e457-e601-4889-853c-560646bc4b43" Feb 03 12:06:23 crc kubenswrapper[4820]: I0203 12:06:23.169042 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 12:06:23 crc kubenswrapper[4820]: I0203 12:06:23.169073 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 12:06:23 crc kubenswrapper[4820]: E0203 12:06:23.169185 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 12:06:23 crc kubenswrapper[4820]: I0203 12:06:23.169252 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 12:06:23 crc kubenswrapper[4820]: E0203 12:06:23.169372 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 12:06:23 crc kubenswrapper[4820]: E0203 12:06:23.169596 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 12:06:24 crc kubenswrapper[4820]: I0203 12:06:24.141959 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7vz6k" Feb 03 12:06:24 crc kubenswrapper[4820]: E0203 12:06:24.142244 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7vz6k" podUID="6351e457-e601-4889-853c-560646bc4b43" Feb 03 12:06:25 crc kubenswrapper[4820]: I0203 12:06:25.141810 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 12:06:25 crc kubenswrapper[4820]: I0203 12:06:25.141810 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 12:06:25 crc kubenswrapper[4820]: I0203 12:06:25.142479 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 12:06:25 crc kubenswrapper[4820]: E0203 12:06:25.142760 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 12:06:25 crc kubenswrapper[4820]: E0203 12:06:25.142776 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 12:06:25 crc kubenswrapper[4820]: E0203 12:06:25.143021 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 12:06:26 crc kubenswrapper[4820]: I0203 12:06:26.142363 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7vz6k" Feb 03 12:06:26 crc kubenswrapper[4820]: E0203 12:06:26.142524 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7vz6k" podUID="6351e457-e601-4889-853c-560646bc4b43" Feb 03 12:06:26 crc kubenswrapper[4820]: I0203 12:06:26.143143 4820 scope.go:117] "RemoveContainer" containerID="9298001c19eb268d0ade02c0ee6d5f802cef36d79656754fcf76427fde0706fe" Feb 03 12:06:26 crc kubenswrapper[4820]: E0203 12:06:26.143348 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-75mwm_openshift-ovn-kubernetes(cf99e305-aa5b-4171-94f6-1e64f20414dd)\"" pod="openshift-ovn-kubernetes/ovnkube-node-75mwm" podUID="cf99e305-aa5b-4171-94f6-1e64f20414dd" Feb 03 12:06:27 crc kubenswrapper[4820]: I0203 12:06:27.142472 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 12:06:27 crc kubenswrapper[4820]: I0203 12:06:27.142563 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 12:06:27 crc kubenswrapper[4820]: E0203 12:06:27.142614 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 12:06:27 crc kubenswrapper[4820]: I0203 12:06:27.142665 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 12:06:27 crc kubenswrapper[4820]: E0203 12:06:27.142743 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 12:06:27 crc kubenswrapper[4820]: E0203 12:06:27.143437 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 12:06:28 crc kubenswrapper[4820]: I0203 12:06:28.142564 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7vz6k" Feb 03 12:06:28 crc kubenswrapper[4820]: E0203 12:06:28.142697 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7vz6k" podUID="6351e457-e601-4889-853c-560646bc4b43" Feb 03 12:06:29 crc kubenswrapper[4820]: I0203 12:06:29.142137 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 12:06:29 crc kubenswrapper[4820]: I0203 12:06:29.142226 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 12:06:29 crc kubenswrapper[4820]: E0203 12:06:29.142266 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 12:06:29 crc kubenswrapper[4820]: E0203 12:06:29.142380 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 12:06:29 crc kubenswrapper[4820]: I0203 12:06:29.142522 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 12:06:29 crc kubenswrapper[4820]: E0203 12:06:29.142571 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 12:06:30 crc kubenswrapper[4820]: I0203 12:06:30.142204 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7vz6k" Feb 03 12:06:30 crc kubenswrapper[4820]: E0203 12:06:30.142419 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7vz6k" podUID="6351e457-e601-4889-853c-560646bc4b43" Feb 03 12:06:31 crc kubenswrapper[4820]: I0203 12:06:31.142464 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 12:06:31 crc kubenswrapper[4820]: E0203 12:06:31.142592 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 12:06:31 crc kubenswrapper[4820]: I0203 12:06:31.142646 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 12:06:31 crc kubenswrapper[4820]: I0203 12:06:31.142669 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 12:06:31 crc kubenswrapper[4820]: E0203 12:06:31.142795 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 12:06:31 crc kubenswrapper[4820]: E0203 12:06:31.143001 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 12:06:32 crc kubenswrapper[4820]: I0203 12:06:32.142549 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7vz6k" Feb 03 12:06:32 crc kubenswrapper[4820]: E0203 12:06:32.142836 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7vz6k" podUID="6351e457-e601-4889-853c-560646bc4b43" Feb 03 12:06:33 crc kubenswrapper[4820]: I0203 12:06:33.141961 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 12:06:33 crc kubenswrapper[4820]: I0203 12:06:33.142069 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 12:06:33 crc kubenswrapper[4820]: I0203 12:06:33.141874 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 12:06:33 crc kubenswrapper[4820]: E0203 12:06:33.143487 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 12:06:33 crc kubenswrapper[4820]: E0203 12:06:33.144324 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 12:06:33 crc kubenswrapper[4820]: E0203 12:06:33.144419 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 12:06:34 crc kubenswrapper[4820]: I0203 12:06:34.142201 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-7vz6k" Feb 03 12:06:34 crc kubenswrapper[4820]: E0203 12:06:34.142562 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7vz6k" podUID="6351e457-e601-4889-853c-560646bc4b43" Feb 03 12:06:35 crc kubenswrapper[4820]: I0203 12:06:35.142106 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 12:06:35 crc kubenswrapper[4820]: I0203 12:06:35.142128 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 12:06:35 crc kubenswrapper[4820]: I0203 12:06:35.142217 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 12:06:35 crc kubenswrapper[4820]: E0203 12:06:35.142313 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 12:06:35 crc kubenswrapper[4820]: E0203 12:06:35.142412 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 12:06:35 crc kubenswrapper[4820]: E0203 12:06:35.142661 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 12:06:36 crc kubenswrapper[4820]: I0203 12:06:36.141780 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7vz6k" Feb 03 12:06:36 crc kubenswrapper[4820]: E0203 12:06:36.142086 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7vz6k" podUID="6351e457-e601-4889-853c-560646bc4b43" Feb 03 12:06:37 crc kubenswrapper[4820]: I0203 12:06:37.142277 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 12:06:37 crc kubenswrapper[4820]: E0203 12:06:37.142378 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 12:06:37 crc kubenswrapper[4820]: I0203 12:06:37.142391 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 12:06:37 crc kubenswrapper[4820]: I0203 12:06:37.142440 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 12:06:37 crc kubenswrapper[4820]: E0203 12:06:37.142621 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 12:06:37 crc kubenswrapper[4820]: E0203 12:06:37.142651 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 12:06:38 crc kubenswrapper[4820]: I0203 12:06:38.142361 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7vz6k" Feb 03 12:06:38 crc kubenswrapper[4820]: E0203 12:06:38.142807 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-7vz6k" podUID="6351e457-e601-4889-853c-560646bc4b43" Feb 03 12:06:38 crc kubenswrapper[4820]: I0203 12:06:38.143167 4820 scope.go:117] "RemoveContainer" containerID="9298001c19eb268d0ade02c0ee6d5f802cef36d79656754fcf76427fde0706fe" Feb 03 12:06:38 crc kubenswrapper[4820]: E0203 12:06:38.143356 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-75mwm_openshift-ovn-kubernetes(cf99e305-aa5b-4171-94f6-1e64f20414dd)\"" pod="openshift-ovn-kubernetes/ovnkube-node-75mwm" podUID="cf99e305-aa5b-4171-94f6-1e64f20414dd" Feb 03 12:06:38 crc kubenswrapper[4820]: I0203 12:06:38.691059 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-dkfwm_c6da6dd5-2847-482b-adc1-d82ead0af3e9/kube-multus/1.log" Feb 03 12:06:38 crc kubenswrapper[4820]: I0203 12:06:38.691469 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-dkfwm_c6da6dd5-2847-482b-adc1-d82ead0af3e9/kube-multus/0.log" Feb 03 12:06:38 crc kubenswrapper[4820]: I0203 12:06:38.691503 4820 generic.go:334] "Generic (PLEG): container finished" podID="c6da6dd5-2847-482b-adc1-d82ead0af3e9" containerID="7340b2bec2b0d79865501f1917315f355972bcfc92d098978899c57df3f93454" exitCode=1 Feb 03 12:06:38 crc kubenswrapper[4820]: I0203 12:06:38.691532 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-dkfwm" event={"ID":"c6da6dd5-2847-482b-adc1-d82ead0af3e9","Type":"ContainerDied","Data":"7340b2bec2b0d79865501f1917315f355972bcfc92d098978899c57df3f93454"} Feb 03 12:06:38 crc kubenswrapper[4820]: I0203 12:06:38.691580 4820 scope.go:117] "RemoveContainer" containerID="b50e93f1825fc82dc2e0fc70a417d2e4412db367319656bfcea8fa9daf9fe152" Feb 03 12:06:38 crc kubenswrapper[4820]: I0203 12:06:38.692277 4820 scope.go:117] "RemoveContainer" containerID="7340b2bec2b0d79865501f1917315f355972bcfc92d098978899c57df3f93454" Feb 03 12:06:38 crc kubenswrapper[4820]: E0203 12:06:38.692449 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-dkfwm_openshift-multus(c6da6dd5-2847-482b-adc1-d82ead0af3e9)\"" pod="openshift-multus/multus-dkfwm" podUID="c6da6dd5-2847-482b-adc1-d82ead0af3e9" Feb 03 12:06:39 crc kubenswrapper[4820]: I0203 12:06:39.143119 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 12:06:39 crc kubenswrapper[4820]: I0203 12:06:39.143240 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 12:06:39 crc kubenswrapper[4820]: E0203 12:06:39.143277 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 12:06:39 crc kubenswrapper[4820]: E0203 12:06:39.143470 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 12:06:39 crc kubenswrapper[4820]: I0203 12:06:39.143500 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 12:06:39 crc kubenswrapper[4820]: E0203 12:06:39.143638 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 12:06:39 crc kubenswrapper[4820]: I0203 12:06:39.696687 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-dkfwm_c6da6dd5-2847-482b-adc1-d82ead0af3e9/kube-multus/1.log" Feb 03 12:06:40 crc kubenswrapper[4820]: I0203 12:06:40.141620 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7vz6k" Feb 03 12:06:40 crc kubenswrapper[4820]: E0203 12:06:40.141834 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7vz6k" podUID="6351e457-e601-4889-853c-560646bc4b43" Feb 03 12:06:41 crc kubenswrapper[4820]: I0203 12:06:41.142064 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 12:06:41 crc kubenswrapper[4820]: E0203 12:06:41.142834 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 12:06:41 crc kubenswrapper[4820]: I0203 12:06:41.142334 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 12:06:41 crc kubenswrapper[4820]: E0203 12:06:41.143107 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 12:06:41 crc kubenswrapper[4820]: I0203 12:06:41.142318 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 12:06:41 crc kubenswrapper[4820]: E0203 12:06:41.143318 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 12:06:42 crc kubenswrapper[4820]: I0203 12:06:42.142318 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7vz6k" Feb 03 12:06:42 crc kubenswrapper[4820]: E0203 12:06:42.142513 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7vz6k" podUID="6351e457-e601-4889-853c-560646bc4b43" Feb 03 12:06:43 crc kubenswrapper[4820]: E0203 12:06:43.113192 4820 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Feb 03 12:06:43 crc kubenswrapper[4820]: I0203 12:06:43.142167 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 12:06:43 crc kubenswrapper[4820]: I0203 12:06:43.142241 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 12:06:43 crc kubenswrapper[4820]: E0203 12:06:43.142276 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 12:06:43 crc kubenswrapper[4820]: E0203 12:06:43.142415 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 12:06:43 crc kubenswrapper[4820]: I0203 12:06:43.142189 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 12:06:43 crc kubenswrapper[4820]: E0203 12:06:43.142532 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 12:06:43 crc kubenswrapper[4820]: E0203 12:06:43.218574 4820 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 03 12:06:44 crc kubenswrapper[4820]: I0203 12:06:44.142062 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7vz6k" Feb 03 12:06:44 crc kubenswrapper[4820]: E0203 12:06:44.142228 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7vz6k" podUID="6351e457-e601-4889-853c-560646bc4b43" Feb 03 12:06:45 crc kubenswrapper[4820]: I0203 12:06:45.142519 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 12:06:45 crc kubenswrapper[4820]: I0203 12:06:45.142552 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 12:06:45 crc kubenswrapper[4820]: I0203 12:06:45.142642 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 12:06:45 crc kubenswrapper[4820]: E0203 12:06:45.142719 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 12:06:45 crc kubenswrapper[4820]: E0203 12:06:45.142830 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 12:06:45 crc kubenswrapper[4820]: E0203 12:06:45.142983 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 12:06:46 crc kubenswrapper[4820]: I0203 12:06:46.142263 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-7vz6k" Feb 03 12:06:46 crc kubenswrapper[4820]: E0203 12:06:46.142445 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7vz6k" podUID="6351e457-e601-4889-853c-560646bc4b43" Feb 03 12:06:47 crc kubenswrapper[4820]: I0203 12:06:47.141873 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 12:06:47 crc kubenswrapper[4820]: I0203 12:06:47.141953 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 12:06:47 crc kubenswrapper[4820]: I0203 12:06:47.141921 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 12:06:47 crc kubenswrapper[4820]: E0203 12:06:47.142115 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 12:06:47 crc kubenswrapper[4820]: E0203 12:06:47.142020 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 12:06:47 crc kubenswrapper[4820]: E0203 12:06:47.142210 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 12:06:48 crc kubenswrapper[4820]: I0203 12:06:48.142187 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7vz6k" Feb 03 12:06:48 crc kubenswrapper[4820]: E0203 12:06:48.142439 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7vz6k" podUID="6351e457-e601-4889-853c-560646bc4b43" Feb 03 12:06:48 crc kubenswrapper[4820]: E0203 12:06:48.219346 4820 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" Feb 03 12:06:49 crc kubenswrapper[4820]: I0203 12:06:49.142290 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 12:06:49 crc kubenswrapper[4820]: I0203 12:06:49.142333 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 12:06:49 crc kubenswrapper[4820]: I0203 12:06:49.142686 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 12:06:49 crc kubenswrapper[4820]: E0203 12:06:49.142809 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 12:06:49 crc kubenswrapper[4820]: E0203 12:06:49.142992 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 12:06:49 crc kubenswrapper[4820]: E0203 12:06:49.143148 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 12:06:49 crc kubenswrapper[4820]: I0203 12:06:49.143263 4820 scope.go:117] "RemoveContainer" containerID="9298001c19eb268d0ade02c0ee6d5f802cef36d79656754fcf76427fde0706fe" Feb 03 12:06:49 crc kubenswrapper[4820]: I0203 12:06:49.742530 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-75mwm_cf99e305-aa5b-4171-94f6-1e64f20414dd/ovnkube-controller/3.log" Feb 03 12:06:49 crc kubenswrapper[4820]: I0203 12:06:49.745198 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-75mwm" event={"ID":"cf99e305-aa5b-4171-94f6-1e64f20414dd","Type":"ContainerStarted","Data":"b06b939e5a22068c297bee2452a40d7c0443725bc4b8fdf70cba52f1273121fa"} Feb 03 12:06:49 crc kubenswrapper[4820]: I0203 12:06:49.745695 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-75mwm" Feb 03 12:06:49 crc kubenswrapper[4820]: I0203 12:06:49.774107 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-75mwm" podStartSLOduration=105.774089483 podStartE2EDuration="1m45.774089483s" podCreationTimestamp="2026-02-03 12:05:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:06:49.773541718 +0000 UTC m=+127.296617592" watchObservedRunningTime="2026-02-03 12:06:49.774089483 +0000 UTC m=+127.297165347" Feb 03 12:06:49 crc kubenswrapper[4820]: I0203 12:06:49.915874 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-7vz6k"] Feb 03 12:06:49 crc kubenswrapper[4820]: I0203 12:06:49.916034 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7vz6k" Feb 03 12:06:49 crc kubenswrapper[4820]: E0203 12:06:49.916171 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7vz6k" podUID="6351e457-e601-4889-853c-560646bc4b43" Feb 03 12:06:51 crc kubenswrapper[4820]: I0203 12:06:51.141703 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 12:06:51 crc kubenswrapper[4820]: I0203 12:06:51.141712 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 12:06:51 crc kubenswrapper[4820]: I0203 12:06:51.141824 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 12:06:51 crc kubenswrapper[4820]: E0203 12:06:51.142207 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 12:06:51 crc kubenswrapper[4820]: E0203 12:06:51.142418 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 12:06:51 crc kubenswrapper[4820]: I0203 12:06:51.143639 4820 scope.go:117] "RemoveContainer" containerID="7340b2bec2b0d79865501f1917315f355972bcfc92d098978899c57df3f93454" Feb 03 12:06:51 crc kubenswrapper[4820]: E0203 12:06:51.143843 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 12:06:51 crc kubenswrapper[4820]: I0203 12:06:51.752972 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-dkfwm_c6da6dd5-2847-482b-adc1-d82ead0af3e9/kube-multus/1.log" Feb 03 12:06:51 crc kubenswrapper[4820]: I0203 12:06:51.753319 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-dkfwm" event={"ID":"c6da6dd5-2847-482b-adc1-d82ead0af3e9","Type":"ContainerStarted","Data":"db3333ec20d0d6dca8a643ef39757315542b773403d9de56fef33e73a57332a4"} Feb 03 12:06:52 crc kubenswrapper[4820]: I0203 12:06:52.142171 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7vz6k" Feb 03 12:06:52 crc kubenswrapper[4820]: E0203 12:06:52.142304 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7vz6k" podUID="6351e457-e601-4889-853c-560646bc4b43" Feb 03 12:06:53 crc kubenswrapper[4820]: I0203 12:06:53.142029 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 12:06:53 crc kubenswrapper[4820]: I0203 12:06:53.142033 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 12:06:53 crc kubenswrapper[4820]: E0203 12:06:53.143342 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 12:06:53 crc kubenswrapper[4820]: I0203 12:06:53.143403 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 12:06:53 crc kubenswrapper[4820]: E0203 12:06:53.143545 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 12:06:53 crc kubenswrapper[4820]: E0203 12:06:53.143592 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 12:06:54 crc kubenswrapper[4820]: I0203 12:06:54.142014 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7vz6k" Feb 03 12:06:54 crc kubenswrapper[4820]: I0203 12:06:54.144176 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Feb 03 12:06:54 crc kubenswrapper[4820]: I0203 12:06:54.144725 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Feb 03 12:06:55 crc kubenswrapper[4820]: I0203 12:06:55.141937 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 12:06:55 crc kubenswrapper[4820]: I0203 12:06:55.141991 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 12:06:55 crc kubenswrapper[4820]: I0203 12:06:55.142160 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 12:06:55 crc kubenswrapper[4820]: I0203 12:06:55.144163 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Feb 03 12:06:55 crc kubenswrapper[4820]: I0203 12:06:55.144952 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 03 12:06:55 crc kubenswrapper[4820]: I0203 12:06:55.145096 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Feb 03 12:06:55 crc kubenswrapper[4820]: I0203 12:06:55.145295 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.568526 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.612155 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-lt75x"] Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.612796 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-lt75x" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.613237 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-v4cfl"] Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.613658 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-v4cfl" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.620246 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.620417 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-s55v7"] Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.621399 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-s55v7" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.622430 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-74h4r"] Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.622991 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-74h4r" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.623230 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-lbsmw"] Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.624067 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-lbsmw" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.626978 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.627147 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.627224 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.627408 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.627458 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.627575 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.627656 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.627587 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.627843 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.628321 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-cs8dg"] Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.628960 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cs8dg" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.630659 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.630772 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-lnc22"] Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.631932 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-lnc22" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.633446 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-sf69z"] Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.634855 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-sf69z" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.638178 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.638776 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.639022 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.639218 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.639619 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-8vn7s"] Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.645205 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.645415 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.645745 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.645864 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.645932 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.647152 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-8vn7s" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.647640 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-hxjbf"] Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.648683 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.648971 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.649083 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.649251 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.649349 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.649384 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.649460 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.649502 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.649309 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.649613 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.649705 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.649768 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.649785 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.649708 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.649743 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.649925 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.650013 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 
Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.651533 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.652017 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx"
Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.652130 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-2fjkt"]
Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.652704 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-tw2nt"]
Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.652927 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.653128 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-b9krf"]
Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.653408 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-hxjbf"
Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.653875 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2fjkt"
Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.655059 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-4gskq"]
Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.655445 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-tw2nt"
Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.655547 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-z7vmj"]
Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.656102 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert"
Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.656193 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-z7vmj"
Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.656979 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy"
Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.657274 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.657421 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.657946 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b9krf"
Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.659030 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7"
Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.659132 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-2t74c"]
Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.659688 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-4gskq"
Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.659765 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.659987 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w"
Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.659816 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-2t74c"
Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.659767 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-c9vjp"]
Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.660955 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rc96d"]
Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.661371 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rc96d"
Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.661624 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-c9vjp"
Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.661746 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.662342 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca"
Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.662347 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.662398 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.664451 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-gqwld"]
Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.665081 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-gqwld"
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-gqwld" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.668368 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.669619 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-r8785"] Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.670344 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-r8785" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.672333 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-qpxpv"] Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.672813 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-qpxpv" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.680168 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-7csk9"] Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.691910 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2jnbv"] Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.692427 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-w4x94"] Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.693302 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-vmzb5"] Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.694554 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-7csk9" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.694710 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-h22tk"] Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.694732 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2jnbv" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.695136 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-w4x94" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.695716 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1cd0f80e-f7d5-4cd7-a0de-f1b95c827af8-client-ca\") pod \"controller-manager-879f6c89f-lt75x\" (UID: \"1cd0f80e-f7d5-4cd7-a0de-f1b95c827af8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-lt75x" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.695752 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cdbf888-563c-4590-bfbe-2bbb669e7ddb-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-v4cfl\" (UID: \"8cdbf888-563c-4590-bfbe-2bbb669e7ddb\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-v4cfl" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.695781 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2778f3aa-c3cf-471b-bc79-b8ce1a1bbfc7-config\") pod \"authentication-operator-69f744f599-s55v7\" (UID: \"2778f3aa-c3cf-471b-bc79-b8ce1a1bbfc7\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-s55v7" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.695811 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2pttq\" (UniqueName: \"kubernetes.io/projected/c93c42c7-c9ff-42cc-b604-e36f7a063fcf-kube-api-access-2pttq\") pod \"openshift-config-operator-7777fb866f-lbsmw\" (UID: \"c93c42c7-c9ff-42cc-b604-e36f7a063fcf\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-lbsmw" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.695846 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fzjb6\" (UniqueName: \"kubernetes.io/projected/8cdbf888-563c-4590-bfbe-2bbb669e7ddb-kube-api-access-fzjb6\") pod \"openshift-controller-manager-operator-756b6f6bc6-v4cfl\" (UID: \"8cdbf888-563c-4590-bfbe-2bbb669e7ddb\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-v4cfl" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.695875 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-52dr7\" (UniqueName: \"kubernetes.io/projected/1cd0f80e-f7d5-4cd7-a0de-f1b95c827af8-kube-api-access-52dr7\") pod \"controller-manager-879f6c89f-lt75x\" (UID: \"1cd0f80e-f7d5-4cd7-a0de-f1b95c827af8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-lt75x" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.695917 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2778f3aa-c3cf-471b-bc79-b8ce1a1bbfc7-serving-cert\") pod \"authentication-operator-69f744f599-s55v7\" (UID: \"2778f3aa-c3cf-471b-bc79-b8ce1a1bbfc7\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-s55v7" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.696599 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/29d2a7e9-1fcb-4213-ae6c-753953bfae1a-config\") pod \"console-operator-58897d9998-sf69z\" (UID: \"29d2a7e9-1fcb-4213-ae6c-753953bfae1a\") " pod="openshift-console-operator/console-operator-58897d9998-sf69z" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.696640 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/c93c42c7-c9ff-42cc-b604-e36f7a063fcf-available-featuregates\") pod \"openshift-config-operator-7777fb866f-lbsmw\" (UID: \"c93c42c7-c9ff-42cc-b604-e36f7a063fcf\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-lbsmw" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.696678 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r44wn\" (UniqueName: \"kubernetes.io/projected/29d2a7e9-1fcb-4213-ae6c-753953bfae1a-kube-api-access-r44wn\") pod \"console-operator-58897d9998-sf69z\" (UID: \"29d2a7e9-1fcb-4213-ae6c-753953bfae1a\") " pod="openshift-console-operator/console-operator-58897d9998-sf69z" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.696702 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2778f3aa-c3cf-471b-bc79-b8ce1a1bbfc7-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-s55v7\" (UID: \"2778f3aa-c3cf-471b-bc79-b8ce1a1bbfc7\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-s55v7" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.696718 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/29d2a7e9-1fcb-4213-ae6c-753953bfae1a-serving-cert\") pod \"console-operator-58897d9998-sf69z\" (UID: \"29d2a7e9-1fcb-4213-ae6c-753953bfae1a\") " pod="openshift-console-operator/console-operator-58897d9998-sf69z" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.696736 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/05797a22-690b-4b36-8b4e-5dcc739f7cad-config\") pod \"route-controller-manager-6576b87f9c-cs8dg\" (UID: \"05797a22-690b-4b36-8b4e-5dcc739f7cad\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cs8dg" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.696752 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1cd0f80e-f7d5-4cd7-a0de-f1b95c827af8-config\") pod \"controller-manager-879f6c89f-lt75x\" (UID: \"1cd0f80e-f7d5-4cd7-a0de-f1b95c827af8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-lt75x" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.696777 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1cd0f80e-f7d5-4cd7-a0de-f1b95c827af8-serving-cert\") pod \"controller-manager-879f6c89f-lt75x\" (UID: \"1cd0f80e-f7d5-4cd7-a0de-f1b95c827af8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-lt75x" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.696799 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/c93c42c7-c9ff-42cc-b604-e36f7a063fcf-serving-cert\") pod \"openshift-config-operator-7777fb866f-lbsmw\" (UID: \"c93c42c7-c9ff-42cc-b604-e36f7a063fcf\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-lbsmw" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.696816 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/05797a22-690b-4b36-8b4e-5dcc739f7cad-client-ca\") pod \"route-controller-manager-6576b87f9c-cs8dg\" (UID: \"05797a22-690b-4b36-8b4e-5dcc739f7cad\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cs8dg" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.696860 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hrzd7\" (UniqueName: \"kubernetes.io/projected/876c5dc3-b775-45cc-94b6-4339735e9975-kube-api-access-hrzd7\") pod \"downloads-7954f5f757-lnc22\" (UID: \"876c5dc3-b775-45cc-94b6-4339735e9975\") " pod="openshift-console/downloads-7954f5f757-lnc22" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.696879 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gnw29\" (UniqueName: \"kubernetes.io/projected/05797a22-690b-4b36-8b4e-5dcc739f7cad-kube-api-access-gnw29\") pod \"route-controller-manager-6576b87f9c-cs8dg\" (UID: \"05797a22-690b-4b36-8b4e-5dcc739f7cad\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cs8dg" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.696916 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cdbf888-563c-4590-bfbe-2bbb669e7ddb-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-v4cfl\" (UID: \"8cdbf888-563c-4590-bfbe-2bbb669e7ddb\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-v4cfl" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.697174 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2778f3aa-c3cf-471b-bc79-b8ce1a1bbfc7-service-ca-bundle\") pod \"authentication-operator-69f744f599-s55v7\" (UID: \"2778f3aa-c3cf-471b-bc79-b8ce1a1bbfc7\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-s55v7" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.697195 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/05797a22-690b-4b36-8b4e-5dcc739f7cad-serving-cert\") pod \"route-controller-manager-6576b87f9c-cs8dg\" (UID: \"05797a22-690b-4b36-8b4e-5dcc739f7cad\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cs8dg" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.697213 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vgvw4\" (UniqueName: \"kubernetes.io/projected/2778f3aa-c3cf-471b-bc79-b8ce1a1bbfc7-kube-api-access-vgvw4\") pod \"authentication-operator-69f744f599-s55v7\" (UID: \"2778f3aa-c3cf-471b-bc79-b8ce1a1bbfc7\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-s55v7" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.697230 4820 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/120ba383-d275-47f8-b921-e976156f0035-config\") pod \"openshift-apiserver-operator-796bbdcf4f-74h4r\" (UID: \"120ba383-d275-47f8-b921-e976156f0035\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-74h4r" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.697256 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1cd0f80e-f7d5-4cd7-a0de-f1b95c827af8-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-lt75x\" (UID: \"1cd0f80e-f7d5-4cd7-a0de-f1b95c827af8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-lt75x" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.697275 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/29d2a7e9-1fcb-4213-ae6c-753953bfae1a-trusted-ca\") pod \"console-operator-58897d9998-sf69z\" (UID: \"29d2a7e9-1fcb-4213-ae6c-753953bfae1a\") " pod="openshift-console-operator/console-operator-58897d9998-sf69z" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.697293 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/120ba383-d275-47f8-b921-e976156f0035-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-74h4r\" (UID: \"120ba383-d275-47f8-b921-e976156f0035\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-74h4r" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.697311 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-st8zf\" (UniqueName: \"kubernetes.io/projected/120ba383-d275-47f8-b921-e976156f0035-kube-api-access-st8zf\") pod \"openshift-apiserver-operator-796bbdcf4f-74h4r\" (UID: \"120ba383-d275-47f8-b921-e976156f0035\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-74h4r" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.698177 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-vmzb5" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.699468 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.699913 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.700140 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.700698 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.701066 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.701194 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.701226 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.701325 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.701633 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.701822 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.709106 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-kqqwj"] Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.710134 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-h22tk" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.710809 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8rb2x"] Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.711178 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-kqqwj" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.737429 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.737753 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.738011 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.738183 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.738266 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8rb2x" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.738353 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.738492 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.738187 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-c7gsf"] Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.738834 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.739456 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.743098 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.743206 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.743338 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.745716 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.745746 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.756778 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-k7tp7"] Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.757260 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-c7gsf" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.757294 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.757743 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-k7tp7" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.757399 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.757641 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-drcgk"] Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.758222 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.758317 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-9w662"] Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.758463 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.758594 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-8q9q7"] Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.758619 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.758783 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.758846 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-9w662" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.758786 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.758907 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-drcgk" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.758794 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.759023 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.759076 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.759115 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.759195 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.759589 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-5kfzf"] Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.759722 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-8q9q7" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.760344 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-vdn7t"] Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.760413 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-5kfzf" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.760756 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-b6ghj"] Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.761171 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29502000-hcscr"] Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.761703 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-58ssn"] Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.762090 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-vdn7t" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.762282 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-v4cfl"] Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.762310 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-lt75x"] Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.762388 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-58ssn" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.762405 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-b6ghj" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.762443 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29502000-hcscr" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.764597 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-lnc22"] Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.764634 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-lbsmw"] Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.765444 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-cs8dg"] Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.766870 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-s55v7"] Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.768579 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-74h4r"] Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.768818 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.769059 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.769231 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.769368 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.769535 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.772736 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.773212 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.773746 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.774744 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-2t74c"] Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.784792 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.785170 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.786697 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.786869 4820 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.791041 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-wsjsc"] Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.793180 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-wsjsc" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.808848 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-sf69z"] Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.814225 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/35ef1add-69b2-424c-b5ff-7f18b915eae1-etcd-client\") pod \"apiserver-7bbb656c7d-b9krf\" (UID: \"35ef1add-69b2-424c-b5ff-7f18b915eae1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b9krf" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.814389 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r44wn\" (UniqueName: \"kubernetes.io/projected/29d2a7e9-1fcb-4213-ae6c-753953bfae1a-kube-api-access-r44wn\") pod \"console-operator-58897d9998-sf69z\" (UID: \"29d2a7e9-1fcb-4213-ae6c-753953bfae1a\") " pod="openshift-console-operator/console-operator-58897d9998-sf69z" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.814447 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/35ef1add-69b2-424c-b5ff-7f18b915eae1-encryption-config\") pod \"apiserver-7bbb656c7d-b9krf\" (UID: \"35ef1add-69b2-424c-b5ff-7f18b915eae1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b9krf" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.814481 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6b522a8e-f795-4cf1-adbb-899674a5e359-config\") pod \"machine-api-operator-5694c8668f-hxjbf\" (UID: \"6b522a8e-f795-4cf1-adbb-899674a5e359\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hxjbf" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.814519 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/4acfe638-6e10-4d68-9cfa-3d1e1d4c1052-audit\") pod \"apiserver-76f77b778f-z7vmj\" (UID: \"4acfe638-6e10-4d68-9cfa-3d1e1d4c1052\") " pod="openshift-apiserver/apiserver-76f77b778f-z7vmj" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.814549 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/11e33088-50eb-423a-8925-87aa760c56e4-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-2jnbv\" (UID: \"11e33088-50eb-423a-8925-87aa760c56e4\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2jnbv" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.814569 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l4lkb\" (UniqueName: \"kubernetes.io/projected/6b522a8e-f795-4cf1-adbb-899674a5e359-kube-api-access-l4lkb\") pod \"machine-api-operator-5694c8668f-hxjbf\" (UID: 
\"6b522a8e-f795-4cf1-adbb-899674a5e359\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hxjbf" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.814596 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2778f3aa-c3cf-471b-bc79-b8ce1a1bbfc7-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-s55v7\" (UID: \"2778f3aa-c3cf-471b-bc79-b8ce1a1bbfc7\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-s55v7" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.814618 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b06753a3-652a-4acc-b294-3ccaa5b0cb99-console-config\") pod \"console-f9d7485db-tw2nt\" (UID: \"b06753a3-652a-4acc-b294-3ccaa5b0cb99\") " pod="openshift-console/console-f9d7485db-tw2nt" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.814636 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6b522a8e-f795-4cf1-adbb-899674a5e359-images\") pod \"machine-api-operator-5694c8668f-hxjbf\" (UID: \"6b522a8e-f795-4cf1-adbb-899674a5e359\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hxjbf" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.814668 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/29d2a7e9-1fcb-4213-ae6c-753953bfae1a-serving-cert\") pod \"console-operator-58897d9998-sf69z\" (UID: \"29d2a7e9-1fcb-4213-ae6c-753953bfae1a\") " pod="openshift-console-operator/console-operator-58897d9998-sf69z" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.814700 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/4acfe638-6e10-4d68-9cfa-3d1e1d4c1052-node-pullsecrets\") pod \"apiserver-76f77b778f-z7vmj\" (UID: \"4acfe638-6e10-4d68-9cfa-3d1e1d4c1052\") " pod="openshift-apiserver/apiserver-76f77b778f-z7vmj" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.814723 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a227a161-8e53-4817-b7b2-48206c4916fb-metrics-certs\") pod \"router-default-5444994796-h22tk\" (UID: \"a227a161-8e53-4817-b7b2-48206c4916fb\") " pod="openshift-ingress/router-default-5444994796-h22tk" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.815296 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/05797a22-690b-4b36-8b4e-5dcc739f7cad-config\") pod \"route-controller-manager-6576b87f9c-cs8dg\" (UID: \"05797a22-690b-4b36-8b4e-5dcc739f7cad\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cs8dg" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.817726 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/05797a22-690b-4b36-8b4e-5dcc739f7cad-config\") pod \"route-controller-manager-6576b87f9c-cs8dg\" (UID: \"05797a22-690b-4b36-8b4e-5dcc739f7cad\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cs8dg" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.818815 4820 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.818909 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1cd0f80e-f7d5-4cd7-a0de-f1b95c827af8-config\") pod \"controller-manager-879f6c89f-lt75x\" (UID: \"1cd0f80e-f7d5-4cd7-a0de-f1b95c827af8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-lt75x" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.819789 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/a227a161-8e53-4817-b7b2-48206c4916fb-default-certificate\") pod \"router-default-5444994796-h22tk\" (UID: \"a227a161-8e53-4817-b7b2-48206c4916fb\") " pod="openshift-ingress/router-default-5444994796-h22tk" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.819836 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b06753a3-652a-4acc-b294-3ccaa5b0cb99-console-oauth-config\") pod \"console-f9d7485db-tw2nt\" (UID: \"b06753a3-652a-4acc-b294-3ccaa5b0cb99\") " pod="openshift-console/console-f9d7485db-tw2nt" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.819869 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nq8t5\" (UniqueName: \"kubernetes.io/projected/a227a161-8e53-4817-b7b2-48206c4916fb-kube-api-access-nq8t5\") pod \"router-default-5444994796-h22tk\" (UID: \"a227a161-8e53-4817-b7b2-48206c4916fb\") " pod="openshift-ingress/router-default-5444994796-h22tk" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.819939 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1cd0f80e-f7d5-4cd7-a0de-f1b95c827af8-serving-cert\") pod \"controller-manager-879f6c89f-lt75x\" (UID: \"1cd0f80e-f7d5-4cd7-a0de-f1b95c827af8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-lt75x" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.819976 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f05b6093-516e-4a63-b2cc-8d6e6e2b2e57-trusted-ca\") pod \"ingress-operator-5b745b69d9-7csk9\" (UID: \"f05b6093-516e-4a63-b2cc-8d6e6e2b2e57\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-7csk9" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.820002 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vgglg\" (UniqueName: \"kubernetes.io/projected/4acfe638-6e10-4d68-9cfa-3d1e1d4c1052-kube-api-access-vgglg\") pod \"apiserver-76f77b778f-z7vmj\" (UID: \"4acfe638-6e10-4d68-9cfa-3d1e1d4c1052\") " pod="openshift-apiserver/apiserver-76f77b778f-z7vmj" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.820037 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lckpg\" (UniqueName: \"kubernetes.io/projected/35ef1add-69b2-424c-b5ff-7f18b915eae1-kube-api-access-lckpg\") pod \"apiserver-7bbb656c7d-b9krf\" (UID: \"35ef1add-69b2-424c-b5ff-7f18b915eae1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b9krf" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 
12:07:01.820065 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/11e33088-50eb-423a-8925-87aa760c56e4-config\") pod \"kube-controller-manager-operator-78b949d7b-2jnbv\" (UID: \"11e33088-50eb-423a-8925-87aa760c56e4\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2jnbv" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.820097 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/6b39f4b8-90e3-4d3b-a000-725d42cdb8dd-machine-approver-tls\") pod \"machine-approver-56656f9798-2fjkt\" (UID: \"6b39f4b8-90e3-4d3b-a000-725d42cdb8dd\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2fjkt" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.820129 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c93c42c7-c9ff-42cc-b604-e36f7a063fcf-serving-cert\") pod \"openshift-config-operator-7777fb866f-lbsmw\" (UID: \"c93c42c7-c9ff-42cc-b604-e36f7a063fcf\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-lbsmw" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.820152 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c8frp\" (UniqueName: \"kubernetes.io/projected/b06753a3-652a-4acc-b294-3ccaa5b0cb99-kube-api-access-c8frp\") pod \"console-f9d7485db-tw2nt\" (UID: \"b06753a3-652a-4acc-b294-3ccaa5b0cb99\") " pod="openshift-console/console-f9d7485db-tw2nt" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.820180 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/319cad33-b4bc-4249-8124-1010cd6d79f9-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-rc96d\" (UID: \"319cad33-b4bc-4249-8124-1010cd6d79f9\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rc96d" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.820217 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/05797a22-690b-4b36-8b4e-5dcc739f7cad-client-ca\") pod \"route-controller-manager-6576b87f9c-cs8dg\" (UID: \"05797a22-690b-4b36-8b4e-5dcc739f7cad\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cs8dg" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.820276 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wgqq9\" (UniqueName: \"kubernetes.io/projected/6b39f4b8-90e3-4d3b-a000-725d42cdb8dd-kube-api-access-wgqq9\") pod \"machine-approver-56656f9798-2fjkt\" (UID: \"6b39f4b8-90e3-4d3b-a000-725d42cdb8dd\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2fjkt" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.819160 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2778f3aa-c3cf-471b-bc79-b8ce1a1bbfc7-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-s55v7\" (UID: \"2778f3aa-c3cf-471b-bc79-b8ce1a1bbfc7\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-s55v7" Feb 03 12:07:01 crc 
kubenswrapper[4820]: I0203 12:07:01.819146 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.820401 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.819854 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.820476 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.820172 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.820934 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.821299 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hrzd7\" (UniqueName: \"kubernetes.io/projected/876c5dc3-b775-45cc-94b6-4339735e9975-kube-api-access-hrzd7\") pod \"downloads-7954f5f757-lnc22\" (UID: \"876c5dc3-b775-45cc-94b6-4339735e9975\") " pod="openshift-console/downloads-7954f5f757-lnc22" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.821384 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gnw29\" (UniqueName: \"kubernetes.io/projected/05797a22-690b-4b36-8b4e-5dcc739f7cad-kube-api-access-gnw29\") pod \"route-controller-manager-6576b87f9c-cs8dg\" (UID: \"05797a22-690b-4b36-8b4e-5dcc739f7cad\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cs8dg" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.821478 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cdbf888-563c-4590-bfbe-2bbb669e7ddb-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-v4cfl\" (UID: \"8cdbf888-563c-4590-bfbe-2bbb669e7ddb\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-v4cfl" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.821674 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-8vn7s"] Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.824503 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/4acfe638-6e10-4d68-9cfa-3d1e1d4c1052-encryption-config\") pod \"apiserver-76f77b778f-z7vmj\" (UID: \"4acfe638-6e10-4d68-9cfa-3d1e1d4c1052\") " pod="openshift-apiserver/apiserver-76f77b778f-z7vmj" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.824577 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/4acfe638-6e10-4d68-9cfa-3d1e1d4c1052-audit-dir\") pod \"apiserver-76f77b778f-z7vmj\" (UID: \"4acfe638-6e10-4d68-9cfa-3d1e1d4c1052\") " pod="openshift-apiserver/apiserver-76f77b778f-z7vmj" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.825150 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/29d2a7e9-1fcb-4213-ae6c-753953bfae1a-serving-cert\") pod \"console-operator-58897d9998-sf69z\" (UID: \"29d2a7e9-1fcb-4213-ae6c-753953bfae1a\") " pod="openshift-console-operator/console-operator-58897d9998-sf69z" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.825296 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2778f3aa-c3cf-471b-bc79-b8ce1a1bbfc7-service-ca-bundle\") pod \"authentication-operator-69f744f599-s55v7\" (UID: \"2778f3aa-c3cf-471b-bc79-b8ce1a1bbfc7\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-s55v7" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.825343 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/05797a22-690b-4b36-8b4e-5dcc739f7cad-serving-cert\") pod \"route-controller-manager-6576b87f9c-cs8dg\" (UID: \"05797a22-690b-4b36-8b4e-5dcc739f7cad\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cs8dg" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.825367 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/35ef1add-69b2-424c-b5ff-7f18b915eae1-audit-dir\") pod \"apiserver-7bbb656c7d-b9krf\" (UID: \"35ef1add-69b2-424c-b5ff-7f18b915eae1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b9krf" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.825393 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/6b39f4b8-90e3-4d3b-a000-725d42cdb8dd-auth-proxy-config\") pod \"machine-approver-56656f9798-2fjkt\" (UID: \"6b39f4b8-90e3-4d3b-a000-725d42cdb8dd\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2fjkt" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.825799 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bvq7h\" (UniqueName: \"kubernetes.io/projected/4d235feb-2891-4c16-b240-381a5810a0c7-kube-api-access-bvq7h\") pod \"machine-config-controller-84d6567774-w4x94\" (UID: \"4d235feb-2891-4c16-b240-381a5810a0c7\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-w4x94" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.825858 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b06753a3-652a-4acc-b294-3ccaa5b0cb99-trusted-ca-bundle\") pod \"console-f9d7485db-tw2nt\" (UID: \"b06753a3-652a-4acc-b294-3ccaa5b0cb99\") " pod="openshift-console/console-f9d7485db-tw2nt" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.825919 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2778f3aa-c3cf-471b-bc79-b8ce1a1bbfc7-service-ca-bundle\") pod \"authentication-operator-69f744f599-s55v7\" (UID: \"2778f3aa-c3cf-471b-bc79-b8ce1a1bbfc7\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-s55v7" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.825957 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vgvw4\" (UniqueName: 
\"kubernetes.io/projected/2778f3aa-c3cf-471b-bc79-b8ce1a1bbfc7-kube-api-access-vgvw4\") pod \"authentication-operator-69f744f599-s55v7\" (UID: \"2778f3aa-c3cf-471b-bc79-b8ce1a1bbfc7\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-s55v7" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.825990 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/120ba383-d275-47f8-b921-e976156f0035-config\") pod \"openshift-apiserver-operator-796bbdcf4f-74h4r\" (UID: \"120ba383-d275-47f8-b921-e976156f0035\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-74h4r" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.826103 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-lc7k5"] Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.826359 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f05b6093-516e-4a63-b2cc-8d6e6e2b2e57-metrics-tls\") pod \"ingress-operator-5b745b69d9-7csk9\" (UID: \"f05b6093-516e-4a63-b2cc-8d6e6e2b2e57\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-7csk9" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.826485 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b06753a3-652a-4acc-b294-3ccaa5b0cb99-oauth-serving-cert\") pod \"console-f9d7485db-tw2nt\" (UID: \"b06753a3-652a-4acc-b294-3ccaa5b0cb99\") " pod="openshift-console/console-f9d7485db-tw2nt" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.826616 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1e2ff1f0-ab87-4251-b1ea-c08cad288246-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-c9vjp\" (UID: \"1e2ff1f0-ab87-4251-b1ea-c08cad288246\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-c9vjp" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.826752 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1cd0f80e-f7d5-4cd7-a0de-f1b95c827af8-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-lt75x\" (UID: \"1cd0f80e-f7d5-4cd7-a0de-f1b95c827af8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-lt75x" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.826848 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/29d2a7e9-1fcb-4213-ae6c-753953bfae1a-trusted-ca\") pod \"console-operator-58897d9998-sf69z\" (UID: \"29d2a7e9-1fcb-4213-ae6c-753953bfae1a\") " pod="openshift-console-operator/console-operator-58897d9998-sf69z" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.826968 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/120ba383-d275-47f8-b921-e976156f0035-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-74h4r\" (UID: \"120ba383-d275-47f8-b921-e976156f0035\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-74h4r" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.827073 4820 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/4d235feb-2891-4c16-b240-381a5810a0c7-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-w4x94\" (UID: \"4d235feb-2891-4c16-b240-381a5810a0c7\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-w4x94" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.827141 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/a227a161-8e53-4817-b7b2-48206c4916fb-stats-auth\") pod \"router-default-5444994796-h22tk\" (UID: \"a227a161-8e53-4817-b7b2-48206c4916fb\") " pod="openshift-ingress/router-default-5444994796-h22tk" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.826855 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/120ba383-d275-47f8-b921-e976156f0035-config\") pod \"openshift-apiserver-operator-796bbdcf4f-74h4r\" (UID: \"120ba383-d275-47f8-b921-e976156f0035\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-74h4r" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.827345 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e2ff1f0-ab87-4251-b1ea-c08cad288246-config\") pod \"kube-apiserver-operator-766d6c64bb-c9vjp\" (UID: \"1e2ff1f0-ab87-4251-b1ea-c08cad288246\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-c9vjp" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.827683 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-st8zf\" (UniqueName: \"kubernetes.io/projected/120ba383-d275-47f8-b921-e976156f0035-kube-api-access-st8zf\") pod \"openshift-apiserver-operator-796bbdcf4f-74h4r\" (UID: \"120ba383-d275-47f8-b921-e976156f0035\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-74h4r" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.827839 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/35ef1add-69b2-424c-b5ff-7f18b915eae1-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-b9krf\" (UID: \"35ef1add-69b2-424c-b5ff-7f18b915eae1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b9krf" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.828111 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1cd0f80e-f7d5-4cd7-a0de-f1b95c827af8-client-ca\") pod \"controller-manager-879f6c89f-lt75x\" (UID: \"1cd0f80e-f7d5-4cd7-a0de-f1b95c827af8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-lt75x" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.828191 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cdbf888-563c-4590-bfbe-2bbb669e7ddb-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-v4cfl\" (UID: \"8cdbf888-563c-4590-bfbe-2bbb669e7ddb\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-v4cfl" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.828247 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-mcpmj\" (UniqueName: \"kubernetes.io/projected/f05b6093-516e-4a63-b2cc-8d6e6e2b2e57-kube-api-access-mcpmj\") pod \"ingress-operator-5b745b69d9-7csk9\" (UID: \"f05b6093-516e-4a63-b2cc-8d6e6e2b2e57\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-7csk9" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.828283 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2778f3aa-c3cf-471b-bc79-b8ce1a1bbfc7-config\") pod \"authentication-operator-69f744f599-s55v7\" (UID: \"2778f3aa-c3cf-471b-bc79-b8ce1a1bbfc7\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-s55v7" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.828305 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/4acfe638-6e10-4d68-9cfa-3d1e1d4c1052-etcd-serving-ca\") pod \"apiserver-76f77b778f-z7vmj\" (UID: \"4acfe638-6e10-4d68-9cfa-3d1e1d4c1052\") " pod="openshift-apiserver/apiserver-76f77b778f-z7vmj" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.828514 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m4vnl\" (UniqueName: \"kubernetes.io/projected/29fa9711-dd2f-41bf-92dc-a6fd88a3f341-kube-api-access-m4vnl\") pod \"dns-operator-744455d44c-r8785\" (UID: \"29fa9711-dd2f-41bf-92dc-a6fd88a3f341\") " pod="openshift-dns-operator/dns-operator-744455d44c-r8785" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.828530 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b06753a3-652a-4acc-b294-3ccaa5b0cb99-service-ca\") pod \"console-f9d7485db-tw2nt\" (UID: \"b06753a3-652a-4acc-b294-3ccaa5b0cb99\") " pod="openshift-console/console-f9d7485db-tw2nt" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.828551 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6b39f4b8-90e3-4d3b-a000-725d42cdb8dd-config\") pod \"machine-approver-56656f9798-2fjkt\" (UID: \"6b39f4b8-90e3-4d3b-a000-725d42cdb8dd\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2fjkt" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.828575 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2pttq\" (UniqueName: \"kubernetes.io/projected/c93c42c7-c9ff-42cc-b604-e36f7a063fcf-kube-api-access-2pttq\") pod \"openshift-config-operator-7777fb866f-lbsmw\" (UID: \"c93c42c7-c9ff-42cc-b604-e36f7a063fcf\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-lbsmw" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.828599 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/35ef1add-69b2-424c-b5ff-7f18b915eae1-serving-cert\") pod \"apiserver-7bbb656c7d-b9krf\" (UID: \"35ef1add-69b2-424c-b5ff-7f18b915eae1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b9krf" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.828617 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b06753a3-652a-4acc-b294-3ccaa5b0cb99-console-serving-cert\") pod 
\"console-f9d7485db-tw2nt\" (UID: \"b06753a3-652a-4acc-b294-3ccaa5b0cb99\") " pod="openshift-console/console-f9d7485db-tw2nt" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.828659 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/4acfe638-6e10-4d68-9cfa-3d1e1d4c1052-etcd-client\") pod \"apiserver-76f77b778f-z7vmj\" (UID: \"4acfe638-6e10-4d68-9cfa-3d1e1d4c1052\") " pod="openshift-apiserver/apiserver-76f77b778f-z7vmj" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.828678 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1e2ff1f0-ab87-4251-b1ea-c08cad288246-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-c9vjp\" (UID: \"1e2ff1f0-ab87-4251-b1ea-c08cad288246\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-c9vjp" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.828730 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/29fa9711-dd2f-41bf-92dc-a6fd88a3f341-metrics-tls\") pod \"dns-operator-744455d44c-r8785\" (UID: \"29fa9711-dd2f-41bf-92dc-a6fd88a3f341\") " pod="openshift-dns-operator/dns-operator-744455d44c-r8785" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.828762 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/4d235feb-2891-4c16-b240-381a5810a0c7-proxy-tls\") pod \"machine-config-controller-84d6567774-w4x94\" (UID: \"4d235feb-2891-4c16-b240-381a5810a0c7\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-w4x94" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.828783 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4acfe638-6e10-4d68-9cfa-3d1e1d4c1052-config\") pod \"apiserver-76f77b778f-z7vmj\" (UID: \"4acfe638-6e10-4d68-9cfa-3d1e1d4c1052\") " pod="openshift-apiserver/apiserver-76f77b778f-z7vmj" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.828806 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4acfe638-6e10-4d68-9cfa-3d1e1d4c1052-trusted-ca-bundle\") pod \"apiserver-76f77b778f-z7vmj\" (UID: \"4acfe638-6e10-4d68-9cfa-3d1e1d4c1052\") " pod="openshift-apiserver/apiserver-76f77b778f-z7vmj" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.828823 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/35ef1add-69b2-424c-b5ff-7f18b915eae1-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-b9krf\" (UID: \"35ef1add-69b2-424c-b5ff-7f18b915eae1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b9krf" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.828844 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/319cad33-b4bc-4249-8124-1010cd6d79f9-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-rc96d\" (UID: \"319cad33-b4bc-4249-8124-1010cd6d79f9\") " 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rc96d" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.828868 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fzjb6\" (UniqueName: \"kubernetes.io/projected/8cdbf888-563c-4590-bfbe-2bbb669e7ddb-kube-api-access-fzjb6\") pod \"openshift-controller-manager-operator-756b6f6bc6-v4cfl\" (UID: \"8cdbf888-563c-4590-bfbe-2bbb669e7ddb\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-v4cfl" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.828902 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/35ef1add-69b2-424c-b5ff-7f18b915eae1-audit-policies\") pod \"apiserver-7bbb656c7d-b9krf\" (UID: \"35ef1add-69b2-424c-b5ff-7f18b915eae1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b9krf" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.829128 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/1339ee72-a846-4147-b494-55ef92897378-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-8vn7s\" (UID: \"1339ee72-a846-4147-b494-55ef92897378\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-8vn7s" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.829158 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7ngxx\" (UniqueName: \"kubernetes.io/projected/319cad33-b4bc-4249-8124-1010cd6d79f9-kube-api-access-7ngxx\") pod \"kube-storage-version-migrator-operator-b67b599dd-rc96d\" (UID: \"319cad33-b4bc-4249-8124-1010cd6d79f9\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rc96d" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.829184 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4acfe638-6e10-4d68-9cfa-3d1e1d4c1052-serving-cert\") pod \"apiserver-76f77b778f-z7vmj\" (UID: \"4acfe638-6e10-4d68-9cfa-3d1e1d4c1052\") " pod="openshift-apiserver/apiserver-76f77b778f-z7vmj" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.829214 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/11e33088-50eb-423a-8925-87aa760c56e4-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-2jnbv\" (UID: \"11e33088-50eb-423a-8925-87aa760c56e4\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2jnbv" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.829480 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x542b\" (UniqueName: \"kubernetes.io/projected/1339ee72-a846-4147-b494-55ef92897378-kube-api-access-x542b\") pod \"cluster-samples-operator-665b6dd947-8vn7s\" (UID: \"1339ee72-a846-4147-b494-55ef92897378\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-8vn7s" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.829517 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/2778f3aa-c3cf-471b-bc79-b8ce1a1bbfc7-serving-cert\") pod \"authentication-operator-69f744f599-s55v7\" (UID: \"2778f3aa-c3cf-471b-bc79-b8ce1a1bbfc7\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-s55v7" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.829547 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-52dr7\" (UniqueName: \"kubernetes.io/projected/1cd0f80e-f7d5-4cd7-a0de-f1b95c827af8-kube-api-access-52dr7\") pod \"controller-manager-879f6c89f-lt75x\" (UID: \"1cd0f80e-f7d5-4cd7-a0de-f1b95c827af8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-lt75x" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.829572 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29d2a7e9-1fcb-4213-ae6c-753953bfae1a-config\") pod \"console-operator-58897d9998-sf69z\" (UID: \"29d2a7e9-1fcb-4213-ae6c-753953bfae1a\") " pod="openshift-console-operator/console-operator-58897d9998-sf69z" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.829593 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/4acfe638-6e10-4d68-9cfa-3d1e1d4c1052-image-import-ca\") pod \"apiserver-76f77b778f-z7vmj\" (UID: \"4acfe638-6e10-4d68-9cfa-3d1e1d4c1052\") " pod="openshift-apiserver/apiserver-76f77b778f-z7vmj" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.829612 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/c93c42c7-c9ff-42cc-b604-e36f7a063fcf-available-featuregates\") pod \"openshift-config-operator-7777fb866f-lbsmw\" (UID: \"c93c42c7-c9ff-42cc-b604-e36f7a063fcf\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-lbsmw" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.829630 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a227a161-8e53-4817-b7b2-48206c4916fb-service-ca-bundle\") pod \"router-default-5444994796-h22tk\" (UID: \"a227a161-8e53-4817-b7b2-48206c4916fb\") " pod="openshift-ingress/router-default-5444994796-h22tk" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.829650 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6b522a8e-f795-4cf1-adbb-899674a5e359-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-hxjbf\" (UID: \"6b522a8e-f795-4cf1-adbb-899674a5e359\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hxjbf" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.829670 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f05b6093-516e-4a63-b2cc-8d6e6e2b2e57-bound-sa-token\") pod \"ingress-operator-5b745b69d9-7csk9\" (UID: \"f05b6093-516e-4a63-b2cc-8d6e6e2b2e57\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-7csk9" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.830029 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/29d2a7e9-1fcb-4213-ae6c-753953bfae1a-trusted-ca\") pod 
\"console-operator-58897d9998-sf69z\" (UID: \"29d2a7e9-1fcb-4213-ae6c-753953bfae1a\") " pod="openshift-console-operator/console-operator-58897d9998-sf69z" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.830100 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-lc7k5" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.830277 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2778f3aa-c3cf-471b-bc79-b8ce1a1bbfc7-config\") pod \"authentication-operator-69f744f599-s55v7\" (UID: \"2778f3aa-c3cf-471b-bc79-b8ce1a1bbfc7\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-s55v7" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.830322 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cdbf888-563c-4590-bfbe-2bbb669e7ddb-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-v4cfl\" (UID: \"8cdbf888-563c-4590-bfbe-2bbb669e7ddb\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-v4cfl" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.831347 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/120ba383-d275-47f8-b921-e976156f0035-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-74h4r\" (UID: \"120ba383-d275-47f8-b921-e976156f0035\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-74h4r" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.831513 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1cd0f80e-f7d5-4cd7-a0de-f1b95c827af8-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-lt75x\" (UID: \"1cd0f80e-f7d5-4cd7-a0de-f1b95c827af8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-lt75x" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.831466 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/05797a22-690b-4b36-8b4e-5dcc739f7cad-client-ca\") pod \"route-controller-manager-6576b87f9c-cs8dg\" (UID: \"05797a22-690b-4b36-8b4e-5dcc739f7cad\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cs8dg" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.831793 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-7csk9"] Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.831868 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/05797a22-690b-4b36-8b4e-5dcc739f7cad-serving-cert\") pod \"route-controller-manager-6576b87f9c-cs8dg\" (UID: \"05797a22-690b-4b36-8b4e-5dcc739f7cad\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cs8dg" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.831987 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/c93c42c7-c9ff-42cc-b604-e36f7a063fcf-available-featuregates\") pod \"openshift-config-operator-7777fb866f-lbsmw\" (UID: \"c93c42c7-c9ff-42cc-b604-e36f7a063fcf\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-lbsmw" Feb 03 12:07:01 
crc kubenswrapper[4820]: I0203 12:07:01.832497 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1cd0f80e-f7d5-4cd7-a0de-f1b95c827af8-config\") pod \"controller-manager-879f6c89f-lt75x\" (UID: \"1cd0f80e-f7d5-4cd7-a0de-f1b95c827af8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-lt75x" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.832508 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29d2a7e9-1fcb-4213-ae6c-753953bfae1a-config\") pod \"console-operator-58897d9998-sf69z\" (UID: \"29d2a7e9-1fcb-4213-ae6c-753953bfae1a\") " pod="openshift-console-operator/console-operator-58897d9998-sf69z" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.833168 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2778f3aa-c3cf-471b-bc79-b8ce1a1bbfc7-serving-cert\") pod \"authentication-operator-69f744f599-s55v7\" (UID: \"2778f3aa-c3cf-471b-bc79-b8ce1a1bbfc7\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-s55v7" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.833417 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1cd0f80e-f7d5-4cd7-a0de-f1b95c827af8-client-ca\") pod \"controller-manager-879f6c89f-lt75x\" (UID: \"1cd0f80e-f7d5-4cd7-a0de-f1b95c827af8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-lt75x" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.834392 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1cd0f80e-f7d5-4cd7-a0de-f1b95c827af8-serving-cert\") pod \"controller-manager-879f6c89f-lt75x\" (UID: \"1cd0f80e-f7d5-4cd7-a0de-f1b95c827af8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-lt75x" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.835627 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-gqwld"] Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.847644 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.848436 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c93c42c7-c9ff-42cc-b604-e36f7a063fcf-serving-cert\") pod \"openshift-config-operator-7777fb866f-lbsmw\" (UID: \"c93c42c7-c9ff-42cc-b604-e36f7a063fcf\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-lbsmw" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.849718 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.849783 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-c7gsf"] Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.850927 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-b9krf"] Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.852836 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2jnbv"] Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.853775 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cdbf888-563c-4590-bfbe-2bbb669e7ddb-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-v4cfl\" (UID: \"8cdbf888-563c-4590-bfbe-2bbb669e7ddb\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-v4cfl" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.853959 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-tw2nt"] Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.854427 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.855000 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rc96d"] Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.856130 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-qpxpv"] Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.857177 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-vmzb5"] Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.858410 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8rb2x"] Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.861024 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-wsjsc"] Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.861687 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-hxjbf"] Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.863138 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-8q9q7"] Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.865174 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-c9vjp"] Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.867055 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-drcgk"] Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.868871 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-k7tp7"] Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.874344 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-kqqwj"] Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.875363 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.875662 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-r8785"] Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.877154 4820 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-9w662"] Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.878299 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-w4x94"] Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.880104 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-526s7"] Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.880915 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-526s7" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.881356 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-wfbd9"] Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.882402 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-wfbd9" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.882787 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-4gskq"] Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.884293 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-z7vmj"] Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.885583 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-58ssn"] Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.887089 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-b6ghj"] Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.888300 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-vdn7t"] Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.889461 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-5kfzf"] Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.890792 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-wfbd9"] Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.892343 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29502000-hcscr"] Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.893433 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-526s7"] Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.894482 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.914301 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.931096 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f05b6093-516e-4a63-b2cc-8d6e6e2b2e57-metrics-tls\") pod \"ingress-operator-5b745b69d9-7csk9\" (UID: \"f05b6093-516e-4a63-b2cc-8d6e6e2b2e57\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-7csk9" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.931145 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b06753a3-652a-4acc-b294-3ccaa5b0cb99-oauth-serving-cert\") pod \"console-f9d7485db-tw2nt\" (UID: \"b06753a3-652a-4acc-b294-3ccaa5b0cb99\") " pod="openshift-console/console-f9d7485db-tw2nt" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.931166 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1e2ff1f0-ab87-4251-b1ea-c08cad288246-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-c9vjp\" (UID: \"1e2ff1f0-ab87-4251-b1ea-c08cad288246\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-c9vjp" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.931196 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/4d235feb-2891-4c16-b240-381a5810a0c7-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-w4x94\" (UID: \"4d235feb-2891-4c16-b240-381a5810a0c7\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-w4x94" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.931231 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/a227a161-8e53-4817-b7b2-48206c4916fb-stats-auth\") pod \"router-default-5444994796-h22tk\" (UID: \"a227a161-8e53-4817-b7b2-48206c4916fb\") " pod="openshift-ingress/router-default-5444994796-h22tk" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.931255 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/35ef1add-69b2-424c-b5ff-7f18b915eae1-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-b9krf\" (UID: \"35ef1add-69b2-424c-b5ff-7f18b915eae1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b9krf" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.931272 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e2ff1f0-ab87-4251-b1ea-c08cad288246-config\") pod \"kube-apiserver-operator-766d6c64bb-c9vjp\" (UID: \"1e2ff1f0-ab87-4251-b1ea-c08cad288246\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-c9vjp" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.931286 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mcpmj\" (UniqueName: \"kubernetes.io/projected/f05b6093-516e-4a63-b2cc-8d6e6e2b2e57-kube-api-access-mcpmj\") pod \"ingress-operator-5b745b69d9-7csk9\" (UID: \"f05b6093-516e-4a63-b2cc-8d6e6e2b2e57\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-7csk9" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.931310 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/4acfe638-6e10-4d68-9cfa-3d1e1d4c1052-etcd-serving-ca\") pod \"apiserver-76f77b778f-z7vmj\" (UID: \"4acfe638-6e10-4d68-9cfa-3d1e1d4c1052\") " pod="openshift-apiserver/apiserver-76f77b778f-z7vmj" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.931325 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6b39f4b8-90e3-4d3b-a000-725d42cdb8dd-config\") pod \"machine-approver-56656f9798-2fjkt\" (UID: \"6b39f4b8-90e3-4d3b-a000-725d42cdb8dd\") " 
pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2fjkt" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.931341 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m4vnl\" (UniqueName: \"kubernetes.io/projected/29fa9711-dd2f-41bf-92dc-a6fd88a3f341-kube-api-access-m4vnl\") pod \"dns-operator-744455d44c-r8785\" (UID: \"29fa9711-dd2f-41bf-92dc-a6fd88a3f341\") " pod="openshift-dns-operator/dns-operator-744455d44c-r8785" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.931354 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b06753a3-652a-4acc-b294-3ccaa5b0cb99-service-ca\") pod \"console-f9d7485db-tw2nt\" (UID: \"b06753a3-652a-4acc-b294-3ccaa5b0cb99\") " pod="openshift-console/console-f9d7485db-tw2nt" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.931374 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/35ef1add-69b2-424c-b5ff-7f18b915eae1-serving-cert\") pod \"apiserver-7bbb656c7d-b9krf\" (UID: \"35ef1add-69b2-424c-b5ff-7f18b915eae1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b9krf" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.931388 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b06753a3-652a-4acc-b294-3ccaa5b0cb99-console-serving-cert\") pod \"console-f9d7485db-tw2nt\" (UID: \"b06753a3-652a-4acc-b294-3ccaa5b0cb99\") " pod="openshift-console/console-f9d7485db-tw2nt" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.931435 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/4acfe638-6e10-4d68-9cfa-3d1e1d4c1052-etcd-client\") pod \"apiserver-76f77b778f-z7vmj\" (UID: \"4acfe638-6e10-4d68-9cfa-3d1e1d4c1052\") " pod="openshift-apiserver/apiserver-76f77b778f-z7vmj" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.931451 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/4d235feb-2891-4c16-b240-381a5810a0c7-proxy-tls\") pod \"machine-config-controller-84d6567774-w4x94\" (UID: \"4d235feb-2891-4c16-b240-381a5810a0c7\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-w4x94" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.931467 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4acfe638-6e10-4d68-9cfa-3d1e1d4c1052-config\") pod \"apiserver-76f77b778f-z7vmj\" (UID: \"4acfe638-6e10-4d68-9cfa-3d1e1d4c1052\") " pod="openshift-apiserver/apiserver-76f77b778f-z7vmj" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.931482 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4acfe638-6e10-4d68-9cfa-3d1e1d4c1052-trusted-ca-bundle\") pod \"apiserver-76f77b778f-z7vmj\" (UID: \"4acfe638-6e10-4d68-9cfa-3d1e1d4c1052\") " pod="openshift-apiserver/apiserver-76f77b778f-z7vmj" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.931496 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/35ef1add-69b2-424c-b5ff-7f18b915eae1-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-b9krf\" 
(UID: \"35ef1add-69b2-424c-b5ff-7f18b915eae1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b9krf" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.931510 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1e2ff1f0-ab87-4251-b1ea-c08cad288246-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-c9vjp\" (UID: \"1e2ff1f0-ab87-4251-b1ea-c08cad288246\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-c9vjp" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.931525 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/29fa9711-dd2f-41bf-92dc-a6fd88a3f341-metrics-tls\") pod \"dns-operator-744455d44c-r8785\" (UID: \"29fa9711-dd2f-41bf-92dc-a6fd88a3f341\") " pod="openshift-dns-operator/dns-operator-744455d44c-r8785" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.931553 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/319cad33-b4bc-4249-8124-1010cd6d79f9-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-rc96d\" (UID: \"319cad33-b4bc-4249-8124-1010cd6d79f9\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rc96d" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.931589 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/35ef1add-69b2-424c-b5ff-7f18b915eae1-audit-policies\") pod \"apiserver-7bbb656c7d-b9krf\" (UID: \"35ef1add-69b2-424c-b5ff-7f18b915eae1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b9krf" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.931615 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/1339ee72-a846-4147-b494-55ef92897378-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-8vn7s\" (UID: \"1339ee72-a846-4147-b494-55ef92897378\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-8vn7s" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.931637 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7ngxx\" (UniqueName: \"kubernetes.io/projected/319cad33-b4bc-4249-8124-1010cd6d79f9-kube-api-access-7ngxx\") pod \"kube-storage-version-migrator-operator-b67b599dd-rc96d\" (UID: \"319cad33-b4bc-4249-8124-1010cd6d79f9\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rc96d" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.931659 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x542b\" (UniqueName: \"kubernetes.io/projected/1339ee72-a846-4147-b494-55ef92897378-kube-api-access-x542b\") pod \"cluster-samples-operator-665b6dd947-8vn7s\" (UID: \"1339ee72-a846-4147-b494-55ef92897378\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-8vn7s" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.931677 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4acfe638-6e10-4d68-9cfa-3d1e1d4c1052-serving-cert\") pod \"apiserver-76f77b778f-z7vmj\" (UID: \"4acfe638-6e10-4d68-9cfa-3d1e1d4c1052\") " 
pod="openshift-apiserver/apiserver-76f77b778f-z7vmj" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.931698 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/11e33088-50eb-423a-8925-87aa760c56e4-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-2jnbv\" (UID: \"11e33088-50eb-423a-8925-87aa760c56e4\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2jnbv" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.931730 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/4acfe638-6e10-4d68-9cfa-3d1e1d4c1052-image-import-ca\") pod \"apiserver-76f77b778f-z7vmj\" (UID: \"4acfe638-6e10-4d68-9cfa-3d1e1d4c1052\") " pod="openshift-apiserver/apiserver-76f77b778f-z7vmj" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.931880 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6b522a8e-f795-4cf1-adbb-899674a5e359-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-hxjbf\" (UID: \"6b522a8e-f795-4cf1-adbb-899674a5e359\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hxjbf" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.931915 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a227a161-8e53-4817-b7b2-48206c4916fb-service-ca-bundle\") pod \"router-default-5444994796-h22tk\" (UID: \"a227a161-8e53-4817-b7b2-48206c4916fb\") " pod="openshift-ingress/router-default-5444994796-h22tk" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.931931 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f05b6093-516e-4a63-b2cc-8d6e6e2b2e57-bound-sa-token\") pod \"ingress-operator-5b745b69d9-7csk9\" (UID: \"f05b6093-516e-4a63-b2cc-8d6e6e2b2e57\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-7csk9" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.931958 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/35ef1add-69b2-424c-b5ff-7f18b915eae1-etcd-client\") pod \"apiserver-7bbb656c7d-b9krf\" (UID: \"35ef1add-69b2-424c-b5ff-7f18b915eae1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b9krf" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.931973 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/35ef1add-69b2-424c-b5ff-7f18b915eae1-encryption-config\") pod \"apiserver-7bbb656c7d-b9krf\" (UID: \"35ef1add-69b2-424c-b5ff-7f18b915eae1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b9krf" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.931988 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6b522a8e-f795-4cf1-adbb-899674a5e359-config\") pod \"machine-api-operator-5694c8668f-hxjbf\" (UID: \"6b522a8e-f795-4cf1-adbb-899674a5e359\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hxjbf" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.932010 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: 
\"kubernetes.io/configmap/4acfe638-6e10-4d68-9cfa-3d1e1d4c1052-audit\") pod \"apiserver-76f77b778f-z7vmj\" (UID: \"4acfe638-6e10-4d68-9cfa-3d1e1d4c1052\") " pod="openshift-apiserver/apiserver-76f77b778f-z7vmj" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.932026 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/11e33088-50eb-423a-8925-87aa760c56e4-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-2jnbv\" (UID: \"11e33088-50eb-423a-8925-87aa760c56e4\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2jnbv" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.932041 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l4lkb\" (UniqueName: \"kubernetes.io/projected/6b522a8e-f795-4cf1-adbb-899674a5e359-kube-api-access-l4lkb\") pod \"machine-api-operator-5694c8668f-hxjbf\" (UID: \"6b522a8e-f795-4cf1-adbb-899674a5e359\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hxjbf" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.932059 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b06753a3-652a-4acc-b294-3ccaa5b0cb99-console-config\") pod \"console-f9d7485db-tw2nt\" (UID: \"b06753a3-652a-4acc-b294-3ccaa5b0cb99\") " pod="openshift-console/console-f9d7485db-tw2nt" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.932074 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/4acfe638-6e10-4d68-9cfa-3d1e1d4c1052-node-pullsecrets\") pod \"apiserver-76f77b778f-z7vmj\" (UID: \"4acfe638-6e10-4d68-9cfa-3d1e1d4c1052\") " pod="openshift-apiserver/apiserver-76f77b778f-z7vmj" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.932090 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6b522a8e-f795-4cf1-adbb-899674a5e359-images\") pod \"machine-api-operator-5694c8668f-hxjbf\" (UID: \"6b522a8e-f795-4cf1-adbb-899674a5e359\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hxjbf" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.932105 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/a227a161-8e53-4817-b7b2-48206c4916fb-default-certificate\") pod \"router-default-5444994796-h22tk\" (UID: \"a227a161-8e53-4817-b7b2-48206c4916fb\") " pod="openshift-ingress/router-default-5444994796-h22tk" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.932120 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a227a161-8e53-4817-b7b2-48206c4916fb-metrics-certs\") pod \"router-default-5444994796-h22tk\" (UID: \"a227a161-8e53-4817-b7b2-48206c4916fb\") " pod="openshift-ingress/router-default-5444994796-h22tk" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.932206 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b06753a3-652a-4acc-b294-3ccaa5b0cb99-console-oauth-config\") pod \"console-f9d7485db-tw2nt\" (UID: \"b06753a3-652a-4acc-b294-3ccaa5b0cb99\") " pod="openshift-console/console-f9d7485db-tw2nt" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 
12:07:01.932235 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nq8t5\" (UniqueName: \"kubernetes.io/projected/a227a161-8e53-4817-b7b2-48206c4916fb-kube-api-access-nq8t5\") pod \"router-default-5444994796-h22tk\" (UID: \"a227a161-8e53-4817-b7b2-48206c4916fb\") " pod="openshift-ingress/router-default-5444994796-h22tk" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.932258 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f05b6093-516e-4a63-b2cc-8d6e6e2b2e57-trusted-ca\") pod \"ingress-operator-5b745b69d9-7csk9\" (UID: \"f05b6093-516e-4a63-b2cc-8d6e6e2b2e57\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-7csk9" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.932279 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vgglg\" (UniqueName: \"kubernetes.io/projected/4acfe638-6e10-4d68-9cfa-3d1e1d4c1052-kube-api-access-vgglg\") pod \"apiserver-76f77b778f-z7vmj\" (UID: \"4acfe638-6e10-4d68-9cfa-3d1e1d4c1052\") " pod="openshift-apiserver/apiserver-76f77b778f-z7vmj" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.932302 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/6b39f4b8-90e3-4d3b-a000-725d42cdb8dd-machine-approver-tls\") pod \"machine-approver-56656f9798-2fjkt\" (UID: \"6b39f4b8-90e3-4d3b-a000-725d42cdb8dd\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2fjkt" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.932319 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lckpg\" (UniqueName: \"kubernetes.io/projected/35ef1add-69b2-424c-b5ff-7f18b915eae1-kube-api-access-lckpg\") pod \"apiserver-7bbb656c7d-b9krf\" (UID: \"35ef1add-69b2-424c-b5ff-7f18b915eae1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b9krf" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.932333 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/11e33088-50eb-423a-8925-87aa760c56e4-config\") pod \"kube-controller-manager-operator-78b949d7b-2jnbv\" (UID: \"11e33088-50eb-423a-8925-87aa760c56e4\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2jnbv" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.932349 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c8frp\" (UniqueName: \"kubernetes.io/projected/b06753a3-652a-4acc-b294-3ccaa5b0cb99-kube-api-access-c8frp\") pod \"console-f9d7485db-tw2nt\" (UID: \"b06753a3-652a-4acc-b294-3ccaa5b0cb99\") " pod="openshift-console/console-f9d7485db-tw2nt" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.932363 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/319cad33-b4bc-4249-8124-1010cd6d79f9-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-rc96d\" (UID: \"319cad33-b4bc-4249-8124-1010cd6d79f9\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rc96d" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.932378 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wgqq9\" (UniqueName: 
\"kubernetes.io/projected/6b39f4b8-90e3-4d3b-a000-725d42cdb8dd-kube-api-access-wgqq9\") pod \"machine-approver-56656f9798-2fjkt\" (UID: \"6b39f4b8-90e3-4d3b-a000-725d42cdb8dd\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2fjkt" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.932419 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/4acfe638-6e10-4d68-9cfa-3d1e1d4c1052-encryption-config\") pod \"apiserver-76f77b778f-z7vmj\" (UID: \"4acfe638-6e10-4d68-9cfa-3d1e1d4c1052\") " pod="openshift-apiserver/apiserver-76f77b778f-z7vmj" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.932434 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/4acfe638-6e10-4d68-9cfa-3d1e1d4c1052-audit-dir\") pod \"apiserver-76f77b778f-z7vmj\" (UID: \"4acfe638-6e10-4d68-9cfa-3d1e1d4c1052\") " pod="openshift-apiserver/apiserver-76f77b778f-z7vmj" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.932464 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/35ef1add-69b2-424c-b5ff-7f18b915eae1-audit-dir\") pod \"apiserver-7bbb656c7d-b9krf\" (UID: \"35ef1add-69b2-424c-b5ff-7f18b915eae1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b9krf" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.932481 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b06753a3-652a-4acc-b294-3ccaa5b0cb99-trusted-ca-bundle\") pod \"console-f9d7485db-tw2nt\" (UID: \"b06753a3-652a-4acc-b294-3ccaa5b0cb99\") " pod="openshift-console/console-f9d7485db-tw2nt" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.932502 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/6b39f4b8-90e3-4d3b-a000-725d42cdb8dd-auth-proxy-config\") pod \"machine-approver-56656f9798-2fjkt\" (UID: \"6b39f4b8-90e3-4d3b-a000-725d42cdb8dd\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2fjkt" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.932518 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bvq7h\" (UniqueName: \"kubernetes.io/projected/4d235feb-2891-4c16-b240-381a5810a0c7-kube-api-access-bvq7h\") pod \"machine-config-controller-84d6567774-w4x94\" (UID: \"4d235feb-2891-4c16-b240-381a5810a0c7\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-w4x94" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.933939 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/35ef1add-69b2-424c-b5ff-7f18b915eae1-audit-policies\") pod \"apiserver-7bbb656c7d-b9krf\" (UID: \"35ef1add-69b2-424c-b5ff-7f18b915eae1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b9krf" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.933993 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4acfe638-6e10-4d68-9cfa-3d1e1d4c1052-config\") pod \"apiserver-76f77b778f-z7vmj\" (UID: \"4acfe638-6e10-4d68-9cfa-3d1e1d4c1052\") " pod="openshift-apiserver/apiserver-76f77b778f-z7vmj" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.934156 4820 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6b522a8e-f795-4cf1-adbb-899674a5e359-images\") pod \"machine-api-operator-5694c8668f-hxjbf\" (UID: \"6b522a8e-f795-4cf1-adbb-899674a5e359\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hxjbf" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.934319 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4acfe638-6e10-4d68-9cfa-3d1e1d4c1052-trusted-ca-bundle\") pod \"apiserver-76f77b778f-z7vmj\" (UID: \"4acfe638-6e10-4d68-9cfa-3d1e1d4c1052\") " pod="openshift-apiserver/apiserver-76f77b778f-z7vmj" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.934623 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b06753a3-652a-4acc-b294-3ccaa5b0cb99-oauth-serving-cert\") pod \"console-f9d7485db-tw2nt\" (UID: \"b06753a3-652a-4acc-b294-3ccaa5b0cb99\") " pod="openshift-console/console-f9d7485db-tw2nt" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.934876 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/35ef1add-69b2-424c-b5ff-7f18b915eae1-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-b9krf\" (UID: \"35ef1add-69b2-424c-b5ff-7f18b915eae1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b9krf" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.934913 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/35ef1add-69b2-424c-b5ff-7f18b915eae1-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-b9krf\" (UID: \"35ef1add-69b2-424c-b5ff-7f18b915eae1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b9krf" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.935184 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6b522a8e-f795-4cf1-adbb-899674a5e359-config\") pod \"machine-api-operator-5694c8668f-hxjbf\" (UID: \"6b522a8e-f795-4cf1-adbb-899674a5e359\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hxjbf" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.935587 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/4acfe638-6e10-4d68-9cfa-3d1e1d4c1052-etcd-serving-ca\") pod \"apiserver-76f77b778f-z7vmj\" (UID: \"4acfe638-6e10-4d68-9cfa-3d1e1d4c1052\") " pod="openshift-apiserver/apiserver-76f77b778f-z7vmj" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.935867 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/4acfe638-6e10-4d68-9cfa-3d1e1d4c1052-audit\") pod \"apiserver-76f77b778f-z7vmj\" (UID: \"4acfe638-6e10-4d68-9cfa-3d1e1d4c1052\") " pod="openshift-apiserver/apiserver-76f77b778f-z7vmj" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.936116 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b06753a3-652a-4acc-b294-3ccaa5b0cb99-service-ca\") pod \"console-f9d7485db-tw2nt\" (UID: \"b06753a3-652a-4acc-b294-3ccaa5b0cb99\") " pod="openshift-console/console-f9d7485db-tw2nt" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.936367 4820 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.936546 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/35ef1add-69b2-424c-b5ff-7f18b915eae1-encryption-config\") pod \"apiserver-7bbb656c7d-b9krf\" (UID: \"35ef1add-69b2-424c-b5ff-7f18b915eae1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b9krf" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.936582 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/4d235feb-2891-4c16-b240-381a5810a0c7-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-w4x94\" (UID: \"4d235feb-2891-4c16-b240-381a5810a0c7\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-w4x94" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.936646 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/35ef1add-69b2-424c-b5ff-7f18b915eae1-audit-dir\") pod \"apiserver-7bbb656c7d-b9krf\" (UID: \"35ef1add-69b2-424c-b5ff-7f18b915eae1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b9krf" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.936700 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b06753a3-652a-4acc-b294-3ccaa5b0cb99-console-config\") pod \"console-f9d7485db-tw2nt\" (UID: \"b06753a3-652a-4acc-b294-3ccaa5b0cb99\") " pod="openshift-console/console-f9d7485db-tw2nt" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.936754 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/4acfe638-6e10-4d68-9cfa-3d1e1d4c1052-node-pullsecrets\") pod \"apiserver-76f77b778f-z7vmj\" (UID: \"4acfe638-6e10-4d68-9cfa-3d1e1d4c1052\") " pod="openshift-apiserver/apiserver-76f77b778f-z7vmj" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.936783 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/4acfe638-6e10-4d68-9cfa-3d1e1d4c1052-audit-dir\") pod \"apiserver-76f77b778f-z7vmj\" (UID: \"4acfe638-6e10-4d68-9cfa-3d1e1d4c1052\") " pod="openshift-apiserver/apiserver-76f77b778f-z7vmj" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.937194 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/319cad33-b4bc-4249-8124-1010cd6d79f9-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-rc96d\" (UID: \"319cad33-b4bc-4249-8124-1010cd6d79f9\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rc96d" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.937551 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/1339ee72-a846-4147-b494-55ef92897378-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-8vn7s\" (UID: \"1339ee72-a846-4147-b494-55ef92897378\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-8vn7s" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.937550 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/1e2ff1f0-ab87-4251-b1ea-c08cad288246-config\") pod \"kube-apiserver-operator-766d6c64bb-c9vjp\" (UID: \"1e2ff1f0-ab87-4251-b1ea-c08cad288246\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-c9vjp" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.937721 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b06753a3-652a-4acc-b294-3ccaa5b0cb99-trusted-ca-bundle\") pod \"console-f9d7485db-tw2nt\" (UID: \"b06753a3-652a-4acc-b294-3ccaa5b0cb99\") " pod="openshift-console/console-f9d7485db-tw2nt" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.937917 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/6b39f4b8-90e3-4d3b-a000-725d42cdb8dd-auth-proxy-config\") pod \"machine-approver-56656f9798-2fjkt\" (UID: \"6b39f4b8-90e3-4d3b-a000-725d42cdb8dd\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2fjkt" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.938603 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1e2ff1f0-ab87-4251-b1ea-c08cad288246-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-c9vjp\" (UID: \"1e2ff1f0-ab87-4251-b1ea-c08cad288246\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-c9vjp" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.938615 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/4acfe638-6e10-4d68-9cfa-3d1e1d4c1052-image-import-ca\") pod \"apiserver-76f77b778f-z7vmj\" (UID: \"4acfe638-6e10-4d68-9cfa-3d1e1d4c1052\") " pod="openshift-apiserver/apiserver-76f77b778f-z7vmj" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.938756 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6b39f4b8-90e3-4d3b-a000-725d42cdb8dd-config\") pod \"machine-approver-56656f9798-2fjkt\" (UID: \"6b39f4b8-90e3-4d3b-a000-725d42cdb8dd\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2fjkt" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.939173 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/4acfe638-6e10-4d68-9cfa-3d1e1d4c1052-etcd-client\") pod \"apiserver-76f77b778f-z7vmj\" (UID: \"4acfe638-6e10-4d68-9cfa-3d1e1d4c1052\") " pod="openshift-apiserver/apiserver-76f77b778f-z7vmj" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.939220 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6b522a8e-f795-4cf1-adbb-899674a5e359-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-hxjbf\" (UID: \"6b522a8e-f795-4cf1-adbb-899674a5e359\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hxjbf" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.939912 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/6b39f4b8-90e3-4d3b-a000-725d42cdb8dd-machine-approver-tls\") pod \"machine-approver-56656f9798-2fjkt\" (UID: \"6b39f4b8-90e3-4d3b-a000-725d42cdb8dd\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2fjkt" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 
12:07:01.939958 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/4acfe638-6e10-4d68-9cfa-3d1e1d4c1052-encryption-config\") pod \"apiserver-76f77b778f-z7vmj\" (UID: \"4acfe638-6e10-4d68-9cfa-3d1e1d4c1052\") " pod="openshift-apiserver/apiserver-76f77b778f-z7vmj" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.940456 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b06753a3-652a-4acc-b294-3ccaa5b0cb99-console-oauth-config\") pod \"console-f9d7485db-tw2nt\" (UID: \"b06753a3-652a-4acc-b294-3ccaa5b0cb99\") " pod="openshift-console/console-f9d7485db-tw2nt" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.941009 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/35ef1add-69b2-424c-b5ff-7f18b915eae1-serving-cert\") pod \"apiserver-7bbb656c7d-b9krf\" (UID: \"35ef1add-69b2-424c-b5ff-7f18b915eae1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b9krf" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.941313 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/319cad33-b4bc-4249-8124-1010cd6d79f9-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-rc96d\" (UID: \"319cad33-b4bc-4249-8124-1010cd6d79f9\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rc96d" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.941673 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4acfe638-6e10-4d68-9cfa-3d1e1d4c1052-serving-cert\") pod \"apiserver-76f77b778f-z7vmj\" (UID: \"4acfe638-6e10-4d68-9cfa-3d1e1d4c1052\") " pod="openshift-apiserver/apiserver-76f77b778f-z7vmj" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.941807 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b06753a3-652a-4acc-b294-3ccaa5b0cb99-console-serving-cert\") pod \"console-f9d7485db-tw2nt\" (UID: \"b06753a3-652a-4acc-b294-3ccaa5b0cb99\") " pod="openshift-console/console-f9d7485db-tw2nt" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.943790 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/35ef1add-69b2-424c-b5ff-7f18b915eae1-etcd-client\") pod \"apiserver-7bbb656c7d-b9krf\" (UID: \"35ef1add-69b2-424c-b5ff-7f18b915eae1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b9krf" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.954474 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.974955 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 12:07:01.980461 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/29fa9711-dd2f-41bf-92dc-a6fd88a3f341-metrics-tls\") pod \"dns-operator-744455d44c-r8785\" (UID: \"29fa9711-dd2f-41bf-92dc-a6fd88a3f341\") " pod="openshift-dns-operator/dns-operator-744455d44c-r8785" Feb 03 12:07:01 crc kubenswrapper[4820]: I0203 
12:07:01.994646 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Feb 03 12:07:02 crc kubenswrapper[4820]: I0203 12:07:02.015337 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Feb 03 12:07:02 crc kubenswrapper[4820]: I0203 12:07:02.034950 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Feb 03 12:07:02 crc kubenswrapper[4820]: I0203 12:07:02.056164 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Feb 03 12:07:02 crc kubenswrapper[4820]: I0203 12:07:02.074487 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Feb 03 12:07:02 crc kubenswrapper[4820]: I0203 12:07:02.095680 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Feb 03 12:07:02 crc kubenswrapper[4820]: I0203 12:07:02.114816 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Feb 03 12:07:02 crc kubenswrapper[4820]: I0203 12:07:02.135479 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Feb 03 12:07:02 crc kubenswrapper[4820]: I0203 12:07:02.154703 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Feb 03 12:07:02 crc kubenswrapper[4820]: I0203 12:07:02.166376 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f05b6093-516e-4a63-b2cc-8d6e6e2b2e57-metrics-tls\") pod \"ingress-operator-5b745b69d9-7csk9\" (UID: \"f05b6093-516e-4a63-b2cc-8d6e6e2b2e57\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-7csk9" Feb 03 12:07:02 crc kubenswrapper[4820]: I0203 12:07:02.183527 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 03 12:07:02 crc kubenswrapper[4820]: I0203 12:07:02.186950 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f05b6093-516e-4a63-b2cc-8d6e6e2b2e57-trusted-ca\") pod \"ingress-operator-5b745b69d9-7csk9\" (UID: \"f05b6093-516e-4a63-b2cc-8d6e6e2b2e57\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-7csk9" Feb 03 12:07:02 crc kubenswrapper[4820]: I0203 12:07:02.195342 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Feb 03 12:07:02 crc kubenswrapper[4820]: I0203 12:07:02.215156 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Feb 03 12:07:02 crc kubenswrapper[4820]: I0203 12:07:02.223591 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/11e33088-50eb-423a-8925-87aa760c56e4-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-2jnbv\" (UID: \"11e33088-50eb-423a-8925-87aa760c56e4\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2jnbv" Feb 03 12:07:02 crc kubenswrapper[4820]: I0203 12:07:02.235160 4820 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Feb 03 12:07:02 crc kubenswrapper[4820]: I0203 12:07:02.255197 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Feb 03 12:07:02 crc kubenswrapper[4820]: I0203 12:07:02.257649 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/11e33088-50eb-423a-8925-87aa760c56e4-config\") pod \"kube-controller-manager-operator-78b949d7b-2jnbv\" (UID: \"11e33088-50eb-423a-8925-87aa760c56e4\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2jnbv" Feb 03 12:07:02 crc kubenswrapper[4820]: I0203 12:07:02.275090 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Feb 03 12:07:02 crc kubenswrapper[4820]: I0203 12:07:02.294954 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Feb 03 12:07:02 crc kubenswrapper[4820]: I0203 12:07:02.299608 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/4d235feb-2891-4c16-b240-381a5810a0c7-proxy-tls\") pod \"machine-config-controller-84d6567774-w4x94\" (UID: \"4d235feb-2891-4c16-b240-381a5810a0c7\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-w4x94" Feb 03 12:07:02 crc kubenswrapper[4820]: I0203 12:07:02.316005 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Feb 03 12:07:02 crc kubenswrapper[4820]: I0203 12:07:02.335959 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Feb 03 12:07:02 crc kubenswrapper[4820]: I0203 12:07:02.355599 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Feb 03 12:07:02 crc kubenswrapper[4820]: I0203 12:07:02.375429 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Feb 03 12:07:02 crc kubenswrapper[4820]: I0203 12:07:02.396143 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Feb 03 12:07:02 crc kubenswrapper[4820]: I0203 12:07:02.415218 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Feb 03 12:07:02 crc kubenswrapper[4820]: I0203 12:07:02.435361 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Feb 03 12:07:02 crc kubenswrapper[4820]: I0203 12:07:02.446523 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/a227a161-8e53-4817-b7b2-48206c4916fb-default-certificate\") pod \"router-default-5444994796-h22tk\" (UID: \"a227a161-8e53-4817-b7b2-48206c4916fb\") " pod="openshift-ingress/router-default-5444994796-h22tk" Feb 03 12:07:02 crc kubenswrapper[4820]: I0203 12:07:02.455756 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Feb 03 12:07:02 crc kubenswrapper[4820]: I0203 12:07:02.459715 4820 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/a227a161-8e53-4817-b7b2-48206c4916fb-stats-auth\") pod \"router-default-5444994796-h22tk\" (UID: \"a227a161-8e53-4817-b7b2-48206c4916fb\") " pod="openshift-ingress/router-default-5444994796-h22tk" Feb 03 12:07:02 crc kubenswrapper[4820]: I0203 12:07:02.475618 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Feb 03 12:07:02 crc kubenswrapper[4820]: I0203 12:07:02.486208 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a227a161-8e53-4817-b7b2-48206c4916fb-metrics-certs\") pod \"router-default-5444994796-h22tk\" (UID: \"a227a161-8e53-4817-b7b2-48206c4916fb\") " pod="openshift-ingress/router-default-5444994796-h22tk" Feb 03 12:07:02 crc kubenswrapper[4820]: I0203 12:07:02.494954 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Feb 03 12:07:02 crc kubenswrapper[4820]: I0203 12:07:02.497545 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a227a161-8e53-4817-b7b2-48206c4916fb-service-ca-bundle\") pod \"router-default-5444994796-h22tk\" (UID: \"a227a161-8e53-4817-b7b2-48206c4916fb\") " pod="openshift-ingress/router-default-5444994796-h22tk" Feb 03 12:07:02 crc kubenswrapper[4820]: I0203 12:07:02.515084 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Feb 03 12:07:02 crc kubenswrapper[4820]: I0203 12:07:02.535102 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Feb 03 12:07:02 crc kubenswrapper[4820]: I0203 12:07:02.555204 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Feb 03 12:07:02 crc kubenswrapper[4820]: I0203 12:07:02.584114 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Feb 03 12:07:02 crc kubenswrapper[4820]: I0203 12:07:02.595332 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Feb 03 12:07:02 crc kubenswrapper[4820]: I0203 12:07:02.615420 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Feb 03 12:07:02 crc kubenswrapper[4820]: I0203 12:07:02.635212 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Feb 03 12:07:02 crc kubenswrapper[4820]: I0203 12:07:02.655493 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Feb 03 12:07:02 crc kubenswrapper[4820]: I0203 12:07:02.695603 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Feb 03 12:07:02 crc kubenswrapper[4820]: I0203 12:07:02.714939 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Feb 03 12:07:02 crc kubenswrapper[4820]: I0203 12:07:02.734869 4820 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Feb 03 12:07:02 crc kubenswrapper[4820]: I0203 12:07:02.754823 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Feb 03 12:07:02 crc kubenswrapper[4820]: I0203 12:07:02.773251 4820 request.go:700] Waited for 1.015129921s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-etcd-operator/secrets?fieldSelector=metadata.name%3Detcd-client&limit=500&resourceVersion=0 Feb 03 12:07:02 crc kubenswrapper[4820]: I0203 12:07:02.775213 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Feb 03 12:07:02 crc kubenswrapper[4820]: I0203 12:07:02.794734 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Feb 03 12:07:02 crc kubenswrapper[4820]: I0203 12:07:02.815305 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Feb 03 12:07:02 crc kubenswrapper[4820]: I0203 12:07:02.835271 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Feb 03 12:07:02 crc kubenswrapper[4820]: I0203 12:07:02.855524 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Feb 03 12:07:02 crc kubenswrapper[4820]: I0203 12:07:02.896319 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 03 12:07:02 crc kubenswrapper[4820]: I0203 12:07:02.915879 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Feb 03 12:07:02 crc kubenswrapper[4820]: I0203 12:07:02.935757 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Feb 03 12:07:02 crc kubenswrapper[4820]: I0203 12:07:02.954979 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Feb 03 12:07:02 crc kubenswrapper[4820]: I0203 12:07:02.985621 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Feb 03 12:07:02 crc kubenswrapper[4820]: I0203 12:07:02.995655 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Feb 03 12:07:03 crc kubenswrapper[4820]: I0203 12:07:03.015572 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Feb 03 12:07:03 crc kubenswrapper[4820]: I0203 12:07:03.035251 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Feb 03 12:07:03 crc kubenswrapper[4820]: I0203 12:07:03.055505 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Feb 03 12:07:03 crc kubenswrapper[4820]: I0203 12:07:03.075441 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Feb 03 12:07:03 crc kubenswrapper[4820]: I0203 12:07:03.095587 4820 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Feb 03 12:07:03 crc kubenswrapper[4820]: I0203 12:07:03.116545 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Feb 03 12:07:03 crc kubenswrapper[4820]: I0203 12:07:03.136195 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Feb 03 12:07:03 crc kubenswrapper[4820]: I0203 12:07:03.155034 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Feb 03 12:07:03 crc kubenswrapper[4820]: I0203 12:07:03.174969 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Feb 03 12:07:03 crc kubenswrapper[4820]: I0203 12:07:03.194482 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Feb 03 12:07:03 crc kubenswrapper[4820]: I0203 12:07:03.215825 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Feb 03 12:07:03 crc kubenswrapper[4820]: I0203 12:07:03.234513 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Feb 03 12:07:03 crc kubenswrapper[4820]: I0203 12:07:03.255356 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Feb 03 12:07:03 crc kubenswrapper[4820]: I0203 12:07:03.275315 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Feb 03 12:07:03 crc kubenswrapper[4820]: I0203 12:07:03.295985 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Feb 03 12:07:03 crc kubenswrapper[4820]: I0203 12:07:03.316292 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Feb 03 12:07:03 crc kubenswrapper[4820]: I0203 12:07:03.335929 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Feb 03 12:07:03 crc kubenswrapper[4820]: I0203 12:07:03.354918 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Feb 03 12:07:03 crc kubenswrapper[4820]: I0203 12:07:03.375441 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 03 12:07:03 crc kubenswrapper[4820]: I0203 12:07:03.395034 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 03 12:07:03 crc kubenswrapper[4820]: I0203 12:07:03.416196 4820 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Feb 03 12:07:03 crc kubenswrapper[4820]: I0203 12:07:03.436205 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Feb 03 12:07:03 crc kubenswrapper[4820]: I0203 12:07:03.455768 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Feb 03 12:07:03 crc kubenswrapper[4820]: I0203 12:07:03.493007 4820 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-r44wn\" (UniqueName: \"kubernetes.io/projected/29d2a7e9-1fcb-4213-ae6c-753953bfae1a-kube-api-access-r44wn\") pod \"console-operator-58897d9998-sf69z\" (UID: \"29d2a7e9-1fcb-4213-ae6c-753953bfae1a\") " pod="openshift-console-operator/console-operator-58897d9998-sf69z" Feb 03 12:07:03 crc kubenswrapper[4820]: I0203 12:07:03.508863 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hrzd7\" (UniqueName: \"kubernetes.io/projected/876c5dc3-b775-45cc-94b6-4339735e9975-kube-api-access-hrzd7\") pod \"downloads-7954f5f757-lnc22\" (UID: \"876c5dc3-b775-45cc-94b6-4339735e9975\") " pod="openshift-console/downloads-7954f5f757-lnc22" Feb 03 12:07:03 crc kubenswrapper[4820]: I0203 12:07:03.528610 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gnw29\" (UniqueName: \"kubernetes.io/projected/05797a22-690b-4b36-8b4e-5dcc739f7cad-kube-api-access-gnw29\") pod \"route-controller-manager-6576b87f9c-cs8dg\" (UID: \"05797a22-690b-4b36-8b4e-5dcc739f7cad\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cs8dg" Feb 03 12:07:03 crc kubenswrapper[4820]: I0203 12:07:03.545859 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cs8dg" Feb 03 12:07:03 crc kubenswrapper[4820]: I0203 12:07:03.548226 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vgvw4\" (UniqueName: \"kubernetes.io/projected/2778f3aa-c3cf-471b-bc79-b8ce1a1bbfc7-kube-api-access-vgvw4\") pod \"authentication-operator-69f744f599-s55v7\" (UID: \"2778f3aa-c3cf-471b-bc79-b8ce1a1bbfc7\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-s55v7" Feb 03 12:07:03 crc kubenswrapper[4820]: I0203 12:07:03.553225 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-lnc22" Feb 03 12:07:03 crc kubenswrapper[4820]: I0203 12:07:03.560519 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-sf69z" Feb 03 12:07:03 crc kubenswrapper[4820]: I0203 12:07:03.571457 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-st8zf\" (UniqueName: \"kubernetes.io/projected/120ba383-d275-47f8-b921-e976156f0035-kube-api-access-st8zf\") pod \"openshift-apiserver-operator-796bbdcf4f-74h4r\" (UID: \"120ba383-d275-47f8-b921-e976156f0035\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-74h4r" Feb 03 12:07:03 crc kubenswrapper[4820]: I0203 12:07:03.590219 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fzjb6\" (UniqueName: \"kubernetes.io/projected/8cdbf888-563c-4590-bfbe-2bbb669e7ddb-kube-api-access-fzjb6\") pod \"openshift-controller-manager-operator-756b6f6bc6-v4cfl\" (UID: \"8cdbf888-563c-4590-bfbe-2bbb669e7ddb\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-v4cfl" Feb 03 12:07:03 crc kubenswrapper[4820]: I0203 12:07:03.610773 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2pttq\" (UniqueName: \"kubernetes.io/projected/c93c42c7-c9ff-42cc-b604-e36f7a063fcf-kube-api-access-2pttq\") pod \"openshift-config-operator-7777fb866f-lbsmw\" (UID: \"c93c42c7-c9ff-42cc-b604-e36f7a063fcf\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-lbsmw" Feb 03 12:07:03 crc kubenswrapper[4820]: I0203 12:07:03.614864 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Feb 03 12:07:03 crc kubenswrapper[4820]: I0203 12:07:03.651679 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-52dr7\" (UniqueName: \"kubernetes.io/projected/1cd0f80e-f7d5-4cd7-a0de-f1b95c827af8-kube-api-access-52dr7\") pod \"controller-manager-879f6c89f-lt75x\" (UID: \"1cd0f80e-f7d5-4cd7-a0de-f1b95c827af8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-lt75x" Feb 03 12:07:03 crc kubenswrapper[4820]: I0203 12:07:03.655780 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Feb 03 12:07:03 crc kubenswrapper[4820]: I0203 12:07:03.675184 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Feb 03 12:07:03 crc kubenswrapper[4820]: I0203 12:07:03.696204 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Feb 03 12:07:03 crc kubenswrapper[4820]: I0203 12:07:03.715556 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Feb 03 12:07:03 crc kubenswrapper[4820]: I0203 12:07:03.735396 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Feb 03 12:07:03 crc kubenswrapper[4820]: I0203 12:07:03.756204 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Feb 03 12:07:03 crc kubenswrapper[4820]: I0203 12:07:03.767618 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-lnc22"] Feb 03 12:07:03 crc kubenswrapper[4820]: I0203 12:07:03.770432 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-lt75x" Feb 03 12:07:03 crc kubenswrapper[4820]: I0203 12:07:03.775056 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Feb 03 12:07:03 crc kubenswrapper[4820]: W0203 12:07:03.775637 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod876c5dc3_b775_45cc_94b6_4339735e9975.slice/crio-7726f27bfc435b873e68ad55404702a430cf052401714af715036175a6d96a52 WatchSource:0}: Error finding container 7726f27bfc435b873e68ad55404702a430cf052401714af715036175a6d96a52: Status 404 returned error can't find the container with id 7726f27bfc435b873e68ad55404702a430cf052401714af715036175a6d96a52 Feb 03 12:07:03 crc kubenswrapper[4820]: I0203 12:07:03.793141 4820 request.go:700] Waited for 1.910527723s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-dns/secrets?fieldSelector=metadata.name%3Ddns-default-metrics-tls&limit=500&resourceVersion=0 Feb 03 12:07:03 crc kubenswrapper[4820]: I0203 12:07:03.794291 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-v4cfl" Feb 03 12:07:03 crc kubenswrapper[4820]: I0203 12:07:03.794873 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Feb 03 12:07:03 crc kubenswrapper[4820]: I0203 12:07:03.796699 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-s55v7" Feb 03 12:07:03 crc kubenswrapper[4820]: I0203 12:07:03.798147 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-lnc22" event={"ID":"876c5dc3-b775-45cc-94b6-4339735e9975","Type":"ContainerStarted","Data":"7726f27bfc435b873e68ad55404702a430cf052401714af715036175a6d96a52"} Feb 03 12:07:03 crc kubenswrapper[4820]: I0203 12:07:03.805061 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-74h4r" Feb 03 12:07:03 crc kubenswrapper[4820]: I0203 12:07:03.813057 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-lbsmw" Feb 03 12:07:03 crc kubenswrapper[4820]: I0203 12:07:03.814833 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Feb 03 12:07:03 crc kubenswrapper[4820]: I0203 12:07:03.857477 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bvq7h\" (UniqueName: \"kubernetes.io/projected/4d235feb-2891-4c16-b240-381a5810a0c7-kube-api-access-bvq7h\") pod \"machine-config-controller-84d6567774-w4x94\" (UID: \"4d235feb-2891-4c16-b240-381a5810a0c7\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-w4x94" Feb 03 12:07:03 crc kubenswrapper[4820]: I0203 12:07:03.873537 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m4vnl\" (UniqueName: \"kubernetes.io/projected/29fa9711-dd2f-41bf-92dc-a6fd88a3f341-kube-api-access-m4vnl\") pod \"dns-operator-744455d44c-r8785\" (UID: \"29fa9711-dd2f-41bf-92dc-a6fd88a3f341\") " pod="openshift-dns-operator/dns-operator-744455d44c-r8785" Feb 03 12:07:03 crc kubenswrapper[4820]: I0203 12:07:03.905565 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7ngxx\" (UniqueName: \"kubernetes.io/projected/319cad33-b4bc-4249-8124-1010cd6d79f9-kube-api-access-7ngxx\") pod \"kube-storage-version-migrator-operator-b67b599dd-rc96d\" (UID: \"319cad33-b4bc-4249-8124-1010cd6d79f9\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rc96d" Feb 03 12:07:03 crc kubenswrapper[4820]: I0203 12:07:03.921191 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x542b\" (UniqueName: \"kubernetes.io/projected/1339ee72-a846-4147-b494-55ef92897378-kube-api-access-x542b\") pod \"cluster-samples-operator-665b6dd947-8vn7s\" (UID: \"1339ee72-a846-4147-b494-55ef92897378\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-8vn7s" Feb 03 12:07:03 crc kubenswrapper[4820]: I0203 12:07:03.930070 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nq8t5\" (UniqueName: \"kubernetes.io/projected/a227a161-8e53-4817-b7b2-48206c4916fb-kube-api-access-nq8t5\") pod \"router-default-5444994796-h22tk\" (UID: \"a227a161-8e53-4817-b7b2-48206c4916fb\") " pod="openshift-ingress/router-default-5444994796-h22tk" Feb 03 12:07:03 crc kubenswrapper[4820]: I0203 12:07:03.954429 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1e2ff1f0-ab87-4251-b1ea-c08cad288246-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-c9vjp\" (UID: \"1e2ff1f0-ab87-4251-b1ea-c08cad288246\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-c9vjp" Feb 03 12:07:03 crc kubenswrapper[4820]: I0203 12:07:03.965074 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-sf69z"] Feb 03 12:07:03 crc kubenswrapper[4820]: I0203 12:07:03.965115 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-cs8dg"] Feb 03 12:07:03 crc kubenswrapper[4820]: I0203 12:07:03.972787 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vgglg\" (UniqueName: \"kubernetes.io/projected/4acfe638-6e10-4d68-9cfa-3d1e1d4c1052-kube-api-access-vgglg\") pod 
\"apiserver-76f77b778f-z7vmj\" (UID: \"4acfe638-6e10-4d68-9cfa-3d1e1d4c1052\") " pod="openshift-apiserver/apiserver-76f77b778f-z7vmj" Feb 03 12:07:03 crc kubenswrapper[4820]: I0203 12:07:03.974267 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-lt75x"] Feb 03 12:07:03 crc kubenswrapper[4820]: W0203 12:07:03.979115 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod29d2a7e9_1fcb_4213_ae6c_753953bfae1a.slice/crio-90ce6601baf02833adf79a8d48fde10dfe66550b361315deb215a01ba5585ee2 WatchSource:0}: Error finding container 90ce6601baf02833adf79a8d48fde10dfe66550b361315deb215a01ba5585ee2: Status 404 returned error can't find the container with id 90ce6601baf02833adf79a8d48fde10dfe66550b361315deb215a01ba5585ee2 Feb 03 12:07:03 crc kubenswrapper[4820]: W0203 12:07:03.984450 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod05797a22_690b_4b36_8b4e_5dcc739f7cad.slice/crio-7182fedced2758ecd7ecb9e7da64193d528b8761d3f257c54f2fdc1bd7f1fb6d WatchSource:0}: Error finding container 7182fedced2758ecd7ecb9e7da64193d528b8761d3f257c54f2fdc1bd7f1fb6d: Status 404 returned error can't find the container with id 7182fedced2758ecd7ecb9e7da64193d528b8761d3f257c54f2fdc1bd7f1fb6d Feb 03 12:07:03 crc kubenswrapper[4820]: I0203 12:07:03.994500 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mcpmj\" (UniqueName: \"kubernetes.io/projected/f05b6093-516e-4a63-b2cc-8d6e6e2b2e57-kube-api-access-mcpmj\") pod \"ingress-operator-5b745b69d9-7csk9\" (UID: \"f05b6093-516e-4a63-b2cc-8d6e6e2b2e57\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-7csk9" Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.013587 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/11e33088-50eb-423a-8925-87aa760c56e4-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-2jnbv\" (UID: \"11e33088-50eb-423a-8925-87aa760c56e4\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2jnbv" Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.021015 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-c9vjp" Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.033210 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l4lkb\" (UniqueName: \"kubernetes.io/projected/6b522a8e-f795-4cf1-adbb-899674a5e359-kube-api-access-l4lkb\") pod \"machine-api-operator-5694c8668f-hxjbf\" (UID: \"6b522a8e-f795-4cf1-adbb-899674a5e359\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hxjbf" Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.029518 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-r8785" Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.031300 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-s55v7"] Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.023936 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rc96d" Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.058924 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wgqq9\" (UniqueName: \"kubernetes.io/projected/6b39f4b8-90e3-4d3b-a000-725d42cdb8dd-kube-api-access-wgqq9\") pod \"machine-approver-56656f9798-2fjkt\" (UID: \"6b39f4b8-90e3-4d3b-a000-725d42cdb8dd\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2fjkt" Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.069407 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2jnbv" Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.072612 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lckpg\" (UniqueName: \"kubernetes.io/projected/35ef1add-69b2-424c-b5ff-7f18b915eae1-kube-api-access-lckpg\") pod \"apiserver-7bbb656c7d-b9krf\" (UID: \"35ef1add-69b2-424c-b5ff-7f18b915eae1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b9krf" Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.080567 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-w4x94" Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.091575 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c8frp\" (UniqueName: \"kubernetes.io/projected/b06753a3-652a-4acc-b294-3ccaa5b0cb99-kube-api-access-c8frp\") pod \"console-f9d7485db-tw2nt\" (UID: \"b06753a3-652a-4acc-b294-3ccaa5b0cb99\") " pod="openshift-console/console-f9d7485db-tw2nt" Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.092974 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-h22tk" Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.107345 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f05b6093-516e-4a63-b2cc-8d6e6e2b2e57-bound-sa-token\") pod \"ingress-operator-5b745b69d9-7csk9\" (UID: \"f05b6093-516e-4a63-b2cc-8d6e6e2b2e57\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-7csk9" Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.114776 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-lbsmw"] Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.162976 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0e90d586-2aaf-4f58-acc2-eba29dc0cd2e-audit-policies\") pod \"oauth-openshift-558db77b4-4gskq\" (UID: \"0e90d586-2aaf-4f58-acc2-eba29dc0cd2e\") " pod="openshift-authentication/oauth-openshift-558db77b4-4gskq" Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.163042 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/0e90d586-2aaf-4f58-acc2-eba29dc0cd2e-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-4gskq\" (UID: \"0e90d586-2aaf-4f58-acc2-eba29dc0cd2e\") " pod="openshift-authentication/oauth-openshift-558db77b4-4gskq" Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.163074 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0e90d586-2aaf-4f58-acc2-eba29dc0cd2e-audit-dir\") pod \"oauth-openshift-558db77b4-4gskq\" (UID: \"0e90d586-2aaf-4f58-acc2-eba29dc0cd2e\") " pod="openshift-authentication/oauth-openshift-558db77b4-4gskq" Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.163090 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a6ff59c4-dee8-4ed7-86f9-601df8b4e7e4-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-2t74c\" (UID: \"a6ff59c4-dee8-4ed7-86f9-601df8b4e7e4\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-2t74c" Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.163120 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/0e90d586-2aaf-4f58-acc2-eba29dc0cd2e-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-4gskq\" (UID: \"0e90d586-2aaf-4f58-acc2-eba29dc0cd2e\") " pod="openshift-authentication/oauth-openshift-558db77b4-4gskq" Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.163156 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/0e90d586-2aaf-4f58-acc2-eba29dc0cd2e-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-4gskq\" (UID: \"0e90d586-2aaf-4f58-acc2-eba29dc0cd2e\") " pod="openshift-authentication/oauth-openshift-558db77b4-4gskq" Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.163171 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"srv-cert\" (UniqueName: \"kubernetes.io/secret/d45799eb-72ff-43ba-9ca3-4eb5d44bc3a5-srv-cert\") pod \"catalog-operator-68c6474976-8rb2x\" (UID: \"d45799eb-72ff-43ba-9ca3-4eb5d44bc3a5\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8rb2x" Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.163188 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a6ff59c4-dee8-4ed7-86f9-601df8b4e7e4-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-2t74c\" (UID: \"a6ff59c4-dee8-4ed7-86f9-601df8b4e7e4\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-2t74c" Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.163205 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/0e90d586-2aaf-4f58-acc2-eba29dc0cd2e-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-4gskq\" (UID: \"0e90d586-2aaf-4f58-acc2-eba29dc0cd2e\") " pod="openshift-authentication/oauth-openshift-558db77b4-4gskq" Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.163223 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0e90d586-2aaf-4f58-acc2-eba29dc0cd2e-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-4gskq\" (UID: \"0e90d586-2aaf-4f58-acc2-eba29dc0cd2e\") " pod="openshift-authentication/oauth-openshift-558db77b4-4gskq" Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.163258 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qpxpv\" (UID: \"10fa9c2b-e370-400e-9e71-a4617592b411\") " pod="openshift-image-registry/image-registry-697d97f7c8-qpxpv" Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.163284 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zm8gk\" (UniqueName: \"kubernetes.io/projected/a6ff59c4-dee8-4ed7-86f9-601df8b4e7e4-kube-api-access-zm8gk\") pod \"cluster-image-registry-operator-dc59b4c8b-2t74c\" (UID: \"a6ff59c4-dee8-4ed7-86f9-601df8b4e7e4\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-2t74c" Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.163319 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/10fa9c2b-e370-400e-9e71-a4617592b411-bound-sa-token\") pod \"image-registry-697d97f7c8-qpxpv\" (UID: \"10fa9c2b-e370-400e-9e71-a4617592b411\") " pod="openshift-image-registry/image-registry-697d97f7c8-qpxpv" Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.163364 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/8237a118-001c-483c-8810-d051f33d35eb-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-gqwld\" (UID: \"8237a118-001c-483c-8810-d051f33d35eb\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-gqwld" Feb 03 12:07:04 crc 
Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.163381 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hc757\" (UniqueName: \"kubernetes.io/projected/0e90d586-2aaf-4f58-acc2-eba29dc0cd2e-kube-api-access-hc757\") pod \"oauth-openshift-558db77b4-4gskq\" (UID: \"0e90d586-2aaf-4f58-acc2-eba29dc0cd2e\") " pod="openshift-authentication/oauth-openshift-558db77b4-4gskq"
Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.163398 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/10fa9c2b-e370-400e-9e71-a4617592b411-ca-trust-extracted\") pod \"image-registry-697d97f7c8-qpxpv\" (UID: \"10fa9c2b-e370-400e-9e71-a4617592b411\") " pod="openshift-image-registry/image-registry-697d97f7c8-qpxpv"
Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.163414 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/0e90d586-2aaf-4f58-acc2-eba29dc0cd2e-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-4gskq\" (UID: \"0e90d586-2aaf-4f58-acc2-eba29dc0cd2e\") " pod="openshift-authentication/oauth-openshift-558db77b4-4gskq"
Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.163430 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7h2x\" (UniqueName: \"kubernetes.io/projected/d45799eb-72ff-43ba-9ca3-4eb5d44bc3a5-kube-api-access-p7h2x\") pod \"catalog-operator-68c6474976-8rb2x\" (UID: \"d45799eb-72ff-43ba-9ca3-4eb5d44bc3a5\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8rb2x"
Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.163485 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/10fa9c2b-e370-400e-9e71-a4617592b411-registry-certificates\") pod \"image-registry-697d97f7c8-qpxpv\" (UID: \"10fa9c2b-e370-400e-9e71-a4617592b411\") " pod="openshift-image-registry/image-registry-697d97f7c8-qpxpv"
Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.163500 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/a6ff59c4-dee8-4ed7-86f9-601df8b4e7e4-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-2t74c\" (UID: \"a6ff59c4-dee8-4ed7-86f9-601df8b4e7e4\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-2t74c"
Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.163515 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9plt8\" (UniqueName: \"kubernetes.io/projected/8237a118-001c-483c-8810-d051f33d35eb-kube-api-access-9plt8\") pod \"control-plane-machine-set-operator-78cbb6b69f-gqwld\" (UID: \"8237a118-001c-483c-8810-d051f33d35eb\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-gqwld"
Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.163533 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/0e90d586-2aaf-4f58-acc2-eba29dc0cd2e-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-4gskq\" (UID: \"0e90d586-2aaf-4f58-acc2-eba29dc0cd2e\") " pod="openshift-authentication/oauth-openshift-558db77b4-4gskq"
Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.163778 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/0e90d586-2aaf-4f58-acc2-eba29dc0cd2e-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-4gskq\" (UID: \"0e90d586-2aaf-4f58-acc2-eba29dc0cd2e\") " pod="openshift-authentication/oauth-openshift-558db77b4-4gskq"
Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.163844 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/0e90d586-2aaf-4f58-acc2-eba29dc0cd2e-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-4gskq\" (UID: \"0e90d586-2aaf-4f58-acc2-eba29dc0cd2e\") " pod="openshift-authentication/oauth-openshift-558db77b4-4gskq"
Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.163871 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/0e90d586-2aaf-4f58-acc2-eba29dc0cd2e-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-4gskq\" (UID: \"0e90d586-2aaf-4f58-acc2-eba29dc0cd2e\") " pod="openshift-authentication/oauth-openshift-558db77b4-4gskq"
Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.163927 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/6aba4201-b6b2-4aed-adeb-513e9190efa9-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-kqqwj\" (UID: \"6aba4201-b6b2-4aed-adeb-513e9190efa9\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-kqqwj"
Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.163956 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/10fa9c2b-e370-400e-9e71-a4617592b411-installation-pull-secrets\") pod \"image-registry-697d97f7c8-qpxpv\" (UID: \"10fa9c2b-e370-400e-9e71-a4617592b411\") " pod="openshift-image-registry/image-registry-697d97f7c8-qpxpv"
Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.163981 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/10fa9c2b-e370-400e-9e71-a4617592b411-registry-tls\") pod \"image-registry-697d97f7c8-qpxpv\" (UID: \"10fa9c2b-e370-400e-9e71-a4617592b411\") " pod="openshift-image-registry/image-registry-697d97f7c8-qpxpv"
Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.164001 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/10fa9c2b-e370-400e-9e71-a4617592b411-trusted-ca\") pod \"image-registry-697d97f7c8-qpxpv\" (UID: \"10fa9c2b-e370-400e-9e71-a4617592b411\") " pod="openshift-image-registry/image-registry-697d97f7c8-qpxpv"
Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.164030 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/d45799eb-72ff-43ba-9ca3-4eb5d44bc3a5-profile-collector-cert\") pod \"catalog-operator-68c6474976-8rb2x\" (UID: \"d45799eb-72ff-43ba-9ca3-4eb5d44bc3a5\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8rb2x"
\"catalog-operator-68c6474976-8rb2x\" (UID: \"d45799eb-72ff-43ba-9ca3-4eb5d44bc3a5\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8rb2x" Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.164079 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qxjrn\" (UniqueName: \"kubernetes.io/projected/d79a4fe5-aca8-4046-8760-2892f6e3dc7d-kube-api-access-qxjrn\") pod \"migrator-59844c95c7-vmzb5\" (UID: \"d79a4fe5-aca8-4046-8760-2892f6e3dc7d\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-vmzb5" Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.164104 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6xnkl\" (UniqueName: \"kubernetes.io/projected/10fa9c2b-e370-400e-9e71-a4617592b411-kube-api-access-6xnkl\") pod \"image-registry-697d97f7c8-qpxpv\" (UID: \"10fa9c2b-e370-400e-9e71-a4617592b411\") " pod="openshift-image-registry/image-registry-697d97f7c8-qpxpv" Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.164127 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bw8c5\" (UniqueName: \"kubernetes.io/projected/6aba4201-b6b2-4aed-adeb-513e9190efa9-kube-api-access-bw8c5\") pod \"multus-admission-controller-857f4d67dd-kqqwj\" (UID: \"6aba4201-b6b2-4aed-adeb-513e9190efa9\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-kqqwj" Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.164431 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/0e90d586-2aaf-4f58-acc2-eba29dc0cd2e-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-4gskq\" (UID: \"0e90d586-2aaf-4f58-acc2-eba29dc0cd2e\") " pod="openshift-authentication/oauth-openshift-558db77b4-4gskq" Feb 03 12:07:04 crc kubenswrapper[4820]: E0203 12:07:04.164547 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 12:07:04.664535563 +0000 UTC m=+142.187611427 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qpxpv" (UID: "10fa9c2b-e370-400e-9e71-a4617592b411") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.172612 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-8vn7s" Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.187287 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-tw2nt" Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.227971 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-z7vmj" Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.264247 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-r8785"] Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.266826 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-74h4r"] Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.267487 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.267724 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/0e90d586-2aaf-4f58-acc2-eba29dc0cd2e-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-4gskq\" (UID: \"0e90d586-2aaf-4f58-acc2-eba29dc0cd2e\") " pod="openshift-authentication/oauth-openshift-558db77b4-4gskq" Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.267771 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b7587d06-772a-477c-9503-4af59b74f082-serving-cert\") pod \"service-ca-operator-777779d784-5kfzf\" (UID: \"b7587d06-772a-477c-9503-4af59b74f082\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-5kfzf" Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.267792 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/18148611-705a-4276-a9e3-9659f38654a8-etcd-service-ca\") pod \"etcd-operator-b45778765-k7tp7\" (UID: \"18148611-705a-4276-a9e3-9659f38654a8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-k7tp7" Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.267812 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jvls7\" (UniqueName: \"kubernetes.io/projected/49dae199-6b32-4904-862e-aff8cb8c4946-kube-api-access-jvls7\") pod \"ingress-canary-526s7\" (UID: \"49dae199-6b32-4904-862e-aff8cb8c4946\") " pod="openshift-ingress-canary/ingress-canary-526s7" Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.267854 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g55j4\" (UniqueName: \"kubernetes.io/projected/577cff0c-0386-467f-8a44-314a922051e2-kube-api-access-g55j4\") pod \"package-server-manager-789f6589d5-8q9q7\" (UID: \"577cff0c-0386-467f-8a44-314a922051e2\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-8q9q7" Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.267875 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/51e0f75d-f0eb-4e04-a1ef-da8f256a845d-proxy-tls\") pod \"machine-config-operator-74547568cd-58ssn\" (UID: \"51e0f75d-f0eb-4e04-a1ef-da8f256a845d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-58ssn" Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.267946 
Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.267974 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3677a387-0bac-4cea-9921-c93b14cd430e-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-drcgk\" (UID: \"3677a387-0bac-4cea-9921-c93b14cd430e\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-drcgk"
Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.267995 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/b460558b-ba3e-4543-bb57-debddb0711e7-csi-data-dir\") pod \"csi-hostpathplugin-wsjsc\" (UID: \"b460558b-ba3e-4543-bb57-debddb0711e7\") " pod="hostpath-provisioner/csi-hostpathplugin-wsjsc"
Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.268015 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dhs9g\" (UniqueName: \"kubernetes.io/projected/52996a75-b03e-40f5-a587-2c1476910cd4-kube-api-access-dhs9g\") pod \"olm-operator-6b444d44fb-c7gsf\" (UID: \"52996a75-b03e-40f5-a587-2c1476910cd4\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-c7gsf"
Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.268034 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/18148611-705a-4276-a9e3-9659f38654a8-config\") pod \"etcd-operator-b45778765-k7tp7\" (UID: \"18148611-705a-4276-a9e3-9659f38654a8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-k7tp7"
Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.268053 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/1b5d200f-9fc0-42f9-96e3-ebec60c47a05-signing-key\") pod \"service-ca-9c57cc56f-vdn7t\" (UID: \"1b5d200f-9fc0-42f9-96e3-ebec60c47a05\") " pod="openshift-service-ca/service-ca-9c57cc56f-vdn7t"
Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.268138 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lz6mb\" (UniqueName: \"kubernetes.io/projected/1b5d200f-9fc0-42f9-96e3-ebec60c47a05-kube-api-access-lz6mb\") pod \"service-ca-9c57cc56f-vdn7t\" (UID: \"1b5d200f-9fc0-42f9-96e3-ebec60c47a05\") " pod="openshift-service-ca/service-ca-9c57cc56f-vdn7t"
Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.268165 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/0e90d586-2aaf-4f58-acc2-eba29dc0cd2e-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-4gskq\" (UID: \"0e90d586-2aaf-4f58-acc2-eba29dc0cd2e\") " pod="openshift-authentication/oauth-openshift-558db77b4-4gskq"
Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.268187 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/0e90d586-2aaf-4f58-acc2-eba29dc0cd2e-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-4gskq\" (UID: \"0e90d586-2aaf-4f58-acc2-eba29dc0cd2e\") " pod="openshift-authentication/oauth-openshift-558db77b4-4gskq"
Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.268211 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/7b253c2b-c072-449a-8a7a-915da7526653-node-bootstrap-token\") pod \"machine-config-server-lc7k5\" (UID: \"7b253c2b-c072-449a-8a7a-915da7526653\") " pod="openshift-machine-config-operator/machine-config-server-lc7k5"
Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.268234 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/6aba4201-b6b2-4aed-adeb-513e9190efa9-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-kqqwj\" (UID: \"6aba4201-b6b2-4aed-adeb-513e9190efa9\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-kqqwj"
Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.268279 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q5jlx\" (UniqueName: \"kubernetes.io/projected/92dde085-8a2b-4c9f-947f-441ea67b8622-kube-api-access-q5jlx\") pod \"marketplace-operator-79b997595-9w662\" (UID: \"92dde085-8a2b-4c9f-947f-441ea67b8622\") " pod="openshift-marketplace/marketplace-operator-79b997595-9w662"
Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.268315 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/10fa9c2b-e370-400e-9e71-a4617592b411-installation-pull-secrets\") pod \"image-registry-697d97f7c8-qpxpv\" (UID: \"10fa9c2b-e370-400e-9e71-a4617592b411\") " pod="openshift-image-registry/image-registry-697d97f7c8-qpxpv"
Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.268337 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/10fa9c2b-e370-400e-9e71-a4617592b411-registry-tls\") pod \"image-registry-697d97f7c8-qpxpv\" (UID: \"10fa9c2b-e370-400e-9e71-a4617592b411\") " pod="openshift-image-registry/image-registry-697d97f7c8-qpxpv"
Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.268358 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/10fa9c2b-e370-400e-9e71-a4617592b411-trusted-ca\") pod \"image-registry-697d97f7c8-qpxpv\" (UID: \"10fa9c2b-e370-400e-9e71-a4617592b411\") " pod="openshift-image-registry/image-registry-697d97f7c8-qpxpv"
Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.268466 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/52996a75-b03e-40f5-a587-2c1476910cd4-srv-cert\") pod \"olm-operator-6b444d44fb-c7gsf\" (UID: \"52996a75-b03e-40f5-a587-2c1476910cd4\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-c7gsf"
Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.268505 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/d45799eb-72ff-43ba-9ca3-4eb5d44bc3a5-profile-collector-cert\") pod \"catalog-operator-68c6474976-8rb2x\" (UID: \"d45799eb-72ff-43ba-9ca3-4eb5d44bc3a5\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8rb2x"
\"kubernetes.io/secret/d45799eb-72ff-43ba-9ca3-4eb5d44bc3a5-profile-collector-cert\") pod \"catalog-operator-68c6474976-8rb2x\" (UID: \"d45799eb-72ff-43ba-9ca3-4eb5d44bc3a5\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8rb2x" Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.268526 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/1b5d200f-9fc0-42f9-96e3-ebec60c47a05-signing-cabundle\") pod \"service-ca-9c57cc56f-vdn7t\" (UID: \"1b5d200f-9fc0-42f9-96e3-ebec60c47a05\") " pod="openshift-service-ca/service-ca-9c57cc56f-vdn7t" Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.268561 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qxjrn\" (UniqueName: \"kubernetes.io/projected/d79a4fe5-aca8-4046-8760-2892f6e3dc7d-kube-api-access-qxjrn\") pod \"migrator-59844c95c7-vmzb5\" (UID: \"d79a4fe5-aca8-4046-8760-2892f6e3dc7d\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-vmzb5" Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.268585 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/31277b5e-7869-4612-ba40-dcd0a37153fb-tmpfs\") pod \"packageserver-d55dfcdfc-b6ghj\" (UID: \"31277b5e-7869-4612-ba40-dcd0a37153fb\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-b6ghj" Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.268606 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d4ff9542-77a8-4e10-b4a0-8ab831c57b35-metrics-tls\") pod \"dns-default-wfbd9\" (UID: \"d4ff9542-77a8-4e10-b4a0-8ab831c57b35\") " pod="openshift-dns/dns-default-wfbd9" Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.268628 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/52996a75-b03e-40f5-a587-2c1476910cd4-profile-collector-cert\") pod \"olm-operator-6b444d44fb-c7gsf\" (UID: \"52996a75-b03e-40f5-a587-2c1476910cd4\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-c7gsf" Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.268665 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6xnkl\" (UniqueName: \"kubernetes.io/projected/10fa9c2b-e370-400e-9e71-a4617592b411-kube-api-access-6xnkl\") pod \"image-registry-697d97f7c8-qpxpv\" (UID: \"10fa9c2b-e370-400e-9e71-a4617592b411\") " pod="openshift-image-registry/image-registry-697d97f7c8-qpxpv" Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.268686 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bw8c5\" (UniqueName: \"kubernetes.io/projected/6aba4201-b6b2-4aed-adeb-513e9190efa9-kube-api-access-bw8c5\") pod \"multus-admission-controller-857f4d67dd-kqqwj\" (UID: \"6aba4201-b6b2-4aed-adeb-513e9190efa9\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-kqqwj" Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.268709 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b9d628ea-493d-4b0c-b4a2-194cef62a08e-secret-volume\") pod \"collect-profiles-29502000-hcscr\" (UID: 
\"b9d628ea-493d-4b0c-b4a2-194cef62a08e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29502000-hcscr" Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.268753 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/0e90d586-2aaf-4f58-acc2-eba29dc0cd2e-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-4gskq\" (UID: \"0e90d586-2aaf-4f58-acc2-eba29dc0cd2e\") " pod="openshift-authentication/oauth-openshift-558db77b4-4gskq" Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.268775 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b7587d06-772a-477c-9503-4af59b74f082-config\") pod \"service-ca-operator-777779d784-5kfzf\" (UID: \"b7587d06-772a-477c-9503-4af59b74f082\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-5kfzf" Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.268810 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8mv8\" (UniqueName: \"kubernetes.io/projected/b9d628ea-493d-4b0c-b4a2-194cef62a08e-kube-api-access-l8mv8\") pod \"collect-profiles-29502000-hcscr\" (UID: \"b9d628ea-493d-4b0c-b4a2-194cef62a08e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29502000-hcscr" Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.268854 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0e90d586-2aaf-4f58-acc2-eba29dc0cd2e-audit-policies\") pod \"oauth-openshift-558db77b4-4gskq\" (UID: \"0e90d586-2aaf-4f58-acc2-eba29dc0cd2e\") " pod="openshift-authentication/oauth-openshift-558db77b4-4gskq" Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.268912 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/0e90d586-2aaf-4f58-acc2-eba29dc0cd2e-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-4gskq\" (UID: \"0e90d586-2aaf-4f58-acc2-eba29dc0cd2e\") " pod="openshift-authentication/oauth-openshift-558db77b4-4gskq" Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.268935 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/7b253c2b-c072-449a-8a7a-915da7526653-certs\") pod \"machine-config-server-lc7k5\" (UID: \"7b253c2b-c072-449a-8a7a-915da7526653\") " pod="openshift-machine-config-operator/machine-config-server-lc7k5" Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.268971 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b9vvc\" (UniqueName: \"kubernetes.io/projected/7b253c2b-c072-449a-8a7a-915da7526653-kube-api-access-b9vvc\") pod \"machine-config-server-lc7k5\" (UID: \"7b253c2b-c072-449a-8a7a-915da7526653\") " pod="openshift-machine-config-operator/machine-config-server-lc7k5" Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.268992 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f7x2r\" (UniqueName: \"kubernetes.io/projected/51e0f75d-f0eb-4e04-a1ef-da8f256a845d-kube-api-access-f7x2r\") pod \"machine-config-operator-74547568cd-58ssn\" (UID: \"51e0f75d-f0eb-4e04-a1ef-da8f256a845d\") " 
pod="openshift-machine-config-operator/machine-config-operator-74547568cd-58ssn" Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.269013 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3677a387-0bac-4cea-9921-c93b14cd430e-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-drcgk\" (UID: \"3677a387-0bac-4cea-9921-c93b14cd430e\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-drcgk" Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.269033 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/b460558b-ba3e-4543-bb57-debddb0711e7-mountpoint-dir\") pod \"csi-hostpathplugin-wsjsc\" (UID: \"b460558b-ba3e-4543-bb57-debddb0711e7\") " pod="hostpath-provisioner/csi-hostpathplugin-wsjsc" Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.269121 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0e90d586-2aaf-4f58-acc2-eba29dc0cd2e-audit-dir\") pod \"oauth-openshift-558db77b4-4gskq\" (UID: \"0e90d586-2aaf-4f58-acc2-eba29dc0cd2e\") " pod="openshift-authentication/oauth-openshift-558db77b4-4gskq" Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.269144 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a6ff59c4-dee8-4ed7-86f9-601df8b4e7e4-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-2t74c\" (UID: \"a6ff59c4-dee8-4ed7-86f9-601df8b4e7e4\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-2t74c" Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.269246 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d4ff9542-77a8-4e10-b4a0-8ab831c57b35-config-volume\") pod \"dns-default-wfbd9\" (UID: \"d4ff9542-77a8-4e10-b4a0-8ab831c57b35\") " pod="openshift-dns/dns-default-wfbd9" Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.269296 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/0e90d586-2aaf-4f58-acc2-eba29dc0cd2e-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-4gskq\" (UID: \"0e90d586-2aaf-4f58-acc2-eba29dc0cd2e\") " pod="openshift-authentication/oauth-openshift-558db77b4-4gskq" Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.269343 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/0e90d586-2aaf-4f58-acc2-eba29dc0cd2e-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-4gskq\" (UID: \"0e90d586-2aaf-4f58-acc2-eba29dc0cd2e\") " pod="openshift-authentication/oauth-openshift-558db77b4-4gskq" Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.269366 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d45799eb-72ff-43ba-9ca3-4eb5d44bc3a5-srv-cert\") pod \"catalog-operator-68c6474976-8rb2x\" (UID: \"d45799eb-72ff-43ba-9ca3-4eb5d44bc3a5\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8rb2x" Feb 03 12:07:04 crc kubenswrapper[4820]: 
I0203 12:07:04.269401 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/92dde085-8a2b-4c9f-947f-441ea67b8622-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-9w662\" (UID: \"92dde085-8a2b-4c9f-947f-441ea67b8622\") " pod="openshift-marketplace/marketplace-operator-79b997595-9w662" Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.269424 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a6ff59c4-dee8-4ed7-86f9-601df8b4e7e4-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-2t74c\" (UID: \"a6ff59c4-dee8-4ed7-86f9-601df8b4e7e4\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-2t74c" Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.269466 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/0e90d586-2aaf-4f58-acc2-eba29dc0cd2e-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-4gskq\" (UID: \"0e90d586-2aaf-4f58-acc2-eba29dc0cd2e\") " pod="openshift-authentication/oauth-openshift-558db77b4-4gskq" Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.269489 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fq5hs\" (UniqueName: \"kubernetes.io/projected/18148611-705a-4276-a9e3-9659f38654a8-kube-api-access-fq5hs\") pod \"etcd-operator-b45778765-k7tp7\" (UID: \"18148611-705a-4276-a9e3-9659f38654a8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-k7tp7" Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.269526 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0e90d586-2aaf-4f58-acc2-eba29dc0cd2e-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-4gskq\" (UID: \"0e90d586-2aaf-4f58-acc2-eba29dc0cd2e\") " pod="openshift-authentication/oauth-openshift-558db77b4-4gskq" Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.269573 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zm8gk\" (UniqueName: \"kubernetes.io/projected/a6ff59c4-dee8-4ed7-86f9-601df8b4e7e4-kube-api-access-zm8gk\") pod \"cluster-image-registry-operator-dc59b4c8b-2t74c\" (UID: \"a6ff59c4-dee8-4ed7-86f9-601df8b4e7e4\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-2t74c" Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.269611 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wdf9b\" (UniqueName: \"kubernetes.io/projected/31277b5e-7869-4612-ba40-dcd0a37153fb-kube-api-access-wdf9b\") pod \"packageserver-d55dfcdfc-b6ghj\" (UID: \"31277b5e-7869-4612-ba40-dcd0a37153fb\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-b6ghj" Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.269631 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5jm54\" (UniqueName: \"kubernetes.io/projected/b7587d06-772a-477c-9503-4af59b74f082-kube-api-access-5jm54\") pod \"service-ca-operator-777779d784-5kfzf\" (UID: \"b7587d06-772a-477c-9503-4af59b74f082\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-5kfzf" Feb 
03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.269676 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/b460558b-ba3e-4543-bb57-debddb0711e7-registration-dir\") pod \"csi-hostpathplugin-wsjsc\" (UID: \"b460558b-ba3e-4543-bb57-debddb0711e7\") " pod="hostpath-provisioner/csi-hostpathplugin-wsjsc" Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.269699 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/10fa9c2b-e370-400e-9e71-a4617592b411-bound-sa-token\") pod \"image-registry-697d97f7c8-qpxpv\" (UID: \"10fa9c2b-e370-400e-9e71-a4617592b411\") " pod="openshift-image-registry/image-registry-697d97f7c8-qpxpv" Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.269720 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/51e0f75d-f0eb-4e04-a1ef-da8f256a845d-auth-proxy-config\") pod \"machine-config-operator-74547568cd-58ssn\" (UID: \"51e0f75d-f0eb-4e04-a1ef-da8f256a845d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-58ssn" Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.269760 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/8237a118-001c-483c-8810-d051f33d35eb-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-gqwld\" (UID: \"8237a118-001c-483c-8810-d051f33d35eb\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-gqwld" Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.269785 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hc757\" (UniqueName: \"kubernetes.io/projected/0e90d586-2aaf-4f58-acc2-eba29dc0cd2e-kube-api-access-hc757\") pod \"oauth-openshift-558db77b4-4gskq\" (UID: \"0e90d586-2aaf-4f58-acc2-eba29dc0cd2e\") " pod="openshift-authentication/oauth-openshift-558db77b4-4gskq" Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.269808 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/10fa9c2b-e370-400e-9e71-a4617592b411-ca-trust-extracted\") pod \"image-registry-697d97f7c8-qpxpv\" (UID: \"10fa9c2b-e370-400e-9e71-a4617592b411\") " pod="openshift-image-registry/image-registry-697d97f7c8-qpxpv" Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.269830 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jv9x2\" (UniqueName: \"kubernetes.io/projected/b460558b-ba3e-4543-bb57-debddb0711e7-kube-api-access-jv9x2\") pod \"csi-hostpathplugin-wsjsc\" (UID: \"b460558b-ba3e-4543-bb57-debddb0711e7\") " pod="hostpath-provisioner/csi-hostpathplugin-wsjsc" Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.269857 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/0e90d586-2aaf-4f58-acc2-eba29dc0cd2e-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-4gskq\" (UID: \"0e90d586-2aaf-4f58-acc2-eba29dc0cd2e\") " pod="openshift-authentication/oauth-openshift-558db77b4-4gskq" Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.269878 4820 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/b460558b-ba3e-4543-bb57-debddb0711e7-socket-dir\") pod \"csi-hostpathplugin-wsjsc\" (UID: \"b460558b-ba3e-4543-bb57-debddb0711e7\") " pod="hostpath-provisioner/csi-hostpathplugin-wsjsc" Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.271547 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p7h2x\" (UniqueName: \"kubernetes.io/projected/d45799eb-72ff-43ba-9ca3-4eb5d44bc3a5-kube-api-access-p7h2x\") pod \"catalog-operator-68c6474976-8rb2x\" (UID: \"d45799eb-72ff-43ba-9ca3-4eb5d44bc3a5\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8rb2x" Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.271572 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/31277b5e-7869-4612-ba40-dcd0a37153fb-apiservice-cert\") pod \"packageserver-d55dfcdfc-b6ghj\" (UID: \"31277b5e-7869-4612-ba40-dcd0a37153fb\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-b6ghj" Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.271615 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/18148611-705a-4276-a9e3-9659f38654a8-etcd-ca\") pod \"etcd-operator-b45778765-k7tp7\" (UID: \"18148611-705a-4276-a9e3-9659f38654a8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-k7tp7" Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.271646 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b9d628ea-493d-4b0c-b4a2-194cef62a08e-config-volume\") pod \"collect-profiles-29502000-hcscr\" (UID: \"b9d628ea-493d-4b0c-b4a2-194cef62a08e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29502000-hcscr" Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.271664 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gghwf\" (UniqueName: \"kubernetes.io/projected/d4ff9542-77a8-4e10-b4a0-8ab831c57b35-kube-api-access-gghwf\") pod \"dns-default-wfbd9\" (UID: \"d4ff9542-77a8-4e10-b4a0-8ab831c57b35\") " pod="openshift-dns/dns-default-wfbd9" Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.271682 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3677a387-0bac-4cea-9921-c93b14cd430e-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-drcgk\" (UID: \"3677a387-0bac-4cea-9921-c93b14cd430e\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-drcgk" Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.271722 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/10fa9c2b-e370-400e-9e71-a4617592b411-registry-certificates\") pod \"image-registry-697d97f7c8-qpxpv\" (UID: \"10fa9c2b-e370-400e-9e71-a4617592b411\") " pod="openshift-image-registry/image-registry-697d97f7c8-qpxpv" Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.271740 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: 
\"kubernetes.io/secret/0e90d586-2aaf-4f58-acc2-eba29dc0cd2e-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-4gskq\" (UID: \"0e90d586-2aaf-4f58-acc2-eba29dc0cd2e\") " pod="openshift-authentication/oauth-openshift-558db77b4-4gskq" Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.271757 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/a6ff59c4-dee8-4ed7-86f9-601df8b4e7e4-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-2t74c\" (UID: \"a6ff59c4-dee8-4ed7-86f9-601df8b4e7e4\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-2t74c" Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.271775 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9plt8\" (UniqueName: \"kubernetes.io/projected/8237a118-001c-483c-8810-d051f33d35eb-kube-api-access-9plt8\") pod \"control-plane-machine-set-operator-78cbb6b69f-gqwld\" (UID: \"8237a118-001c-483c-8810-d051f33d35eb\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-gqwld" Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.271792 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/577cff0c-0386-467f-8a44-314a922051e2-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-8q9q7\" (UID: \"577cff0c-0386-467f-8a44-314a922051e2\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-8q9q7" Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.271871 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/b460558b-ba3e-4543-bb57-debddb0711e7-plugins-dir\") pod \"csi-hostpathplugin-wsjsc\" (UID: \"b460558b-ba3e-4543-bb57-debddb0711e7\") " pod="hostpath-provisioner/csi-hostpathplugin-wsjsc" Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.272000 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/49dae199-6b32-4904-862e-aff8cb8c4946-cert\") pod \"ingress-canary-526s7\" (UID: \"49dae199-6b32-4904-862e-aff8cb8c4946\") " pod="openshift-ingress-canary/ingress-canary-526s7" Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.272024 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/18148611-705a-4276-a9e3-9659f38654a8-serving-cert\") pod \"etcd-operator-b45778765-k7tp7\" (UID: \"18148611-705a-4276-a9e3-9659f38654a8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-k7tp7" Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.272055 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/92dde085-8a2b-4c9f-947f-441ea67b8622-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-9w662\" (UID: \"92dde085-8a2b-4c9f-947f-441ea67b8622\") " pod="openshift-marketplace/marketplace-operator-79b997595-9w662" Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.272170 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: 
\"kubernetes.io/secret/31277b5e-7869-4612-ba40-dcd0a37153fb-webhook-cert\") pod \"packageserver-d55dfcdfc-b6ghj\" (UID: \"31277b5e-7869-4612-ba40-dcd0a37153fb\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-b6ghj" Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.272193 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/18148611-705a-4276-a9e3-9659f38654a8-etcd-client\") pod \"etcd-operator-b45778765-k7tp7\" (UID: \"18148611-705a-4276-a9e3-9659f38654a8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-k7tp7" Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.272484 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/0e90d586-2aaf-4f58-acc2-eba29dc0cd2e-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-4gskq\" (UID: \"0e90d586-2aaf-4f58-acc2-eba29dc0cd2e\") " pod="openshift-authentication/oauth-openshift-558db77b4-4gskq" Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.273451 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/0e90d586-2aaf-4f58-acc2-eba29dc0cd2e-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-4gskq\" (UID: \"0e90d586-2aaf-4f58-acc2-eba29dc0cd2e\") " pod="openshift-authentication/oauth-openshift-558db77b4-4gskq" Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.273637 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0e90d586-2aaf-4f58-acc2-eba29dc0cd2e-audit-dir\") pod \"oauth-openshift-558db77b4-4gskq\" (UID: \"0e90d586-2aaf-4f58-acc2-eba29dc0cd2e\") " pod="openshift-authentication/oauth-openshift-558db77b4-4gskq" Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.274700 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a6ff59c4-dee8-4ed7-86f9-601df8b4e7e4-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-2t74c\" (UID: \"a6ff59c4-dee8-4ed7-86f9-601df8b4e7e4\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-2t74c" Feb 03 12:07:04 crc kubenswrapper[4820]: E0203 12:07:04.279469 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 12:07:04.779422978 +0000 UTC m=+142.302498842 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.280157 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/0e90d586-2aaf-4f58-acc2-eba29dc0cd2e-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-4gskq\" (UID: \"0e90d586-2aaf-4f58-acc2-eba29dc0cd2e\") " pod="openshift-authentication/oauth-openshift-558db77b4-4gskq" Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.280907 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0e90d586-2aaf-4f58-acc2-eba29dc0cd2e-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-4gskq\" (UID: \"0e90d586-2aaf-4f58-acc2-eba29dc0cd2e\") " pod="openshift-authentication/oauth-openshift-558db77b4-4gskq" Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.280932 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/10fa9c2b-e370-400e-9e71-a4617592b411-trusted-ca\") pod \"image-registry-697d97f7c8-qpxpv\" (UID: \"10fa9c2b-e370-400e-9e71-a4617592b411\") " pod="openshift-image-registry/image-registry-697d97f7c8-qpxpv" Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.281915 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0e90d586-2aaf-4f58-acc2-eba29dc0cd2e-audit-policies\") pod \"oauth-openshift-558db77b4-4gskq\" (UID: \"0e90d586-2aaf-4f58-acc2-eba29dc0cd2e\") " pod="openshift-authentication/oauth-openshift-558db77b4-4gskq" Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.283212 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/0e90d586-2aaf-4f58-acc2-eba29dc0cd2e-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-4gskq\" (UID: \"0e90d586-2aaf-4f58-acc2-eba29dc0cd2e\") " pod="openshift-authentication/oauth-openshift-558db77b4-4gskq" Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.285232 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/0e90d586-2aaf-4f58-acc2-eba29dc0cd2e-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-4gskq\" (UID: \"0e90d586-2aaf-4f58-acc2-eba29dc0cd2e\") " pod="openshift-authentication/oauth-openshift-558db77b4-4gskq" Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.285732 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/a6ff59c4-dee8-4ed7-86f9-601df8b4e7e4-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-2t74c\" (UID: \"a6ff59c4-dee8-4ed7-86f9-601df8b4e7e4\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-2t74c" Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.286186 
4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-v4cfl"] Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.286297 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2fjkt" Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.287598 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/10fa9c2b-e370-400e-9e71-a4617592b411-ca-trust-extracted\") pod \"image-registry-697d97f7c8-qpxpv\" (UID: \"10fa9c2b-e370-400e-9e71-a4617592b411\") " pod="openshift-image-registry/image-registry-697d97f7c8-qpxpv" Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.289803 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b9krf" Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.293037 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/10fa9c2b-e370-400e-9e71-a4617592b411-registry-certificates\") pod \"image-registry-697d97f7c8-qpxpv\" (UID: \"10fa9c2b-e370-400e-9e71-a4617592b411\") " pod="openshift-image-registry/image-registry-697d97f7c8-qpxpv" Feb 03 12:07:04 crc kubenswrapper[4820]: W0203 12:07:04.295094 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod120ba383_d275_47f8_b921_e976156f0035.slice/crio-69822743211e2733c7e28a6d0b4d1054d5a1fe9e19699260268543580137391d WatchSource:0}: Error finding container 69822743211e2733c7e28a6d0b4d1054d5a1fe9e19699260268543580137391d: Status 404 returned error can't find the container with id 69822743211e2733c7e28a6d0b4d1054d5a1fe9e19699260268543580137391d Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.295583 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/0e90d586-2aaf-4f58-acc2-eba29dc0cd2e-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-4gskq\" (UID: \"0e90d586-2aaf-4f58-acc2-eba29dc0cd2e\") " pod="openshift-authentication/oauth-openshift-558db77b4-4gskq" Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.295612 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/0e90d586-2aaf-4f58-acc2-eba29dc0cd2e-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-4gskq\" (UID: \"0e90d586-2aaf-4f58-acc2-eba29dc0cd2e\") " pod="openshift-authentication/oauth-openshift-558db77b4-4gskq" Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.295615 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/0e90d586-2aaf-4f58-acc2-eba29dc0cd2e-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-4gskq\" (UID: \"0e90d586-2aaf-4f58-acc2-eba29dc0cd2e\") " pod="openshift-authentication/oauth-openshift-558db77b4-4gskq" Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.296403 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d45799eb-72ff-43ba-9ca3-4eb5d44bc3a5-srv-cert\") pod \"catalog-operator-68c6474976-8rb2x\" (UID: 
\"d45799eb-72ff-43ba-9ca3-4eb5d44bc3a5\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8rb2x" Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.297963 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/0e90d586-2aaf-4f58-acc2-eba29dc0cd2e-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-4gskq\" (UID: \"0e90d586-2aaf-4f58-acc2-eba29dc0cd2e\") " pod="openshift-authentication/oauth-openshift-558db77b4-4gskq" Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.298340 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/10fa9c2b-e370-400e-9e71-a4617592b411-registry-tls\") pod \"image-registry-697d97f7c8-qpxpv\" (UID: \"10fa9c2b-e370-400e-9e71-a4617592b411\") " pod="openshift-image-registry/image-registry-697d97f7c8-qpxpv" Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.298630 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/0e90d586-2aaf-4f58-acc2-eba29dc0cd2e-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-4gskq\" (UID: \"0e90d586-2aaf-4f58-acc2-eba29dc0cd2e\") " pod="openshift-authentication/oauth-openshift-558db77b4-4gskq" Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.308043 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-hxjbf" Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.313622 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/8237a118-001c-483c-8810-d051f33d35eb-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-gqwld\" (UID: \"8237a118-001c-483c-8810-d051f33d35eb\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-gqwld" Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.313936 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/d45799eb-72ff-43ba-9ca3-4eb5d44bc3a5-profile-collector-cert\") pod \"catalog-operator-68c6474976-8rb2x\" (UID: \"d45799eb-72ff-43ba-9ca3-4eb5d44bc3a5\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8rb2x" Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.317493 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/10fa9c2b-e370-400e-9e71-a4617592b411-installation-pull-secrets\") pod \"image-registry-697d97f7c8-qpxpv\" (UID: \"10fa9c2b-e370-400e-9e71-a4617592b411\") " pod="openshift-image-registry/image-registry-697d97f7c8-qpxpv" Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.319367 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6xnkl\" (UniqueName: \"kubernetes.io/projected/10fa9c2b-e370-400e-9e71-a4617592b411-kube-api-access-6xnkl\") pod \"image-registry-697d97f7c8-qpxpv\" (UID: \"10fa9c2b-e370-400e-9e71-a4617592b411\") " pod="openshift-image-registry/image-registry-697d97f7c8-qpxpv" Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.323007 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: 
\"kubernetes.io/secret/6aba4201-b6b2-4aed-adeb-513e9190efa9-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-kqqwj\" (UID: \"6aba4201-b6b2-4aed-adeb-513e9190efa9\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-kqqwj" Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.330945 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bw8c5\" (UniqueName: \"kubernetes.io/projected/6aba4201-b6b2-4aed-adeb-513e9190efa9-kube-api-access-bw8c5\") pod \"multus-admission-controller-857f4d67dd-kqqwj\" (UID: \"6aba4201-b6b2-4aed-adeb-513e9190efa9\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-kqqwj" Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.352061 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/10fa9c2b-e370-400e-9e71-a4617592b411-bound-sa-token\") pod \"image-registry-697d97f7c8-qpxpv\" (UID: \"10fa9c2b-e370-400e-9e71-a4617592b411\") " pod="openshift-image-registry/image-registry-697d97f7c8-qpxpv" Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.362556 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-7csk9" Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.373549 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/1b5d200f-9fc0-42f9-96e3-ebec60c47a05-signing-key\") pod \"service-ca-9c57cc56f-vdn7t\" (UID: \"1b5d200f-9fc0-42f9-96e3-ebec60c47a05\") " pod="openshift-service-ca/service-ca-9c57cc56f-vdn7t" Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.373591 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dhs9g\" (UniqueName: \"kubernetes.io/projected/52996a75-b03e-40f5-a587-2c1476910cd4-kube-api-access-dhs9g\") pod \"olm-operator-6b444d44fb-c7gsf\" (UID: \"52996a75-b03e-40f5-a587-2c1476910cd4\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-c7gsf" Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.373619 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/18148611-705a-4276-a9e3-9659f38654a8-config\") pod \"etcd-operator-b45778765-k7tp7\" (UID: \"18148611-705a-4276-a9e3-9659f38654a8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-k7tp7" Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.373643 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/7b253c2b-c072-449a-8a7a-915da7526653-node-bootstrap-token\") pod \"machine-config-server-lc7k5\" (UID: \"7b253c2b-c072-449a-8a7a-915da7526653\") " pod="openshift-machine-config-operator/machine-config-server-lc7k5" Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.373665 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lz6mb\" (UniqueName: \"kubernetes.io/projected/1b5d200f-9fc0-42f9-96e3-ebec60c47a05-kube-api-access-lz6mb\") pod \"service-ca-9c57cc56f-vdn7t\" (UID: \"1b5d200f-9fc0-42f9-96e3-ebec60c47a05\") " pod="openshift-service-ca/service-ca-9c57cc56f-vdn7t" Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.373688 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q5jlx\" (UniqueName: 
\"kubernetes.io/projected/92dde085-8a2b-4c9f-947f-441ea67b8622-kube-api-access-q5jlx\") pod \"marketplace-operator-79b997595-9w662\" (UID: \"92dde085-8a2b-4c9f-947f-441ea67b8622\") " pod="openshift-marketplace/marketplace-operator-79b997595-9w662" Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.373710 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/52996a75-b03e-40f5-a587-2c1476910cd4-srv-cert\") pod \"olm-operator-6b444d44fb-c7gsf\" (UID: \"52996a75-b03e-40f5-a587-2c1476910cd4\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-c7gsf" Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.373732 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/31277b5e-7869-4612-ba40-dcd0a37153fb-tmpfs\") pod \"packageserver-d55dfcdfc-b6ghj\" (UID: \"31277b5e-7869-4612-ba40-dcd0a37153fb\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-b6ghj" Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.373745 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d4ff9542-77a8-4e10-b4a0-8ab831c57b35-metrics-tls\") pod \"dns-default-wfbd9\" (UID: \"d4ff9542-77a8-4e10-b4a0-8ab831c57b35\") " pod="openshift-dns/dns-default-wfbd9" Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.373759 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/1b5d200f-9fc0-42f9-96e3-ebec60c47a05-signing-cabundle\") pod \"service-ca-9c57cc56f-vdn7t\" (UID: \"1b5d200f-9fc0-42f9-96e3-ebec60c47a05\") " pod="openshift-service-ca/service-ca-9c57cc56f-vdn7t" Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.373775 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/52996a75-b03e-40f5-a587-2c1476910cd4-profile-collector-cert\") pod \"olm-operator-6b444d44fb-c7gsf\" (UID: \"52996a75-b03e-40f5-a587-2c1476910cd4\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-c7gsf" Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.373791 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b9d628ea-493d-4b0c-b4a2-194cef62a08e-secret-volume\") pod \"collect-profiles-29502000-hcscr\" (UID: \"b9d628ea-493d-4b0c-b4a2-194cef62a08e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29502000-hcscr" Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.373808 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b7587d06-772a-477c-9503-4af59b74f082-config\") pod \"service-ca-operator-777779d784-5kfzf\" (UID: \"b7587d06-772a-477c-9503-4af59b74f082\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-5kfzf" Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.373828 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l8mv8\" (UniqueName: \"kubernetes.io/projected/b9d628ea-493d-4b0c-b4a2-194cef62a08e-kube-api-access-l8mv8\") pod \"collect-profiles-29502000-hcscr\" (UID: \"b9d628ea-493d-4b0c-b4a2-194cef62a08e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29502000-hcscr" Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 
12:07:04.373851 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/7b253c2b-c072-449a-8a7a-915da7526653-certs\") pod \"machine-config-server-lc7k5\" (UID: \"7b253c2b-c072-449a-8a7a-915da7526653\") " pod="openshift-machine-config-operator/machine-config-server-lc7k5" Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.373874 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b9vvc\" (UniqueName: \"kubernetes.io/projected/7b253c2b-c072-449a-8a7a-915da7526653-kube-api-access-b9vvc\") pod \"machine-config-server-lc7k5\" (UID: \"7b253c2b-c072-449a-8a7a-915da7526653\") " pod="openshift-machine-config-operator/machine-config-server-lc7k5" Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.373917 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/b460558b-ba3e-4543-bb57-debddb0711e7-mountpoint-dir\") pod \"csi-hostpathplugin-wsjsc\" (UID: \"b460558b-ba3e-4543-bb57-debddb0711e7\") " pod="hostpath-provisioner/csi-hostpathplugin-wsjsc" Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.373942 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f7x2r\" (UniqueName: \"kubernetes.io/projected/51e0f75d-f0eb-4e04-a1ef-da8f256a845d-kube-api-access-f7x2r\") pod \"machine-config-operator-74547568cd-58ssn\" (UID: \"51e0f75d-f0eb-4e04-a1ef-da8f256a845d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-58ssn" Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.373965 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3677a387-0bac-4cea-9921-c93b14cd430e-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-drcgk\" (UID: \"3677a387-0bac-4cea-9921-c93b14cd430e\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-drcgk" Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.373983 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d4ff9542-77a8-4e10-b4a0-8ab831c57b35-config-volume\") pod \"dns-default-wfbd9\" (UID: \"d4ff9542-77a8-4e10-b4a0-8ab831c57b35\") " pod="openshift-dns/dns-default-wfbd9" Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.374006 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/92dde085-8a2b-4c9f-947f-441ea67b8622-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-9w662\" (UID: \"92dde085-8a2b-4c9f-947f-441ea67b8622\") " pod="openshift-marketplace/marketplace-operator-79b997595-9w662" Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.374029 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fq5hs\" (UniqueName: \"kubernetes.io/projected/18148611-705a-4276-a9e3-9659f38654a8-kube-api-access-fq5hs\") pod \"etcd-operator-b45778765-k7tp7\" (UID: \"18148611-705a-4276-a9e3-9659f38654a8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-k7tp7" Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.374051 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qpxpv\" (UID: \"10fa9c2b-e370-400e-9e71-a4617592b411\") " pod="openshift-image-registry/image-registry-697d97f7c8-qpxpv" Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.374075 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wdf9b\" (UniqueName: \"kubernetes.io/projected/31277b5e-7869-4612-ba40-dcd0a37153fb-kube-api-access-wdf9b\") pod \"packageserver-d55dfcdfc-b6ghj\" (UID: \"31277b5e-7869-4612-ba40-dcd0a37153fb\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-b6ghj" Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.374074 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9plt8\" (UniqueName: \"kubernetes.io/projected/8237a118-001c-483c-8810-d051f33d35eb-kube-api-access-9plt8\") pod \"control-plane-machine-set-operator-78cbb6b69f-gqwld\" (UID: \"8237a118-001c-483c-8810-d051f33d35eb\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-gqwld" Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.374090 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5jm54\" (UniqueName: \"kubernetes.io/projected/b7587d06-772a-477c-9503-4af59b74f082-kube-api-access-5jm54\") pod \"service-ca-operator-777779d784-5kfzf\" (UID: \"b7587d06-772a-477c-9503-4af59b74f082\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-5kfzf" Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.374108 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/b460558b-ba3e-4543-bb57-debddb0711e7-registration-dir\") pod \"csi-hostpathplugin-wsjsc\" (UID: \"b460558b-ba3e-4543-bb57-debddb0711e7\") " pod="hostpath-provisioner/csi-hostpathplugin-wsjsc" Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.374127 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/51e0f75d-f0eb-4e04-a1ef-da8f256a845d-auth-proxy-config\") pod \"machine-config-operator-74547568cd-58ssn\" (UID: \"51e0f75d-f0eb-4e04-a1ef-da8f256a845d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-58ssn" Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.374152 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jv9x2\" (UniqueName: \"kubernetes.io/projected/b460558b-ba3e-4543-bb57-debddb0711e7-kube-api-access-jv9x2\") pod \"csi-hostpathplugin-wsjsc\" (UID: \"b460558b-ba3e-4543-bb57-debddb0711e7\") " pod="hostpath-provisioner/csi-hostpathplugin-wsjsc" Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.374174 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/31277b5e-7869-4612-ba40-dcd0a37153fb-apiservice-cert\") pod \"packageserver-d55dfcdfc-b6ghj\" (UID: \"31277b5e-7869-4612-ba40-dcd0a37153fb\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-b6ghj" Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.374188 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/b460558b-ba3e-4543-bb57-debddb0711e7-socket-dir\") pod \"csi-hostpathplugin-wsjsc\" (UID: 
\"b460558b-ba3e-4543-bb57-debddb0711e7\") " pod="hostpath-provisioner/csi-hostpathplugin-wsjsc" Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.374217 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/18148611-705a-4276-a9e3-9659f38654a8-etcd-ca\") pod \"etcd-operator-b45778765-k7tp7\" (UID: \"18148611-705a-4276-a9e3-9659f38654a8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-k7tp7" Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.374235 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b9d628ea-493d-4b0c-b4a2-194cef62a08e-config-volume\") pod \"collect-profiles-29502000-hcscr\" (UID: \"b9d628ea-493d-4b0c-b4a2-194cef62a08e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29502000-hcscr" Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.374250 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gghwf\" (UniqueName: \"kubernetes.io/projected/d4ff9542-77a8-4e10-b4a0-8ab831c57b35-kube-api-access-gghwf\") pod \"dns-default-wfbd9\" (UID: \"d4ff9542-77a8-4e10-b4a0-8ab831c57b35\") " pod="openshift-dns/dns-default-wfbd9" Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.374265 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3677a387-0bac-4cea-9921-c93b14cd430e-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-drcgk\" (UID: \"3677a387-0bac-4cea-9921-c93b14cd430e\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-drcgk" Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.374281 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/577cff0c-0386-467f-8a44-314a922051e2-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-8q9q7\" (UID: \"577cff0c-0386-467f-8a44-314a922051e2\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-8q9q7" Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.374303 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/b460558b-ba3e-4543-bb57-debddb0711e7-plugins-dir\") pod \"csi-hostpathplugin-wsjsc\" (UID: \"b460558b-ba3e-4543-bb57-debddb0711e7\") " pod="hostpath-provisioner/csi-hostpathplugin-wsjsc" Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.374325 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/49dae199-6b32-4904-862e-aff8cb8c4946-cert\") pod \"ingress-canary-526s7\" (UID: \"49dae199-6b32-4904-862e-aff8cb8c4946\") " pod="openshift-ingress-canary/ingress-canary-526s7" Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.374339 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/18148611-705a-4276-a9e3-9659f38654a8-serving-cert\") pod \"etcd-operator-b45778765-k7tp7\" (UID: \"18148611-705a-4276-a9e3-9659f38654a8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-k7tp7" Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.374354 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: 
\"kubernetes.io/secret/92dde085-8a2b-4c9f-947f-441ea67b8622-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-9w662\" (UID: \"92dde085-8a2b-4c9f-947f-441ea67b8622\") " pod="openshift-marketplace/marketplace-operator-79b997595-9w662" Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.374371 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/31277b5e-7869-4612-ba40-dcd0a37153fb-webhook-cert\") pod \"packageserver-d55dfcdfc-b6ghj\" (UID: \"31277b5e-7869-4612-ba40-dcd0a37153fb\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-b6ghj" Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.374385 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/18148611-705a-4276-a9e3-9659f38654a8-etcd-client\") pod \"etcd-operator-b45778765-k7tp7\" (UID: \"18148611-705a-4276-a9e3-9659f38654a8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-k7tp7" Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.374402 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b7587d06-772a-477c-9503-4af59b74f082-serving-cert\") pod \"service-ca-operator-777779d784-5kfzf\" (UID: \"b7587d06-772a-477c-9503-4af59b74f082\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-5kfzf" Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.374416 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/18148611-705a-4276-a9e3-9659f38654a8-etcd-service-ca\") pod \"etcd-operator-b45778765-k7tp7\" (UID: \"18148611-705a-4276-a9e3-9659f38654a8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-k7tp7" Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.374438 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g55j4\" (UniqueName: \"kubernetes.io/projected/577cff0c-0386-467f-8a44-314a922051e2-kube-api-access-g55j4\") pod \"package-server-manager-789f6589d5-8q9q7\" (UID: \"577cff0c-0386-467f-8a44-314a922051e2\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-8q9q7" Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.374453 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jvls7\" (UniqueName: \"kubernetes.io/projected/49dae199-6b32-4904-862e-aff8cb8c4946-kube-api-access-jvls7\") pod \"ingress-canary-526s7\" (UID: \"49dae199-6b32-4904-862e-aff8cb8c4946\") " pod="openshift-ingress-canary/ingress-canary-526s7" Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.374469 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/51e0f75d-f0eb-4e04-a1ef-da8f256a845d-proxy-tls\") pod \"machine-config-operator-74547568cd-58ssn\" (UID: \"51e0f75d-f0eb-4e04-a1ef-da8f256a845d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-58ssn" Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.374484 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/51e0f75d-f0eb-4e04-a1ef-da8f256a845d-images\") pod \"machine-config-operator-74547568cd-58ssn\" (UID: \"51e0f75d-f0eb-4e04-a1ef-da8f256a845d\") " 
pod="openshift-machine-config-operator/machine-config-operator-74547568cd-58ssn" Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.374499 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3677a387-0bac-4cea-9921-c93b14cd430e-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-drcgk\" (UID: \"3677a387-0bac-4cea-9921-c93b14cd430e\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-drcgk" Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.374513 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/b460558b-ba3e-4543-bb57-debddb0711e7-csi-data-dir\") pod \"csi-hostpathplugin-wsjsc\" (UID: \"b460558b-ba3e-4543-bb57-debddb0711e7\") " pod="hostpath-provisioner/csi-hostpathplugin-wsjsc" Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.374629 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/b460558b-ba3e-4543-bb57-debddb0711e7-csi-data-dir\") pod \"csi-hostpathplugin-wsjsc\" (UID: \"b460558b-ba3e-4543-bb57-debddb0711e7\") " pod="hostpath-provisioner/csi-hostpathplugin-wsjsc" Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.375243 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b7587d06-772a-477c-9503-4af59b74f082-config\") pod \"service-ca-operator-777779d784-5kfzf\" (UID: \"b7587d06-772a-477c-9503-4af59b74f082\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-5kfzf" Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.376689 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b9d628ea-493d-4b0c-b4a2-194cef62a08e-config-volume\") pod \"collect-profiles-29502000-hcscr\" (UID: \"b9d628ea-493d-4b0c-b4a2-194cef62a08e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29502000-hcscr" Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.377026 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/b460558b-ba3e-4543-bb57-debddb0711e7-registration-dir\") pod \"csi-hostpathplugin-wsjsc\" (UID: \"b460558b-ba3e-4543-bb57-debddb0711e7\") " pod="hostpath-provisioner/csi-hostpathplugin-wsjsc" Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.377478 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/51e0f75d-f0eb-4e04-a1ef-da8f256a845d-auth-proxy-config\") pod \"machine-config-operator-74547568cd-58ssn\" (UID: \"51e0f75d-f0eb-4e04-a1ef-da8f256a845d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-58ssn" Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.379037 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/18148611-705a-4276-a9e3-9659f38654a8-etcd-service-ca\") pod \"etcd-operator-b45778765-k7tp7\" (UID: \"18148611-705a-4276-a9e3-9659f38654a8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-k7tp7" Feb 03 12:07:04 crc kubenswrapper[4820]: E0203 12:07:04.379251 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.379286 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/b460558b-ba3e-4543-bb57-debddb0711e7-socket-dir\") pod \"csi-hostpathplugin-wsjsc\" (UID: \"b460558b-ba3e-4543-bb57-debddb0711e7\") " pod="hostpath-provisioner/csi-hostpathplugin-wsjsc"
Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.379382 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/31277b5e-7869-4612-ba40-dcd0a37153fb-tmpfs\") pod \"packageserver-d55dfcdfc-b6ghj\" (UID: \"31277b5e-7869-4612-ba40-dcd0a37153fb\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-b6ghj"
Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.379652 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/b460558b-ba3e-4543-bb57-debddb0711e7-mountpoint-dir\") pod \"csi-hostpathplugin-wsjsc\" (UID: \"b460558b-ba3e-4543-bb57-debddb0711e7\") " pod="hostpath-provisioner/csi-hostpathplugin-wsjsc"
Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.379932 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/1b5d200f-9fc0-42f9-96e3-ebec60c47a05-signing-cabundle\") pod \"service-ca-9c57cc56f-vdn7t\" (UID: \"1b5d200f-9fc0-42f9-96e3-ebec60c47a05\") " pod="openshift-service-ca/service-ca-9c57cc56f-vdn7t"
Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.380483 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d4ff9542-77a8-4e10-b4a0-8ab831c57b35-config-volume\") pod \"dns-default-wfbd9\" (UID: \"d4ff9542-77a8-4e10-b4a0-8ab831c57b35\") " pod="openshift-dns/dns-default-wfbd9"
Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.381573 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/92dde085-8a2b-4c9f-947f-441ea67b8622-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-9w662\" (UID: \"92dde085-8a2b-4c9f-947f-441ea67b8622\") " pod="openshift-marketplace/marketplace-operator-79b997595-9w662"
Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.382100 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/1b5d200f-9fc0-42f9-96e3-ebec60c47a05-signing-key\") pod \"service-ca-9c57cc56f-vdn7t\" (UID: \"1b5d200f-9fc0-42f9-96e3-ebec60c47a05\") " pod="openshift-service-ca/service-ca-9c57cc56f-vdn7t"
Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.382861 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/51e0f75d-f0eb-4e04-a1ef-da8f256a845d-images\") pod \"machine-config-operator-74547568cd-58ssn\" (UID: \"51e0f75d-f0eb-4e04-a1ef-da8f256a845d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-58ssn"
Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.382967 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/b460558b-ba3e-4543-bb57-debddb0711e7-plugins-dir\") pod \"csi-hostpathplugin-wsjsc\" (UID: \"b460558b-ba3e-4543-bb57-debddb0711e7\") " pod="hostpath-provisioner/csi-hostpathplugin-wsjsc"
Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.383469 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3677a387-0bac-4cea-9921-c93b14cd430e-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-drcgk\" (UID: \"3677a387-0bac-4cea-9921-c93b14cd430e\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-drcgk"
Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.385709 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b7587d06-772a-477c-9503-4af59b74f082-serving-cert\") pod \"service-ca-operator-777779d784-5kfzf\" (UID: \"b7587d06-772a-477c-9503-4af59b74f082\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-5kfzf"
Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.386520 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/18148611-705a-4276-a9e3-9659f38654a8-config\") pod \"etcd-operator-b45778765-k7tp7\" (UID: \"18148611-705a-4276-a9e3-9659f38654a8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-k7tp7"
Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.387041 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/18148611-705a-4276-a9e3-9659f38654a8-etcd-ca\") pod \"etcd-operator-b45778765-k7tp7\" (UID: \"18148611-705a-4276-a9e3-9659f38654a8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-k7tp7"
Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.387981 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/18148611-705a-4276-a9e3-9659f38654a8-etcd-client\") pod \"etcd-operator-b45778765-k7tp7\" (UID: \"18148611-705a-4276-a9e3-9659f38654a8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-k7tp7"
Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.390203 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/31277b5e-7869-4612-ba40-dcd0a37153fb-webhook-cert\") pod \"packageserver-d55dfcdfc-b6ghj\" (UID: \"31277b5e-7869-4612-ba40-dcd0a37153fb\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-b6ghj"
Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.393097 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/92dde085-8a2b-4c9f-947f-441ea67b8622-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-9w662\" (UID: \"92dde085-8a2b-4c9f-947f-441ea67b8622\") " pod="openshift-marketplace/marketplace-operator-79b997595-9w662"
Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.394424 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/7b253c2b-c072-449a-8a7a-915da7526653-certs\") pod \"machine-config-server-lc7k5\" (UID: \"7b253c2b-c072-449a-8a7a-915da7526653\") " pod="openshift-machine-config-operator/machine-config-server-lc7k5"
Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.395604 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b9d628ea-493d-4b0c-b4a2-194cef62a08e-secret-volume\") pod \"collect-profiles-29502000-hcscr\" (UID: \"b9d628ea-493d-4b0c-b4a2-194cef62a08e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29502000-hcscr"
Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.397298 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d4ff9542-77a8-4e10-b4a0-8ab831c57b35-metrics-tls\") pod \"dns-default-wfbd9\" (UID: \"d4ff9542-77a8-4e10-b4a0-8ab831c57b35\") " pod="openshift-dns/dns-default-wfbd9"
Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.397575 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/18148611-705a-4276-a9e3-9659f38654a8-serving-cert\") pod \"etcd-operator-b45778765-k7tp7\" (UID: \"18148611-705a-4276-a9e3-9659f38654a8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-k7tp7"
Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.397707 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3677a387-0bac-4cea-9921-c93b14cd430e-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-drcgk\" (UID: \"3677a387-0bac-4cea-9921-c93b14cd430e\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-drcgk"
Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.398253 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/7b253c2b-c072-449a-8a7a-915da7526653-node-bootstrap-token\") pod \"machine-config-server-lc7k5\" (UID: \"7b253c2b-c072-449a-8a7a-915da7526653\") " pod="openshift-machine-config-operator/machine-config-server-lc7k5"
Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.398970 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/577cff0c-0386-467f-8a44-314a922051e2-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-8q9q7\" (UID: \"577cff0c-0386-467f-8a44-314a922051e2\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-8q9q7"
Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.399817 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/52996a75-b03e-40f5-a587-2c1476910cd4-profile-collector-cert\") pod \"olm-operator-6b444d44fb-c7gsf\" (UID: \"52996a75-b03e-40f5-a587-2c1476910cd4\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-c7gsf"
Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.400162 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-kqqwj"
Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.400509 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/52996a75-b03e-40f5-a587-2c1476910cd4-srv-cert\") pod \"olm-operator-6b444d44fb-c7gsf\" (UID: \"52996a75-b03e-40f5-a587-2c1476910cd4\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-c7gsf"
Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.403433 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a6ff59c4-dee8-4ed7-86f9-601df8b4e7e4-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-2t74c\" (UID: \"a6ff59c4-dee8-4ed7-86f9-601df8b4e7e4\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-2t74c"
Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.406374 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/51e0f75d-f0eb-4e04-a1ef-da8f256a845d-proxy-tls\") pod \"machine-config-operator-74547568cd-58ssn\" (UID: \"51e0f75d-f0eb-4e04-a1ef-da8f256a845d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-58ssn"
Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.409807 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zm8gk\" (UniqueName: \"kubernetes.io/projected/a6ff59c4-dee8-4ed7-86f9-601df8b4e7e4-kube-api-access-zm8gk\") pod \"cluster-image-registry-operator-dc59b4c8b-2t74c\" (UID: \"a6ff59c4-dee8-4ed7-86f9-601df8b4e7e4\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-2t74c"
Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.423468 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/31277b5e-7869-4612-ba40-dcd0a37153fb-apiservice-cert\") pod \"packageserver-d55dfcdfc-b6ghj\" (UID: \"31277b5e-7869-4612-ba40-dcd0a37153fb\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-b6ghj"
Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.427025 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/49dae199-6b32-4904-862e-aff8cb8c4946-cert\") pod \"ingress-canary-526s7\" (UID: \"49dae199-6b32-4904-862e-aff8cb8c4946\") " pod="openshift-ingress-canary/ingress-canary-526s7"
Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.447551 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qxjrn\" (UniqueName: \"kubernetes.io/projected/d79a4fe5-aca8-4046-8760-2892f6e3dc7d-kube-api-access-qxjrn\") pod \"migrator-59844c95c7-vmzb5\" (UID: \"d79a4fe5-aca8-4046-8760-2892f6e3dc7d\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-vmzb5"
Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.456434 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p7h2x\" (UniqueName: \"kubernetes.io/projected/d45799eb-72ff-43ba-9ca3-4eb5d44bc3a5-kube-api-access-p7h2x\") pod \"catalog-operator-68c6474976-8rb2x\" (UID: \"d45799eb-72ff-43ba-9ca3-4eb5d44bc3a5\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8rb2x"
Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.475858 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 03 12:07:04 crc kubenswrapper[4820]: E0203 12:07:04.475967 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 12:07:04.975951335 +0000 UTC m=+142.499027199 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.476017 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qpxpv\" (UID: \"10fa9c2b-e370-400e-9e71-a4617592b411\") " pod="openshift-image-registry/image-registry-697d97f7c8-qpxpv"
Feb 03 12:07:04 crc kubenswrapper[4820]: E0203 12:07:04.476339 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 12:07:04.976331914 +0000 UTC m=+142.499407778 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qpxpv" (UID: "10fa9c2b-e370-400e-9e71-a4617592b411") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.501032 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hc757\" (UniqueName: \"kubernetes.io/projected/0e90d586-2aaf-4f58-acc2-eba29dc0cd2e-kube-api-access-hc757\") pod \"oauth-openshift-558db77b4-4gskq\" (UID: \"0e90d586-2aaf-4f58-acc2-eba29dc0cd2e\") " pod="openshift-authentication/oauth-openshift-558db77b4-4gskq"
Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.501505 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-w4x94"]
Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.511271 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l8mv8\" (UniqueName: \"kubernetes.io/projected/b9d628ea-493d-4b0c-b4a2-194cef62a08e-kube-api-access-l8mv8\") pod \"collect-profiles-29502000-hcscr\" (UID: \"b9d628ea-493d-4b0c-b4a2-194cef62a08e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29502000-hcscr"
Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.550707 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wdf9b\" (UniqueName: \"kubernetes.io/projected/31277b5e-7869-4612-ba40-dcd0a37153fb-kube-api-access-wdf9b\") pod \"packageserver-d55dfcdfc-b6ghj\" (UID: \"31277b5e-7869-4612-ba40-dcd0a37153fb\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-b6ghj"
Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.553268 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-4gskq"
Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.555795 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rc96d"]
Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.560225 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-c9vjp"]
Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.568478 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5jm54\" (UniqueName: \"kubernetes.io/projected/b7587d06-772a-477c-9503-4af59b74f082-kube-api-access-5jm54\") pod \"service-ca-operator-777779d784-5kfzf\" (UID: \"b7587d06-772a-477c-9503-4af59b74f082\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-5kfzf"
Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.570105 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-2t74c"
Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.574067 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jv9x2\" (UniqueName: \"kubernetes.io/projected/b460558b-ba3e-4543-bb57-debddb0711e7-kube-api-access-jv9x2\") pod \"csi-hostpathplugin-wsjsc\" (UID: \"b460558b-ba3e-4543-bb57-debddb0711e7\") " pod="hostpath-provisioner/csi-hostpathplugin-wsjsc"
Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.579771 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 03 12:07:04 crc kubenswrapper[4820]: E0203 12:07:04.580335 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 12:07:05.080315389 +0000 UTC m=+142.603391253 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.598754 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g55j4\" (UniqueName: \"kubernetes.io/projected/577cff0c-0386-467f-8a44-314a922051e2-kube-api-access-g55j4\") pod \"package-server-manager-789f6589d5-8q9q7\" (UID: \"577cff0c-0386-467f-8a44-314a922051e2\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-8q9q7"
Feb 03 12:07:04 crc kubenswrapper[4820]: W0203 12:07:04.603852 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4d235feb_2891_4c16_b240_381a5810a0c7.slice/crio-4d9814c15fe5e8420ff625065ffdd0257c63cba540836ff13646f638ec338c1c WatchSource:0}: Error finding container 4d9814c15fe5e8420ff625065ffdd0257c63cba540836ff13646f638ec338c1c: Status 404 returned error can't find the container with id 4d9814c15fe5e8420ff625065ffdd0257c63cba540836ff13646f638ec338c1c
Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.610707 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-gqwld"
Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.610878 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lz6mb\" (UniqueName: \"kubernetes.io/projected/1b5d200f-9fc0-42f9-96e3-ebec60c47a05-kube-api-access-lz6mb\") pod \"service-ca-9c57cc56f-vdn7t\" (UID: \"1b5d200f-9fc0-42f9-96e3-ebec60c47a05\") " pod="openshift-service-ca/service-ca-9c57cc56f-vdn7t"
Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.631460 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q5jlx\" (UniqueName: \"kubernetes.io/projected/92dde085-8a2b-4c9f-947f-441ea67b8622-kube-api-access-q5jlx\") pod \"marketplace-operator-79b997595-9w662\" (UID: \"92dde085-8a2b-4c9f-947f-441ea67b8622\") " pod="openshift-marketplace/marketplace-operator-79b997595-9w662"
Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.652922 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dhs9g\" (UniqueName: \"kubernetes.io/projected/52996a75-b03e-40f5-a587-2c1476910cd4-kube-api-access-dhs9g\") pod \"olm-operator-6b444d44fb-c7gsf\" (UID: \"52996a75-b03e-40f5-a587-2c1476910cd4\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-c7gsf"
Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.680181 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b9vvc\" (UniqueName: \"kubernetes.io/projected/7b253c2b-c072-449a-8a7a-915da7526653-kube-api-access-b9vvc\") pod \"machine-config-server-lc7k5\" (UID: \"7b253c2b-c072-449a-8a7a-915da7526653\") " pod="openshift-machine-config-operator/machine-config-server-lc7k5"
Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.681458 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qpxpv\" (UID: \"10fa9c2b-e370-400e-9e71-a4617592b411\") " pod="openshift-image-registry/image-registry-697d97f7c8-qpxpv"
Feb 03 12:07:04 crc kubenswrapper[4820]: E0203 12:07:04.681872 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 12:07:05.181858046 +0000 UTC m=+142.704933920 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qpxpv" (UID: "10fa9c2b-e370-400e-9e71-a4617592b411") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.687393 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-vmzb5"
Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.705865 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8rb2x"
Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.710214 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-c7gsf"
Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.714014 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3677a387-0bac-4cea-9921-c93b14cd430e-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-drcgk\" (UID: \"3677a387-0bac-4cea-9921-c93b14cd430e\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-drcgk"
Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.715585 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f7x2r\" (UniqueName: \"kubernetes.io/projected/51e0f75d-f0eb-4e04-a1ef-da8f256a845d-kube-api-access-f7x2r\") pod \"machine-config-operator-74547568cd-58ssn\" (UID: \"51e0f75d-f0eb-4e04-a1ef-da8f256a845d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-58ssn"
Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.721506 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-tw2nt"]
Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.724010 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-9w662"
Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.729879 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-drcgk"
Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.736786 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-5kfzf"
Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.737624 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jvls7\" (UniqueName: \"kubernetes.io/projected/49dae199-6b32-4904-862e-aff8cb8c4946-kube-api-access-jvls7\") pod \"ingress-canary-526s7\" (UID: \"49dae199-6b32-4904-862e-aff8cb8c4946\") " pod="openshift-ingress-canary/ingress-canary-526s7"
Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.744355 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-8q9q7"
Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.753052 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-vdn7t"
Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.758237 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fq5hs\" (UniqueName: \"kubernetes.io/projected/18148611-705a-4276-a9e3-9659f38654a8-kube-api-access-fq5hs\") pod \"etcd-operator-b45778765-k7tp7\" (UID: \"18148611-705a-4276-a9e3-9659f38654a8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-k7tp7"
Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.761089 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-b6ghj"
Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.763442 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-b9krf"]
Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.768527 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-58ssn"
Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.773116 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gghwf\" (UniqueName: \"kubernetes.io/projected/d4ff9542-77a8-4e10-b4a0-8ab831c57b35-kube-api-access-gghwf\") pod \"dns-default-wfbd9\" (UID: \"d4ff9542-77a8-4e10-b4a0-8ab831c57b35\") " pod="openshift-dns/dns-default-wfbd9"
Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.776390 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29502000-hcscr"
Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.785976 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-7csk9"]
Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.786351 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 03 12:07:04 crc kubenswrapper[4820]: E0203 12:07:04.786670 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 12:07:05.28665483 +0000 UTC m=+142.809730694 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.796528 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-wsjsc"
Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.804818 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-lc7k5"
Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.811130 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-526s7"
Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.829996 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-s55v7" event={"ID":"2778f3aa-c3cf-471b-bc79-b8ce1a1bbfc7","Type":"ContainerStarted","Data":"897ddbaeca671935b2339f0934b9fc6acf379014a2a6bd51466971be808b3f90"}
Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.830032 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-s55v7" event={"ID":"2778f3aa-c3cf-471b-bc79-b8ce1a1bbfc7","Type":"ContainerStarted","Data":"2a615d1628cc312766df00dfff6984f26a8f23668c5c6c4309f491b4a6f717d6"}
Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.838930 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-8vn7s"]
Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.847311 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cs8dg" event={"ID":"05797a22-690b-4b36-8b4e-5dcc739f7cad","Type":"ContainerStarted","Data":"8bf43d2afcda5b91937865aa4106f9fd21e0f58f105c00dd9695023a0e8ea599"}
Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.847352 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cs8dg"
Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.847366 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cs8dg" event={"ID":"05797a22-690b-4b36-8b4e-5dcc739f7cad","Type":"ContainerStarted","Data":"7182fedced2758ecd7ecb9e7da64193d528b8761d3f257c54f2fdc1bd7f1fb6d"}
Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.851914 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2jnbv"]
Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.852345 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-74h4r" event={"ID":"120ba383-d275-47f8-b921-e976156f0035","Type":"ContainerStarted","Data":"85bc5555974a3930adf64e597ddcaeb5c008c6514ffa5577a868c06dac8f6c8d"}
Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.852380 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-74h4r" event={"ID":"120ba383-d275-47f8-b921-e976156f0035","Type":"ContainerStarted","Data":"69822743211e2733c7e28a6d0b4d1054d5a1fe9e19699260268543580137391d"}
Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.854399 4820 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-cs8dg container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body=
Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.854431 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cs8dg" podUID="05797a22-690b-4b36-8b4e-5dcc739f7cad" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused"
Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.919807 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qpxpv\" (UID: \"10fa9c2b-e370-400e-9e71-a4617592b411\") " pod="openshift-image-registry/image-registry-697d97f7c8-qpxpv"
Feb 03 12:07:04 crc kubenswrapper[4820]: E0203 12:07:04.920975 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 12:07:05.420963336 +0000 UTC m=+142.944039200 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qpxpv" (UID: "10fa9c2b-e370-400e-9e71-a4617592b411") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.929721 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-wfbd9"
Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.947847 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-v4cfl" event={"ID":"8cdbf888-563c-4590-bfbe-2bbb669e7ddb","Type":"ContainerStarted","Data":"4ba2ed4f5a822cc166b27edfa829273ca046c5ce5705be07bdb8a3f9076831e5"}
Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.947910 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-v4cfl" event={"ID":"8cdbf888-563c-4590-bfbe-2bbb669e7ddb","Type":"ContainerStarted","Data":"8f1157203fdb1c9d5efd28ec08e8b227ca00739f93329858c63f0e4d08f51900"}
Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.961307 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-w4x94" event={"ID":"4d235feb-2891-4c16-b240-381a5810a0c7","Type":"ContainerStarted","Data":"4d9814c15fe5e8420ff625065ffdd0257c63cba540836ff13646f638ec338c1c"}
Feb 03 12:07:04 crc kubenswrapper[4820]: I0203 12:07:04.978664 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-hxjbf"]
Feb 03 12:07:05 crc kubenswrapper[4820]: I0203 12:07:04.996248 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-z7vmj"]
Feb 03 12:07:05 crc kubenswrapper[4820]: I0203 12:07:05.000663 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-h22tk" event={"ID":"a227a161-8e53-4817-b7b2-48206c4916fb","Type":"ContainerStarted","Data":"97a9cee8bc0e37341482631e4c2ff59cd1251e537b8c02fbb7aff715ecad10d6"}
Feb 03 12:07:05 crc kubenswrapper[4820]: I0203 12:07:05.000702 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-h22tk" event={"ID":"a227a161-8e53-4817-b7b2-48206c4916fb","Type":"ContainerStarted","Data":"f41a0e3042e10635ca613365ebc6852fb0ad431eb010231e317eb7c0a52dbf53"}
event={"ID":"a227a161-8e53-4817-b7b2-48206c4916fb","Type":"ContainerStarted","Data":"f41a0e3042e10635ca613365ebc6852fb0ad431eb010231e317eb7c0a52dbf53"} Feb 03 12:07:05 crc kubenswrapper[4820]: I0203 12:07:05.002090 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-kqqwj"] Feb 03 12:07:05 crc kubenswrapper[4820]: I0203 12:07:05.014732 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-k7tp7" Feb 03 12:07:05 crc kubenswrapper[4820]: I0203 12:07:05.021022 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 12:07:05 crc kubenswrapper[4820]: E0203 12:07:05.021243 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 12:07:05.521169972 +0000 UTC m=+143.044245836 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:05 crc kubenswrapper[4820]: I0203 12:07:05.021298 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qpxpv\" (UID: \"10fa9c2b-e370-400e-9e71-a4617592b411\") " pod="openshift-image-registry/image-registry-697d97f7c8-qpxpv" Feb 03 12:07:05 crc kubenswrapper[4820]: E0203 12:07:05.021648 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 12:07:05.521632843 +0000 UTC m=+143.044708777 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qpxpv" (UID: "10fa9c2b-e370-400e-9e71-a4617592b411") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:05 crc kubenswrapper[4820]: I0203 12:07:05.032661 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-lnc22" event={"ID":"876c5dc3-b775-45cc-94b6-4339735e9975","Type":"ContainerStarted","Data":"6c4488126847dd3de8d2a9eb16836456561cd827e52a95954ae608cf60a52482"} Feb 03 12:07:05 crc kubenswrapper[4820]: I0203 12:07:05.033375 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-lnc22" Feb 03 12:07:05 crc kubenswrapper[4820]: I0203 12:07:05.035562 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rc96d" event={"ID":"319cad33-b4bc-4249-8124-1010cd6d79f9","Type":"ContainerStarted","Data":"2b9ff443c6713f98cf878ce05590af56512e44bccba3fa0b04a8a4b87b23538a"} Feb 03 12:07:05 crc kubenswrapper[4820]: I0203 12:07:05.038087 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-r8785" event={"ID":"29fa9711-dd2f-41bf-92dc-a6fd88a3f341","Type":"ContainerStarted","Data":"4669f8f6800cee7a42806dbb15ff860fa4ce93da012a7a048eb9e7b1404144ae"} Feb 03 12:07:05 crc kubenswrapper[4820]: I0203 12:07:05.038133 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-r8785" event={"ID":"29fa9711-dd2f-41bf-92dc-a6fd88a3f341","Type":"ContainerStarted","Data":"2d91f088fe4c63e5b0979624dfe1b8e87ba0a74d580d1761888848b9c207ee0a"} Feb 03 12:07:05 crc kubenswrapper[4820]: I0203 12:07:05.039721 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2fjkt" event={"ID":"6b39f4b8-90e3-4d3b-a000-725d42cdb8dd","Type":"ContainerStarted","Data":"c1005069446ed9558e609003712a7c59909c90aba6f26c2abb37b4d21e748635"} Feb 03 12:07:05 crc kubenswrapper[4820]: I0203 12:07:05.041360 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-sf69z" event={"ID":"29d2a7e9-1fcb-4213-ae6c-753953bfae1a","Type":"ContainerStarted","Data":"5c8183ebc80e0bcd3878772901f92f1aeb787dd9a58296c49db3aa77733d9884"} Feb 03 12:07:05 crc kubenswrapper[4820]: I0203 12:07:05.041382 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-sf69z" event={"ID":"29d2a7e9-1fcb-4213-ae6c-753953bfae1a","Type":"ContainerStarted","Data":"90ce6601baf02833adf79a8d48fde10dfe66550b361315deb215a01ba5585ee2"} Feb 03 12:07:05 crc kubenswrapper[4820]: I0203 12:07:05.041637 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-sf69z" Feb 03 12:07:05 crc kubenswrapper[4820]: I0203 12:07:05.044537 4820 generic.go:334] "Generic (PLEG): container finished" podID="c93c42c7-c9ff-42cc-b604-e36f7a063fcf" containerID="d6da8cd0f1436a7c439bc6c41097ac86d3373d13da8abc8abbf6058cb66e0a04" exitCode=0 Feb 03 12:07:05 crc kubenswrapper[4820]: I0203 12:07:05.045090 4820 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-lbsmw" event={"ID":"c93c42c7-c9ff-42cc-b604-e36f7a063fcf","Type":"ContainerDied","Data":"d6da8cd0f1436a7c439bc6c41097ac86d3373d13da8abc8abbf6058cb66e0a04"} Feb 03 12:07:05 crc kubenswrapper[4820]: I0203 12:07:05.045129 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-lbsmw" event={"ID":"c93c42c7-c9ff-42cc-b604-e36f7a063fcf","Type":"ContainerStarted","Data":"45b1c4984db64281503cf3c5ce027bc7f5fca1272c5aa76385fa2c80f34c227f"} Feb 03 12:07:05 crc kubenswrapper[4820]: I0203 12:07:05.049228 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-lt75x" event={"ID":"1cd0f80e-f7d5-4cd7-a0de-f1b95c827af8","Type":"ContainerStarted","Data":"f1a30bc906bf3cbb26a87046812f2acf49af38fa613535b15d6e11fd8304f36e"} Feb 03 12:07:05 crc kubenswrapper[4820]: I0203 12:07:05.049279 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-lt75x" event={"ID":"1cd0f80e-f7d5-4cd7-a0de-f1b95c827af8","Type":"ContainerStarted","Data":"8d4de06c3e43869e3232a1e8d0fccdea2526e5ba77d0841dd667f4fa563cd00b"} Feb 03 12:07:05 crc kubenswrapper[4820]: I0203 12:07:05.049526 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-lt75x" Feb 03 12:07:05 crc kubenswrapper[4820]: I0203 12:07:05.049918 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-c9vjp" event={"ID":"1e2ff1f0-ab87-4251-b1ea-c08cad288246","Type":"ContainerStarted","Data":"8740f1360f1483469a0de90c4c5c900944356310f82332791fa9847cfc7d010f"} Feb 03 12:07:05 crc kubenswrapper[4820]: I0203 12:07:05.055761 4820 patch_prober.go:28] interesting pod/console-operator-58897d9998-sf69z container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.12:8443/readyz\": dial tcp 10.217.0.12:8443: connect: connection refused" start-of-body= Feb 03 12:07:05 crc kubenswrapper[4820]: I0203 12:07:05.055841 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-sf69z" podUID="29d2a7e9-1fcb-4213-ae6c-753953bfae1a" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.12:8443/readyz\": dial tcp 10.217.0.12:8443: connect: connection refused" Feb 03 12:07:05 crc kubenswrapper[4820]: I0203 12:07:05.055923 4820 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-lt75x container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body= Feb 03 12:07:05 crc kubenswrapper[4820]: I0203 12:07:05.055950 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-lt75x" podUID="1cd0f80e-f7d5-4cd7-a0de-f1b95c827af8" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" Feb 03 12:07:05 crc kubenswrapper[4820]: I0203 12:07:05.056001 4820 patch_prober.go:28] interesting pod/downloads-7954f5f757-lnc22 container/download-server namespace/openshift-console: Readiness 
Feb 03 12:07:05 crc kubenswrapper[4820]: I0203 12:07:05.056052 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-lnc22" podUID="876c5dc3-b775-45cc-94b6-4339735e9975" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused"
Feb 03 12:07:05 crc kubenswrapper[4820]: I0203 12:07:05.094682 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-h22tk"
Feb 03 12:07:05 crc kubenswrapper[4820]: I0203 12:07:05.099038 4820 patch_prober.go:28] interesting pod/router-default-5444994796-h22tk container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body=
Feb 03 12:07:05 crc kubenswrapper[4820]: I0203 12:07:05.099087 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h22tk" podUID="a227a161-8e53-4817-b7b2-48206c4916fb" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused"
Feb 03 12:07:05 crc kubenswrapper[4820]: I0203 12:07:05.112302 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-4gskq"]
Feb 03 12:07:05 crc kubenswrapper[4820]: I0203 12:07:05.124550 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 03 12:07:05 crc kubenswrapper[4820]: E0203 12:07:05.125679 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 12:07:05.625663329 +0000 UTC m=+143.148739193 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 03 12:07:05 crc kubenswrapper[4820]: W0203 12:07:05.157234 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6aba4201_b6b2_4aed_adeb_513e9190efa9.slice/crio-65a5d2542faded185451126a95996d497125ae1fba2b8408fa8fa0f6b5fcd493 WatchSource:0}: Error finding container 65a5d2542faded185451126a95996d497125ae1fba2b8408fa8fa0f6b5fcd493: Status 404 returned error can't find the container with id 65a5d2542faded185451126a95996d497125ae1fba2b8408fa8fa0f6b5fcd493
Feb 03 12:07:05 crc kubenswrapper[4820]: W0203 12:07:05.161032 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6b522a8e_f795_4cf1_adbb_899674a5e359.slice/crio-ffd8de6c76d6eb8035bebb356a4bf6063c4a5ed295e8982c753018bae9234709 WatchSource:0}: Error finding container ffd8de6c76d6eb8035bebb356a4bf6063c4a5ed295e8982c753018bae9234709: Status 404 returned error can't find the container with id ffd8de6c76d6eb8035bebb356a4bf6063c4a5ed295e8982c753018bae9234709
Feb 03 12:07:05 crc kubenswrapper[4820]: I0203 12:07:05.203631 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-2t74c"]
Feb 03 12:07:05 crc kubenswrapper[4820]: I0203 12:07:05.225836 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qpxpv\" (UID: \"10fa9c2b-e370-400e-9e71-a4617592b411\") " pod="openshift-image-registry/image-registry-697d97f7c8-qpxpv"
Feb 03 12:07:05 crc kubenswrapper[4820]: E0203 12:07:05.227108 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 12:07:05.727092933 +0000 UTC m=+143.250168797 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qpxpv" (UID: "10fa9c2b-e370-400e-9e71-a4617592b411") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 03 12:07:05 crc kubenswrapper[4820]: I0203 12:07:05.332743 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 03 12:07:05 crc kubenswrapper[4820]: I0203 12:07:05.332929 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-vmzb5"]
Feb 03 12:07:05 crc kubenswrapper[4820]: E0203 12:07:05.333224 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 12:07:05.833204718 +0000 UTC m=+143.356280582 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 03 12:07:05 crc kubenswrapper[4820]: I0203 12:07:05.347145 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-c7gsf"]
Feb 03 12:07:05 crc kubenswrapper[4820]: W0203 12:07:05.370300 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7b253c2b_c072_449a_8a7a_915da7526653.slice/crio-d571f52fe68b58045aabf35ea9e39277a38a3a877db40a0f4ca9368c630b973f WatchSource:0}: Error finding container d571f52fe68b58045aabf35ea9e39277a38a3a877db40a0f4ca9368c630b973f: Status 404 returned error can't find the container with id d571f52fe68b58045aabf35ea9e39277a38a3a877db40a0f4ca9368c630b973f
Feb 03 12:07:05 crc kubenswrapper[4820]: I0203 12:07:05.399220 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-gqwld"]
Feb 03 12:07:05 crc kubenswrapper[4820]: I0203 12:07:05.434913 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qpxpv\" (UID: \"10fa9c2b-e370-400e-9e71-a4617592b411\") " pod="openshift-image-registry/image-registry-697d97f7c8-qpxpv"
Feb 03 12:07:05 crc kubenswrapper[4820]: E0203 12:07:05.435397 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 12:07:05.93538394 +0000 UTC m=+143.458459804 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qpxpv" (UID: "10fa9c2b-e370-400e-9e71-a4617592b411") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
No retries permitted until 2026-02-03 12:07:05.93538394 +0000 UTC m=+143.458459804 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qpxpv" (UID: "10fa9c2b-e370-400e-9e71-a4617592b411") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:05 crc kubenswrapper[4820]: I0203 12:07:05.535619 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 12:07:05 crc kubenswrapper[4820]: E0203 12:07:05.536181 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 12:07:06.036150698 +0000 UTC m=+143.559226572 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:05 crc kubenswrapper[4820]: I0203 12:07:05.636866 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qpxpv\" (UID: \"10fa9c2b-e370-400e-9e71-a4617592b411\") " pod="openshift-image-registry/image-registry-697d97f7c8-qpxpv" Feb 03 12:07:05 crc kubenswrapper[4820]: E0203 12:07:05.637300 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 12:07:06.137288765 +0000 UTC m=+143.660364629 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qpxpv" (UID: "10fa9c2b-e370-400e-9e71-a4617592b411") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:05 crc kubenswrapper[4820]: I0203 12:07:05.762388 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 12:07:05 crc kubenswrapper[4820]: E0203 12:07:05.763044 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 12:07:06.263029538 +0000 UTC m=+143.786105402 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:05 crc kubenswrapper[4820]: I0203 12:07:05.873427 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qpxpv\" (UID: \"10fa9c2b-e370-400e-9e71-a4617592b411\") " pod="openshift-image-registry/image-registry-697d97f7c8-qpxpv" Feb 03 12:07:05 crc kubenswrapper[4820]: E0203 12:07:05.873886 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 12:07:06.373855606 +0000 UTC m=+143.896931470 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qpxpv" (UID: "10fa9c2b-e370-400e-9e71-a4617592b411") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:05 crc kubenswrapper[4820]: I0203 12:07:05.975438 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 12:07:05 crc kubenswrapper[4820]: E0203 12:07:05.975801 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 12:07:06.475751981 +0000 UTC m=+143.998827845 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:06 crc kubenswrapper[4820]: I0203 12:07:06.062013 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8rb2x"] Feb 03 12:07:06 crc kubenswrapper[4820]: I0203 12:07:06.111872 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qpxpv\" (UID: \"10fa9c2b-e370-400e-9e71-a4617592b411\") " pod="openshift-image-registry/image-registry-697d97f7c8-qpxpv" Feb 03 12:07:06 crc kubenswrapper[4820]: E0203 12:07:06.112231 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 12:07:06.612218739 +0000 UTC m=+144.135294603 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qpxpv" (UID: "10fa9c2b-e370-400e-9e71-a4617592b411") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:06 crc kubenswrapper[4820]: I0203 12:07:06.176931 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-58ssn"] Feb 03 12:07:06 crc kubenswrapper[4820]: I0203 12:07:06.209165 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-wsjsc"] Feb 03 12:07:06 crc kubenswrapper[4820]: I0203 12:07:06.214009 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 12:07:06 crc kubenswrapper[4820]: E0203 12:07:06.214509 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 12:07:06.714490393 +0000 UTC m=+144.237566287 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:06 crc kubenswrapper[4820]: I0203 12:07:06.232447 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-w4x94" event={"ID":"4d235feb-2891-4c16-b240-381a5810a0c7","Type":"ContainerStarted","Data":"0175e8e83b4aa6d3c0efc36ba4e8dccbc9edb80adf1a0ae2eb9f63b1d7a594de"} Feb 03 12:07:06 crc kubenswrapper[4820]: I0203 12:07:06.237315 4820 patch_prober.go:28] interesting pod/router-default-5444994796-h22tk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 03 12:07:06 crc kubenswrapper[4820]: [-]has-synced failed: reason withheld Feb 03 12:07:06 crc kubenswrapper[4820]: [+]process-running ok Feb 03 12:07:06 crc kubenswrapper[4820]: healthz check failed Feb 03 12:07:06 crc kubenswrapper[4820]: I0203 12:07:06.237368 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h22tk" podUID="a227a161-8e53-4817-b7b2-48206c4916fb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 03 12:07:06 crc kubenswrapper[4820]: I0203 12:07:06.351762 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-s55v7" podStartSLOduration=123.35174806 podStartE2EDuration="2m3.35174806s" 
podCreationTimestamp="2026-02-03 12:05:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:07:06.249431325 +0000 UTC m=+143.772507189" watchObservedRunningTime="2026-02-03 12:07:06.35174806 +0000 UTC m=+143.874823924" Feb 03 12:07:06 crc kubenswrapper[4820]: I0203 12:07:06.353012 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cs8dg" podStartSLOduration=122.35300651 podStartE2EDuration="2m2.35300651s" podCreationTimestamp="2026-02-03 12:05:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:07:06.350596482 +0000 UTC m=+143.873672346" watchObservedRunningTime="2026-02-03 12:07:06.35300651 +0000 UTC m=+143.876082364" Feb 03 12:07:06 crc kubenswrapper[4820]: I0203 12:07:06.353454 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-lc7k5" event={"ID":"7b253c2b-c072-449a-8a7a-915da7526653","Type":"ContainerStarted","Data":"d571f52fe68b58045aabf35ea9e39277a38a3a877db40a0f4ca9368c630b973f"} Feb 03 12:07:06 crc kubenswrapper[4820]: I0203 12:07:06.354470 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qpxpv\" (UID: \"10fa9c2b-e370-400e-9e71-a4617592b411\") " pod="openshift-image-registry/image-registry-697d97f7c8-qpxpv" Feb 03 12:07:06 crc kubenswrapper[4820]: E0203 12:07:06.354732 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 12:07:06.85472365 +0000 UTC m=+144.377799504 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qpxpv" (UID: "10fa9c2b-e370-400e-9e71-a4617592b411") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:06 crc kubenswrapper[4820]: I0203 12:07:06.379067 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-lnc22" podStartSLOduration=123.37905043 podStartE2EDuration="2m3.37905043s" podCreationTimestamp="2026-02-03 12:05:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:07:06.378019405 +0000 UTC m=+143.901095269" watchObservedRunningTime="2026-02-03 12:07:06.37905043 +0000 UTC m=+143.902126294" Feb 03 12:07:06 crc kubenswrapper[4820]: I0203 12:07:06.458108 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 12:07:06 crc kubenswrapper[4820]: E0203 12:07:06.458485 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 12:07:06.958469819 +0000 UTC m=+144.481545683 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:06 crc kubenswrapper[4820]: I0203 12:07:06.490256 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-2t74c" event={"ID":"a6ff59c4-dee8-4ed7-86f9-601df8b4e7e4","Type":"ContainerStarted","Data":"ecd6efa2f5a9fca4b4bc940dd7bc92ca4f3e097dc7f21dc5f3f3d1927094f692"} Feb 03 12:07:06 crc kubenswrapper[4820]: I0203 12:07:06.569220 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qpxpv\" (UID: \"10fa9c2b-e370-400e-9e71-a4617592b411\") " pod="openshift-image-registry/image-registry-697d97f7c8-qpxpv" Feb 03 12:07:06 crc kubenswrapper[4820]: E0203 12:07:06.571398 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 12:07:07.071379247 +0000 UTC m=+144.594455111 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qpxpv" (UID: "10fa9c2b-e370-400e-9e71-a4617592b411") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:06 crc kubenswrapper[4820]: I0203 12:07:06.670231 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 12:07:06 crc kubenswrapper[4820]: E0203 12:07:06.670715 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 12:07:07.170700231 +0000 UTC m=+144.693776095 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:06 crc kubenswrapper[4820]: I0203 12:07:06.674618 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-lt75x" podStartSLOduration=123.674605594 podStartE2EDuration="2m3.674605594s" podCreationTimestamp="2026-02-03 12:05:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:07:06.505788436 +0000 UTC m=+144.028864320" watchObservedRunningTime="2026-02-03 12:07:06.674605594 +0000 UTC m=+144.197681458" Feb 03 12:07:06 crc kubenswrapper[4820]: I0203 12:07:06.675018 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-tw2nt" event={"ID":"b06753a3-652a-4acc-b294-3ccaa5b0cb99","Type":"ContainerStarted","Data":"81295f2fff4c34ff47ab98869ba2efd1664b5586bdc415d379e316b46aa29a2a"} Feb 03 12:07:06 crc kubenswrapper[4820]: I0203 12:07:06.675046 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-tw2nt" event={"ID":"b06753a3-652a-4acc-b294-3ccaa5b0cb99","Type":"ContainerStarted","Data":"7ba84dcbcc9cf553c89692e751e7595ff3e12e510dcf499433f8238f212c4c13"} Feb 03 12:07:06 crc kubenswrapper[4820]: I0203 12:07:06.717128 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-kqqwj" event={"ID":"6aba4201-b6b2-4aed-adeb-513e9190efa9","Type":"ContainerStarted","Data":"65a5d2542faded185451126a95996d497125ae1fba2b8408fa8fa0f6b5fcd493"} Feb 03 12:07:06 crc kubenswrapper[4820]: I0203 12:07:06.723722 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-8vn7s" 
event={"ID":"1339ee72-a846-4147-b494-55ef92897378","Type":"ContainerStarted","Data":"5bc3b665f518625e65bd3b2fa6f59651b6bfda99411dba4ce27b815fd7a03e14"} Feb 03 12:07:06 crc kubenswrapper[4820]: I0203 12:07:06.732252 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-hxjbf" event={"ID":"6b522a8e-f795-4cf1-adbb-899674a5e359","Type":"ContainerStarted","Data":"ffd8de6c76d6eb8035bebb356a4bf6063c4a5ed295e8982c753018bae9234709"} Feb 03 12:07:06 crc kubenswrapper[4820]: I0203 12:07:06.741474 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2fjkt" event={"ID":"6b39f4b8-90e3-4d3b-a000-725d42cdb8dd","Type":"ContainerStarted","Data":"10484f53002fe2990039ef576be7c1c1a6a5620329d9b08eeb2482a06ff4b98c"} Feb 03 12:07:06 crc kubenswrapper[4820]: I0203 12:07:06.745607 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-c7gsf" event={"ID":"52996a75-b03e-40f5-a587-2c1476910cd4","Type":"ContainerStarted","Data":"510ec6554d144721691dd26d9f048c62263961b9d87a17c5376964ae3ff1b5ef"} Feb 03 12:07:06 crc kubenswrapper[4820]: I0203 12:07:06.763229 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-74h4r" podStartSLOduration=123.763210473 podStartE2EDuration="2m3.763210473s" podCreationTimestamp="2026-02-03 12:05:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:07:06.762285071 +0000 UTC m=+144.285360945" watchObservedRunningTime="2026-02-03 12:07:06.763210473 +0000 UTC m=+144.286286337" Feb 03 12:07:06 crc kubenswrapper[4820]: I0203 12:07:06.789815 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qpxpv\" (UID: \"10fa9c2b-e370-400e-9e71-a4617592b411\") " pod="openshift-image-registry/image-registry-697d97f7c8-qpxpv" Feb 03 12:07:06 crc kubenswrapper[4820]: E0203 12:07:06.792560 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 12:07:07.292548981 +0000 UTC m=+144.815624845 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qpxpv" (UID: "10fa9c2b-e370-400e-9e71-a4617592b411") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:06 crc kubenswrapper[4820]: I0203 12:07:06.810332 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rc96d" event={"ID":"319cad33-b4bc-4249-8124-1010cd6d79f9","Type":"ContainerStarted","Data":"f58e59ca3951c468b276124436f6e306e88a44b59157665db1bc6002859f4afc"} Feb 03 12:07:06 crc kubenswrapper[4820]: I0203 12:07:06.895092 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 12:07:06 crc kubenswrapper[4820]: E0203 12:07:06.895421 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 12:07:07.395406539 +0000 UTC m=+144.918482403 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:06 crc kubenswrapper[4820]: I0203 12:07:06.949335 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-v4cfl" podStartSLOduration=123.949319053 podStartE2EDuration="2m3.949319053s" podCreationTimestamp="2026-02-03 12:05:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:07:06.94837212 +0000 UTC m=+144.471447984" watchObservedRunningTime="2026-02-03 12:07:06.949319053 +0000 UTC m=+144.472394927" Feb 03 12:07:06 crc kubenswrapper[4820]: I0203 12:07:06.949655 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-4gskq" event={"ID":"0e90d586-2aaf-4f58-acc2-eba29dc0cd2e","Type":"ContainerStarted","Data":"642123c19ffc9b7762d9d3d9fd39dc5b99e9f95a5a7d8f31ad0b9949f91e66f7"} Feb 03 12:07:06 crc kubenswrapper[4820]: I0203 12:07:06.951568 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-7csk9" event={"ID":"f05b6093-516e-4a63-b2cc-8d6e6e2b2e57","Type":"ContainerStarted","Data":"69abd8dca0a8397ac8b13b701e3e8a306cb0d895d2962b5d945623128b6fe51e"} Feb 03 12:07:06 crc kubenswrapper[4820]: I0203 12:07:06.959796 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-vmzb5" event={"ID":"d79a4fe5-aca8-4046-8760-2892f6e3dc7d","Type":"ContainerStarted","Data":"4980821b50c501f7d6baec8f5179f0b6ed03da6203df96de4d803c1b93538520"} Feb 03 12:07:06 crc kubenswrapper[4820]: I0203 12:07:06.987956 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-gqwld" event={"ID":"8237a118-001c-483c-8810-d051f33d35eb","Type":"ContainerStarted","Data":"70eb8f9b362a2a72646bfd08b88e281550469604efc6c9e21d47621250bd27ff"} Feb 03 12:07:06 crc kubenswrapper[4820]: I0203 12:07:06.997591 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qpxpv\" (UID: \"10fa9c2b-e370-400e-9e71-a4617592b411\") " pod="openshift-image-registry/image-registry-697d97f7c8-qpxpv" Feb 03 12:07:06 crc kubenswrapper[4820]: E0203 12:07:06.997905 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 12:07:07.497879059 +0000 UTC m=+145.020954923 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qpxpv" (UID: "10fa9c2b-e370-400e-9e71-a4617592b411") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:06 crc kubenswrapper[4820]: I0203 12:07:06.998683 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2jnbv" event={"ID":"11e33088-50eb-423a-8925-87aa760c56e4","Type":"ContainerStarted","Data":"91c81a2f10026149215c75949cfd0911e7c6b0d605e867589f3bfffca95f5373"} Feb 03 12:07:07 crc kubenswrapper[4820]: I0203 12:07:07.105973 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 12:07:07 crc kubenswrapper[4820]: E0203 12:07:07.106673 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 12:07:07.606647207 +0000 UTC m=+145.129723071 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:07 crc kubenswrapper[4820]: I0203 12:07:07.124694 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-lbsmw" Feb 03 12:07:07 crc kubenswrapper[4820]: I0203 12:07:07.208643 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qpxpv\" (UID: \"10fa9c2b-e370-400e-9e71-a4617592b411\") " pod="openshift-image-registry/image-registry-697d97f7c8-qpxpv" Feb 03 12:07:07 crc kubenswrapper[4820]: E0203 12:07:07.213972 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 12:07:07.713954331 +0000 UTC m=+145.237030285 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qpxpv" (UID: "10fa9c2b-e370-400e-9e71-a4617592b411") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:07 crc kubenswrapper[4820]: I0203 12:07:07.255633 4820 patch_prober.go:28] interesting pod/downloads-7954f5f757-lnc22 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Feb 03 12:07:07 crc kubenswrapper[4820]: I0203 12:07:07.255688 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-lnc22" podUID="876c5dc3-b775-45cc-94b6-4339735e9975" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Feb 03 12:07:07 crc kubenswrapper[4820]: I0203 12:07:07.256701 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-sf69z" podStartSLOduration=124.256691409 podStartE2EDuration="2m4.256691409s" podCreationTimestamp="2026-02-03 12:05:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:07:07.25632799 +0000 UTC m=+144.779403854" watchObservedRunningTime="2026-02-03 12:07:07.256691409 +0000 UTC m=+144.779767273" Feb 03 12:07:07 crc kubenswrapper[4820]: I0203 12:07:07.313969 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 12:07:07 crc kubenswrapper[4820]: I0203 12:07:07.317315 4820 patch_prober.go:28] interesting pod/router-default-5444994796-h22tk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 03 12:07:07 crc kubenswrapper[4820]: [-]has-synced failed: reason withheld Feb 03 12:07:07 crc kubenswrapper[4820]: [+]process-running ok Feb 03 12:07:07 crc kubenswrapper[4820]: healthz check failed Feb 03 12:07:07 crc kubenswrapper[4820]: I0203 12:07:07.317358 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h22tk" podUID="a227a161-8e53-4817-b7b2-48206c4916fb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 03 12:07:07 crc kubenswrapper[4820]: E0203 12:07:07.317814 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 12:07:07.817791403 +0000 UTC m=+145.340867267 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:07 crc kubenswrapper[4820]: I0203 12:07:07.319620 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-h22tk" podStartSLOduration=123.319596595 podStartE2EDuration="2m3.319596595s" podCreationTimestamp="2026-02-03 12:05:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:07:07.314125936 +0000 UTC m=+144.837201810" watchObservedRunningTime="2026-02-03 12:07:07.319596595 +0000 UTC m=+144.842672459" Feb 03 12:07:07 crc kubenswrapper[4820]: I0203 12:07:07.395825 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cs8dg" Feb 03 12:07:07 crc kubenswrapper[4820]: I0203 12:07:07.395852 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b9krf" event={"ID":"35ef1add-69b2-424c-b5ff-7f18b915eae1","Type":"ContainerStarted","Data":"bdc839e9116c1156bcacf5a0e6a26f99201e00c9039bb45e64d114bbeebd47ed"} Feb 03 12:07:07 crc kubenswrapper[4820]: I0203 12:07:07.395880 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-lt75x" Feb 03 12:07:07 crc kubenswrapper[4820]: I0203 12:07:07.395941 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-c9vjp" event={"ID":"1e2ff1f0-ab87-4251-b1ea-c08cad288246","Type":"ContainerStarted","Data":"587fe29d1cacd224d7dbe2ad88e2562a2c36ec18d9e9edbc100057b78c4818db"} Feb 03 12:07:07 crc kubenswrapper[4820]: I0203 12:07:07.395952 4820 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-r8785" event={"ID":"29fa9711-dd2f-41bf-92dc-a6fd88a3f341","Type":"ContainerStarted","Data":"2f7bac3a505be82f5f663dbf749022ec34eee736ec64eb9e2a541aa8cdfcd90e"} Feb 03 12:07:07 crc kubenswrapper[4820]: I0203 12:07:07.395961 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-z7vmj" event={"ID":"4acfe638-6e10-4d68-9cfa-3d1e1d4c1052","Type":"ContainerStarted","Data":"87a9f7ee13a5c6706f7113ff27ee604f7e1b67876ce6ebb85672468908e07838"} Feb 03 12:07:07 crc kubenswrapper[4820]: I0203 12:07:07.502332 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qpxpv\" (UID: \"10fa9c2b-e370-400e-9e71-a4617592b411\") " pod="openshift-image-registry/image-registry-697d97f7c8-qpxpv" Feb 03 12:07:07 crc kubenswrapper[4820]: E0203 12:07:07.502771 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 12:07:08.002755845 +0000 UTC m=+145.525831709 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qpxpv" (UID: "10fa9c2b-e370-400e-9e71-a4617592b411") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:07 crc kubenswrapper[4820]: I0203 12:07:07.577653 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-sf69z" Feb 03 12:07:07 crc kubenswrapper[4820]: I0203 12:07:07.604557 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 12:07:07 crc kubenswrapper[4820]: E0203 12:07:07.605038 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 12:07:08.105019029 +0000 UTC m=+145.628094903 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:07 crc kubenswrapper[4820]: I0203 12:07:07.707775 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qpxpv\" (UID: \"10fa9c2b-e370-400e-9e71-a4617592b411\") " pod="openshift-image-registry/image-registry-697d97f7c8-qpxpv" Feb 03 12:07:07 crc kubenswrapper[4820]: E0203 12:07:07.708137 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 12:07:08.208126943 +0000 UTC m=+145.731202807 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qpxpv" (UID: "10fa9c2b-e370-400e-9e71-a4617592b411") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:07 crc kubenswrapper[4820]: I0203 12:07:07.808586 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 12:07:07 crc kubenswrapper[4820]: E0203 12:07:07.809186 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 12:07:08.309141367 +0000 UTC m=+145.832217231 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:08 crc kubenswrapper[4820]: I0203 12:07:07.956119 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qpxpv\" (UID: \"10fa9c2b-e370-400e-9e71-a4617592b411\") " pod="openshift-image-registry/image-registry-697d97f7c8-qpxpv" Feb 03 12:07:08 crc kubenswrapper[4820]: E0203 12:07:07.956517 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 12:07:08.456500544 +0000 UTC m=+145.979576408 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qpxpv" (UID: "10fa9c2b-e370-400e-9e71-a4617592b411") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:08 crc kubenswrapper[4820]: I0203 12:07:08.029138 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-75mwm" Feb 03 12:07:08 crc kubenswrapper[4820]: I0203 12:07:08.057374 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 12:07:08 crc kubenswrapper[4820]: E0203 12:07:08.057915 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 12:07:08.557877958 +0000 UTC m=+146.080953822 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:08 crc kubenswrapper[4820]: I0203 12:07:08.113667 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-tw2nt" podStartSLOduration=125.113647185 podStartE2EDuration="2m5.113647185s" podCreationTimestamp="2026-02-03 12:05:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:07:08.111285709 +0000 UTC m=+145.634361593" watchObservedRunningTime="2026-02-03 12:07:08.113647185 +0000 UTC m=+145.636723059" Feb 03 12:07:08 crc kubenswrapper[4820]: I0203 12:07:08.159548 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qpxpv\" (UID: \"10fa9c2b-e370-400e-9e71-a4617592b411\") " pod="openshift-image-registry/image-registry-697d97f7c8-qpxpv" Feb 03 12:07:08 crc kubenswrapper[4820]: E0203 12:07:08.159948 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 12:07:08.659905336 +0000 UTC m=+146.182981200 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qpxpv" (UID: "10fa9c2b-e370-400e-9e71-a4617592b411") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:08 crc kubenswrapper[4820]: I0203 12:07:08.226476 4820 patch_prober.go:28] interesting pod/router-default-5444994796-h22tk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 03 12:07:08 crc kubenswrapper[4820]: [-]has-synced failed: reason withheld Feb 03 12:07:08 crc kubenswrapper[4820]: [+]process-running ok Feb 03 12:07:08 crc kubenswrapper[4820]: healthz check failed Feb 03 12:07:08 crc kubenswrapper[4820]: I0203 12:07:08.226551 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h22tk" podUID="a227a161-8e53-4817-b7b2-48206c4916fb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 03 12:07:08 crc kubenswrapper[4820]: I0203 12:07:08.228417 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-r8785" podStartSLOduration=125.228400987 podStartE2EDuration="2m5.228400987s" podCreationTimestamp="2026-02-03 12:05:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:07:08.226831558 +0000 UTC m=+145.749907422" watchObservedRunningTime="2026-02-03 12:07:08.228400987 +0000 UTC m=+145.751476851" Feb 03 12:07:08 crc kubenswrapper[4820]: I0203 12:07:08.253361 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-58ssn" event={"ID":"51e0f75d-f0eb-4e04-a1ef-da8f256a845d","Type":"ContainerStarted","Data":"c58675f02031686f0e5ec64c9f822a0a0309f923adf7b068e4d629edf7ca2930"} Feb 03 12:07:08 crc kubenswrapper[4820]: I0203 12:07:08.258768 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2jnbv" event={"ID":"11e33088-50eb-423a-8925-87aa760c56e4","Type":"ContainerStarted","Data":"fb4371498ef80c9765818c4113ab300e319f8b61d8550325c9f19e5b6ffa9d4b"} Feb 03 12:07:08 crc kubenswrapper[4820]: I0203 12:07:08.274597 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-lbsmw" event={"ID":"c93c42c7-c9ff-42cc-b604-e36f7a063fcf","Type":"ContainerStarted","Data":"49a0cd07642e7cb199747437fa1a228fee76d4f3d23348f202c0e4834408659f"} Feb 03 12:07:08 crc kubenswrapper[4820]: I0203 12:07:08.277820 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-c7gsf" event={"ID":"52996a75-b03e-40f5-a587-2c1476910cd4","Type":"ContainerStarted","Data":"a433c5427df2b2518bb7e37afc62691b689b95b5d59ad231e12088b70bdcd694"} Feb 03 12:07:08 crc kubenswrapper[4820]: I0203 12:07:08.278966 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-c7gsf" Feb 03 12:07:08 crc kubenswrapper[4820]: I0203 12:07:08.280852 4820 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-2t74c" event={"ID":"a6ff59c4-dee8-4ed7-86f9-601df8b4e7e4","Type":"ContainerStarted","Data":"bff66a66b3828b61cf58272c73d3752dd257fbb191f4176365ea16391e57bba7"} Feb 03 12:07:08 crc kubenswrapper[4820]: I0203 12:07:08.286119 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 12:07:08 crc kubenswrapper[4820]: E0203 12:07:08.287569 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 12:07:08.787546874 +0000 UTC m=+146.310622738 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:08 crc kubenswrapper[4820]: I0203 12:07:08.330796 4820 generic.go:334] "Generic (PLEG): container finished" podID="35ef1add-69b2-424c-b5ff-7f18b915eae1" containerID="6443ab253fa5fcf40e3057f3e07dd9ccf508aa11378a7ec89c436fa45a6bf61a" exitCode=0 Feb 03 12:07:08 crc kubenswrapper[4820]: I0203 12:07:08.330912 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b9krf" event={"ID":"35ef1add-69b2-424c-b5ff-7f18b915eae1","Type":"ContainerDied","Data":"6443ab253fa5fcf40e3057f3e07dd9ccf508aa11378a7ec89c436fa45a6bf61a"} Feb 03 12:07:08 crc kubenswrapper[4820]: I0203 12:07:08.341456 4820 generic.go:334] "Generic (PLEG): container finished" podID="4acfe638-6e10-4d68-9cfa-3d1e1d4c1052" containerID="d2c4425ead7d9f49a125b2bd8c4d01137ca391cb7fd9adc902dcb6c96ab0622d" exitCode=0 Feb 03 12:07:08 crc kubenswrapper[4820]: I0203 12:07:08.341539 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-z7vmj" event={"ID":"4acfe638-6e10-4d68-9cfa-3d1e1d4c1052","Type":"ContainerDied","Data":"d2c4425ead7d9f49a125b2bd8c4d01137ca391cb7fd9adc902dcb6c96ab0622d"} Feb 03 12:07:08 crc kubenswrapper[4820]: I0203 12:07:08.355941 4820 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-c7gsf container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.19:8443/healthz\": dial tcp 10.217.0.19:8443: connect: connection refused" start-of-body= Feb 03 12:07:08 crc kubenswrapper[4820]: I0203 12:07:08.356044 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-c7gsf" podUID="52996a75-b03e-40f5-a587-2c1476910cd4" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.19:8443/healthz\": dial tcp 10.217.0.19:8443: connect: connection refused" Feb 03 12:07:08 crc kubenswrapper[4820]: I0203 12:07:08.476851 4820 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qpxpv\" (UID: \"10fa9c2b-e370-400e-9e71-a4617592b411\") " pod="openshift-image-registry/image-registry-697d97f7c8-qpxpv" Feb 03 12:07:08 crc kubenswrapper[4820]: I0203 12:07:08.478064 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-c9vjp" podStartSLOduration=124.478034388 podStartE2EDuration="2m4.478034388s" podCreationTimestamp="2026-02-03 12:05:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:07:08.475354754 +0000 UTC m=+145.998430618" watchObservedRunningTime="2026-02-03 12:07:08.478034388 +0000 UTC m=+146.001110242" Feb 03 12:07:08 crc kubenswrapper[4820]: I0203 12:07:08.482359 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-7csk9" event={"ID":"f05b6093-516e-4a63-b2cc-8d6e6e2b2e57","Type":"ContainerStarted","Data":"548228d583bdb1878d4c7760a6e94804ffc2fc2b8ef1901eed157e11854a1fa7"} Feb 03 12:07:08 crc kubenswrapper[4820]: E0203 12:07:08.482660 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 12:07:08.982643897 +0000 UTC m=+146.505719841 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qpxpv" (UID: "10fa9c2b-e370-400e-9e71-a4617592b411") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:08 crc kubenswrapper[4820]: I0203 12:07:08.482903 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-drcgk"] Feb 03 12:07:08 crc kubenswrapper[4820]: I0203 12:07:08.574118 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-8vn7s" event={"ID":"1339ee72-a846-4147-b494-55ef92897378","Type":"ContainerStarted","Data":"d39cfd58abbf64164e1ffb0efdcb5c66e4841ede2a3b3b00bca805cd40a7e329"} Feb 03 12:07:08 crc kubenswrapper[4820]: I0203 12:07:08.580995 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 12:07:08 crc kubenswrapper[4820]: E0203 12:07:08.581478 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 12:07:09.08146637 +0000 UTC m=+146.604542234 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:08 crc kubenswrapper[4820]: I0203 12:07:08.590494 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-hxjbf" event={"ID":"6b522a8e-f795-4cf1-adbb-899674a5e359","Type":"ContainerStarted","Data":"58f7f21ca89de99b7fb3075f3b50d175e8a5209254f4792af605e66a98b39561"} Feb 03 12:07:08 crc kubenswrapper[4820]: W0203 12:07:08.594837 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3677a387_0bac_4cea_9921_c93b14cd430e.slice/crio-d6b288233c0615f3147e17e447261214e8739fc40bca7aa8e39c2419ee4354e7 WatchSource:0}: Error finding container d6b288233c0615f3147e17e447261214e8739fc40bca7aa8e39c2419ee4354e7: Status 404 returned error can't find the container with id d6b288233c0615f3147e17e447261214e8739fc40bca7aa8e39c2419ee4354e7 Feb 03 12:07:08 crc kubenswrapper[4820]: I0203 12:07:08.840346 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qpxpv\" (UID: \"10fa9c2b-e370-400e-9e71-a4617592b411\") " pod="openshift-image-registry/image-registry-697d97f7c8-qpxpv" Feb 03 12:07:08 crc kubenswrapper[4820]: E0203 12:07:08.840680 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 12:07:09.340653569 +0000 UTC m=+146.863729423 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qpxpv" (UID: "10fa9c2b-e370-400e-9e71-a4617592b411") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:08 crc kubenswrapper[4820]: I0203 12:07:08.903797 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-wsjsc" event={"ID":"b460558b-ba3e-4543-bb57-debddb0711e7","Type":"ContainerStarted","Data":"280805b182b4873b1ac1b86d054a9cedf8279e207874d97aec6fb43272c36080"} Feb 03 12:07:08 crc kubenswrapper[4820]: I0203 12:07:08.907944 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-vdn7t"] Feb 03 12:07:08 crc kubenswrapper[4820]: I0203 12:07:08.913746 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rc96d" podStartSLOduration=124.913725277 podStartE2EDuration="2m4.913725277s" podCreationTimestamp="2026-02-03 12:05:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:07:08.903999576 +0000 UTC m=+146.427075440" watchObservedRunningTime="2026-02-03 12:07:08.913725277 +0000 UTC m=+146.436801141" Feb 03 12:07:08 crc kubenswrapper[4820]: I0203 12:07:08.934800 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8rb2x" event={"ID":"d45799eb-72ff-43ba-9ca3-4eb5d44bc3a5","Type":"ContainerStarted","Data":"efc52eca5d7955097c44558fc46d87f73aa7b7bfe9997b7492fcad5b2d2b1957"} Feb 03 12:07:08 crc kubenswrapper[4820]: I0203 12:07:08.958547 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 12:07:08 crc kubenswrapper[4820]: E0203 12:07:08.981666 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 12:07:09.481638754 +0000 UTC m=+147.004714618 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:09 crc kubenswrapper[4820]: I0203 12:07:09.037123 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-5kfzf"] Feb 03 12:07:09 crc kubenswrapper[4820]: I0203 12:07:09.061060 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-k7tp7"] Feb 03 12:07:09 crc kubenswrapper[4820]: I0203 12:07:09.063934 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-lbsmw" podStartSLOduration=126.063868931 podStartE2EDuration="2m6.063868931s" podCreationTimestamp="2026-02-03 12:05:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:07:09.000238287 +0000 UTC m=+146.523314161" watchObservedRunningTime="2026-02-03 12:07:09.063868931 +0000 UTC m=+146.586944795" Feb 03 12:07:09 crc kubenswrapper[4820]: I0203 12:07:09.075506 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-c7gsf" podStartSLOduration=125.075490338 podStartE2EDuration="2m5.075490338s" podCreationTimestamp="2026-02-03 12:05:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:07:09.017108748 +0000 UTC m=+146.540184612" watchObservedRunningTime="2026-02-03 12:07:09.075490338 +0000 UTC m=+146.598566202" Feb 03 12:07:09 crc kubenswrapper[4820]: I0203 12:07:09.079412 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qpxpv\" (UID: \"10fa9c2b-e370-400e-9e71-a4617592b411\") " pod="openshift-image-registry/image-registry-697d97f7c8-qpxpv" Feb 03 12:07:09 crc kubenswrapper[4820]: E0203 12:07:09.088105 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 12:07:09.588089058 +0000 UTC m=+147.111164922 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qpxpv" (UID: "10fa9c2b-e370-400e-9e71-a4617592b411") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:09 crc kubenswrapper[4820]: I0203 12:07:09.159168 4820 patch_prober.go:28] interesting pod/router-default-5444994796-h22tk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 03 12:07:09 crc kubenswrapper[4820]: [-]has-synced failed: reason withheld Feb 03 12:07:09 crc kubenswrapper[4820]: [+]process-running ok Feb 03 12:07:09 crc kubenswrapper[4820]: healthz check failed Feb 03 12:07:09 crc kubenswrapper[4820]: I0203 12:07:09.159234 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h22tk" podUID="a227a161-8e53-4817-b7b2-48206c4916fb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 03 12:07:09 crc kubenswrapper[4820]: I0203 12:07:09.205023 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 12:07:09 crc kubenswrapper[4820]: E0203 12:07:09.206362 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 12:07:09.705691657 +0000 UTC m=+147.228767521 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:09 crc kubenswrapper[4820]: I0203 12:07:09.306637 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qpxpv\" (UID: \"10fa9c2b-e370-400e-9e71-a4617592b411\") " pod="openshift-image-registry/image-registry-697d97f7c8-qpxpv" Feb 03 12:07:09 crc kubenswrapper[4820]: E0203 12:07:09.307049 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 12:07:09.807035909 +0000 UTC m=+147.330111773 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qpxpv" (UID: "10fa9c2b-e370-400e-9e71-a4617592b411") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:09 crc kubenswrapper[4820]: I0203 12:07:09.408492 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 12:07:09 crc kubenswrapper[4820]: E0203 12:07:09.409541 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 12:07:09.909508997 +0000 UTC m=+147.432584871 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:09 crc kubenswrapper[4820]: I0203 12:07:09.495997 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-8q9q7"] Feb 03 12:07:09 crc kubenswrapper[4820]: I0203 12:07:09.496028 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29502000-hcscr"] Feb 03 12:07:09 crc kubenswrapper[4820]: I0203 12:07:09.496038 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-wfbd9"] Feb 03 12:07:09 crc kubenswrapper[4820]: I0203 12:07:09.496048 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-b6ghj"] Feb 03 12:07:09 crc kubenswrapper[4820]: I0203 12:07:09.496062 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-9w662"] Feb 03 12:07:09 crc kubenswrapper[4820]: I0203 12:07:09.496072 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-526s7"] Feb 03 12:07:09 crc kubenswrapper[4820]: I0203 12:07:09.512738 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qpxpv\" (UID: \"10fa9c2b-e370-400e-9e71-a4617592b411\") " pod="openshift-image-registry/image-registry-697d97f7c8-qpxpv" Feb 03 12:07:09 crc kubenswrapper[4820]: E0203 12:07:09.513121 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-02-03 12:07:10.013104333 +0000 UTC m=+147.536180257 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qpxpv" (UID: "10fa9c2b-e370-400e-9e71-a4617592b411") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:09 crc kubenswrapper[4820]: I0203 12:07:09.538781 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-2t74c" podStartSLOduration=126.538757343 podStartE2EDuration="2m6.538757343s" podCreationTimestamp="2026-02-03 12:05:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:07:09.473370718 +0000 UTC m=+146.996446582" watchObservedRunningTime="2026-02-03 12:07:09.538757343 +0000 UTC m=+147.061833207" Feb 03 12:07:09 crc kubenswrapper[4820]: I0203 12:07:09.539415 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2jnbv" podStartSLOduration=125.539408419 podStartE2EDuration="2m5.539408419s" podCreationTimestamp="2026-02-03 12:05:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:07:09.539117313 +0000 UTC m=+147.062193197" watchObservedRunningTime="2026-02-03 12:07:09.539408419 +0000 UTC m=+147.062484283" Feb 03 12:07:09 crc kubenswrapper[4820]: I0203 12:07:09.613586 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 12:07:09 crc kubenswrapper[4820]: E0203 12:07:09.614110 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 12:07:10.114088696 +0000 UTC m=+147.637164560 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:09 crc kubenswrapper[4820]: I0203 12:07:09.717096 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qpxpv\" (UID: \"10fa9c2b-e370-400e-9e71-a4617592b411\") " pod="openshift-image-registry/image-registry-697d97f7c8-qpxpv" Feb 03 12:07:09 crc kubenswrapper[4820]: E0203 12:07:09.717719 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 12:07:10.217708173 +0000 UTC m=+147.740784027 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qpxpv" (UID: "10fa9c2b-e370-400e-9e71-a4617592b411") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:09 crc kubenswrapper[4820]: I0203 12:07:09.840505 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 12:07:09 crc kubenswrapper[4820]: E0203 12:07:09.840725 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 12:07:10.34069815 +0000 UTC m=+147.863774014 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:09 crc kubenswrapper[4820]: I0203 12:07:09.840862 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qpxpv\" (UID: \"10fa9c2b-e370-400e-9e71-a4617592b411\") " pod="openshift-image-registry/image-registry-697d97f7c8-qpxpv" Feb 03 12:07:09 crc kubenswrapper[4820]: E0203 12:07:09.841385 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 12:07:10.341347766 +0000 UTC m=+147.864423640 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qpxpv" (UID: "10fa9c2b-e370-400e-9e71-a4617592b411") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:09 crc kubenswrapper[4820]: I0203 12:07:09.849934 4820 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-lbsmw container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.10:8443/healthz\": dial tcp 10.217.0.10:8443: connect: connection refused" start-of-body= Feb 03 12:07:09 crc kubenswrapper[4820]: I0203 12:07:09.849945 4820 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-lbsmw container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.10:8443/healthz\": dial tcp 10.217.0.10:8443: connect: connection refused" start-of-body= Feb 03 12:07:09 crc kubenswrapper[4820]: I0203 12:07:09.849983 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-lbsmw" podUID="c93c42c7-c9ff-42cc-b604-e36f7a063fcf" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.10:8443/healthz\": dial tcp 10.217.0.10:8443: connect: connection refused" Feb 03 12:07:09 crc kubenswrapper[4820]: I0203 12:07:09.849999 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-lbsmw" podUID="c93c42c7-c9ff-42cc-b604-e36f7a063fcf" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.10:8443/healthz\": dial tcp 10.217.0.10:8443: connect: connection refused" Feb 03 12:07:09 crc kubenswrapper[4820]: I0203 12:07:09.962685 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 12:07:09 crc kubenswrapper[4820]: E0203 12:07:09.963080 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 12:07:10.463064943 +0000 UTC m=+147.986140807 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:10 crc kubenswrapper[4820]: I0203 12:07:10.156395 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qpxpv\" (UID: \"10fa9c2b-e370-400e-9e71-a4617592b411\") " pod="openshift-image-registry/image-registry-697d97f7c8-qpxpv" Feb 03 12:07:10 crc kubenswrapper[4820]: E0203 12:07:10.157033 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 12:07:10.657022209 +0000 UTC m=+148.180098073 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qpxpv" (UID: "10fa9c2b-e370-400e-9e71-a4617592b411") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:10 crc kubenswrapper[4820]: I0203 12:07:10.184958 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-8q9q7" event={"ID":"577cff0c-0386-467f-8a44-314a922051e2","Type":"ContainerStarted","Data":"c80a4dbc743e625b6dabbb2ff6fa7ceb6058e883f747a0971302876b39b6f646"} Feb 03 12:07:10 crc kubenswrapper[4820]: I0203 12:07:10.188467 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2fjkt" event={"ID":"6b39f4b8-90e3-4d3b-a000-725d42cdb8dd","Type":"ContainerStarted","Data":"976cc647e6802bb824fedf111e773432f513933c697d4d73cb173df0b0f9a2a4"} Feb 03 12:07:10 crc kubenswrapper[4820]: I0203 12:07:10.192070 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-wfbd9" event={"ID":"d4ff9542-77a8-4e10-b4a0-8ab831c57b35","Type":"ContainerStarted","Data":"ad43a47e0ea71827216a222b80cdfec02ea8131864e3519463b0e9a0195c77fb"} Feb 03 12:07:10 crc kubenswrapper[4820]: I0203 12:07:10.193551 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-vmzb5" event={"ID":"d79a4fe5-aca8-4046-8760-2892f6e3dc7d","Type":"ContainerStarted","Data":"7277e5695e79c84c2f1718e53ac82970cadc18a38059fbd0eef065895bd8d71d"} Feb 03 12:07:10 crc kubenswrapper[4820]: I0203 12:07:10.195029 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-lc7k5" event={"ID":"7b253c2b-c072-449a-8a7a-915da7526653","Type":"ContainerStarted","Data":"ccc80ba76f45faf091896d05b3a398ab137757246ec01371d09bf2053c99c179"} Feb 03 12:07:10 crc kubenswrapper[4820]: I0203 12:07:10.198129 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-k7tp7" event={"ID":"18148611-705a-4276-a9e3-9659f38654a8","Type":"ContainerStarted","Data":"ebfd0fa3653b6988a41cc7f484ebe2383b5007d23eeeb3d1a38c5b03659ba938"} Feb 03 12:07:10 crc kubenswrapper[4820]: I0203 12:07:10.199536 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-b6ghj" event={"ID":"31277b5e-7869-4612-ba40-dcd0a37153fb","Type":"ContainerStarted","Data":"36f69f15c0e71736410dd5e9f8954d62034b40218022b715285df42cc3951261"} Feb 03 12:07:10 crc kubenswrapper[4820]: I0203 12:07:10.205525 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-526s7" event={"ID":"49dae199-6b32-4904-862e-aff8cb8c4946","Type":"ContainerStarted","Data":"113bd38b672806ba7610938758235cbcf1c0414ed11c0b64708c6654ad57fc28"} Feb 03 12:07:10 crc kubenswrapper[4820]: I0203 12:07:10.206411 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-9w662" event={"ID":"92dde085-8a2b-4c9f-947f-441ea67b8622","Type":"ContainerStarted","Data":"d2b318319736f3cb7e1c73b7e726851ac180e32612c04089e75bdbb329118fc5"} Feb 03 12:07:10 crc kubenswrapper[4820]: I0203 12:07:10.207150 4820 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29502000-hcscr" event={"ID":"b9d628ea-493d-4b0c-b4a2-194cef62a08e","Type":"ContainerStarted","Data":"59d4d969c3658fe59babb4548184a05b5bd3a27ff5982adc2781c89fb0dbeb94"} Feb 03 12:07:10 crc kubenswrapper[4820]: I0203 12:07:10.207805 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-vdn7t" event={"ID":"1b5d200f-9fc0-42f9-96e3-ebec60c47a05","Type":"ContainerStarted","Data":"aa9f83f95c0d1b3e7ae7ad98e85c53c27dc6e201415334f301ec71e8b7aadf0e"} Feb 03 12:07:10 crc kubenswrapper[4820]: I0203 12:07:10.208378 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-5kfzf" event={"ID":"b7587d06-772a-477c-9503-4af59b74f082","Type":"ContainerStarted","Data":"b551a928d5b5d2de7ba1fd914a5c4059bd6648c9c2d6e454e8152bb294decd7a"} Feb 03 12:07:10 crc kubenswrapper[4820]: I0203 12:07:10.210144 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-4gskq" Feb 03 12:07:10 crc kubenswrapper[4820]: I0203 12:07:10.210864 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-drcgk" event={"ID":"3677a387-0bac-4cea-9921-c93b14cd430e","Type":"ContainerStarted","Data":"d6b288233c0615f3147e17e447261214e8739fc40bca7aa8e39c2419ee4354e7"} Feb 03 12:07:10 crc kubenswrapper[4820]: I0203 12:07:10.212568 4820 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-4gskq container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.34:6443/healthz\": dial tcp 10.217.0.34:6443: connect: connection refused" start-of-body= Feb 03 12:07:10 crc kubenswrapper[4820]: I0203 12:07:10.212610 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-4gskq" podUID="0e90d586-2aaf-4f58-acc2-eba29dc0cd2e" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.34:6443/healthz\": dial tcp 10.217.0.34:6443: connect: connection refused" Feb 03 12:07:10 crc kubenswrapper[4820]: I0203 12:07:10.217393 4820 patch_prober.go:28] interesting pod/router-default-5444994796-h22tk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 03 12:07:10 crc kubenswrapper[4820]: [-]has-synced failed: reason withheld Feb 03 12:07:10 crc kubenswrapper[4820]: [+]process-running ok Feb 03 12:07:10 crc kubenswrapper[4820]: healthz check failed Feb 03 12:07:10 crc kubenswrapper[4820]: I0203 12:07:10.217460 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h22tk" podUID="a227a161-8e53-4817-b7b2-48206c4916fb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 03 12:07:10 crc kubenswrapper[4820]: I0203 12:07:10.247175 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2fjkt" podStartSLOduration=127.247154914 podStartE2EDuration="2m7.247154914s" podCreationTimestamp="2026-02-03 12:05:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 
12:07:10.243385665 +0000 UTC m=+147.766461519" watchObservedRunningTime="2026-02-03 12:07:10.247154914 +0000 UTC m=+147.770230778" Feb 03 12:07:10 crc kubenswrapper[4820]: I0203 12:07:10.251065 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-58ssn" event={"ID":"51e0f75d-f0eb-4e04-a1ef-da8f256a845d","Type":"ContainerStarted","Data":"4d8839100ce5e872fdda4a05009dd2f82ff2a0144516834ccb06ee2d236babe3"} Feb 03 12:07:10 crc kubenswrapper[4820]: I0203 12:07:10.251286 4820 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-lbsmw container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.10:8443/healthz\": dial tcp 10.217.0.10:8443: connect: connection refused" start-of-body= Feb 03 12:07:10 crc kubenswrapper[4820]: I0203 12:07:10.251336 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-lbsmw" podUID="c93c42c7-c9ff-42cc-b604-e36f7a063fcf" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.10:8443/healthz\": dial tcp 10.217.0.10:8443: connect: connection refused" Feb 03 12:07:10 crc kubenswrapper[4820]: I0203 12:07:10.258648 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 12:07:10 crc kubenswrapper[4820]: E0203 12:07:10.265817 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 12:07:10.765785268 +0000 UTC m=+148.288861132 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:10 crc kubenswrapper[4820]: I0203 12:07:10.274482 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qpxpv\" (UID: \"10fa9c2b-e370-400e-9e71-a4617592b411\") " pod="openshift-image-registry/image-registry-697d97f7c8-qpxpv" Feb 03 12:07:10 crc kubenswrapper[4820]: E0203 12:07:10.275003 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 12:07:10.774985966 +0000 UTC m=+148.298061830 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qpxpv" (UID: "10fa9c2b-e370-400e-9e71-a4617592b411") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:10 crc kubenswrapper[4820]: I0203 12:07:10.319223 4820 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-c7gsf container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.19:8443/healthz\": dial tcp 10.217.0.19:8443: connect: connection refused" start-of-body= Feb 03 12:07:10 crc kubenswrapper[4820]: I0203 12:07:10.319518 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-c7gsf" podUID="52996a75-b03e-40f5-a587-2c1476910cd4" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.19:8443/healthz\": dial tcp 10.217.0.19:8443: connect: connection refused" Feb 03 12:07:10 crc kubenswrapper[4820]: I0203 12:07:10.591497 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 12:07:10 crc kubenswrapper[4820]: E0203 12:07:10.592069 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 12:07:11.092049673 +0000 UTC m=+148.615125537 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:10 crc kubenswrapper[4820]: I0203 12:07:10.985282 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 12:07:10 crc kubenswrapper[4820]: I0203 12:07:10.994418 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qpxpv\" (UID: \"10fa9c2b-e370-400e-9e71-a4617592b411\") " pod="openshift-image-registry/image-registry-697d97f7c8-qpxpv" Feb 03 12:07:10 crc kubenswrapper[4820]: I0203 12:07:10.995246 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 12:07:10 crc kubenswrapper[4820]: E0203 12:07:10.998004 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 12:07:11.497978575 +0000 UTC m=+149.021054439 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qpxpv" (UID: "10fa9c2b-e370-400e-9e71-a4617592b411") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:10 crc kubenswrapper[4820]: I0203 12:07:10.999150 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 12:07:10 crc kubenswrapper[4820]: I0203 12:07:10.999492 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 12:07:11 crc kubenswrapper[4820]: I0203 12:07:11.008130 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 12:07:11 crc kubenswrapper[4820]: I0203 12:07:11.036835 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 12:07:11 crc kubenswrapper[4820]: I0203 12:07:11.050598 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 12:07:11 crc kubenswrapper[4820]: I0203 12:07:11.072555 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 12:07:11 crc kubenswrapper[4820]: I0203 12:07:11.075709 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-lc7k5" podStartSLOduration=10.075694774 podStartE2EDuration="10.075694774s" podCreationTimestamp="2026-02-03 12:07:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:07:10.997553024 +0000 UTC m=+148.520632378" 
watchObservedRunningTime="2026-02-03 12:07:11.075694774 +0000 UTC m=+148.598770638" Feb 03 12:07:11 crc kubenswrapper[4820]: I0203 12:07:11.100233 4820 patch_prober.go:28] interesting pod/router-default-5444994796-h22tk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 03 12:07:11 crc kubenswrapper[4820]: [-]has-synced failed: reason withheld Feb 03 12:07:11 crc kubenswrapper[4820]: [+]process-running ok Feb 03 12:07:11 crc kubenswrapper[4820]: healthz check failed Feb 03 12:07:11 crc kubenswrapper[4820]: I0203 12:07:11.156952 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h22tk" podUID="a227a161-8e53-4817-b7b2-48206c4916fb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 03 12:07:11 crc kubenswrapper[4820]: I0203 12:07:11.385499 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 12:07:11 crc kubenswrapper[4820]: I0203 12:07:11.389565 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 12:07:11 crc kubenswrapper[4820]: I0203 12:07:11.389832 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 12:07:11 crc kubenswrapper[4820]: I0203 12:07:11.390221 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 12:07:11 crc kubenswrapper[4820]: E0203 12:07:11.391218 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 12:07:11.891179572 +0000 UTC m=+149.414255436 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:11 crc kubenswrapper[4820]: I0203 12:07:11.411300 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-8vn7s" event={"ID":"1339ee72-a846-4147-b494-55ef92897378","Type":"ContainerStarted","Data":"17915de27ff49f02cbecad30120a4db75ac2403198f8bb63b2525f6aae5a6073"} Feb 03 12:07:11 crc kubenswrapper[4820]: I0203 12:07:11.494543 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qpxpv\" (UID: \"10fa9c2b-e370-400e-9e71-a4617592b411\") " pod="openshift-image-registry/image-registry-697d97f7c8-qpxpv" Feb 03 12:07:11 crc kubenswrapper[4820]: E0203 12:07:11.494996 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 12:07:11.994982264 +0000 UTC m=+149.518058128 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qpxpv" (UID: "10fa9c2b-e370-400e-9e71-a4617592b411") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:11 crc kubenswrapper[4820]: I0203 12:07:11.597099 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 12:07:11 crc kubenswrapper[4820]: E0203 12:07:11.600030 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 12:07:12.100006133 +0000 UTC m=+149.623081997 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:11 crc kubenswrapper[4820]: I0203 12:07:11.634034 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-4gskq" podStartSLOduration=128.634015552 podStartE2EDuration="2m8.634015552s" podCreationTimestamp="2026-02-03 12:05:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:07:11.075874418 +0000 UTC m=+148.598950282" watchObservedRunningTime="2026-02-03 12:07:11.634015552 +0000 UTC m=+149.157091416" Feb 03 12:07:11 crc kubenswrapper[4820]: I0203 12:07:11.636004 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-8vn7s" podStartSLOduration=128.635997319 podStartE2EDuration="2m8.635997319s" podCreationTimestamp="2026-02-03 12:05:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:07:11.633717716 +0000 UTC m=+149.156793580" watchObservedRunningTime="2026-02-03 12:07:11.635997319 +0000 UTC m=+149.159073183" Feb 03 12:07:11 crc kubenswrapper[4820]: I0203 12:07:11.646006 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-7csk9" event={"ID":"f05b6093-516e-4a63-b2cc-8d6e6e2b2e57","Type":"ContainerStarted","Data":"0bbc0bca5cff8db70c37fc88ccb1862390ca8ce141c55816ca2e857a23c08bb3"} Feb 03 12:07:11 crc kubenswrapper[4820]: I0203 12:07:11.698393 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qpxpv\" (UID: \"10fa9c2b-e370-400e-9e71-a4617592b411\") " pod="openshift-image-registry/image-registry-697d97f7c8-qpxpv" Feb 03 12:07:11 crc kubenswrapper[4820]: E0203 12:07:11.698752 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 12:07:12.198739902 +0000 UTC m=+149.721815766 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qpxpv" (UID: "10fa9c2b-e370-400e-9e71-a4617592b411") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:12 crc kubenswrapper[4820]: I0203 12:07:11.800343 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-7csk9" podStartSLOduration=128.800326731 podStartE2EDuration="2m8.800326731s" podCreationTimestamp="2026-02-03 12:05:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:07:11.799783097 +0000 UTC m=+149.322858961" watchObservedRunningTime="2026-02-03 12:07:11.800326731 +0000 UTC m=+149.323402585" Feb 03 12:07:12 crc kubenswrapper[4820]: I0203 12:07:11.803610 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8rb2x" event={"ID":"d45799eb-72ff-43ba-9ca3-4eb5d44bc3a5","Type":"ContainerStarted","Data":"c1be93fd4db693c5291f0781b450274d43079492ade1e9749d8e2cfb151eb8c4"} Feb 03 12:07:12 crc kubenswrapper[4820]: I0203 12:07:11.803936 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8rb2x" Feb 03 12:07:12 crc kubenswrapper[4820]: I0203 12:07:11.804669 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 12:07:12 crc kubenswrapper[4820]: E0203 12:07:11.808465 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 12:07:12.308449784 +0000 UTC m=+149.831525648 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:12 crc kubenswrapper[4820]: I0203 12:07:11.808626 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qpxpv\" (UID: \"10fa9c2b-e370-400e-9e71-a4617592b411\") " pod="openshift-image-registry/image-registry-697d97f7c8-qpxpv" Feb 03 12:07:12 crc kubenswrapper[4820]: E0203 12:07:11.810768 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-02-03 12:07:12.310761139 +0000 UTC m=+149.833837003 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qpxpv" (UID: "10fa9c2b-e370-400e-9e71-a4617592b411") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:12 crc kubenswrapper[4820]: I0203 12:07:11.864186 4820 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-8rb2x container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.24:8443/healthz\": dial tcp 10.217.0.24:8443: connect: connection refused" start-of-body= Feb 03 12:07:12 crc kubenswrapper[4820]: I0203 12:07:11.864228 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8rb2x" podUID="d45799eb-72ff-43ba-9ca3-4eb5d44bc3a5" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.24:8443/healthz\": dial tcp 10.217.0.24:8443: connect: connection refused" Feb 03 12:07:12 crc kubenswrapper[4820]: I0203 12:07:11.864413 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-4gskq" event={"ID":"0e90d586-2aaf-4f58-acc2-eba29dc0cd2e","Type":"ContainerStarted","Data":"5a7388e8edbab65f12d970d3a037227056ce428065902d4729915f4a4898299b"} Feb 03 12:07:12 crc kubenswrapper[4820]: I0203 12:07:11.865493 4820 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-4gskq container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.34:6443/healthz\": dial tcp 10.217.0.34:6443: connect: connection refused" start-of-body= Feb 03 12:07:12 crc kubenswrapper[4820]: I0203 12:07:11.865514 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-4gskq" podUID="0e90d586-2aaf-4f58-acc2-eba29dc0cd2e" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.34:6443/healthz\": dial tcp 10.217.0.34:6443: connect: connection refused" Feb 03 12:07:12 crc kubenswrapper[4820]: I0203 12:07:11.959338 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 12:07:12 crc kubenswrapper[4820]: E0203 12:07:11.959989 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 12:07:12.45996165 +0000 UTC m=+149.983037534 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:12 crc kubenswrapper[4820]: I0203 12:07:12.064035 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qpxpv\" (UID: \"10fa9c2b-e370-400e-9e71-a4617592b411\") " pod="openshift-image-registry/image-registry-697d97f7c8-qpxpv" Feb 03 12:07:12 crc kubenswrapper[4820]: E0203 12:07:12.066421 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 12:07:12.566410204 +0000 UTC m=+150.089486068 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qpxpv" (UID: "10fa9c2b-e370-400e-9e71-a4617592b411") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:12 crc kubenswrapper[4820]: I0203 12:07:12.112185 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-w4x94" event={"ID":"4d235feb-2891-4c16-b240-381a5810a0c7","Type":"ContainerStarted","Data":"441a59b7f879fb7f9f60b13364d927bc33c461126bff0edc97530224b68aa825"} Feb 03 12:07:12 crc kubenswrapper[4820]: I0203 12:07:12.112205 4820 patch_prober.go:28] interesting pod/router-default-5444994796-h22tk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 03 12:07:12 crc kubenswrapper[4820]: [-]has-synced failed: reason withheld Feb 03 12:07:12 crc kubenswrapper[4820]: [+]process-running ok Feb 03 12:07:12 crc kubenswrapper[4820]: healthz check failed Feb 03 12:07:12 crc kubenswrapper[4820]: I0203 12:07:12.112269 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h22tk" podUID="a227a161-8e53-4817-b7b2-48206c4916fb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 03 12:07:12 crc kubenswrapper[4820]: I0203 12:07:12.161534 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-gqwld" event={"ID":"8237a118-001c-483c-8810-d051f33d35eb","Type":"ContainerStarted","Data":"75134a63a7fdf91eb945161c91154dec60eae90374523369f1c63488c45d37f2"} Feb 03 12:07:12 crc kubenswrapper[4820]: I0203 12:07:12.162351 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8rb2x" podStartSLOduration=128.162337007 podStartE2EDuration="2m8.162337007s" 
podCreationTimestamp="2026-02-03 12:05:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:07:12.042758341 +0000 UTC m=+149.565834205" watchObservedRunningTime="2026-02-03 12:07:12.162337007 +0000 UTC m=+149.685412881" Feb 03 12:07:12 crc kubenswrapper[4820]: I0203 12:07:12.176804 4820 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-c7gsf container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.19:8443/healthz\": dial tcp 10.217.0.19:8443: connect: connection refused" start-of-body= Feb 03 12:07:12 crc kubenswrapper[4820]: I0203 12:07:12.176909 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-c7gsf" podUID="52996a75-b03e-40f5-a587-2c1476910cd4" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.19:8443/healthz\": dial tcp 10.217.0.19:8443: connect: connection refused" Feb 03 12:07:12 crc kubenswrapper[4820]: I0203 12:07:12.285697 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 12:07:12 crc kubenswrapper[4820]: E0203 12:07:12.286434 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 12:07:12.7864088 +0000 UTC m=+150.309484664 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:12 crc kubenswrapper[4820]: I0203 12:07:12.388617 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qpxpv\" (UID: \"10fa9c2b-e370-400e-9e71-a4617592b411\") " pod="openshift-image-registry/image-registry-697d97f7c8-qpxpv" Feb 03 12:07:12 crc kubenswrapper[4820]: E0203 12:07:12.391709 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 12:07:12.891695406 +0000 UTC m=+150.414771270 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qpxpv" (UID: "10fa9c2b-e370-400e-9e71-a4617592b411") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:12 crc kubenswrapper[4820]: I0203 12:07:12.491702 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 12:07:12 crc kubenswrapper[4820]: E0203 12:07:12.492259 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 12:07:12.992225099 +0000 UTC m=+150.515300963 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:12 crc kubenswrapper[4820]: I0203 12:07:12.535968 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-w4x94" podStartSLOduration=128.535936939 podStartE2EDuration="2m8.535936939s" podCreationTimestamp="2026-02-03 12:05:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:07:12.161508837 +0000 UTC m=+149.684584691" watchObservedRunningTime="2026-02-03 12:07:12.535936939 +0000 UTC m=+150.059015613" Feb 03 12:07:12 crc kubenswrapper[4820]: E0203 12:07:12.720664 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 12:07:13.220635725 +0000 UTC m=+150.743711589 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qpxpv" (UID: "10fa9c2b-e370-400e-9e71-a4617592b411") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:12 crc kubenswrapper[4820]: I0203 12:07:12.626443 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qpxpv\" (UID: \"10fa9c2b-e370-400e-9e71-a4617592b411\") " pod="openshift-image-registry/image-registry-697d97f7c8-qpxpv" Feb 03 12:07:12 crc kubenswrapper[4820]: I0203 12:07:12.817183 4820 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-lbsmw container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.10:8443/healthz\": dial tcp 10.217.0.10:8443: connect: connection refused" start-of-body= Feb 03 12:07:12 crc kubenswrapper[4820]: I0203 12:07:12.817247 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-lbsmw" podUID="c93c42c7-c9ff-42cc-b604-e36f7a063fcf" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.10:8443/healthz\": dial tcp 10.217.0.10:8443: connect: connection refused" Feb 03 12:07:12 crc kubenswrapper[4820]: I0203 12:07:12.817330 4820 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-lbsmw container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.10:8443/healthz\": dial tcp 10.217.0.10:8443: connect: connection refused" start-of-body= Feb 03 12:07:12 crc kubenswrapper[4820]: I0203 12:07:12.817343 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-lbsmw" podUID="c93c42c7-c9ff-42cc-b604-e36f7a063fcf" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.10:8443/healthz\": dial tcp 10.217.0.10:8443: connect: connection refused" Feb 03 12:07:12 crc kubenswrapper[4820]: I0203 12:07:12.832679 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 12:07:12 crc kubenswrapper[4820]: E0203 12:07:12.834049 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 12:07:13.334026703 +0000 UTC m=+150.857102567 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:12 crc kubenswrapper[4820]: I0203 12:07:12.945521 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qpxpv\" (UID: \"10fa9c2b-e370-400e-9e71-a4617592b411\") " pod="openshift-image-registry/image-registry-697d97f7c8-qpxpv" Feb 03 12:07:12 crc kubenswrapper[4820]: E0203 12:07:12.945917 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 12:07:13.445879696 +0000 UTC m=+150.968955560 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qpxpv" (UID: "10fa9c2b-e370-400e-9e71-a4617592b411") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:15 crc kubenswrapper[4820]: I0203 12:07:13.097585 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 12:07:15 crc kubenswrapper[4820]: E0203 12:07:13.097920 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 12:07:13.597899523 +0000 UTC m=+151.120975397 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:15 crc kubenswrapper[4820]: I0203 12:07:13.111320 4820 patch_prober.go:28] interesting pod/router-default-5444994796-h22tk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 03 12:07:15 crc kubenswrapper[4820]: [-]has-synced failed: reason withheld Feb 03 12:07:15 crc kubenswrapper[4820]: [+]process-running ok Feb 03 12:07:15 crc kubenswrapper[4820]: healthz check failed Feb 03 12:07:15 crc kubenswrapper[4820]: I0203 12:07:13.111380 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h22tk" podUID="a227a161-8e53-4817-b7b2-48206c4916fb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 03 12:07:15 crc kubenswrapper[4820]: I0203 12:07:13.324163 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qpxpv\" (UID: \"10fa9c2b-e370-400e-9e71-a4617592b411\") " pod="openshift-image-registry/image-registry-697d97f7c8-qpxpv" Feb 03 12:07:15 crc kubenswrapper[4820]: E0203 12:07:13.324872 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 12:07:13.824852435 +0000 UTC m=+151.347928289 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qpxpv" (UID: "10fa9c2b-e370-400e-9e71-a4617592b411") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:15 crc kubenswrapper[4820]: I0203 12:07:13.465886 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 12:07:15 crc kubenswrapper[4820]: E0203 12:07:13.466267 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 12:07:13.96624751 +0000 UTC m=+151.489323374 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:15 crc kubenswrapper[4820]: I0203 12:07:13.496399 4820 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-4gskq container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.34:6443/healthz\": dial tcp 10.217.0.34:6443: connect: connection refused" start-of-body= Feb 03 12:07:15 crc kubenswrapper[4820]: I0203 12:07:13.496443 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-4gskq" podUID="0e90d586-2aaf-4f58-acc2-eba29dc0cd2e" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.34:6443/healthz\": dial tcp 10.217.0.34:6443: connect: connection refused" Feb 03 12:07:15 crc kubenswrapper[4820]: I0203 12:07:13.496635 4820 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-8rb2x container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.24:8443/healthz\": dial tcp 10.217.0.24:8443: connect: connection refused" start-of-body= Feb 03 12:07:15 crc kubenswrapper[4820]: I0203 12:07:13.496653 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8rb2x" podUID="d45799eb-72ff-43ba-9ca3-4eb5d44bc3a5" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.24:8443/healthz\": dial tcp 10.217.0.24:8443: connect: connection refused" Feb 03 12:07:15 crc kubenswrapper[4820]: I0203 12:07:13.646943 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qpxpv\" (UID: \"10fa9c2b-e370-400e-9e71-a4617592b411\") " pod="openshift-image-registry/image-registry-697d97f7c8-qpxpv" Feb 03 12:07:15 crc kubenswrapper[4820]: E0203 12:07:13.647368 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 12:07:14.147353672 +0000 UTC m=+151.670429546 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qpxpv" (UID: "10fa9c2b-e370-400e-9e71-a4617592b411") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:15 crc kubenswrapper[4820]: I0203 12:07:13.648919 4820 patch_prober.go:28] interesting pod/downloads-7954f5f757-lnc22 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Feb 03 12:07:15 crc kubenswrapper[4820]: I0203 12:07:13.648951 4820 patch_prober.go:28] interesting pod/downloads-7954f5f757-lnc22 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Feb 03 12:07:15 crc kubenswrapper[4820]: I0203 12:07:13.648974 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-lnc22" podUID="876c5dc3-b775-45cc-94b6-4339735e9975" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Feb 03 12:07:15 crc kubenswrapper[4820]: I0203 12:07:13.648995 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-lnc22" podUID="876c5dc3-b775-45cc-94b6-4339735e9975" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Feb 03 12:07:15 crc kubenswrapper[4820]: I0203 12:07:13.800766 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-gqwld" podStartSLOduration=129.800731771 podStartE2EDuration="2m9.800731771s" podCreationTimestamp="2026-02-03 12:05:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:07:12.535379416 +0000 UTC m=+150.058455300" watchObservedRunningTime="2026-02-03 12:07:13.800731771 +0000 UTC m=+151.323807635" Feb 03 12:07:15 crc kubenswrapper[4820]: I0203 12:07:13.802218 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 12:07:15 crc kubenswrapper[4820]: E0203 12:07:13.802807 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 12:07:14.302788061 +0000 UTC m=+151.825863925 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:15 crc kubenswrapper[4820]: I0203 12:07:13.905766 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qpxpv\" (UID: \"10fa9c2b-e370-400e-9e71-a4617592b411\") " pod="openshift-image-registry/image-registry-697d97f7c8-qpxpv" Feb 03 12:07:15 crc kubenswrapper[4820]: E0203 12:07:13.949502 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 12:07:14.449485943 +0000 UTC m=+151.972561807 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qpxpv" (UID: "10fa9c2b-e370-400e-9e71-a4617592b411") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:15 crc kubenswrapper[4820]: I0203 12:07:14.119206 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 12:07:15 crc kubenswrapper[4820]: E0203 12:07:14.119703 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 12:07:14.619677773 +0000 UTC m=+152.142753637 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:15 crc kubenswrapper[4820]: I0203 12:07:14.129643 4820 patch_prober.go:28] interesting pod/router-default-5444994796-h22tk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 03 12:07:15 crc kubenswrapper[4820]: [-]has-synced failed: reason withheld Feb 03 12:07:15 crc kubenswrapper[4820]: [+]process-running ok Feb 03 12:07:15 crc kubenswrapper[4820]: healthz check failed Feb 03 12:07:15 crc kubenswrapper[4820]: I0203 12:07:14.129731 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h22tk" podUID="a227a161-8e53-4817-b7b2-48206c4916fb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 03 12:07:15 crc kubenswrapper[4820]: I0203 12:07:14.336228 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qpxpv\" (UID: \"10fa9c2b-e370-400e-9e71-a4617592b411\") " pod="openshift-image-registry/image-registry-697d97f7c8-qpxpv" Feb 03 12:07:15 crc kubenswrapper[4820]: E0203 12:07:14.336536 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 12:07:14.836523784 +0000 UTC m=+152.359599648 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qpxpv" (UID: "10fa9c2b-e370-400e-9e71-a4617592b411") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:15 crc kubenswrapper[4820]: I0203 12:07:14.341485 4820 patch_prober.go:28] interesting pod/apiserver-7bbb656c7d-b9krf container/oauth-apiserver namespace/openshift-oauth-apiserver: Startup probe status=failure output="Get \"https://10.217.0.14:8443/livez\": dial tcp 10.217.0.14:8443: connect: connection refused" start-of-body= Feb 03 12:07:15 crc kubenswrapper[4820]: I0203 12:07:14.341545 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b9krf" podUID="35ef1add-69b2-424c-b5ff-7f18b915eae1" containerName="oauth-apiserver" probeResult="failure" output="Get \"https://10.217.0.14:8443/livez\": dial tcp 10.217.0.14:8443: connect: connection refused" Feb 03 12:07:15 crc kubenswrapper[4820]: I0203 12:07:14.341642 4820 patch_prober.go:28] interesting pod/console-f9d7485db-tw2nt container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.15:8443/health\": dial tcp 10.217.0.15:8443: connect: connection refused" start-of-body= Feb 03 12:07:15 crc kubenswrapper[4820]: I0203 12:07:14.341662 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-tw2nt" podUID="b06753a3-652a-4acc-b294-3ccaa5b0cb99" containerName="console" probeResult="failure" output="Get \"https://10.217.0.15:8443/health\": dial tcp 10.217.0.15:8443: connect: connection refused" Feb 03 12:07:15 crc kubenswrapper[4820]: I0203 12:07:14.501325 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 12:07:15 crc kubenswrapper[4820]: E0203 12:07:14.501524 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 12:07:15.00149525 +0000 UTC m=+152.524571124 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:15 crc kubenswrapper[4820]: I0203 12:07:14.501657 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qpxpv\" (UID: \"10fa9c2b-e370-400e-9e71-a4617592b411\") " pod="openshift-image-registry/image-registry-697d97f7c8-qpxpv" Feb 03 12:07:15 crc kubenswrapper[4820]: E0203 12:07:14.502227 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 12:07:15.002216647 +0000 UTC m=+152.525292511 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qpxpv" (UID: "10fa9c2b-e370-400e-9e71-a4617592b411") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:15 crc kubenswrapper[4820]: I0203 12:07:14.519377 4820 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-8rb2x container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.24:8443/healthz\": dial tcp 10.217.0.24:8443: connect: connection refused" start-of-body= Feb 03 12:07:15 crc kubenswrapper[4820]: I0203 12:07:14.519454 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8rb2x" podUID="d45799eb-72ff-43ba-9ca3-4eb5d44bc3a5" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.24:8443/healthz\": dial tcp 10.217.0.24:8443: connect: connection refused" Feb 03 12:07:15 crc kubenswrapper[4820]: I0203 12:07:14.764552 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 12:07:15 crc kubenswrapper[4820]: E0203 12:07:14.765107 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 12:07:15.265092324 +0000 UTC m=+152.788168188 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:15 crc kubenswrapper[4820]: I0203 12:07:14.765304 4820 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-4gskq container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.34:6443/healthz\": dial tcp 10.217.0.34:6443: connect: connection refused" start-of-body= Feb 03 12:07:15 crc kubenswrapper[4820]: I0203 12:07:14.765328 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-4gskq" podUID="0e90d586-2aaf-4f58-acc2-eba29dc0cd2e" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.34:6443/healthz\": dial tcp 10.217.0.34:6443: connect: connection refused" Feb 03 12:07:15 crc kubenswrapper[4820]: I0203 12:07:14.765379 4820 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-c7gsf container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.19:8443/healthz\": dial tcp 10.217.0.19:8443: connect: connection refused" start-of-body= Feb 03 12:07:15 crc kubenswrapper[4820]: I0203 12:07:14.765391 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-c7gsf" podUID="52996a75-b03e-40f5-a587-2c1476910cd4" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.19:8443/healthz\": dial tcp 10.217.0.19:8443: connect: connection refused" Feb 03 12:07:15 crc kubenswrapper[4820]: I0203 12:07:14.765434 4820 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-8rb2x container/catalog-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.24:8443/healthz\": dial tcp 10.217.0.24:8443: connect: connection refused" start-of-body= Feb 03 12:07:15 crc kubenswrapper[4820]: I0203 12:07:14.765446 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8rb2x" podUID="d45799eb-72ff-43ba-9ca3-4eb5d44bc3a5" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.24:8443/healthz\": dial tcp 10.217.0.24:8443: connect: connection refused" Feb 03 12:07:15 crc kubenswrapper[4820]: I0203 12:07:14.765497 4820 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-8rb2x container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.24:8443/healthz\": dial tcp 10.217.0.24:8443: connect: connection refused" start-of-body= Feb 03 12:07:15 crc kubenswrapper[4820]: I0203 12:07:14.765507 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8rb2x" podUID="d45799eb-72ff-43ba-9ca3-4eb5d44bc3a5" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.24:8443/healthz\": dial tcp 10.217.0.24:8443: connect: connection refused" Feb 03 12:07:15 crc kubenswrapper[4820]: I0203 12:07:14.765733 4820 
patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-c7gsf container/olm-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.19:8443/healthz\": dial tcp 10.217.0.19:8443: connect: connection refused" start-of-body= Feb 03 12:07:15 crc kubenswrapper[4820]: I0203 12:07:14.765769 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-c7gsf" podUID="52996a75-b03e-40f5-a587-2c1476910cd4" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.19:8443/healthz\": dial tcp 10.217.0.19:8443: connect: connection refused" Feb 03 12:07:15 crc kubenswrapper[4820]: I0203 12:07:14.918038 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qpxpv\" (UID: \"10fa9c2b-e370-400e-9e71-a4617592b411\") " pod="openshift-image-registry/image-registry-697d97f7c8-qpxpv" Feb 03 12:07:15 crc kubenswrapper[4820]: E0203 12:07:15.014131 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 12:07:15.514113071 +0000 UTC m=+153.037188935 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qpxpv" (UID: "10fa9c2b-e370-400e-9e71-a4617592b411") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:15 crc kubenswrapper[4820]: I0203 12:07:15.019269 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 12:07:15 crc kubenswrapper[4820]: E0203 12:07:15.019762 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 12:07:15.519745925 +0000 UTC m=+153.042821789 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:15 crc kubenswrapper[4820]: I0203 12:07:15.150525 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qpxpv\" (UID: \"10fa9c2b-e370-400e-9e71-a4617592b411\") " pod="openshift-image-registry/image-registry-697d97f7c8-qpxpv" Feb 03 12:07:15 crc kubenswrapper[4820]: E0203 12:07:15.151710 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 12:07:15.651690735 +0000 UTC m=+153.174766599 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qpxpv" (UID: "10fa9c2b-e370-400e-9e71-a4617592b411") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:15 crc kubenswrapper[4820]: I0203 12:07:15.171862 4820 patch_prober.go:28] interesting pod/router-default-5444994796-h22tk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 03 12:07:15 crc kubenswrapper[4820]: [-]has-synced failed: reason withheld Feb 03 12:07:15 crc kubenswrapper[4820]: [+]process-running ok Feb 03 12:07:15 crc kubenswrapper[4820]: healthz check failed Feb 03 12:07:15 crc kubenswrapper[4820]: I0203 12:07:15.171924 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h22tk" podUID="a227a161-8e53-4817-b7b2-48206c4916fb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 03 12:07:15 crc kubenswrapper[4820]: E0203 12:07:15.390013 4820 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.231s" Feb 03 12:07:15 crc kubenswrapper[4820]: I0203 12:07:15.390047 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-h22tk" Feb 03 12:07:15 crc kubenswrapper[4820]: I0203 12:07:15.390063 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b9krf" Feb 03 12:07:15 crc kubenswrapper[4820]: I0203 12:07:15.396095 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 12:07:15 crc 
kubenswrapper[4820]: E0203 12:07:15.396361 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 12:07:15.896338928 +0000 UTC m=+153.419414792 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:15 crc kubenswrapper[4820]: I0203 12:07:15.510216 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b9krf" event={"ID":"35ef1add-69b2-424c-b5ff-7f18b915eae1","Type":"ContainerStarted","Data":"b32c810ea6a674e142bf432523c7df6327243e70e3d7b1b48118aa8ef5e4dcd7"} Feb 03 12:07:15 crc kubenswrapper[4820]: I0203 12:07:15.510526 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-kqqwj" event={"ID":"6aba4201-b6b2-4aed-adeb-513e9190efa9","Type":"ContainerStarted","Data":"d6e6bb07d9874772bb9b8ecc0e8f01cafd74a7431cee57cccba3e3defddbd5b8"} Feb 03 12:07:15 crc kubenswrapper[4820]: I0203 12:07:15.510551 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-hxjbf" event={"ID":"6b522a8e-f795-4cf1-adbb-899674a5e359","Type":"ContainerStarted","Data":"d04ec501a912fa143c94e7a5d00261d29f43ab62c644cafa4de5f75facc4518c"} Feb 03 12:07:15 crc kubenswrapper[4820]: I0203 12:07:15.510578 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b9krf" Feb 03 12:07:15 crc kubenswrapper[4820]: I0203 12:07:15.510597 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-tw2nt" Feb 03 12:07:15 crc kubenswrapper[4820]: I0203 12:07:15.510614 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-tw2nt" Feb 03 12:07:15 crc kubenswrapper[4820]: I0203 12:07:15.510651 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 03 12:07:15 crc kubenswrapper[4820]: I0203 12:07:15.511342 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 03 12:07:15 crc kubenswrapper[4820]: I0203 12:07:15.511443 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 03 12:07:15 crc kubenswrapper[4820]: I0203 12:07:15.513691 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qpxpv\" (UID: \"10fa9c2b-e370-400e-9e71-a4617592b411\") " pod="openshift-image-registry/image-registry-697d97f7c8-qpxpv" Feb 03 12:07:15 crc kubenswrapper[4820]: E0203 12:07:15.522255 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 12:07:16.022240335 +0000 UTC m=+153.545316199 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qpxpv" (UID: "10fa9c2b-e370-400e-9e71-a4617592b411") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:15 crc kubenswrapper[4820]: I0203 12:07:15.695981 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 12:07:15 crc kubenswrapper[4820]: E0203 12:07:15.696071 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 12:07:16.196049732 +0000 UTC m=+153.719125596 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:15 crc kubenswrapper[4820]: I0203 12:07:15.697157 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qpxpv\" (UID: \"10fa9c2b-e370-400e-9e71-a4617592b411\") " pod="openshift-image-registry/image-registry-697d97f7c8-qpxpv" Feb 03 12:07:15 crc kubenswrapper[4820]: I0203 12:07:15.697253 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7c9dee9e-4581-4a11-8f12-27a64b26cbf9-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"7c9dee9e-4581-4a11-8f12-27a64b26cbf9\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 03 12:07:15 crc kubenswrapper[4820]: I0203 12:07:15.697378 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7c9dee9e-4581-4a11-8f12-27a64b26cbf9-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"7c9dee9e-4581-4a11-8f12-27a64b26cbf9\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 03 12:07:15 crc kubenswrapper[4820]: E0203 12:07:15.701582 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 12:07:16.201569503 +0000 UTC m=+153.724645367 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qpxpv" (UID: "10fa9c2b-e370-400e-9e71-a4617592b411") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:15 crc kubenswrapper[4820]: I0203 12:07:15.821811 4820 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-lbsmw container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.10:8443/healthz\": dial tcp 10.217.0.10:8443: connect: connection refused" start-of-body= Feb 03 12:07:15 crc kubenswrapper[4820]: I0203 12:07:15.821860 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-lbsmw" podUID="c93c42c7-c9ff-42cc-b604-e36f7a063fcf" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.10:8443/healthz\": dial tcp 10.217.0.10:8443: connect: connection refused" Feb 03 12:07:15 crc kubenswrapper[4820]: I0203 12:07:15.822148 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 03 12:07:15 crc kubenswrapper[4820]: I0203 12:07:15.822340 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 03 12:07:15 crc kubenswrapper[4820]: I0203 12:07:15.823759 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 12:07:15 crc kubenswrapper[4820]: I0203 12:07:15.824100 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7c9dee9e-4581-4a11-8f12-27a64b26cbf9-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"7c9dee9e-4581-4a11-8f12-27a64b26cbf9\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 03 12:07:15 crc kubenswrapper[4820]: I0203 12:07:15.824136 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7c9dee9e-4581-4a11-8f12-27a64b26cbf9-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"7c9dee9e-4581-4a11-8f12-27a64b26cbf9\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 03 12:07:15 crc kubenswrapper[4820]: E0203 12:07:15.824720 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 12:07:16.324706084 +0000 UTC m=+153.847781938 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:15 crc kubenswrapper[4820]: I0203 12:07:15.824749 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7c9dee9e-4581-4a11-8f12-27a64b26cbf9-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"7c9dee9e-4581-4a11-8f12-27a64b26cbf9\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 03 12:07:15 crc kubenswrapper[4820]: I0203 12:07:15.825382 4820 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-lbsmw container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.10:8443/healthz\": dial tcp 10.217.0.10:8443: connect: connection refused" start-of-body= Feb 03 12:07:15 crc kubenswrapper[4820]: I0203 12:07:15.825402 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-lbsmw" podUID="c93c42c7-c9ff-42cc-b604-e36f7a063fcf" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.10:8443/healthz\": dial tcp 10.217.0.10:8443: connect: connection refused" Feb 03 12:07:15 crc kubenswrapper[4820]: I0203 12:07:15.825431 4820 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-config-operator/openshift-config-operator-7777fb866f-lbsmw" Feb 03 12:07:15 crc kubenswrapper[4820]: I0203 12:07:15.825833 4820 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="openshift-config-operator" containerStatusID={"Type":"cri-o","ID":"49a0cd07642e7cb199747437fa1a228fee76d4f3d23348f202c0e4834408659f"} pod="openshift-config-operator/openshift-config-operator-7777fb866f-lbsmw" containerMessage="Container openshift-config-operator failed liveness probe, will be restarted" Feb 03 12:07:15 crc kubenswrapper[4820]: I0203 12:07:15.826009 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-config-operator/openshift-config-operator-7777fb866f-lbsmw" podUID="c93c42c7-c9ff-42cc-b604-e36f7a063fcf" containerName="openshift-config-operator" containerID="cri-o://49a0cd07642e7cb199747437fa1a228fee76d4f3d23348f202c0e4834408659f" gracePeriod=30 Feb 03 12:07:15 crc kubenswrapper[4820]: I0203 12:07:15.880440 4820 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-lbsmw container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.10:8443/healthz\": dial tcp 10.217.0.10:8443: connect: connection refused" start-of-body= Feb 03 12:07:15 crc kubenswrapper[4820]: I0203 12:07:15.880523 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-lbsmw" podUID="c93c42c7-c9ff-42cc-b604-e36f7a063fcf" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.10:8443/healthz\": dial tcp 10.217.0.10:8443: connect: connection refused" Feb 03 12:07:16 crc kubenswrapper[4820]: I0203 12:07:15.986679 4820 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qpxpv\" (UID: \"10fa9c2b-e370-400e-9e71-a4617592b411\") " pod="openshift-image-registry/image-registry-697d97f7c8-qpxpv" Feb 03 12:07:16 crc kubenswrapper[4820]: E0203 12:07:16.140839 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 12:07:16.640803298 +0000 UTC m=+154.163879162 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qpxpv" (UID: "10fa9c2b-e370-400e-9e71-a4617592b411") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:16 crc kubenswrapper[4820]: I0203 12:07:16.141356 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 12:07:16 crc kubenswrapper[4820]: E0203 12:07:16.141833 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 12:07:16.641823951 +0000 UTC m=+154.164899815 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:16 crc kubenswrapper[4820]: I0203 12:07:16.278437 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qpxpv\" (UID: \"10fa9c2b-e370-400e-9e71-a4617592b411\") " pod="openshift-image-registry/image-registry-697d97f7c8-qpxpv" Feb 03 12:07:16 crc kubenswrapper[4820]: E0203 12:07:16.278954 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 12:07:16.778942375 +0000 UTC m=+154.302018239 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qpxpv" (UID: "10fa9c2b-e370-400e-9e71-a4617592b411") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:16 crc kubenswrapper[4820]: I0203 12:07:16.283168 4820 patch_prober.go:28] interesting pod/router-default-5444994796-h22tk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 03 12:07:16 crc kubenswrapper[4820]: [-]has-synced failed: reason withheld Feb 03 12:07:16 crc kubenswrapper[4820]: [+]process-running ok Feb 03 12:07:16 crc kubenswrapper[4820]: healthz check failed Feb 03 12:07:16 crc kubenswrapper[4820]: I0203 12:07:16.283250 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h22tk" podUID="a227a161-8e53-4817-b7b2-48206c4916fb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 03 12:07:16 crc kubenswrapper[4820]: I0203 12:07:16.410394 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 12:07:16 crc kubenswrapper[4820]: E0203 12:07:16.414786 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 12:07:16.913769205 +0000 UTC m=+154.436845069 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:16 crc kubenswrapper[4820]: I0203 12:07:16.416003 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qpxpv\" (UID: \"10fa9c2b-e370-400e-9e71-a4617592b411\") " pod="openshift-image-registry/image-registry-697d97f7c8-qpxpv" Feb 03 12:07:16 crc kubenswrapper[4820]: E0203 12:07:16.417178 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 12:07:16.917164845 +0000 UTC m=+154.440240709 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qpxpv" (UID: "10fa9c2b-e370-400e-9e71-a4617592b411") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:16 crc kubenswrapper[4820]: I0203 12:07:16.557986 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 12:07:16 crc kubenswrapper[4820]: E0203 12:07:16.558563 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 12:07:17.058538 +0000 UTC m=+154.581613864 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:16 crc kubenswrapper[4820]: I0203 12:07:16.868117 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qpxpv\" (UID: \"10fa9c2b-e370-400e-9e71-a4617592b411\") " pod="openshift-image-registry/image-registry-697d97f7c8-qpxpv" Feb 03 12:07:17 crc kubenswrapper[4820]: E0203 12:07:16.921947 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 12:07:17.421927129 +0000 UTC m=+154.945002993 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qpxpv" (UID: "10fa9c2b-e370-400e-9e71-a4617592b411") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:17 crc kubenswrapper[4820]: I0203 12:07:17.029072 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 12:07:17 crc kubenswrapper[4820]: E0203 12:07:17.029581 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 12:07:17.5295502 +0000 UTC m=+155.052626074 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:17 crc kubenswrapper[4820]: I0203 12:07:17.144662 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qpxpv\" (UID: \"10fa9c2b-e370-400e-9e71-a4617592b411\") " pod="openshift-image-registry/image-registry-697d97f7c8-qpxpv" Feb 03 12:07:17 crc kubenswrapper[4820]: E0203 12:07:17.145052 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 12:07:17.645039559 +0000 UTC m=+155.168115423 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qpxpv" (UID: "10fa9c2b-e370-400e-9e71-a4617592b411") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:17 crc kubenswrapper[4820]: I0203 12:07:17.420200 4820 patch_prober.go:28] interesting pod/router-default-5444994796-h22tk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 03 12:07:17 crc kubenswrapper[4820]: [-]has-synced failed: reason withheld Feb 03 12:07:17 crc kubenswrapper[4820]: [+]process-running ok Feb 03 12:07:17 crc kubenswrapper[4820]: healthz check failed Feb 03 12:07:17 crc kubenswrapper[4820]: I0203 12:07:17.420245 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h22tk" podUID="a227a161-8e53-4817-b7b2-48206c4916fb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 03 12:07:17 crc kubenswrapper[4820]: I0203 12:07:17.420801 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 12:07:17 crc kubenswrapper[4820]: E0203 12:07:17.421586 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 12:07:17.92156183 +0000 UTC m=+155.444637694 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:17 crc kubenswrapper[4820]: I0203 12:07:17.522757 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qpxpv\" (UID: \"10fa9c2b-e370-400e-9e71-a4617592b411\") " pod="openshift-image-registry/image-registry-697d97f7c8-qpxpv" Feb 03 12:07:17 crc kubenswrapper[4820]: E0203 12:07:17.523129 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 12:07:18.023114618 +0000 UTC m=+155.546190482 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qpxpv" (UID: "10fa9c2b-e370-400e-9e71-a4617592b411") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:17 crc kubenswrapper[4820]: I0203 12:07:17.538338 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7c9dee9e-4581-4a11-8f12-27a64b26cbf9-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"7c9dee9e-4581-4a11-8f12-27a64b26cbf9\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 03 12:07:18 crc kubenswrapper[4820]: I0203 12:07:17.625585 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 12:07:18 crc kubenswrapper[4820]: E0203 12:07:17.626646 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 12:07:18.126621211 +0000 UTC m=+155.649697075 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:18 crc kubenswrapper[4820]: I0203 12:07:17.777490 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qpxpv\" (UID: \"10fa9c2b-e370-400e-9e71-a4617592b411\") " pod="openshift-image-registry/image-registry-697d97f7c8-qpxpv" Feb 03 12:07:18 crc kubenswrapper[4820]: E0203 12:07:17.777871 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 12:07:18.277859271 +0000 UTC m=+155.800935135 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qpxpv" (UID: "10fa9c2b-e370-400e-9e71-a4617592b411") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:18 crc kubenswrapper[4820]: I0203 12:07:18.181997 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 12:07:18 crc kubenswrapper[4820]: E0203 12:07:18.183188 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 12:07:18.683169357 +0000 UTC m=+156.206245221 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:18 crc kubenswrapper[4820]: I0203 12:07:18.192430 4820 patch_prober.go:28] interesting pod/router-default-5444994796-h22tk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 03 12:07:18 crc kubenswrapper[4820]: [-]has-synced failed: reason withheld Feb 03 12:07:18 crc kubenswrapper[4820]: [+]process-running ok Feb 03 12:07:18 crc kubenswrapper[4820]: healthz check failed Feb 03 12:07:18 crc kubenswrapper[4820]: I0203 12:07:18.192470 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h22tk" podUID="a227a161-8e53-4817-b7b2-48206c4916fb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 03 12:07:18 crc kubenswrapper[4820]: I0203 12:07:18.312942 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qpxpv\" (UID: \"10fa9c2b-e370-400e-9e71-a4617592b411\") " pod="openshift-image-registry/image-registry-697d97f7c8-qpxpv" Feb 03 12:07:18 crc kubenswrapper[4820]: E0203 12:07:18.313338 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 12:07:18.813322315 +0000 UTC m=+156.336398179 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qpxpv" (UID: "10fa9c2b-e370-400e-9e71-a4617592b411") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:18 crc kubenswrapper[4820]: I0203 12:07:18.518837 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 12:07:18 crc kubenswrapper[4820]: E0203 12:07:18.518988 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 12:07:19.018959379 +0000 UTC m=+156.542035233 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:18 crc kubenswrapper[4820]: I0203 12:07:18.519101 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qpxpv\" (UID: \"10fa9c2b-e370-400e-9e71-a4617592b411\") " pod="openshift-image-registry/image-registry-697d97f7c8-qpxpv" Feb 03 12:07:18 crc kubenswrapper[4820]: E0203 12:07:18.519459 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 12:07:19.019446101 +0000 UTC m=+156.542521965 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qpxpv" (UID: "10fa9c2b-e370-400e-9e71-a4617592b411") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:18 crc kubenswrapper[4820]: I0203 12:07:18.620464 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 12:07:18 crc kubenswrapper[4820]: E0203 12:07:18.620656 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 12:07:19.120624069 +0000 UTC m=+156.643699923 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:18 crc kubenswrapper[4820]: I0203 12:07:18.620847 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qpxpv\" (UID: \"10fa9c2b-e370-400e-9e71-a4617592b411\") " pod="openshift-image-registry/image-registry-697d97f7c8-qpxpv" Feb 03 12:07:18 crc kubenswrapper[4820]: E0203 12:07:18.621223 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 12:07:19.121208433 +0000 UTC m=+156.644284297 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qpxpv" (UID: "10fa9c2b-e370-400e-9e71-a4617592b411") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:18 crc kubenswrapper[4820]: I0203 12:07:18.707695 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-7777fb866f-lbsmw_c93c42c7-c9ff-42cc-b604-e36f7a063fcf/openshift-config-operator/0.log" Feb 03 12:07:18 crc kubenswrapper[4820]: I0203 12:07:18.708122 4820 generic.go:334] "Generic (PLEG): container finished" podID="c93c42c7-c9ff-42cc-b604-e36f7a063fcf" containerID="49a0cd07642e7cb199747437fa1a228fee76d4f3d23348f202c0e4834408659f" exitCode=255 Feb 03 12:07:18 crc kubenswrapper[4820]: I0203 12:07:18.708165 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-lbsmw" event={"ID":"c93c42c7-c9ff-42cc-b604-e36f7a063fcf","Type":"ContainerDied","Data":"49a0cd07642e7cb199747437fa1a228fee76d4f3d23348f202c0e4834408659f"} Feb 03 12:07:18 crc kubenswrapper[4820]: I0203 12:07:18.709467 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-vmzb5" event={"ID":"d79a4fe5-aca8-4046-8760-2892f6e3dc7d","Type":"ContainerStarted","Data":"ff727d3dde0c6b4aa157f0466857901938e5095891f06d51fcd54236f31f50a1"} Feb 03 12:07:18 crc kubenswrapper[4820]: I0203 12:07:18.710966 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-drcgk" event={"ID":"3677a387-0bac-4cea-9921-c93b14cd430e","Type":"ContainerStarted","Data":"a2635f0e31eaf64abee0644e6cd04cf1ee974798faeb763697f9fd70a6c29a66"} Feb 03 12:07:18 crc kubenswrapper[4820]: I0203 12:07:18.712098 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-5kfzf" event={"ID":"b7587d06-772a-477c-9503-4af59b74f082","Type":"ContainerStarted","Data":"c2348f8627dc343dffd5e038301f8129f7741db372f433b280a20a103818ba53"} Feb 03 12:07:18 crc kubenswrapper[4820]: I0203 12:07:18.714587 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-z7vmj" event={"ID":"4acfe638-6e10-4d68-9cfa-3d1e1d4c1052","Type":"ContainerStarted","Data":"1c8af6acb1ba361fd50a6315cdb5d368f1be8f55f938ae30d672bd00dc98ca9d"} Feb 03 12:07:18 crc kubenswrapper[4820]: I0203 12:07:18.716042 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-9w662" event={"ID":"92dde085-8a2b-4c9f-947f-441ea67b8622","Type":"ContainerStarted","Data":"67b885316ec9f5e784fc1adc076ae1f874aad7366377cb7270df56b6acafe0e1"} Feb 03 12:07:18 crc kubenswrapper[4820]: I0203 12:07:18.716283 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-9w662" Feb 03 12:07:18 crc kubenswrapper[4820]: I0203 12:07:18.729369 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 12:07:18 crc kubenswrapper[4820]: E0203 12:07:18.729507 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 12:07:19.229489631 +0000 UTC m=+156.752565495 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:18 crc kubenswrapper[4820]: I0203 12:07:18.730106 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qpxpv\" (UID: \"10fa9c2b-e370-400e-9e71-a4617592b411\") " pod="openshift-image-registry/image-registry-697d97f7c8-qpxpv" Feb 03 12:07:18 crc kubenswrapper[4820]: E0203 12:07:18.732806 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 12:07:19.232795839 +0000 UTC m=+156.755871713 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qpxpv" (UID: "10fa9c2b-e370-400e-9e71-a4617592b411") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:18 crc kubenswrapper[4820]: I0203 12:07:18.781547 4820 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-9w662 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.36:8080/healthz\": dial tcp 10.217.0.36:8080: connect: connection refused" start-of-body= Feb 03 12:07:18 crc kubenswrapper[4820]: I0203 12:07:18.781601 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-9w662" podUID="92dde085-8a2b-4c9f-947f-441ea67b8622" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.36:8080/healthz\": dial tcp 10.217.0.36:8080: connect: connection refused" Feb 03 12:07:18 crc kubenswrapper[4820]: I0203 12:07:18.813840 4820 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-lbsmw container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.10:8443/healthz\": dial tcp 10.217.0.10:8443: connect: connection refused" start-of-body= Feb 03 12:07:18 crc kubenswrapper[4820]: I0203 12:07:18.813923 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-lbsmw" 
podUID="c93c42c7-c9ff-42cc-b604-e36f7a063fcf" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.10:8443/healthz\": dial tcp 10.217.0.10:8443: connect: connection refused" Feb 03 12:07:18 crc kubenswrapper[4820]: I0203 12:07:18.831417 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 12:07:18 crc kubenswrapper[4820]: E0203 12:07:18.831963 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 12:07:19.331935549 +0000 UTC m=+156.855011463 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:19 crc kubenswrapper[4820]: I0203 12:07:18.936193 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qpxpv\" (UID: \"10fa9c2b-e370-400e-9e71-a4617592b411\") " pod="openshift-image-registry/image-registry-697d97f7c8-qpxpv" Feb 03 12:07:19 crc kubenswrapper[4820]: E0203 12:07:18.936535 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 12:07:19.436523038 +0000 UTC m=+156.959598902 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qpxpv" (UID: "10fa9c2b-e370-400e-9e71-a4617592b411") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:19 crc kubenswrapper[4820]: I0203 12:07:19.138308 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 12:07:19 crc kubenswrapper[4820]: E0203 12:07:19.138907 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 12:07:19.638875504 +0000 UTC m=+157.161951368 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:19 crc kubenswrapper[4820]: I0203 12:07:19.149454 4820 patch_prober.go:28] interesting pod/router-default-5444994796-h22tk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 03 12:07:19 crc kubenswrapper[4820]: [-]has-synced failed: reason withheld Feb 03 12:07:19 crc kubenswrapper[4820]: [+]process-running ok Feb 03 12:07:19 crc kubenswrapper[4820]: healthz check failed Feb 03 12:07:19 crc kubenswrapper[4820]: I0203 12:07:19.149504 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h22tk" podUID="a227a161-8e53-4817-b7b2-48206c4916fb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 03 12:07:19 crc kubenswrapper[4820]: I0203 12:07:19.199126 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 03 12:07:19 crc kubenswrapper[4820]: I0203 12:07:19.259396 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qpxpv\" (UID: \"10fa9c2b-e370-400e-9e71-a4617592b411\") " pod="openshift-image-registry/image-registry-697d97f7c8-qpxpv" Feb 03 12:07:19 crc kubenswrapper[4820]: E0203 12:07:19.259840 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 12:07:19.759826572 +0000 UTC m=+157.282902446 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qpxpv" (UID: "10fa9c2b-e370-400e-9e71-a4617592b411") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:19 crc kubenswrapper[4820]: I0203 12:07:19.302062 4820 patch_prober.go:28] interesting pod/apiserver-7bbb656c7d-b9krf container/oauth-apiserver namespace/openshift-oauth-apiserver: Startup probe status=failure output="Get \"https://10.217.0.14:8443/livez\": dial tcp 10.217.0.14:8443: connect: connection refused" start-of-body= Feb 03 12:07:19 crc kubenswrapper[4820]: I0203 12:07:19.302422 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b9krf" podUID="35ef1add-69b2-424c-b5ff-7f18b915eae1" containerName="oauth-apiserver" probeResult="failure" output="Get \"https://10.217.0.14:8443/livez\": dial tcp 10.217.0.14:8443: connect: connection refused" Feb 03 12:07:19 crc kubenswrapper[4820]: I0203 12:07:19.408103 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 12:07:19 crc kubenswrapper[4820]: E0203 12:07:19.408386 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 12:07:19.908360508 +0000 UTC m=+157.431436372 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:19 crc kubenswrapper[4820]: I0203 12:07:19.509750 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qpxpv\" (UID: \"10fa9c2b-e370-400e-9e71-a4617592b411\") " pod="openshift-image-registry/image-registry-697d97f7c8-qpxpv" Feb 03 12:07:19 crc kubenswrapper[4820]: E0203 12:07:19.510304 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 12:07:20.010289774 +0000 UTC m=+157.533365628 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qpxpv" (UID: "10fa9c2b-e370-400e-9e71-a4617592b411") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:19 crc kubenswrapper[4820]: I0203 12:07:19.576467 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 03 12:07:19 crc kubenswrapper[4820]: I0203 12:07:19.577487 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 03 12:07:19 crc kubenswrapper[4820]: I0203 12:07:19.608694 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 03 12:07:19 crc kubenswrapper[4820]: I0203 12:07:19.620396 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 12:07:19 crc kubenswrapper[4820]: I0203 12:07:19.620657 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9d434cee-7d3a-4492-b5f9-071b2527ac8a-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"9d434cee-7d3a-4492-b5f9-071b2527ac8a\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 03 12:07:19 crc kubenswrapper[4820]: I0203 12:07:19.620690 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9d434cee-7d3a-4492-b5f9-071b2527ac8a-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"9d434cee-7d3a-4492-b5f9-071b2527ac8a\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 03 12:07:19 crc kubenswrapper[4820]: E0203 12:07:19.620978 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 12:07:20.120937267 +0000 UTC m=+157.644013131 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:19 crc kubenswrapper[4820]: I0203 12:07:19.726172 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9d434cee-7d3a-4492-b5f9-071b2527ac8a-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"9d434cee-7d3a-4492-b5f9-071b2527ac8a\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 03 12:07:19 crc kubenswrapper[4820]: I0203 12:07:19.726222 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9d434cee-7d3a-4492-b5f9-071b2527ac8a-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"9d434cee-7d3a-4492-b5f9-071b2527ac8a\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 03 12:07:19 crc kubenswrapper[4820]: I0203 12:07:19.726283 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qpxpv\" (UID: \"10fa9c2b-e370-400e-9e71-a4617592b411\") " pod="openshift-image-registry/image-registry-697d97f7c8-qpxpv" Feb 03 12:07:19 crc kubenswrapper[4820]: I0203 12:07:19.726556 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9d434cee-7d3a-4492-b5f9-071b2527ac8a-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"9d434cee-7d3a-4492-b5f9-071b2527ac8a\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 03 12:07:19 crc kubenswrapper[4820]: E0203 12:07:19.726720 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 12:07:20.226703805 +0000 UTC m=+157.749779669 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qpxpv" (UID: "10fa9c2b-e370-400e-9e71-a4617592b411") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:19 crc kubenswrapper[4820]: I0203 12:07:19.894476 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 12:07:19 crc kubenswrapper[4820]: E0203 12:07:19.894801 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 12:07:20.394777545 +0000 UTC m=+157.917853409 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:19 crc kubenswrapper[4820]: I0203 12:07:19.894850 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qpxpv\" (UID: \"10fa9c2b-e370-400e-9e71-a4617592b411\") " pod="openshift-image-registry/image-registry-697d97f7c8-qpxpv" Feb 03 12:07:19 crc kubenswrapper[4820]: E0203 12:07:19.895226 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 12:07:20.395209045 +0000 UTC m=+157.918284909 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qpxpv" (UID: "10fa9c2b-e370-400e-9e71-a4617592b411") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:19 crc kubenswrapper[4820]: I0203 12:07:19.919271 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Feb 03 12:07:19 crc kubenswrapper[4820]: I0203 12:07:19.920721 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Feb 03 12:07:20 crc kubenswrapper[4820]: I0203 12:07:20.000047 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 12:07:20 crc kubenswrapper[4820]: E0203 12:07:20.000511 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 12:07:20.50047376 +0000 UTC m=+158.023553704 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:20 crc kubenswrapper[4820]: I0203 12:07:20.064558 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-526s7" event={"ID":"49dae199-6b32-4904-862e-aff8cb8c4946","Type":"ContainerStarted","Data":"f0fd63151413e1352259ef47bacdb4126e08e0001c667af47eca9601a62d71d9"} Feb 03 12:07:20 crc kubenswrapper[4820]: I0203 12:07:20.088412 4820 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-9w662 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.36:8080/healthz\": dial tcp 10.217.0.36:8080: connect: connection refused" start-of-body= Feb 03 12:07:20 crc kubenswrapper[4820]: I0203 12:07:20.088467 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-9w662" podUID="92dde085-8a2b-4c9f-947f-441ea67b8622" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.36:8080/healthz\": dial tcp 10.217.0.36:8080: connect: connection refused" Feb 03 12:07:20 crc kubenswrapper[4820]: I0203 12:07:20.099156 4820 patch_prober.go:28] interesting pod/router-default-5444994796-h22tk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 03 12:07:20 crc kubenswrapper[4820]: [-]has-synced failed: reason withheld Feb 03 
12:07:20 crc kubenswrapper[4820]: [+]process-running ok Feb 03 12:07:20 crc kubenswrapper[4820]: healthz check failed Feb 03 12:07:20 crc kubenswrapper[4820]: I0203 12:07:20.099236 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h22tk" podUID="a227a161-8e53-4817-b7b2-48206c4916fb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 03 12:07:20 crc kubenswrapper[4820]: I0203 12:07:20.101255 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qpxpv\" (UID: \"10fa9c2b-e370-400e-9e71-a4617592b411\") " pod="openshift-image-registry/image-registry-697d97f7c8-qpxpv" Feb 03 12:07:20 crc kubenswrapper[4820]: I0203 12:07:20.103124 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9d434cee-7d3a-4492-b5f9-071b2527ac8a-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"9d434cee-7d3a-4492-b5f9-071b2527ac8a\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 03 12:07:20 crc kubenswrapper[4820]: E0203 12:07:20.103945 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 12:07:20.603929263 +0000 UTC m=+158.127005197 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qpxpv" (UID: "10fa9c2b-e370-400e-9e71-a4617592b411") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:20 crc kubenswrapper[4820]: W0203 12:07:20.166180 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b6479f0_333b_4a96_9adf_2099afdc2447.slice/crio-d4461dfb4b2c63f2886415b482753ced0a9fc26795d9a9335d1feeff9b74261b WatchSource:0}: Error finding container d4461dfb4b2c63f2886415b482753ced0a9fc26795d9a9335d1feeff9b74261b: Status 404 returned error can't find the container with id d4461dfb4b2c63f2886415b482753ced0a9fc26795d9a9335d1feeff9b74261b Feb 03 12:07:20 crc kubenswrapper[4820]: I0203 12:07:20.171931 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 03 12:07:20 crc kubenswrapper[4820]: I0203 12:07:20.254226 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 12:07:20 crc kubenswrapper[4820]: E0203 12:07:20.254537 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
Feb 03 12:07:20 crc kubenswrapper[4820]: E0203 12:07:20.254537 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 12:07:20.754495307 +0000 UTC m=+158.277571181 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 03 12:07:20 crc kubenswrapper[4820]: I0203 12:07:20.254707 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qpxpv\" (UID: \"10fa9c2b-e370-400e-9e71-a4617592b411\") " pod="openshift-image-registry/image-registry-697d97f7c8-qpxpv"
Feb 03 12:07:20 crc kubenswrapper[4820]: E0203 12:07:20.255319 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 12:07:20.755303366 +0000 UTC m=+158.278379230 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qpxpv" (UID: "10fa9c2b-e370-400e-9e71-a4617592b411") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 03 12:07:20 crc kubenswrapper[4820]: I0203 12:07:20.356567 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 03 12:07:20 crc kubenswrapper[4820]: E0203 12:07:20.357023 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 12:07:20.857004177 +0000 UTC m=+158.380080041 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 03 12:07:20 crc kubenswrapper[4820]: I0203 12:07:20.467260 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qpxpv\" (UID: \"10fa9c2b-e370-400e-9e71-a4617592b411\") " pod="openshift-image-registry/image-registry-697d97f7c8-qpxpv"
Feb 03 12:07:20 crc kubenswrapper[4820]: E0203 12:07:20.467726 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 12:07:20.967713592 +0000 UTC m=+158.490789456 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qpxpv" (UID: "10fa9c2b-e370-400e-9e71-a4617592b411") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 03 12:07:20 crc kubenswrapper[4820]: I0203 12:07:20.638427 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 03 12:07:20 crc kubenswrapper[4820]: E0203 12:07:20.638732 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 12:07:21.138718061 +0000 UTC m=+158.661793925 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 03 12:07:20 crc kubenswrapper[4820]: I0203 12:07:20.835794 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qpxpv\" (UID: \"10fa9c2b-e370-400e-9e71-a4617592b411\") " pod="openshift-image-registry/image-registry-697d97f7c8-qpxpv"
Feb 03 12:07:20 crc kubenswrapper[4820]: E0203 12:07:20.836495 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 12:07:21.336484068 +0000 UTC m=+158.859559952 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qpxpv" (UID: "10fa9c2b-e370-400e-9e71-a4617592b411") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 03 12:07:20 crc kubenswrapper[4820]: W0203 12:07:20.925993 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d751cbb_f2e2_430d_9754_c882a5e924a5.slice/crio-b498f745949fa95a951915ccbad9ae991015999492ef094ed47ddd24acf911cc WatchSource:0}: Error finding container b498f745949fa95a951915ccbad9ae991015999492ef094ed47ddd24acf911cc: Status 404 returned error can't find the container with id b498f745949fa95a951915ccbad9ae991015999492ef094ed47ddd24acf911cc
Feb 03 12:07:20 crc kubenswrapper[4820]: I0203 12:07:20.971056 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 03 12:07:20 crc kubenswrapper[4820]: E0203 12:07:20.971196 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 12:07:21.471166475 +0000 UTC m=+158.994242339 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 03 12:07:20 crc kubenswrapper[4820]: I0203 12:07:20.971524 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qpxpv\" (UID: \"10fa9c2b-e370-400e-9e71-a4617592b411\") " pod="openshift-image-registry/image-registry-697d97f7c8-qpxpv"
Feb 03 12:07:20 crc kubenswrapper[4820]: E0203 12:07:20.971971 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 12:07:21.471953643 +0000 UTC m=+158.995029507 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qpxpv" (UID: "10fa9c2b-e370-400e-9e71-a4617592b411") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 03 12:07:21 crc kubenswrapper[4820]: I0203 12:07:21.334518 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 03 12:07:21 crc kubenswrapper[4820]: E0203 12:07:21.335268 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 12:07:21.8352489 +0000 UTC m=+159.358324764 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 03 12:07:21 crc kubenswrapper[4820]: I0203 12:07:21.379098 4820 patch_prober.go:28] interesting pod/router-default-5444994796-h22tk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 03 12:07:21 crc kubenswrapper[4820]: [-]has-synced failed: reason withheld
Feb 03 12:07:21 crc kubenswrapper[4820]: [+]process-running ok
Feb 03 12:07:21 crc kubenswrapper[4820]: healthz check failed
Feb 03 12:07:21 crc kubenswrapper[4820]: I0203 12:07:21.379145 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h22tk" podUID="a227a161-8e53-4817-b7b2-48206c4916fb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 03 12:07:21 crc kubenswrapper[4820]: I0203 12:07:21.506812 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qpxpv\" (UID: \"10fa9c2b-e370-400e-9e71-a4617592b411\") " pod="openshift-image-registry/image-registry-697d97f7c8-qpxpv"
Feb 03 12:07:21 crc kubenswrapper[4820]: E0203 12:07:21.507287 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 12:07:22.007268843 +0000 UTC m=+159.530344707 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qpxpv" (UID: "10fa9c2b-e370-400e-9e71-a4617592b411") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 03 12:07:21 crc kubenswrapper[4820]: I0203 12:07:21.607676 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 03 12:07:21 crc kubenswrapper[4820]: E0203 12:07:21.608022 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 12:07:22.108003921 +0000 UTC m=+159.631079785 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 03 12:07:21 crc kubenswrapper[4820]: I0203 12:07:21.622751 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-58ssn" event={"ID":"51e0f75d-f0eb-4e04-a1ef-da8f256a845d","Type":"ContainerStarted","Data":"729df0d8a27d05f106b2c33372602d5b4e9c4d438e532b2adf07bc6c1084ab4a"}
Feb 03 12:07:21 crc kubenswrapper[4820]: I0203 12:07:21.622798 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-vdn7t" event={"ID":"1b5d200f-9fc0-42f9-96e3-ebec60c47a05","Type":"ContainerStarted","Data":"13fd90f5e57c7b3840c5c0c4872cbaa9fdd005f3898c900bbbe94a968405afbe"}
Feb 03 12:07:21 crc kubenswrapper[4820]: I0203 12:07:21.628104 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-z7vmj" event={"ID":"4acfe638-6e10-4d68-9cfa-3d1e1d4c1052","Type":"ContainerStarted","Data":"fba97b64b951a50b025534185df280175f9453f3cff7d8e90805027851a74a2b"}
Feb 03 12:07:21 crc kubenswrapper[4820]: I0203 12:07:21.719534 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qpxpv\" (UID: \"10fa9c2b-e370-400e-9e71-a4617592b411\") " pod="openshift-image-registry/image-registry-697d97f7c8-qpxpv"
Feb 03 12:07:21 crc kubenswrapper[4820]: E0203 12:07:21.719871 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 12:07:22.219860444 +0000 UTC m=+159.742936308 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qpxpv" (UID: "10fa9c2b-e370-400e-9e71-a4617592b411") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 03 12:07:21 crc kubenswrapper[4820]: I0203 12:07:21.780705 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29502000-hcscr" event={"ID":"b9d628ea-493d-4b0c-b4a2-194cef62a08e","Type":"ContainerStarted","Data":"5ac08a8a154c895b89f3ef82fe7d81c2f3220d2db7f95ad75058c6645d9c383f"}
Feb 03 12:07:21 crc kubenswrapper[4820]: I0203 12:07:21.785969 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-wsjsc" event={"ID":"b460558b-ba3e-4543-bb57-debddb0711e7","Type":"ContainerStarted","Data":"242162253c2d0868d26ceafb54debba0ae74ddc730a13280436dd737d1d8e2d6"}
Feb 03 12:07:21 crc kubenswrapper[4820]: I0203 12:07:21.791012 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"b498f745949fa95a951915ccbad9ae991015999492ef094ed47ddd24acf911cc"}
Feb 03 12:07:21 crc kubenswrapper[4820]: I0203 12:07:21.795392 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"45af824d296f88b3ef7efa5a11425f8721e774e76b9ce4b1bf271ec9e06cb764"}
Feb 03 12:07:21 crc kubenswrapper[4820]: I0203 12:07:21.799232 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-7777fb866f-lbsmw_c93c42c7-c9ff-42cc-b604-e36f7a063fcf/openshift-config-operator/0.log"
Feb 03 12:07:21 crc kubenswrapper[4820]: I0203 12:07:21.799508 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-lbsmw" event={"ID":"c93c42c7-c9ff-42cc-b604-e36f7a063fcf","Type":"ContainerStarted","Data":"fb244ac86e32f2b530c3242096449b7c2ff02c0518322550f6cfe6f844d2b605"}
Feb 03 12:07:21 crc kubenswrapper[4820]: I0203 12:07:21.804279 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-lbsmw"
Feb 03 12:07:21 crc kubenswrapper[4820]: I0203 12:07:21.821173 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 03 12:07:21 crc kubenswrapper[4820]: E0203 12:07:21.823827 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 12:07:22.323778208 +0000 UTC m=+159.846854072 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 03 12:07:21 crc kubenswrapper[4820]: I0203 12:07:21.825983 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qpxpv\" (UID: \"10fa9c2b-e370-400e-9e71-a4617592b411\") " pod="openshift-image-registry/image-registry-697d97f7c8-qpxpv"
Feb 03 12:07:21 crc kubenswrapper[4820]: E0203 12:07:21.830130 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 12:07:22.330087607 +0000 UTC m=+159.853163471 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qpxpv" (UID: "10fa9c2b-e370-400e-9e71-a4617592b411") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 03 12:07:21 crc kubenswrapper[4820]: I0203 12:07:21.833877 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"d4461dfb4b2c63f2886415b482753ced0a9fc26795d9a9335d1feeff9b74261b"}
Feb 03 12:07:21 crc kubenswrapper[4820]: I0203 12:07:21.929487 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 03 12:07:21 crc kubenswrapper[4820]: E0203 12:07:21.931366 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 12:07:22.431325047 +0000 UTC m=+159.954400911 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 03 12:07:22 crc kubenswrapper[4820]: I0203 12:07:22.128326 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-wfbd9" event={"ID":"d4ff9542-77a8-4e10-b4a0-8ab831c57b35","Type":"ContainerStarted","Data":"91b909975de0f0228fde8d7f40d745da559da74309d91a0a611bf9d526ccb336"}
Feb 03 12:07:22 crc kubenswrapper[4820]: I0203 12:07:22.129101 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qpxpv\" (UID: \"10fa9c2b-e370-400e-9e71-a4617592b411\") " pod="openshift-image-registry/image-registry-697d97f7c8-qpxpv"
Feb 03 12:07:22 crc kubenswrapper[4820]: E0203 12:07:22.129411 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 12:07:22.629396651 +0000 UTC m=+160.152472515 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qpxpv" (UID: "10fa9c2b-e370-400e-9e71-a4617592b411") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 03 12:07:22 crc kubenswrapper[4820]: I0203 12:07:22.132287 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-8q9q7" event={"ID":"577cff0c-0386-467f-8a44-314a922051e2","Type":"ContainerStarted","Data":"ddbadfe3869e289adc1032a6960d452b9436f5f2a80c51ff0bc5da30a6cfe151"}
Feb 03 12:07:22 crc kubenswrapper[4820]: I0203 12:07:22.198201 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-k7tp7" event={"ID":"18148611-705a-4276-a9e3-9659f38654a8","Type":"ContainerStarted","Data":"5d3da13df6987c8bc3d5004c3a5be366005a6dc86d572ef8ac64576ff235c3eb"}
Feb 03 12:07:22 crc kubenswrapper[4820]: I0203 12:07:22.210842 4820 patch_prober.go:28] interesting pod/router-default-5444994796-h22tk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 03 12:07:22 crc kubenswrapper[4820]: [-]has-synced failed: reason withheld
Feb 03 12:07:22 crc kubenswrapper[4820]: [+]process-running ok
Feb 03 12:07:22 crc kubenswrapper[4820]: healthz check failed
Feb 03 12:07:22 crc kubenswrapper[4820]: I0203 12:07:22.210913 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h22tk" podUID="a227a161-8e53-4817-b7b2-48206c4916fb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 03 12:07:22 crc kubenswrapper[4820]: I0203 12:07:22.230103 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 03 12:07:22 crc kubenswrapper[4820]: E0203 12:07:22.230550 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 12:07:22.730532048 +0000 UTC m=+160.253607912 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 03 12:07:22 crc kubenswrapper[4820]: I0203 12:07:22.302425 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-b6ghj" event={"ID":"31277b5e-7869-4612-ba40-dcd0a37153fb","Type":"ContainerStarted","Data":"cf3bf2f747c78bef7cebad0b839e71a24557cbd0beebf0570c7cbd9ac3bd9f13"}
Feb 03 12:07:22 crc kubenswrapper[4820]: I0203 12:07:22.302747 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-b6ghj"
Feb 03 12:07:22 crc kubenswrapper[4820]: I0203 12:07:22.366955 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qpxpv\" (UID: \"10fa9c2b-e370-400e-9e71-a4617592b411\") " pod="openshift-image-registry/image-registry-697d97f7c8-qpxpv"
Feb 03 12:07:22 crc kubenswrapper[4820]: E0203 12:07:22.368721 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 12:07:22.868707327 +0000 UTC m=+160.391783191 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qpxpv" (UID: "10fa9c2b-e370-400e-9e71-a4617592b411") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 03 12:07:22 crc kubenswrapper[4820]: I0203 12:07:22.431586 4820 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-b6ghj container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.38:5443/healthz\": dial tcp 10.217.0.38:5443: connect: connection refused" start-of-body=
Feb 03 12:07:22 crc kubenswrapper[4820]: I0203 12:07:22.431649 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-b6ghj" podUID="31277b5e-7869-4612-ba40-dcd0a37153fb" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.38:5443/healthz\": dial tcp 10.217.0.38:5443: connect: connection refused"
Feb 03 12:07:22 crc kubenswrapper[4820]: I0203 12:07:22.472806 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 03 12:07:22 crc kubenswrapper[4820]: E0203 12:07:22.473197 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 12:07:22.973178543 +0000 UTC m=+160.496254407 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 03 12:07:22 crc kubenswrapper[4820]: I0203 12:07:22.579569 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qpxpv\" (UID: \"10fa9c2b-e370-400e-9e71-a4617592b411\") " pod="openshift-image-registry/image-registry-697d97f7c8-qpxpv"
Feb 03 12:07:22 crc kubenswrapper[4820]: E0203 12:07:22.580303 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 12:07:23.080264452 +0000 UTC m=+160.603340316 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qpxpv" (UID: "10fa9c2b-e370-400e-9e71-a4617592b411") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 03 12:07:22 crc kubenswrapper[4820]: I0203 12:07:22.680245 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 03 12:07:22 crc kubenswrapper[4820]: E0203 12:07:22.680406 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 12:07:23.180361334 +0000 UTC m=+160.703437198 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 03 12:07:22 crc kubenswrapper[4820]: I0203 12:07:22.680759 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qpxpv\" (UID: \"10fa9c2b-e370-400e-9e71-a4617592b411\") " pod="openshift-image-registry/image-registry-697d97f7c8-qpxpv"
Feb 03 12:07:22 crc kubenswrapper[4820]: E0203 12:07:22.681702 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 12:07:23.181690166 +0000 UTC m=+160.704766020 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qpxpv" (UID: "10fa9c2b-e370-400e-9e71-a4617592b411") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 03 12:07:22 crc kubenswrapper[4820]: I0203 12:07:22.809732 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 03 12:07:22 crc kubenswrapper[4820]: E0203 12:07:22.811172 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 12:07:23.311130167 +0000 UTC m=+160.834206031 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 03 12:07:23 crc kubenswrapper[4820]: I0203 12:07:23.217122 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qpxpv\" (UID: \"10fa9c2b-e370-400e-9e71-a4617592b411\") " pod="openshift-image-registry/image-registry-697d97f7c8-qpxpv"
Feb 03 12:07:23 crc kubenswrapper[4820]: E0203 12:07:23.217473 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 12:07:23.717461688 +0000 UTC m=+161.240537552 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qpxpv" (UID: "10fa9c2b-e370-400e-9e71-a4617592b411") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 03 12:07:23 crc kubenswrapper[4820]: I0203 12:07:23.249553 4820 patch_prober.go:28] interesting pod/router-default-5444994796-h22tk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 03 12:07:23 crc kubenswrapper[4820]: [-]has-synced failed: reason withheld
Feb 03 12:07:23 crc kubenswrapper[4820]: [+]process-running ok
Feb 03 12:07:23 crc kubenswrapper[4820]: healthz check failed
Feb 03 12:07:23 crc kubenswrapper[4820]: I0203 12:07:23.249634 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h22tk" podUID="a227a161-8e53-4817-b7b2-48206c4916fb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 03 12:07:23 crc kubenswrapper[4820]: I0203 12:07:23.426161 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 03 12:07:23 crc kubenswrapper[4820]: E0203 12:07:23.426463 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 12:07:23.926446432 +0000 UTC m=+161.449522296 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 03 12:07:23 crc kubenswrapper[4820]: I0203 12:07:23.534631 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qpxpv\" (UID: \"10fa9c2b-e370-400e-9e71-a4617592b411\") " pod="openshift-image-registry/image-registry-697d97f7c8-qpxpv"
Feb 03 12:07:23 crc kubenswrapper[4820]: E0203 12:07:23.535027 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 12:07:24.035014526 +0000 UTC m=+161.558090390 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qpxpv" (UID: "10fa9c2b-e370-400e-9e71-a4617592b411") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 03 12:07:23 crc kubenswrapper[4820]: I0203 12:07:23.587240 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"ddcbffcf4c5db521da2cc74ef6b0e60651040858cf6f65f6ff118debde673a94"}
Feb 03 12:07:23 crc kubenswrapper[4820]: I0203 12:07:23.588112 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 03 12:07:23 crc kubenswrapper[4820]: I0203 12:07:23.615177 4820 patch_prober.go:28] interesting pod/downloads-7954f5f757-lnc22 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body=
Feb 03 12:07:23 crc kubenswrapper[4820]: I0203 12:07:23.615495 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-lnc22" podUID="876c5dc3-b775-45cc-94b6-4339735e9975" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused"
Feb 03 12:07:23 crc kubenswrapper[4820]: I0203 12:07:23.615596 4820 patch_prober.go:28] interesting pod/downloads-7954f5f757-lnc22 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body=
Feb 03 12:07:23 crc kubenswrapper[4820]: I0203 12:07:23.615658 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-lnc22" podUID="876c5dc3-b775-45cc-94b6-4339735e9975" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused"
Feb 03 12:07:23 crc kubenswrapper[4820]: I0203 12:07:23.622210 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-kqqwj" event={"ID":"6aba4201-b6b2-4aed-adeb-513e9190efa9","Type":"ContainerStarted","Data":"00c437412d83917f80d713581f770a7fb7fde7910edc2a3c23fc479a5b4b60b1"}
Feb 03 12:07:23 crc kubenswrapper[4820]: I0203 12:07:23.638100 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 03 12:07:23 crc kubenswrapper[4820]: E0203 12:07:23.638444 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 12:07:24.138427448 +0000 UTC m=+161.661503322 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 03 12:07:23 crc kubenswrapper[4820]: I0203 12:07:23.675674 4820 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-lbsmw container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.10:8443/healthz\": dial tcp 10.217.0.10:8443: connect: connection refused" start-of-body=
Feb 03 12:07:23 crc kubenswrapper[4820]: I0203 12:07:23.675741 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-lbsmw" podUID="c93c42c7-c9ff-42cc-b604-e36f7a063fcf" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.10:8443/healthz\": dial tcp 10.217.0.10:8443: connect: connection refused"
Feb 03 12:07:23 crc kubenswrapper[4820]: I0203 12:07:23.675760 4820 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-b6ghj container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.38:5443/healthz\": dial tcp 10.217.0.38:5443: connect: connection refused" start-of-body=
Feb 03 12:07:23 crc kubenswrapper[4820]: I0203 12:07:23.675808 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-b6ghj" podUID="31277b5e-7869-4612-ba40-dcd0a37153fb" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.38:5443/healthz\": dial tcp 10.217.0.38:5443: connect: connection refused"
Feb 03 12:07:23 crc kubenswrapper[4820]: I0203 12:07:23.927160 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qpxpv\" (UID: \"10fa9c2b-e370-400e-9e71-a4617592b411\") " pod="openshift-image-registry/image-registry-697d97f7c8-qpxpv"
Feb 03 12:07:23 crc kubenswrapper[4820]: E0203 12:07:23.927612 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 12:07:24.42759688 +0000 UTC m=+161.950672744 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qpxpv" (UID: "10fa9c2b-e370-400e-9e71-a4617592b411") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 03 12:07:24 crc kubenswrapper[4820]: I0203 12:07:24.035595 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 03 12:07:24 crc kubenswrapper[4820]: E0203 12:07:24.037052 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 12:07:24.537030354 +0000 UTC m=+162.060106238 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 03 12:07:24 crc kubenswrapper[4820]: I0203 12:07:24.247147 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qpxpv\" (UID: \"10fa9c2b-e370-400e-9e71-a4617592b411\") " pod="openshift-image-registry/image-registry-697d97f7c8-qpxpv"
Feb 03 12:07:24 crc kubenswrapper[4820]: E0203 12:07:24.247579 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 12:07:24.747558626 +0000 UTC m=+162.270634490 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qpxpv" (UID: "10fa9c2b-e370-400e-9e71-a4617592b411") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 03 12:07:24 crc kubenswrapper[4820]: I0203 12:07:24.342066 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29502000-hcscr" podStartSLOduration=141.342039444 podStartE2EDuration="2m21.342039444s" podCreationTimestamp="2026-02-03 12:05:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:07:23.638649933 +0000 UTC m=+161.161725797" watchObservedRunningTime="2026-02-03 12:07:24.342039444 +0000 UTC m=+161.865115328"
Feb 03 12:07:24 crc kubenswrapper[4820]: I0203 12:07:24.342789 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-z7vmj" podStartSLOduration=141.342780972 podStartE2EDuration="2m21.342780972s" podCreationTimestamp="2026-02-03 12:05:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:07:24.245753952 +0000 UTC m=+161.768829816" watchObservedRunningTime="2026-02-03 12:07:24.342780972 +0000 UTC m=+161.865856836"
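The startup-latency trackers just above report podStartE2EDuration="2m21.342039444s" for collect-profiles-29502000-hcscr, which is exactly the gap between the podCreationTimestamp and watchObservedRunningTime fields of the same entry (141.342039444 s). A quick, self-contained check of that arithmetic, with both timestamps copied from the log (illustration only, not kubelet code):

    package main

    import (
    	"fmt"
    	"log"
    	"time"
    )

    func main() {
    	const layout = "2006-01-02 15:04:05 -0700 MST"
    	created, err := time.Parse(layout, "2026-02-03 12:05:03 +0000 UTC")
    	if err != nil {
    		log.Fatal(err)
    	}
    	// time.Parse accepts fractional seconds even though the layout omits them.
    	running, err := time.Parse(layout, "2026-02-03 12:07:24.342039444 +0000 UTC")
    	if err != nil {
    		log.Fatal(err)
    	}
    	fmt.Println(running.Sub(created)) // prints 2m21.342039444s
    }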
Feb 03 12:07:24 crc kubenswrapper[4820]: I0203 12:07:24.345510 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-z7vmj"
Feb 03 12:07:24 crc kubenswrapper[4820]: I0203 12:07:24.345794 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-z7vmj"
Feb 03 12:07:24 crc kubenswrapper[4820]: I0203 12:07:24.364750 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 03 12:07:24 crc kubenswrapper[4820]: E0203 12:07:24.370088 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 12:07:24.870055451 +0000 UTC m=+162.393131315 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 03 12:07:24 crc kubenswrapper[4820]: I0203 12:07:24.370358 4820 patch_prober.go:28] interesting pod/router-default-5444994796-h22tk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 03 12:07:24 crc kubenswrapper[4820]: [-]has-synced failed: reason withheld
Feb 03 12:07:24 crc kubenswrapper[4820]: [+]process-running ok
Feb 03 12:07:24 crc kubenswrapper[4820]: healthz check failed
Feb 03 12:07:24 crc kubenswrapper[4820]: I0203 12:07:24.370434 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h22tk" podUID="a227a161-8e53-4817-b7b2-48206c4916fb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 03 12:07:24 crc kubenswrapper[4820]: E0203 12:07:24.371250 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 12:07:24.871229919 +0000 UTC m=+162.394305783 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qpxpv" (UID: "10fa9c2b-e370-400e-9e71-a4617592b411") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 03 12:07:24 crc kubenswrapper[4820]: I0203 12:07:24.370913 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qpxpv\" (UID: \"10fa9c2b-e370-400e-9e71-a4617592b411\") " pod="openshift-image-registry/image-registry-697d97f7c8-qpxpv"
Feb 03 12:07:24 crc kubenswrapper[4820]: I0203 12:07:24.379310 4820 patch_prober.go:28] interesting pod/console-f9d7485db-tw2nt container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.15:8443/health\": dial tcp 10.217.0.15:8443: connect: connection refused" start-of-body=
Feb 03 12:07:24 crc kubenswrapper[4820]: I0203 12:07:24.379367 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-tw2nt" podUID="b06753a3-652a-4acc-b294-3ccaa5b0cb99" containerName="console" probeResult="failure" output="Get \"https://10.217.0.15:8443/health\": dial tcp 10.217.0.15:8443: connect: connection refused"
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 12:07:24 crc kubenswrapper[4820]: E0203 12:07:24.477077 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 12:07:24.977045328 +0000 UTC m=+162.500121192 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:24 crc kubenswrapper[4820]: I0203 12:07:24.579298 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qpxpv\" (UID: \"10fa9c2b-e370-400e-9e71-a4617592b411\") " pod="openshift-image-registry/image-registry-697d97f7c8-qpxpv" Feb 03 12:07:24 crc kubenswrapper[4820]: I0203 12:07:24.745795 4820 patch_prober.go:28] interesting pod/apiserver-76f77b778f-z7vmj container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="Get \"https://10.217.0.27:8443/livez\": dial tcp 10.217.0.27:8443: connect: connection refused" start-of-body= Feb 03 12:07:24 crc kubenswrapper[4820]: I0203 12:07:24.745858 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-z7vmj" podUID="4acfe638-6e10-4d68-9cfa-3d1e1d4c1052" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.27:8443/livez\": dial tcp 10.217.0.27:8443: connect: connection refused" Feb 03 12:07:24 crc kubenswrapper[4820]: I0203 12:07:24.746723 4820 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-9w662 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.36:8080/healthz\": dial tcp 10.217.0.36:8080: connect: connection refused" start-of-body= Feb 03 12:07:24 crc kubenswrapper[4820]: I0203 12:07:24.746773 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-9w662" podUID="92dde085-8a2b-4c9f-947f-441ea67b8622" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.36:8080/healthz\": dial tcp 10.217.0.36:8080: connect: connection refused" Feb 03 12:07:24 crc kubenswrapper[4820]: I0203 12:07:24.747086 4820 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-9w662 container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.217.0.36:8080/healthz\": dial tcp 10.217.0.36:8080: connect: connection refused" start-of-body= Feb 03 12:07:24 crc kubenswrapper[4820]: I0203 12:07:24.747151 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/marketplace-operator-79b997595-9w662" podUID="92dde085-8a2b-4c9f-947f-441ea67b8622" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.36:8080/healthz\": 
dial tcp 10.217.0.36:8080: connect: connection refused" Feb 03 12:07:24 crc kubenswrapper[4820]: I0203 12:07:24.763383 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-c7gsf" Feb 03 12:07:24 crc kubenswrapper[4820]: I0203 12:07:24.883870 4820 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-b6ghj container/packageserver namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.38:5443/healthz\": dial tcp 10.217.0.38:5443: connect: connection refused" start-of-body= Feb 03 12:07:24 crc kubenswrapper[4820]: I0203 12:07:24.883947 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-b6ghj" podUID="31277b5e-7869-4612-ba40-dcd0a37153fb" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.38:5443/healthz\": dial tcp 10.217.0.38:5443: connect: connection refused" Feb 03 12:07:24 crc kubenswrapper[4820]: I0203 12:07:24.884067 4820 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-lbsmw container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.10:8443/healthz\": dial tcp 10.217.0.10:8443: connect: connection refused" start-of-body= Feb 03 12:07:24 crc kubenswrapper[4820]: I0203 12:07:24.884084 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-lbsmw" podUID="c93c42c7-c9ff-42cc-b604-e36f7a063fcf" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.10:8443/healthz\": dial tcp 10.217.0.10:8443: connect: connection refused" Feb 03 12:07:24 crc kubenswrapper[4820]: I0203 12:07:24.884151 4820 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-b6ghj container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.38:5443/healthz\": dial tcp 10.217.0.38:5443: connect: connection refused" start-of-body= Feb 03 12:07:24 crc kubenswrapper[4820]: I0203 12:07:24.884168 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-b6ghj" podUID="31277b5e-7869-4612-ba40-dcd0a37153fb" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.38:5443/healthz\": dial tcp 10.217.0.38:5443: connect: connection refused" Feb 03 12:07:24 crc kubenswrapper[4820]: I0203 12:07:24.884205 4820 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-lbsmw container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.10:8443/healthz\": dial tcp 10.217.0.10:8443: connect: connection refused" start-of-body= Feb 03 12:07:24 crc kubenswrapper[4820]: I0203 12:07:24.884293 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-lbsmw" podUID="c93c42c7-c9ff-42cc-b604-e36f7a063fcf" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.10:8443/healthz\": dial tcp 10.217.0.10:8443: connect: connection refused" Feb 03 12:07:25 crc kubenswrapper[4820]: E0203 12:07:25.193020 4820 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 12:07:25.079869605 +0000 UTC m=+162.602945479 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qpxpv" (UID: "10fa9c2b-e370-400e-9e71-a4617592b411") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:25 crc kubenswrapper[4820]: I0203 12:07:25.193143 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 12:07:25 crc kubenswrapper[4820]: E0203 12:07:25.194266 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 12:07:26.194248918 +0000 UTC m=+163.717324792 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:25 crc kubenswrapper[4820]: I0203 12:07:25.262006 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b9krf" podStartSLOduration=141.261976049 podStartE2EDuration="2m21.261976049s" podCreationTimestamp="2026-02-03 12:05:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:07:24.887237931 +0000 UTC m=+162.410313795" watchObservedRunningTime="2026-02-03 12:07:25.261976049 +0000 UTC m=+162.785051923" Feb 03 12:07:25 crc kubenswrapper[4820]: I0203 12:07:25.271077 4820 patch_prober.go:28] interesting pod/router-default-5444994796-h22tk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 03 12:07:25 crc kubenswrapper[4820]: [-]has-synced failed: reason withheld Feb 03 12:07:25 crc kubenswrapper[4820]: [+]process-running ok Feb 03 12:07:25 crc kubenswrapper[4820]: healthz check failed Feb 03 12:07:25 crc kubenswrapper[4820]: I0203 12:07:25.271151 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h22tk" podUID="a227a161-8e53-4817-b7b2-48206c4916fb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 03 12:07:25 crc kubenswrapper[4820]: I0203 12:07:25.294602 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-cs8dg"] Feb 03 12:07:25 crc kubenswrapper[4820]: I0203 12:07:25.294815 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cs8dg" podUID="05797a22-690b-4b36-8b4e-5dcc739f7cad" containerName="route-controller-manager" containerID="cri-o://8bf43d2afcda5b91937865aa4106f9fd21e0f58f105c00dd9695023a0e8ea599" gracePeriod=30 Feb 03 12:07:25 crc kubenswrapper[4820]: I0203 12:07:25.295183 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 12:07:25 crc kubenswrapper[4820]: E0203 12:07:25.295538 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 12:07:25.795524508 +0000 UTC m=+163.318600372 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:25 crc kubenswrapper[4820]: I0203 12:07:25.485507 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qpxpv\" (UID: \"10fa9c2b-e370-400e-9e71-a4617592b411\") " pod="openshift-image-registry/image-registry-697d97f7c8-qpxpv" Feb 03 12:07:25 crc kubenswrapper[4820]: E0203 12:07:25.486161 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 12:07:25.986148705 +0000 UTC m=+163.509224579 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qpxpv" (UID: "10fa9c2b-e370-400e-9e71-a4617592b411") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:25 crc kubenswrapper[4820]: I0203 12:07:25.614435 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-hxjbf" podStartSLOduration=141.614414328 podStartE2EDuration="2m21.614414328s" podCreationTimestamp="2026-02-03 12:05:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:07:25.293279065 +0000 UTC m=+162.816354929" watchObservedRunningTime="2026-02-03 12:07:25.614414328 +0000 UTC m=+163.137490202" Feb 03 12:07:25 crc kubenswrapper[4820]: I0203 12:07:25.614989 4820 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-4gskq container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.34:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 03 12:07:25 crc kubenswrapper[4820]: I0203 12:07:25.615068 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-4gskq" podUID="0e90d586-2aaf-4f58-acc2-eba29dc0cd2e" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.34:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 03 12:07:25 crc kubenswrapper[4820]: I0203 12:07:25.615155 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-5kfzf" podStartSLOduration=141.615147645 podStartE2EDuration="2m21.615147645s" podCreationTimestamp="2026-02-03 12:05:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:07:25.614792057 +0000 UTC m=+163.137867941" watchObservedRunningTime="2026-02-03 12:07:25.615147645 +0000 UTC m=+163.138223509" Feb 03 12:07:25 crc kubenswrapper[4820]: I0203 12:07:25.615860 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 12:07:25 crc kubenswrapper[4820]: E0203 12:07:25.616233 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 12:07:26.116214351 +0000 UTC m=+163.639290215 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:25 crc kubenswrapper[4820]: I0203 12:07:25.651915 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"73b098018d55accc1d6a53bc8bc736bc5b897f8cb5e22baa712f0ceb13cfc881"} Feb 03 12:07:25 crc kubenswrapper[4820]: I0203 12:07:25.674773 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-wfbd9" event={"ID":"d4ff9542-77a8-4e10-b4a0-8ab831c57b35","Type":"ContainerStarted","Data":"416edd92167872c78f19bff94b4ee3522e15c5ae020b0bd5149dde5b24efee9b"} Feb 03 12:07:25 crc kubenswrapper[4820]: I0203 12:07:25.675646 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-wfbd9" Feb 03 12:07:25 crc kubenswrapper[4820]: I0203 12:07:25.676964 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-drcgk" podStartSLOduration=141.676952867 podStartE2EDuration="2m21.676952867s" podCreationTimestamp="2026-02-03 12:05:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:07:25.676011034 +0000 UTC m=+163.199086898" watchObservedRunningTime="2026-02-03 12:07:25.676952867 +0000 UTC m=+163.200028731" Feb 03 12:07:25 crc kubenswrapper[4820]: I0203 12:07:25.699439 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8rb2x" Feb 03 12:07:25 crc kubenswrapper[4820]: I0203 12:07:25.719580 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qpxpv\" (UID: \"10fa9c2b-e370-400e-9e71-a4617592b411\") " pod="openshift-image-registry/image-registry-697d97f7c8-qpxpv" Feb 03 12:07:25 crc kubenswrapper[4820]: E0203 12:07:25.720861 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 12:07:26.220848941 +0000 UTC m=+163.743924805 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qpxpv" (UID: "10fa9c2b-e370-400e-9e71-a4617592b411") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:25 crc kubenswrapper[4820]: I0203 12:07:25.740508 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-8q9q7" event={"ID":"577cff0c-0386-467f-8a44-314a922051e2","Type":"ContainerStarted","Data":"f47f508fb7a73a097c73c17613f3e37ed0dab7b51b62b7d89198c2d6b5e3aa7e"} Feb 03 12:07:25 crc kubenswrapper[4820]: I0203 12:07:25.741121 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-8q9q7" Feb 03 12:07:25 crc kubenswrapper[4820]: I0203 12:07:25.753649 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-kqqwj" podStartSLOduration=141.753624701 podStartE2EDuration="2m21.753624701s" podCreationTimestamp="2026-02-03 12:05:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:07:25.739779651 +0000 UTC m=+163.262855525" watchObservedRunningTime="2026-02-03 12:07:25.753624701 +0000 UTC m=+163.276700565" Feb 03 12:07:25 crc kubenswrapper[4820]: I0203 12:07:25.767748 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"216467e5d6a31b5ebd31e6245f4dd35f057816dff95c13a0b2cf1d03eade9860"} Feb 03 12:07:25 crc kubenswrapper[4820]: I0203 12:07:25.769761 4820 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-b6ghj container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.38:5443/healthz\": dial tcp 10.217.0.38:5443: connect: connection refused" start-of-body= Feb 03 12:07:25 crc kubenswrapper[4820]: I0203 12:07:25.769835 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-b6ghj" podUID="31277b5e-7869-4612-ba40-dcd0a37153fb" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.38:5443/healthz\": dial tcp 10.217.0.38:5443: connect: connection refused" Feb 03 12:07:25 crc kubenswrapper[4820]: I0203 12:07:25.892187 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 12:07:25 crc kubenswrapper[4820]: E0203 12:07:25.893502 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 12:07:26.3934818 +0000 UTC m=+163.916557664 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:25 crc kubenswrapper[4820]: I0203 12:07:25.902536 4820 csr.go:261] certificate signing request csr-pkd78 is approved, waiting to be issued Feb 03 12:07:25 crc kubenswrapper[4820]: I0203 12:07:25.995211 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qpxpv\" (UID: \"10fa9c2b-e370-400e-9e71-a4617592b411\") " pod="openshift-image-registry/image-registry-697d97f7c8-qpxpv" Feb 03 12:07:25 crc kubenswrapper[4820]: I0203 12:07:25.995266 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6351e457-e601-4889-853c-560646bc4b43-metrics-certs\") pod \"network-metrics-daemon-7vz6k\" (UID: \"6351e457-e601-4889-853c-560646bc4b43\") " pod="openshift-multus/network-metrics-daemon-7vz6k" Feb 03 12:07:25 crc kubenswrapper[4820]: E0203 12:07:25.996827 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 12:07:26.496811539 +0000 UTC m=+164.019887403 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qpxpv" (UID: "10fa9c2b-e370-400e-9e71-a4617592b411") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:26 crc kubenswrapper[4820]: I0203 12:07:26.022541 4820 csr.go:257] certificate signing request csr-pkd78 is issued Feb 03 12:07:26 crc kubenswrapper[4820]: I0203 12:07:26.077984 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6351e457-e601-4889-853c-560646bc4b43-metrics-certs\") pod \"network-metrics-daemon-7vz6k\" (UID: \"6351e457-e601-4889-853c-560646bc4b43\") " pod="openshift-multus/network-metrics-daemon-7vz6k" Feb 03 12:07:26 crc kubenswrapper[4820]: I0203 12:07:26.095058 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-526s7" podStartSLOduration=25.095040187 podStartE2EDuration="25.095040187s" podCreationTimestamp="2026-02-03 12:07:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:07:26.09392972 +0000 UTC m=+163.617005604" watchObservedRunningTime="2026-02-03 12:07:26.095040187 +0000 UTC m=+163.618116051" Feb 03 12:07:26 crc kubenswrapper[4820]: I0203 12:07:26.095976 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 12:07:26 crc kubenswrapper[4820]: E0203 12:07:26.096225 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 12:07:26.596209215 +0000 UTC m=+164.119285079 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:26 crc kubenswrapper[4820]: I0203 12:07:26.098595 4820 patch_prober.go:28] interesting pod/router-default-5444994796-h22tk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 03 12:07:26 crc kubenswrapper[4820]: [-]has-synced failed: reason withheld Feb 03 12:07:26 crc kubenswrapper[4820]: [+]process-running ok Feb 03 12:07:26 crc kubenswrapper[4820]: healthz check failed Feb 03 12:07:26 crc kubenswrapper[4820]: I0203 12:07:26.098841 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h22tk" podUID="a227a161-8e53-4817-b7b2-48206c4916fb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 03 12:07:26 crc kubenswrapper[4820]: I0203 12:07:26.129215 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-58ssn" podStartSLOduration=142.12919423 podStartE2EDuration="2m22.12919423s" podCreationTimestamp="2026-02-03 12:05:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:07:26.129129778 +0000 UTC m=+163.652205642" watchObservedRunningTime="2026-02-03 12:07:26.12919423 +0000 UTC m=+163.652270114" Feb 03 12:07:26 crc kubenswrapper[4820]: I0203 12:07:26.214112 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qpxpv\" (UID: \"10fa9c2b-e370-400e-9e71-a4617592b411\") " pod="openshift-image-registry/image-registry-697d97f7c8-qpxpv" Feb 03 12:07:26 crc kubenswrapper[4820]: E0203 12:07:26.214978 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 12:07:26.714952311 +0000 UTC m=+164.238028175 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qpxpv" (UID: "10fa9c2b-e370-400e-9e71-a4617592b411") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:26 crc kubenswrapper[4820]: I0203 12:07:26.256643 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-7vz6k" Feb 03 12:07:26 crc kubenswrapper[4820]: I0203 12:07:26.316879 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 12:07:26 crc kubenswrapper[4820]: E0203 12:07:26.317646 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 12:07:26.817619755 +0000 UTC m=+164.340695629 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:26 crc kubenswrapper[4820]: I0203 12:07:26.373022 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-9w662" podStartSLOduration=142.372997913 podStartE2EDuration="2m22.372997913s" podCreationTimestamp="2026-02-03 12:05:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:07:26.288620944 +0000 UTC m=+163.811696828" watchObservedRunningTime="2026-02-03 12:07:26.372997913 +0000 UTC m=+163.896073777" Feb 03 12:07:26 crc kubenswrapper[4820]: I0203 12:07:26.373878 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 03 12:07:26 crc kubenswrapper[4820]: I0203 12:07:26.431206 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qpxpv\" (UID: \"10fa9c2b-e370-400e-9e71-a4617592b411\") " pod="openshift-image-registry/image-registry-697d97f7c8-qpxpv" Feb 03 12:07:26 crc kubenswrapper[4820]: E0203 12:07:26.431729 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 12:07:26.93170658 +0000 UTC m=+164.454782444 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qpxpv" (UID: "10fa9c2b-e370-400e-9e71-a4617592b411") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:26 crc kubenswrapper[4820]: I0203 12:07:26.578216 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 12:07:26 crc kubenswrapper[4820]: E0203 12:07:26.596193 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 12:07:27.096160614 +0000 UTC m=+164.619236478 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:26 crc kubenswrapper[4820]: I0203 12:07:26.631637 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-vmzb5" podStartSLOduration=142.631623768 podStartE2EDuration="2m22.631623768s" podCreationTimestamp="2026-02-03 12:05:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:07:26.438481141 +0000 UTC m=+163.961557015" watchObservedRunningTime="2026-02-03 12:07:26.631623768 +0000 UTC m=+164.154699632" Feb 03 12:07:26 crc kubenswrapper[4820]: I0203 12:07:26.681571 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qpxpv\" (UID: \"10fa9c2b-e370-400e-9e71-a4617592b411\") " pod="openshift-image-registry/image-registry-697d97f7c8-qpxpv" Feb 03 12:07:26 crc kubenswrapper[4820]: E0203 12:07:26.682205 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 12:07:27.182193751 +0000 UTC m=+164.705269615 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qpxpv" (UID: "10fa9c2b-e370-400e-9e71-a4617592b411") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:26 crc kubenswrapper[4820]: I0203 12:07:26.982337 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 12:07:26 crc kubenswrapper[4820]: I0203 12:07:26.997167 4820 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-lbsmw container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.10:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 03 12:07:27 crc kubenswrapper[4820]: I0203 12:07:27.000611 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-lbsmw" podUID="c93c42c7-c9ff-42cc-b604-e36f7a063fcf" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.10:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 03 12:07:27 crc kubenswrapper[4820]: E0203 12:07:26.999265 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 12:07:27.499240188 +0000 UTC m=+165.022316052 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:27 crc kubenswrapper[4820]: I0203 12:07:27.000935 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qpxpv\" (UID: \"10fa9c2b-e370-400e-9e71-a4617592b411\") " pod="openshift-image-registry/image-registry-697d97f7c8-qpxpv" Feb 03 12:07:27 crc kubenswrapper[4820]: E0203 12:07:27.001508 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 12:07:27.501490232 +0000 UTC m=+165.024566096 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qpxpv" (UID: "10fa9c2b-e370-400e-9e71-a4617592b411") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:27 crc kubenswrapper[4820]: I0203 12:07:27.024706 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-02-03 12:02:25 +0000 UTC, rotation deadline is 2026-11-28 15:41:04.813396687 +0000 UTC Feb 03 12:07:27 crc kubenswrapper[4820]: I0203 12:07:27.024931 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 7155h33m37.788469458s for next certificate rotation Feb 03 12:07:27 crc kubenswrapper[4820]: I0203 12:07:27.047945 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-vdn7t" podStartSLOduration=143.047923307 podStartE2EDuration="2m23.047923307s" podCreationTimestamp="2026-02-03 12:05:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:07:27.045092539 +0000 UTC m=+164.568168423" watchObservedRunningTime="2026-02-03 12:07:27.047923307 +0000 UTC m=+164.570999171" Feb 03 12:07:27 crc kubenswrapper[4820]: I0203 12:07:27.106767 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 12:07:27 crc kubenswrapper[4820]: E0203 12:07:27.107277 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 12:07:27.607263209 +0000 UTC m=+165.130339073 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:27 crc kubenswrapper[4820]: I0203 12:07:27.126243 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b9krf" Feb 03 12:07:27 crc kubenswrapper[4820]: I0203 12:07:27.146718 4820 patch_prober.go:28] interesting pod/router-default-5444994796-h22tk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 03 12:07:27 crc kubenswrapper[4820]: [-]has-synced failed: reason withheld Feb 03 12:07:27 crc kubenswrapper[4820]: [+]process-running ok Feb 03 12:07:27 crc kubenswrapper[4820]: healthz check failed Feb 03 12:07:27 crc kubenswrapper[4820]: I0203 12:07:27.146807 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h22tk" podUID="a227a161-8e53-4817-b7b2-48206c4916fb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 03 12:07:27 crc kubenswrapper[4820]: I0203 12:07:27.215460 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qpxpv\" (UID: \"10fa9c2b-e370-400e-9e71-a4617592b411\") " pod="openshift-image-registry/image-registry-697d97f7c8-qpxpv" Feb 03 12:07:27 crc kubenswrapper[4820]: E0203 12:07:27.218231 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 12:07:27.718195859 +0000 UTC m=+165.241271723 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qpxpv" (UID: "10fa9c2b-e370-400e-9e71-a4617592b411") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:27 crc kubenswrapper[4820]: I0203 12:07:27.638510 4820 generic.go:334] "Generic (PLEG): container finished" podID="05797a22-690b-4b36-8b4e-5dcc739f7cad" containerID="8bf43d2afcda5b91937865aa4106f9fd21e0f58f105c00dd9695023a0e8ea599" exitCode=0 Feb 03 12:07:30 crc kubenswrapper[4820]: I0203 12:07:27.639558 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 12:07:30 crc kubenswrapper[4820]: E0203 12:07:27.640518 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 12:07:28.14049692 +0000 UTC m=+165.663572794 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:30 crc kubenswrapper[4820]: I0203 12:07:27.640633 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qpxpv\" (UID: \"10fa9c2b-e370-400e-9e71-a4617592b411\") " pod="openshift-image-registry/image-registry-697d97f7c8-qpxpv" Feb 03 12:07:30 crc kubenswrapper[4820]: E0203 12:07:27.641009 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 12:07:28.140999592 +0000 UTC m=+165.664075456 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qpxpv" (UID: "10fa9c2b-e370-400e-9e71-a4617592b411") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:30 crc kubenswrapper[4820]: I0203 12:07:27.739788 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-b6ghj" podStartSLOduration=143.739763403 podStartE2EDuration="2m23.739763403s" podCreationTimestamp="2026-02-03 12:05:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:07:27.73840369 +0000 UTC m=+165.261479574" watchObservedRunningTime="2026-02-03 12:07:27.739763403 +0000 UTC m=+165.262839267" Feb 03 12:07:30 crc kubenswrapper[4820]: I0203 12:07:27.741599 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 12:07:30 crc kubenswrapper[4820]: E0203 12:07:27.741848 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 12:07:28.241798761 +0000 UTC m=+165.764874625 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:30 crc kubenswrapper[4820]: I0203 12:07:27.806172 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b9krf" Feb 03 12:07:30 crc kubenswrapper[4820]: I0203 12:07:27.806202 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 03 12:07:30 crc kubenswrapper[4820]: I0203 12:07:27.806223 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-5vfzj"] Feb 03 12:07:30 crc kubenswrapper[4820]: I0203 12:07:27.807259 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5vfzj"] Feb 03 12:07:30 crc kubenswrapper[4820]: I0203 12:07:27.807276 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cs8dg" event={"ID":"05797a22-690b-4b36-8b4e-5dcc739f7cad","Type":"ContainerDied","Data":"8bf43d2afcda5b91937865aa4106f9fd21e0f58f105c00dd9695023a0e8ea599"} Feb 03 12:07:30 crc kubenswrapper[4820]: I0203 12:07:27.807308 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"7c9dee9e-4581-4a11-8f12-27a64b26cbf9","Type":"ContainerStarted","Data":"5942e09bee3a7311aed8ffbde906beddf250f29a63a2b956818f777b1a247d01"} Feb 03 12:07:30 crc kubenswrapper[4820]: I0203 12:07:27.807383 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5vfzj" Feb 03 12:07:30 crc kubenswrapper[4820]: I0203 12:07:27.830326 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-dt8ch"] Feb 03 12:07:30 crc kubenswrapper[4820]: I0203 12:07:27.840452 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dt8ch" Feb 03 12:07:30 crc kubenswrapper[4820]: I0203 12:07:27.846719 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 03 12:07:30 crc kubenswrapper[4820]: I0203 12:07:27.847908 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qpxpv\" (UID: \"10fa9c2b-e370-400e-9e71-a4617592b411\") " pod="openshift-image-registry/image-registry-697d97f7c8-qpxpv" Feb 03 12:07:30 crc kubenswrapper[4820]: E0203 12:07:27.848336 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 12:07:28.348306636 +0000 UTC m=+165.871382500 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qpxpv" (UID: "10fa9c2b-e370-400e-9e71-a4617592b411") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:30 crc kubenswrapper[4820]: I0203 12:07:27.987838 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 12:07:30 crc kubenswrapper[4820]: I0203 12:07:27.988076 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/829fef9f-938d-4d61-9584-bf061063c952-catalog-content\") pod \"community-operators-5vfzj\" (UID: \"829fef9f-938d-4d61-9584-bf061063c952\") " pod="openshift-marketplace/community-operators-5vfzj" Feb 03 12:07:30 crc kubenswrapper[4820]: I0203 12:07:27.988117 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/682f83dc-ba7f-474f-89d2-6effbcf2806b-utilities\") pod \"certified-operators-dt8ch\" (UID: \"682f83dc-ba7f-474f-89d2-6effbcf2806b\") " pod="openshift-marketplace/certified-operators-dt8ch" Feb 03 12:07:30 crc kubenswrapper[4820]: I0203 12:07:27.988172 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k6rdz\" (UniqueName: \"kubernetes.io/projected/682f83dc-ba7f-474f-89d2-6effbcf2806b-kube-api-access-k6rdz\") pod \"certified-operators-dt8ch\" (UID: \"682f83dc-ba7f-474f-89d2-6effbcf2806b\") " pod="openshift-marketplace/certified-operators-dt8ch" Feb 03 12:07:30 crc kubenswrapper[4820]: I0203 12:07:27.988188 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mcfzd\" (UniqueName: \"kubernetes.io/projected/829fef9f-938d-4d61-9584-bf061063c952-kube-api-access-mcfzd\") pod \"community-operators-5vfzj\" (UID: \"829fef9f-938d-4d61-9584-bf061063c952\") " pod="openshift-marketplace/community-operators-5vfzj" Feb 03 12:07:30 crc kubenswrapper[4820]: I0203 12:07:27.988207 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/829fef9f-938d-4d61-9584-bf061063c952-utilities\") pod \"community-operators-5vfzj\" (UID: \"829fef9f-938d-4d61-9584-bf061063c952\") " pod="openshift-marketplace/community-operators-5vfzj" Feb 03 12:07:30 crc kubenswrapper[4820]: I0203 12:07:27.988256 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/682f83dc-ba7f-474f-89d2-6effbcf2806b-catalog-content\") pod \"certified-operators-dt8ch\" (UID: \"682f83dc-ba7f-474f-89d2-6effbcf2806b\") " pod="openshift-marketplace/certified-operators-dt8ch" Feb 03 12:07:30 crc kubenswrapper[4820]: E0203 12:07:27.989108 4820 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 12:07:28.489082307 +0000 UTC m=+166.012158221 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:30 crc kubenswrapper[4820]: I0203 12:07:28.004367 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 03 12:07:30 crc kubenswrapper[4820]: I0203 12:07:28.021122 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dt8ch"] Feb 03 12:07:30 crc kubenswrapper[4820]: I0203 12:07:28.094061 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k6rdz\" (UniqueName: \"kubernetes.io/projected/682f83dc-ba7f-474f-89d2-6effbcf2806b-kube-api-access-k6rdz\") pod \"certified-operators-dt8ch\" (UID: \"682f83dc-ba7f-474f-89d2-6effbcf2806b\") " pod="openshift-marketplace/certified-operators-dt8ch" Feb 03 12:07:30 crc kubenswrapper[4820]: I0203 12:07:28.094117 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mcfzd\" (UniqueName: \"kubernetes.io/projected/829fef9f-938d-4d61-9584-bf061063c952-kube-api-access-mcfzd\") pod \"community-operators-5vfzj\" (UID: \"829fef9f-938d-4d61-9584-bf061063c952\") " pod="openshift-marketplace/community-operators-5vfzj" Feb 03 12:07:30 crc kubenswrapper[4820]: I0203 12:07:28.094157 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/829fef9f-938d-4d61-9584-bf061063c952-utilities\") pod \"community-operators-5vfzj\" (UID: \"829fef9f-938d-4d61-9584-bf061063c952\") " pod="openshift-marketplace/community-operators-5vfzj" Feb 03 12:07:30 crc kubenswrapper[4820]: I0203 12:07:28.094230 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/682f83dc-ba7f-474f-89d2-6effbcf2806b-catalog-content\") pod \"certified-operators-dt8ch\" (UID: \"682f83dc-ba7f-474f-89d2-6effbcf2806b\") " pod="openshift-marketplace/certified-operators-dt8ch" Feb 03 12:07:30 crc kubenswrapper[4820]: I0203 12:07:28.094284 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/829fef9f-938d-4d61-9584-bf061063c952-catalog-content\") pod \"community-operators-5vfzj\" (UID: \"829fef9f-938d-4d61-9584-bf061063c952\") " pod="openshift-marketplace/community-operators-5vfzj" Feb 03 12:07:30 crc kubenswrapper[4820]: I0203 12:07:28.094321 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qpxpv\" (UID: \"10fa9c2b-e370-400e-9e71-a4617592b411\") " pod="openshift-image-registry/image-registry-697d97f7c8-qpxpv" Feb 03 
12:07:30 crc kubenswrapper[4820]: I0203 12:07:28.094351 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/682f83dc-ba7f-474f-89d2-6effbcf2806b-utilities\") pod \"certified-operators-dt8ch\" (UID: \"682f83dc-ba7f-474f-89d2-6effbcf2806b\") " pod="openshift-marketplace/certified-operators-dt8ch" Feb 03 12:07:30 crc kubenswrapper[4820]: I0203 12:07:28.110394 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/682f83dc-ba7f-474f-89d2-6effbcf2806b-utilities\") pod \"certified-operators-dt8ch\" (UID: \"682f83dc-ba7f-474f-89d2-6effbcf2806b\") " pod="openshift-marketplace/certified-operators-dt8ch" Feb 03 12:07:30 crc kubenswrapper[4820]: I0203 12:07:28.110827 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/682f83dc-ba7f-474f-89d2-6effbcf2806b-catalog-content\") pod \"certified-operators-dt8ch\" (UID: \"682f83dc-ba7f-474f-89d2-6effbcf2806b\") " pod="openshift-marketplace/certified-operators-dt8ch" Feb 03 12:07:30 crc kubenswrapper[4820]: E0203 12:07:28.112211 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 12:07:28.612192247 +0000 UTC m=+166.135268181 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qpxpv" (UID: "10fa9c2b-e370-400e-9e71-a4617592b411") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:30 crc kubenswrapper[4820]: I0203 12:07:28.117390 4820 patch_prober.go:28] interesting pod/router-default-5444994796-h22tk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 03 12:07:30 crc kubenswrapper[4820]: [-]has-synced failed: reason withheld Feb 03 12:07:30 crc kubenswrapper[4820]: [+]process-running ok Feb 03 12:07:30 crc kubenswrapper[4820]: healthz check failed Feb 03 12:07:30 crc kubenswrapper[4820]: I0203 12:07:28.117433 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h22tk" podUID="a227a161-8e53-4817-b7b2-48206c4916fb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 03 12:07:30 crc kubenswrapper[4820]: I0203 12:07:28.124324 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/829fef9f-938d-4d61-9584-bf061063c952-catalog-content\") pod \"community-operators-5vfzj\" (UID: \"829fef9f-938d-4d61-9584-bf061063c952\") " pod="openshift-marketplace/community-operators-5vfzj" Feb 03 12:07:30 crc kubenswrapper[4820]: I0203 12:07:28.126299 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/829fef9f-938d-4d61-9584-bf061063c952-utilities\") pod \"community-operators-5vfzj\" (UID: \"829fef9f-938d-4d61-9584-bf061063c952\") " 
pod="openshift-marketplace/community-operators-5vfzj" Feb 03 12:07:30 crc kubenswrapper[4820]: I0203 12:07:28.138456 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-5xqrp"] Feb 03 12:07:30 crc kubenswrapper[4820]: I0203 12:07:28.139599 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5xqrp" Feb 03 12:07:30 crc kubenswrapper[4820]: I0203 12:07:28.147032 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mcfzd\" (UniqueName: \"kubernetes.io/projected/829fef9f-938d-4d61-9584-bf061063c952-kube-api-access-mcfzd\") pod \"community-operators-5vfzj\" (UID: \"829fef9f-938d-4d61-9584-bf061063c952\") " pod="openshift-marketplace/community-operators-5vfzj" Feb 03 12:07:30 crc kubenswrapper[4820]: I0203 12:07:28.195830 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 12:07:30 crc kubenswrapper[4820]: I0203 12:07:28.196198 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/38d510f8-dde9-46b4-965e-9d2726b5f0d7-utilities\") pod \"community-operators-5xqrp\" (UID: \"38d510f8-dde9-46b4-965e-9d2726b5f0d7\") " pod="openshift-marketplace/community-operators-5xqrp" Feb 03 12:07:30 crc kubenswrapper[4820]: I0203 12:07:28.196326 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-frk8k\" (UniqueName: \"kubernetes.io/projected/38d510f8-dde9-46b4-965e-9d2726b5f0d7-kube-api-access-frk8k\") pod \"community-operators-5xqrp\" (UID: \"38d510f8-dde9-46b4-965e-9d2726b5f0d7\") " pod="openshift-marketplace/community-operators-5xqrp" Feb 03 12:07:30 crc kubenswrapper[4820]: I0203 12:07:28.196377 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/38d510f8-dde9-46b4-965e-9d2726b5f0d7-catalog-content\") pod \"community-operators-5xqrp\" (UID: \"38d510f8-dde9-46b4-965e-9d2726b5f0d7\") " pod="openshift-marketplace/community-operators-5xqrp" Feb 03 12:07:30 crc kubenswrapper[4820]: E0203 12:07:28.196555 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 12:07:28.696530744 +0000 UTC m=+166.219606608 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:30 crc kubenswrapper[4820]: I0203 12:07:28.435435 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-frk8k\" (UniqueName: \"kubernetes.io/projected/38d510f8-dde9-46b4-965e-9d2726b5f0d7-kube-api-access-frk8k\") pod \"community-operators-5xqrp\" (UID: \"38d510f8-dde9-46b4-965e-9d2726b5f0d7\") " pod="openshift-marketplace/community-operators-5xqrp" Feb 03 12:07:30 crc kubenswrapper[4820]: I0203 12:07:28.435477 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/38d510f8-dde9-46b4-965e-9d2726b5f0d7-catalog-content\") pod \"community-operators-5xqrp\" (UID: \"38d510f8-dde9-46b4-965e-9d2726b5f0d7\") " pod="openshift-marketplace/community-operators-5xqrp" Feb 03 12:07:30 crc kubenswrapper[4820]: I0203 12:07:28.435544 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qpxpv\" (UID: \"10fa9c2b-e370-400e-9e71-a4617592b411\") " pod="openshift-image-registry/image-registry-697d97f7c8-qpxpv" Feb 03 12:07:30 crc kubenswrapper[4820]: I0203 12:07:28.435596 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/38d510f8-dde9-46b4-965e-9d2726b5f0d7-utilities\") pod \"community-operators-5xqrp\" (UID: \"38d510f8-dde9-46b4-965e-9d2726b5f0d7\") " pod="openshift-marketplace/community-operators-5xqrp" Feb 03 12:07:30 crc kubenswrapper[4820]: I0203 12:07:28.436252 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/38d510f8-dde9-46b4-965e-9d2726b5f0d7-utilities\") pod \"community-operators-5xqrp\" (UID: \"38d510f8-dde9-46b4-965e-9d2726b5f0d7\") " pod="openshift-marketplace/community-operators-5xqrp" Feb 03 12:07:30 crc kubenswrapper[4820]: I0203 12:07:28.438924 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/38d510f8-dde9-46b4-965e-9d2726b5f0d7-catalog-content\") pod \"community-operators-5xqrp\" (UID: \"38d510f8-dde9-46b4-965e-9d2726b5f0d7\") " pod="openshift-marketplace/community-operators-5xqrp" Feb 03 12:07:30 crc kubenswrapper[4820]: E0203 12:07:28.439435 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 12:07:28.939418026 +0000 UTC m=+166.462493960 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qpxpv" (UID: "10fa9c2b-e370-400e-9e71-a4617592b411") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:30 crc kubenswrapper[4820]: I0203 12:07:28.444854 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k6rdz\" (UniqueName: \"kubernetes.io/projected/682f83dc-ba7f-474f-89d2-6effbcf2806b-kube-api-access-k6rdz\") pod \"certified-operators-dt8ch\" (UID: \"682f83dc-ba7f-474f-89d2-6effbcf2806b\") " pod="openshift-marketplace/certified-operators-dt8ch" Feb 03 12:07:30 crc kubenswrapper[4820]: I0203 12:07:28.445523 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-lbsmw" Feb 03 12:07:30 crc kubenswrapper[4820]: I0203 12:07:28.445841 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5xqrp"] Feb 03 12:07:30 crc kubenswrapper[4820]: I0203 12:07:28.467131 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-ntpgz"] Feb 03 12:07:30 crc kubenswrapper[4820]: I0203 12:07:28.470388 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ntpgz" Feb 03 12:07:30 crc kubenswrapper[4820]: I0203 12:07:28.609960 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 12:07:30 crc kubenswrapper[4820]: E0203 12:07:28.610330 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 12:07:29.110313253 +0000 UTC m=+166.633389117 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:30 crc kubenswrapper[4820]: I0203 12:07:28.730086 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qpxpv\" (UID: \"10fa9c2b-e370-400e-9e71-a4617592b411\") " pod="openshift-image-registry/image-registry-697d97f7c8-qpxpv" Feb 03 12:07:30 crc kubenswrapper[4820]: I0203 12:07:28.733154 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-544nm\" (UniqueName: \"kubernetes.io/projected/f2daa931-03c0-484d-9ea2-a30607c5f034-kube-api-access-544nm\") pod \"certified-operators-ntpgz\" (UID: \"f2daa931-03c0-484d-9ea2-a30607c5f034\") " pod="openshift-marketplace/certified-operators-ntpgz" Feb 03 12:07:30 crc kubenswrapper[4820]: I0203 12:07:28.733258 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f2daa931-03c0-484d-9ea2-a30607c5f034-catalog-content\") pod \"certified-operators-ntpgz\" (UID: \"f2daa931-03c0-484d-9ea2-a30607c5f034\") " pod="openshift-marketplace/certified-operators-ntpgz" Feb 03 12:07:30 crc kubenswrapper[4820]: I0203 12:07:28.733297 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f2daa931-03c0-484d-9ea2-a30607c5f034-utilities\") pod \"certified-operators-ntpgz\" (UID: \"f2daa931-03c0-484d-9ea2-a30607c5f034\") " pod="openshift-marketplace/certified-operators-ntpgz" Feb 03 12:07:30 crc kubenswrapper[4820]: E0203 12:07:28.760698 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 12:07:29.260671131 +0000 UTC m=+166.783746995 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qpxpv" (UID: "10fa9c2b-e370-400e-9e71-a4617592b411") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:30 crc kubenswrapper[4820]: I0203 12:07:28.760851 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ntpgz"] Feb 03 12:07:30 crc kubenswrapper[4820]: I0203 12:07:28.793139 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-frk8k\" (UniqueName: \"kubernetes.io/projected/38d510f8-dde9-46b4-965e-9d2726b5f0d7-kube-api-access-frk8k\") pod \"community-operators-5xqrp\" (UID: \"38d510f8-dde9-46b4-965e-9d2726b5f0d7\") " pod="openshift-marketplace/community-operators-5xqrp" Feb 03 12:07:30 crc kubenswrapper[4820]: I0203 12:07:28.809211 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"9d434cee-7d3a-4492-b5f9-071b2527ac8a","Type":"ContainerStarted","Data":"61e7c93e442de2d50a26a54c1875564dc3e8b8df6a9c0f8e6f3be1aa24a6ba9c"} Feb 03 12:07:30 crc kubenswrapper[4820]: I0203 12:07:28.824286 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-wsjsc" event={"ID":"b460558b-ba3e-4543-bb57-debddb0711e7","Type":"ContainerStarted","Data":"400d217dd0fd1c64e6399ebcca09c0142e34ee7628a7f5ffb66e33aaf7a85941"} Feb 03 12:07:30 crc kubenswrapper[4820]: I0203 12:07:28.837311 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 12:07:30 crc kubenswrapper[4820]: I0203 12:07:28.837521 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-544nm\" (UniqueName: \"kubernetes.io/projected/f2daa931-03c0-484d-9ea2-a30607c5f034-kube-api-access-544nm\") pod \"certified-operators-ntpgz\" (UID: \"f2daa931-03c0-484d-9ea2-a30607c5f034\") " pod="openshift-marketplace/certified-operators-ntpgz" Feb 03 12:07:30 crc kubenswrapper[4820]: I0203 12:07:28.837563 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f2daa931-03c0-484d-9ea2-a30607c5f034-catalog-content\") pod \"certified-operators-ntpgz\" (UID: \"f2daa931-03c0-484d-9ea2-a30607c5f034\") " pod="openshift-marketplace/certified-operators-ntpgz" Feb 03 12:07:30 crc kubenswrapper[4820]: I0203 12:07:28.837591 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f2daa931-03c0-484d-9ea2-a30607c5f034-utilities\") pod \"certified-operators-ntpgz\" (UID: \"f2daa931-03c0-484d-9ea2-a30607c5f034\") " pod="openshift-marketplace/certified-operators-ntpgz" Feb 03 12:07:30 crc kubenswrapper[4820]: I0203 12:07:28.838265 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f2daa931-03c0-484d-9ea2-a30607c5f034-utilities\") pod \"certified-operators-ntpgz\" (UID: 
\"f2daa931-03c0-484d-9ea2-a30607c5f034\") " pod="openshift-marketplace/certified-operators-ntpgz" Feb 03 12:07:30 crc kubenswrapper[4820]: E0203 12:07:28.838328 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 12:07:29.338315919 +0000 UTC m=+166.861391783 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:30 crc kubenswrapper[4820]: I0203 12:07:28.838772 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f2daa931-03c0-484d-9ea2-a30607c5f034-catalog-content\") pod \"certified-operators-ntpgz\" (UID: \"f2daa931-03c0-484d-9ea2-a30607c5f034\") " pod="openshift-marketplace/certified-operators-ntpgz" Feb 03 12:07:30 crc kubenswrapper[4820]: I0203 12:07:29.224182 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qpxpv\" (UID: \"10fa9c2b-e370-400e-9e71-a4617592b411\") " pod="openshift-image-registry/image-registry-697d97f7c8-qpxpv" Feb 03 12:07:30 crc kubenswrapper[4820]: E0203 12:07:29.224600 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 12:07:29.724587302 +0000 UTC m=+167.247663166 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qpxpv" (UID: "10fa9c2b-e370-400e-9e71-a4617592b411") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:30 crc kubenswrapper[4820]: I0203 12:07:29.225180 4820 patch_prober.go:28] interesting pod/router-default-5444994796-h22tk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 03 12:07:30 crc kubenswrapper[4820]: [-]has-synced failed: reason withheld Feb 03 12:07:30 crc kubenswrapper[4820]: [+]process-running ok Feb 03 12:07:30 crc kubenswrapper[4820]: healthz check failed Feb 03 12:07:30 crc kubenswrapper[4820]: I0203 12:07:29.225203 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h22tk" podUID="a227a161-8e53-4817-b7b2-48206c4916fb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 03 12:07:30 crc kubenswrapper[4820]: I0203 12:07:29.236494 4820 patch_prober.go:28] interesting pod/apiserver-76f77b778f-z7vmj container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="Get \"https://10.217.0.27:8443/livez\": dial tcp 10.217.0.27:8443: connect: connection refused" start-of-body= Feb 03 12:07:30 crc kubenswrapper[4820]: I0203 12:07:29.236557 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-z7vmj" podUID="4acfe638-6e10-4d68-9cfa-3d1e1d4c1052" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.27:8443/livez\": dial tcp 10.217.0.27:8443: connect: connection refused" Feb 03 12:07:30 crc kubenswrapper[4820]: I0203 12:07:29.309362 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-544nm\" (UniqueName: \"kubernetes.io/projected/f2daa931-03c0-484d-9ea2-a30607c5f034-kube-api-access-544nm\") pod \"certified-operators-ntpgz\" (UID: \"f2daa931-03c0-484d-9ea2-a30607c5f034\") " pod="openshift-marketplace/certified-operators-ntpgz" Feb 03 12:07:30 crc kubenswrapper[4820]: I0203 12:07:29.327522 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 12:07:30 crc kubenswrapper[4820]: E0203 12:07:29.328468 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 12:07:29.828452554 +0000 UTC m=+167.351528418 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:30 crc kubenswrapper[4820]: I0203 12:07:29.555432 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qpxpv\" (UID: \"10fa9c2b-e370-400e-9e71-a4617592b411\") " pod="openshift-image-registry/image-registry-697d97f7c8-qpxpv" Feb 03 12:07:30 crc kubenswrapper[4820]: E0203 12:07:29.781461 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 12:07:30.281441526 +0000 UTC m=+167.804517390 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qpxpv" (UID: "10fa9c2b-e370-400e-9e71-a4617592b411") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:30 crc kubenswrapper[4820]: I0203 12:07:29.921574 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 12:07:30 crc kubenswrapper[4820]: E0203 12:07:29.921751 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 12:07:30.421733395 +0000 UTC m=+167.944809249 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:30 crc kubenswrapper[4820]: I0203 12:07:29.922237 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qpxpv\" (UID: \"10fa9c2b-e370-400e-9e71-a4617592b411\") " pod="openshift-image-registry/image-registry-697d97f7c8-qpxpv" Feb 03 12:07:30 crc kubenswrapper[4820]: E0203 12:07:29.922819 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 12:07:30.422791091 +0000 UTC m=+167.945866965 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qpxpv" (UID: "10fa9c2b-e370-400e-9e71-a4617592b411") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:30 crc kubenswrapper[4820]: I0203 12:07:30.023790 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 12:07:30 crc kubenswrapper[4820]: E0203 12:07:30.024004 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 12:07:30.523979499 +0000 UTC m=+168.047055363 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:30 crc kubenswrapper[4820]: I0203 12:07:30.024118 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qpxpv\" (UID: \"10fa9c2b-e370-400e-9e71-a4617592b411\") " pod="openshift-image-registry/image-registry-697d97f7c8-qpxpv" Feb 03 12:07:30 crc kubenswrapper[4820]: E0203 12:07:30.024544 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 12:07:30.524526762 +0000 UTC m=+168.047602696 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qpxpv" (UID: "10fa9c2b-e370-400e-9e71-a4617592b411") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:30 crc kubenswrapper[4820]: I0203 12:07:30.163149 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 12:07:30 crc kubenswrapper[4820]: E0203 12:07:30.163575 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 12:07:30.663544911 +0000 UTC m=+168.186620765 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:30 crc kubenswrapper[4820]: I0203 12:07:30.163681 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qpxpv\" (UID: \"10fa9c2b-e370-400e-9e71-a4617592b411\") " pod="openshift-image-registry/image-registry-697d97f7c8-qpxpv" Feb 03 12:07:30 crc kubenswrapper[4820]: E0203 12:07:30.164036 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 12:07:30.664025412 +0000 UTC m=+168.187101276 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qpxpv" (UID: "10fa9c2b-e370-400e-9e71-a4617592b411") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:30 crc kubenswrapper[4820]: I0203 12:07:30.166732 4820 patch_prober.go:28] interesting pod/router-default-5444994796-h22tk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 03 12:07:30 crc kubenswrapper[4820]: [-]has-synced failed: reason withheld Feb 03 12:07:30 crc kubenswrapper[4820]: [+]process-running ok Feb 03 12:07:30 crc kubenswrapper[4820]: healthz check failed Feb 03 12:07:30 crc kubenswrapper[4820]: I0203 12:07:30.166770 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h22tk" podUID="a227a161-8e53-4817-b7b2-48206c4916fb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 03 12:07:30 crc kubenswrapper[4820]: I0203 12:07:30.298904 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 12:07:30 crc kubenswrapper[4820]: E0203 12:07:30.299687 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 12:07:30.79966218 +0000 UTC m=+168.322738054 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:30 crc kubenswrapper[4820]: I0203 12:07:30.304700 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qpxpv\" (UID: \"10fa9c2b-e370-400e-9e71-a4617592b411\") " pod="openshift-image-registry/image-registry-697d97f7c8-qpxpv" Feb 03 12:07:30 crc kubenswrapper[4820]: E0203 12:07:30.306144 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 12:07:30.806123744 +0000 UTC m=+168.329199608 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qpxpv" (UID: "10fa9c2b-e370-400e-9e71-a4617592b411") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:30 crc kubenswrapper[4820]: I0203 12:07:30.358401 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-wfbd9" podStartSLOduration=29.358379198 podStartE2EDuration="29.358379198s" podCreationTimestamp="2026-02-03 12:07:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:07:28.836277421 +0000 UTC m=+166.359353275" watchObservedRunningTime="2026-02-03 12:07:30.358379198 +0000 UTC m=+167.881455072" Feb 03 12:07:30 crc kubenswrapper[4820]: E0203 12:07:30.380483 4820 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.157s" Feb 03 12:07:30 crc kubenswrapper[4820]: I0203 12:07:30.380716 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-dvpt2"] Feb 03 12:07:30 crc kubenswrapper[4820]: I0203 12:07:30.384794 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dvpt2" Feb 03 12:07:30 crc kubenswrapper[4820]: I0203 12:07:30.511904 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 03 12:07:30 crc kubenswrapper[4820]: I0203 12:07:30.514661 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 12:07:30 crc kubenswrapper[4820]: E0203 12:07:30.515069 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 12:07:31.015049247 +0000 UTC m=+168.538125121 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:30 crc kubenswrapper[4820]: I0203 12:07:30.519710 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-dvpt2"] Feb 03 12:07:30 crc kubenswrapper[4820]: I0203 12:07:30.523020 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-bl6zg"] Feb 03 12:07:30 crc kubenswrapper[4820]: I0203 12:07:30.524511 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bl6zg" Feb 03 12:07:31 crc kubenswrapper[4820]: I0203 12:07:31.157614 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 12:07:31 crc kubenswrapper[4820]: I0203 12:07:31.157771 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6fdd485f-526a-4367-ba6d-b68246ed45a0-utilities\") pod \"redhat-marketplace-dvpt2\" (UID: \"6fdd485f-526a-4367-ba6d-b68246ed45a0\") " pod="openshift-marketplace/redhat-marketplace-dvpt2" Feb 03 12:07:31 crc kubenswrapper[4820]: I0203 12:07:31.157847 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6fdd485f-526a-4367-ba6d-b68246ed45a0-catalog-content\") pod \"redhat-marketplace-dvpt2\" (UID: \"6fdd485f-526a-4367-ba6d-b68246ed45a0\") " pod="openshift-marketplace/redhat-marketplace-dvpt2" Feb 03 12:07:31 crc kubenswrapper[4820]: I0203 12:07:31.157873 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kb8lb\" (UniqueName: \"kubernetes.io/projected/6fdd485f-526a-4367-ba6d-b68246ed45a0-kube-api-access-kb8lb\") pod \"redhat-marketplace-dvpt2\" (UID: \"6fdd485f-526a-4367-ba6d-b68246ed45a0\") " pod="openshift-marketplace/redhat-marketplace-dvpt2" Feb 03 12:07:31 crc kubenswrapper[4820]: I0203 12:07:31.157929 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef96ca29-ba6e-42c7-b992-898fb5f7f7b5-catalog-content\") pod \"redhat-marketplace-bl6zg\" (UID: \"ef96ca29-ba6e-42c7-b992-898fb5f7f7b5\") " pod="openshift-marketplace/redhat-marketplace-bl6zg" Feb 03 12:07:31 crc kubenswrapper[4820]: I0203 12:07:31.157969 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef96ca29-ba6e-42c7-b992-898fb5f7f7b5-utilities\") pod \"redhat-marketplace-bl6zg\" (UID: \"ef96ca29-ba6e-42c7-b992-898fb5f7f7b5\") " pod="openshift-marketplace/redhat-marketplace-bl6zg" Feb 03 12:07:31 crc kubenswrapper[4820]: I0203 12:07:31.158004 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fl98q\" (UniqueName: \"kubernetes.io/projected/ef96ca29-ba6e-42c7-b992-898fb5f7f7b5-kube-api-access-fl98q\") pod \"redhat-marketplace-bl6zg\" (UID: \"ef96ca29-ba6e-42c7-b992-898fb5f7f7b5\") " pod="openshift-marketplace/redhat-marketplace-bl6zg" Feb 03 12:07:31 crc kubenswrapper[4820]: E0203 12:07:31.158712 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 12:07:32.158694976 +0000 UTC m=+169.681770840 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:31 crc kubenswrapper[4820]: I0203 12:07:31.188359 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bl6zg"] Feb 03 12:07:31 crc kubenswrapper[4820]: I0203 12:07:31.199518 4820 patch_prober.go:28] interesting pod/router-default-5444994796-h22tk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 03 12:07:31 crc kubenswrapper[4820]: [-]has-synced failed: reason withheld Feb 03 12:07:31 crc kubenswrapper[4820]: [+]process-running ok Feb 03 12:07:31 crc kubenswrapper[4820]: healthz check failed Feb 03 12:07:31 crc kubenswrapper[4820]: I0203 12:07:31.201617 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h22tk" podUID="a227a161-8e53-4817-b7b2-48206c4916fb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 03 12:07:31 crc kubenswrapper[4820]: I0203 12:07:31.212291 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5vfzj" Feb 03 12:07:31 crc kubenswrapper[4820]: I0203 12:07:31.238299 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dt8ch" Feb 03 12:07:31 crc kubenswrapper[4820]: I0203 12:07:31.259097 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef96ca29-ba6e-42c7-b992-898fb5f7f7b5-utilities\") pod \"redhat-marketplace-bl6zg\" (UID: \"ef96ca29-ba6e-42c7-b992-898fb5f7f7b5\") " pod="openshift-marketplace/redhat-marketplace-bl6zg" Feb 03 12:07:31 crc kubenswrapper[4820]: I0203 12:07:31.259142 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fl98q\" (UniqueName: \"kubernetes.io/projected/ef96ca29-ba6e-42c7-b992-898fb5f7f7b5-kube-api-access-fl98q\") pod \"redhat-marketplace-bl6zg\" (UID: \"ef96ca29-ba6e-42c7-b992-898fb5f7f7b5\") " pod="openshift-marketplace/redhat-marketplace-bl6zg" Feb 03 12:07:31 crc kubenswrapper[4820]: I0203 12:07:31.259187 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6fdd485f-526a-4367-ba6d-b68246ed45a0-utilities\") pod \"redhat-marketplace-dvpt2\" (UID: \"6fdd485f-526a-4367-ba6d-b68246ed45a0\") " pod="openshift-marketplace/redhat-marketplace-dvpt2" Feb 03 12:07:31 crc kubenswrapper[4820]: I0203 12:07:31.259245 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qpxpv\" (UID: \"10fa9c2b-e370-400e-9e71-a4617592b411\") " pod="openshift-image-registry/image-registry-697d97f7c8-qpxpv" Feb 03 12:07:31 crc kubenswrapper[4820]: I0203 12:07:31.259289 4820 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6fdd485f-526a-4367-ba6d-b68246ed45a0-catalog-content\") pod \"redhat-marketplace-dvpt2\" (UID: \"6fdd485f-526a-4367-ba6d-b68246ed45a0\") " pod="openshift-marketplace/redhat-marketplace-dvpt2" Feb 03 12:07:31 crc kubenswrapper[4820]: I0203 12:07:31.259311 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kb8lb\" (UniqueName: \"kubernetes.io/projected/6fdd485f-526a-4367-ba6d-b68246ed45a0-kube-api-access-kb8lb\") pod \"redhat-marketplace-dvpt2\" (UID: \"6fdd485f-526a-4367-ba6d-b68246ed45a0\") " pod="openshift-marketplace/redhat-marketplace-dvpt2" Feb 03 12:07:31 crc kubenswrapper[4820]: I0203 12:07:31.259346 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef96ca29-ba6e-42c7-b992-898fb5f7f7b5-catalog-content\") pod \"redhat-marketplace-bl6zg\" (UID: \"ef96ca29-ba6e-42c7-b992-898fb5f7f7b5\") " pod="openshift-marketplace/redhat-marketplace-bl6zg" Feb 03 12:07:31 crc kubenswrapper[4820]: I0203 12:07:31.260624 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef96ca29-ba6e-42c7-b992-898fb5f7f7b5-utilities\") pod \"redhat-marketplace-bl6zg\" (UID: \"ef96ca29-ba6e-42c7-b992-898fb5f7f7b5\") " pod="openshift-marketplace/redhat-marketplace-bl6zg" Feb 03 12:07:31 crc kubenswrapper[4820]: I0203 12:07:31.261996 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6fdd485f-526a-4367-ba6d-b68246ed45a0-utilities\") pod \"redhat-marketplace-dvpt2\" (UID: \"6fdd485f-526a-4367-ba6d-b68246ed45a0\") " pod="openshift-marketplace/redhat-marketplace-dvpt2" Feb 03 12:07:31 crc kubenswrapper[4820]: E0203 12:07:31.262325 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 12:07:31.762304762 +0000 UTC m=+169.285380626 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qpxpv" (UID: "10fa9c2b-e370-400e-9e71-a4617592b411") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:31 crc kubenswrapper[4820]: I0203 12:07:31.262593 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6fdd485f-526a-4367-ba6d-b68246ed45a0-catalog-content\") pod \"redhat-marketplace-dvpt2\" (UID: \"6fdd485f-526a-4367-ba6d-b68246ed45a0\") " pod="openshift-marketplace/redhat-marketplace-dvpt2" Feb 03 12:07:31 crc kubenswrapper[4820]: I0203 12:07:31.263182 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef96ca29-ba6e-42c7-b992-898fb5f7f7b5-catalog-content\") pod \"redhat-marketplace-bl6zg\" (UID: \"ef96ca29-ba6e-42c7-b992-898fb5f7f7b5\") " pod="openshift-marketplace/redhat-marketplace-bl6zg" Feb 03 12:07:31 crc kubenswrapper[4820]: E0203 12:07:31.655573 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 12:07:32.155546372 +0000 UTC m=+169.678622246 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:31 crc kubenswrapper[4820]: I0203 12:07:31.659661 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qj7xr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 12:07:31 crc kubenswrapper[4820]: I0203 12:07:31.659711 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 12:07:31 crc kubenswrapper[4820]: I0203 12:07:31.419502 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 12:07:31 crc kubenswrapper[4820]: I0203 12:07:31.664911 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"image-registry-697d97f7c8-qpxpv\" (UID: \"10fa9c2b-e370-400e-9e71-a4617592b411\") " pod="openshift-image-registry/image-registry-697d97f7c8-qpxpv" Feb 03 12:07:31 crc kubenswrapper[4820]: E0203 12:07:31.665328 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 12:07:32.165316985 +0000 UTC m=+169.688392849 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qpxpv" (UID: "10fa9c2b-e370-400e-9e71-a4617592b411") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:31 crc kubenswrapper[4820]: I0203 12:07:31.777342 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 12:07:31 crc kubenswrapper[4820]: E0203 12:07:31.777801 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 12:07:32.277767371 +0000 UTC m=+169.800843245 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:31 crc kubenswrapper[4820]: I0203 12:07:31.786823 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ntpgz" Feb 03 12:07:31 crc kubenswrapper[4820]: I0203 12:07:31.805414 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5xqrp" Feb 03 12:07:31 crc kubenswrapper[4820]: I0203 12:07:31.834108 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fl98q\" (UniqueName: \"kubernetes.io/projected/ef96ca29-ba6e-42c7-b992-898fb5f7f7b5-kube-api-access-fl98q\") pod \"redhat-marketplace-bl6zg\" (UID: \"ef96ca29-ba6e-42c7-b992-898fb5f7f7b5\") " pod="openshift-marketplace/redhat-marketplace-bl6zg" Feb 03 12:07:31 crc kubenswrapper[4820]: I0203 12:07:31.848802 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kb8lb\" (UniqueName: \"kubernetes.io/projected/6fdd485f-526a-4367-ba6d-b68246ed45a0-kube-api-access-kb8lb\") pod \"redhat-marketplace-dvpt2\" (UID: \"6fdd485f-526a-4367-ba6d-b68246ed45a0\") " pod="openshift-marketplace/redhat-marketplace-dvpt2" Feb 03 12:07:31 crc kubenswrapper[4820]: I0203 12:07:31.874918 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dvpt2" Feb 03 12:07:31 crc kubenswrapper[4820]: I0203 12:07:31.886015 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qpxpv\" (UID: \"10fa9c2b-e370-400e-9e71-a4617592b411\") " pod="openshift-image-registry/image-registry-697d97f7c8-qpxpv" Feb 03 12:07:31 crc kubenswrapper[4820]: E0203 12:07:31.886758 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 12:07:32.386744305 +0000 UTC m=+169.909820169 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qpxpv" (UID: "10fa9c2b-e370-400e-9e71-a4617592b411") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:31 crc kubenswrapper[4820]: I0203 12:07:31.901841 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bl6zg" Feb 03 12:07:31 crc kubenswrapper[4820]: I0203 12:07:31.974003 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-wfbd9" Feb 03 12:07:31 crc kubenswrapper[4820]: I0203 12:07:31.975104 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-8q9q7" podStartSLOduration=147.975089527 podStartE2EDuration="2m27.975089527s" podCreationTimestamp="2026-02-03 12:05:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:07:31.974220566 +0000 UTC m=+169.497296420" watchObservedRunningTime="2026-02-03 12:07:31.975089527 +0000 UTC m=+169.498165381" Feb 03 12:07:31 crc kubenswrapper[4820]: I0203 12:07:31.982207 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-zrlrv"] Feb 03 12:07:31 crc kubenswrapper[4820]: I0203 12:07:31.983483 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-zrlrv" Feb 03 12:07:31 crc kubenswrapper[4820]: I0203 12:07:31.984612 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-pl5wr"] Feb 03 12:07:31 crc kubenswrapper[4820]: I0203 12:07:31.986531 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 03 12:07:31 crc kubenswrapper[4820]: I0203 12:07:31.987027 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 12:07:31 crc kubenswrapper[4820]: E0203 12:07:31.987449 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 12:07:32.487427641 +0000 UTC m=+170.010503515 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:31 crc kubenswrapper[4820]: I0203 12:07:31.987517 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qpxpv\" (UID: \"10fa9c2b-e370-400e-9e71-a4617592b411\") " pod="openshift-image-registry/image-registry-697d97f7c8-qpxpv" Feb 03 12:07:31 crc kubenswrapper[4820]: E0203 12:07:31.987854 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 12:07:32.487845451 +0000 UTC m=+170.010921325 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qpxpv" (UID: "10fa9c2b-e370-400e-9e71-a4617592b411") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:31 crc kubenswrapper[4820]: I0203 12:07:31.991587 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zrlrv"] Feb 03 12:07:31 crc kubenswrapper[4820]: I0203 12:07:31.991713 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-pl5wr" Feb 03 12:07:32 crc kubenswrapper[4820]: I0203 12:07:32.777531 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-7vz6k"] Feb 03 12:07:32 crc kubenswrapper[4820]: I0203 12:07:32.782275 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 12:07:32 crc kubenswrapper[4820]: I0203 12:07:32.782583 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2341b8b4-d207-4c89-8e46-a1b6b787afc8-catalog-content\") pod \"redhat-operators-pl5wr\" (UID: \"2341b8b4-d207-4c89-8e46-a1b6b787afc8\") " pod="openshift-marketplace/redhat-operators-pl5wr" Feb 03 12:07:32 crc kubenswrapper[4820]: I0203 12:07:32.782690 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/030d5842-d0b7-4e4f-ad63-58848630a1ca-catalog-content\") pod \"redhat-operators-zrlrv\" (UID: \"030d5842-d0b7-4e4f-ad63-58848630a1ca\") " pod="openshift-marketplace/redhat-operators-zrlrv" Feb 03 12:07:32 crc kubenswrapper[4820]: I0203 12:07:32.782716 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q2psp\" (UniqueName: \"kubernetes.io/projected/030d5842-d0b7-4e4f-ad63-58848630a1ca-kube-api-access-q2psp\") pod \"redhat-operators-zrlrv\" (UID: \"030d5842-d0b7-4e4f-ad63-58848630a1ca\") " pod="openshift-marketplace/redhat-operators-zrlrv" Feb 03 12:07:32 crc kubenswrapper[4820]: I0203 12:07:32.782757 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2341b8b4-d207-4c89-8e46-a1b6b787afc8-utilities\") pod \"redhat-operators-pl5wr\" (UID: \"2341b8b4-d207-4c89-8e46-a1b6b787afc8\") " pod="openshift-marketplace/redhat-operators-pl5wr" Feb 03 12:07:32 crc kubenswrapper[4820]: I0203 12:07:32.782785 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6mwf5\" (UniqueName: \"kubernetes.io/projected/2341b8b4-d207-4c89-8e46-a1b6b787afc8-kube-api-access-6mwf5\") pod \"redhat-operators-pl5wr\" (UID: \"2341b8b4-d207-4c89-8e46-a1b6b787afc8\") " pod="openshift-marketplace/redhat-operators-pl5wr" Feb 03 12:07:32 crc kubenswrapper[4820]: I0203 12:07:32.782834 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/030d5842-d0b7-4e4f-ad63-58848630a1ca-utilities\") pod \"redhat-operators-zrlrv\" (UID: \"030d5842-d0b7-4e4f-ad63-58848630a1ca\") " pod="openshift-marketplace/redhat-operators-zrlrv" Feb 03 12:07:32 crc kubenswrapper[4820]: E0203 12:07:32.783115 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 12:07:33.783094438 +0000 UTC m=+171.306170302 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:32 crc kubenswrapper[4820]: I0203 12:07:32.984600 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 12:07:33 crc kubenswrapper[4820]: E0203 12:07:33.171777 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 12:07:33.671724267 +0000 UTC m=+171.194800131 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:33 crc kubenswrapper[4820]: I0203 12:07:33.174213 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2341b8b4-d207-4c89-8e46-a1b6b787afc8-catalog-content\") pod \"redhat-operators-pl5wr\" (UID: \"2341b8b4-d207-4c89-8e46-a1b6b787afc8\") " pod="openshift-marketplace/redhat-operators-pl5wr" Feb 03 12:07:33 crc kubenswrapper[4820]: I0203 12:07:33.190783 4820 patch_prober.go:28] interesting pod/router-default-5444994796-h22tk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 03 12:07:33 crc kubenswrapper[4820]: [-]has-synced failed: reason withheld Feb 03 12:07:33 crc kubenswrapper[4820]: [+]process-running ok Feb 03 12:07:33 crc kubenswrapper[4820]: healthz check failed Feb 03 12:07:33 crc kubenswrapper[4820]: I0203 12:07:33.190834 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h22tk" podUID="a227a161-8e53-4817-b7b2-48206c4916fb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 03 12:07:33 crc kubenswrapper[4820]: I0203 12:07:32.984962 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2341b8b4-d207-4c89-8e46-a1b6b787afc8-catalog-content\") pod \"redhat-operators-pl5wr\" (UID: \"2341b8b4-d207-4c89-8e46-a1b6b787afc8\") " pod="openshift-marketplace/redhat-operators-pl5wr" Feb 03 12:07:33 crc kubenswrapper[4820]: I0203 12:07:33.214711 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/030d5842-d0b7-4e4f-ad63-58848630a1ca-catalog-content\") 
pod \"redhat-operators-zrlrv\" (UID: \"030d5842-d0b7-4e4f-ad63-58848630a1ca\") " pod="openshift-marketplace/redhat-operators-zrlrv" Feb 03 12:07:33 crc kubenswrapper[4820]: I0203 12:07:33.214742 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q2psp\" (UniqueName: \"kubernetes.io/projected/030d5842-d0b7-4e4f-ad63-58848630a1ca-kube-api-access-q2psp\") pod \"redhat-operators-zrlrv\" (UID: \"030d5842-d0b7-4e4f-ad63-58848630a1ca\") " pod="openshift-marketplace/redhat-operators-zrlrv" Feb 03 12:07:33 crc kubenswrapper[4820]: I0203 12:07:33.214804 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2341b8b4-d207-4c89-8e46-a1b6b787afc8-utilities\") pod \"redhat-operators-pl5wr\" (UID: \"2341b8b4-d207-4c89-8e46-a1b6b787afc8\") " pod="openshift-marketplace/redhat-operators-pl5wr" Feb 03 12:07:33 crc kubenswrapper[4820]: I0203 12:07:33.214828 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6mwf5\" (UniqueName: \"kubernetes.io/projected/2341b8b4-d207-4c89-8e46-a1b6b787afc8-kube-api-access-6mwf5\") pod \"redhat-operators-pl5wr\" (UID: \"2341b8b4-d207-4c89-8e46-a1b6b787afc8\") " pod="openshift-marketplace/redhat-operators-pl5wr" Feb 03 12:07:33 crc kubenswrapper[4820]: I0203 12:07:33.214921 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/030d5842-d0b7-4e4f-ad63-58848630a1ca-utilities\") pod \"redhat-operators-zrlrv\" (UID: \"030d5842-d0b7-4e4f-ad63-58848630a1ca\") " pod="openshift-marketplace/redhat-operators-zrlrv" Feb 03 12:07:33 crc kubenswrapper[4820]: I0203 12:07:33.214957 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qpxpv\" (UID: \"10fa9c2b-e370-400e-9e71-a4617592b411\") " pod="openshift-image-registry/image-registry-697d97f7c8-qpxpv" Feb 03 12:07:33 crc kubenswrapper[4820]: E0203 12:07:33.215279 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 12:07:33.715267025 +0000 UTC m=+171.238342889 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qpxpv" (UID: "10fa9c2b-e370-400e-9e71-a4617592b411") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:33 crc kubenswrapper[4820]: I0203 12:07:33.215619 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/030d5842-d0b7-4e4f-ad63-58848630a1ca-catalog-content\") pod \"redhat-operators-zrlrv\" (UID: \"030d5842-d0b7-4e4f-ad63-58848630a1ca\") " pod="openshift-marketplace/redhat-operators-zrlrv" Feb 03 12:07:33 crc kubenswrapper[4820]: I0203 12:07:33.216073 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2341b8b4-d207-4c89-8e46-a1b6b787afc8-utilities\") pod \"redhat-operators-pl5wr\" (UID: \"2341b8b4-d207-4c89-8e46-a1b6b787afc8\") " pod="openshift-marketplace/redhat-operators-pl5wr" Feb 03 12:07:33 crc kubenswrapper[4820]: I0203 12:07:33.216622 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/030d5842-d0b7-4e4f-ad63-58848630a1ca-utilities\") pod \"redhat-operators-zrlrv\" (UID: \"030d5842-d0b7-4e4f-ad63-58848630a1ca\") " pod="openshift-marketplace/redhat-operators-zrlrv" Feb 03 12:07:33 crc kubenswrapper[4820]: I0203 12:07:33.228503 4820 patch_prober.go:28] interesting pod/router-default-5444994796-h22tk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 03 12:07:33 crc kubenswrapper[4820]: [-]has-synced failed: reason withheld Feb 03 12:07:33 crc kubenswrapper[4820]: [+]process-running ok Feb 03 12:07:33 crc kubenswrapper[4820]: healthz check failed Feb 03 12:07:33 crc kubenswrapper[4820]: I0203 12:07:33.228587 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h22tk" podUID="a227a161-8e53-4817-b7b2-48206c4916fb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 03 12:07:33 crc kubenswrapper[4820]: I0203 12:07:33.334594 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 12:07:33 crc kubenswrapper[4820]: E0203 12:07:33.335128 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 12:07:33.835109856 +0000 UTC m=+171.358185720 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:33 crc kubenswrapper[4820]: I0203 12:07:33.556758 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qpxpv\" (UID: \"10fa9c2b-e370-400e-9e71-a4617592b411\") " pod="openshift-image-registry/image-registry-697d97f7c8-qpxpv" Feb 03 12:07:33 crc kubenswrapper[4820]: E0203 12:07:33.558451 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 12:07:34.058438662 +0000 UTC m=+171.581514526 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qpxpv" (UID: "10fa9c2b-e370-400e-9e71-a4617592b411") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:33 crc kubenswrapper[4820]: I0203 12:07:33.647141 4820 patch_prober.go:28] interesting pod/downloads-7954f5f757-lnc22 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Feb 03 12:07:33 crc kubenswrapper[4820]: I0203 12:07:33.647230 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-lnc22" podUID="876c5dc3-b775-45cc-94b6-4339735e9975" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Feb 03 12:07:33 crc kubenswrapper[4820]: I0203 12:07:33.647396 4820 patch_prober.go:28] interesting pod/downloads-7954f5f757-lnc22 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Feb 03 12:07:33 crc kubenswrapper[4820]: I0203 12:07:33.647439 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-lnc22" podUID="876c5dc3-b775-45cc-94b6-4339735e9975" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Feb 03 12:07:33 crc kubenswrapper[4820]: I0203 12:07:33.655710 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q2psp\" (UniqueName: \"kubernetes.io/projected/030d5842-d0b7-4e4f-ad63-58848630a1ca-kube-api-access-q2psp\") pod \"redhat-operators-zrlrv\" (UID: \"030d5842-d0b7-4e4f-ad63-58848630a1ca\") " pod="openshift-marketplace/redhat-operators-zrlrv" Feb 03 12:07:33 crc kubenswrapper[4820]: 
I0203 12:07:33.690360 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zrlrv" Feb 03 12:07:33 crc kubenswrapper[4820]: I0203 12:07:33.724382 4820 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Feb 03 12:07:33 crc kubenswrapper[4820]: I0203 12:07:33.733292 4820 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console/downloads-7954f5f757-lnc22" Feb 03 12:07:33 crc kubenswrapper[4820]: I0203 12:07:33.733341 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pl5wr"] Feb 03 12:07:33 crc kubenswrapper[4820]: I0203 12:07:33.734024 4820 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="download-server" containerStatusID={"Type":"cri-o","ID":"6c4488126847dd3de8d2a9eb16836456561cd827e52a95954ae608cf60a52482"} pod="openshift-console/downloads-7954f5f757-lnc22" containerMessage="Container download-server failed liveness probe, will be restarted" Feb 03 12:07:33 crc kubenswrapper[4820]: I0203 12:07:33.734051 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/downloads-7954f5f757-lnc22" podUID="876c5dc3-b775-45cc-94b6-4339735e9975" containerName="download-server" containerID="cri-o://6c4488126847dd3de8d2a9eb16836456561cd827e52a95954ae608cf60a52482" gracePeriod=2 Feb 03 12:07:33 crc kubenswrapper[4820]: I0203 12:07:33.744255 4820 patch_prober.go:28] interesting pod/downloads-7954f5f757-lnc22 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Feb 03 12:07:33 crc kubenswrapper[4820]: I0203 12:07:33.744299 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-lnc22" podUID="876c5dc3-b775-45cc-94b6-4339735e9975" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Feb 03 12:07:33 crc kubenswrapper[4820]: I0203 12:07:33.758650 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 12:07:33 crc kubenswrapper[4820]: E0203 12:07:33.759087 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 12:07:34.259064857 +0000 UTC m=+171.782140741 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:33 crc kubenswrapper[4820]: I0203 12:07:33.941469 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qpxpv\" (UID: \"10fa9c2b-e370-400e-9e71-a4617592b411\") " pod="openshift-image-registry/image-registry-697d97f7c8-qpxpv" Feb 03 12:07:33 crc kubenswrapper[4820]: E0203 12:07:33.941849 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 12:07:34.441834967 +0000 UTC m=+171.964910831 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qpxpv" (UID: "10fa9c2b-e370-400e-9e71-a4617592b411") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:34 crc kubenswrapper[4820]: I0203 12:07:34.139352 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 12:07:34 crc kubenswrapper[4820]: E0203 12:07:34.140222 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 12:07:34.640204818 +0000 UTC m=+172.163280682 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:34 crc kubenswrapper[4820]: I0203 12:07:34.274301 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qpxpv\" (UID: \"10fa9c2b-e370-400e-9e71-a4617592b411\") " pod="openshift-image-registry/image-registry-697d97f7c8-qpxpv" Feb 03 12:07:34 crc kubenswrapper[4820]: E0203 12:07:34.274659 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 12:07:34.774645268 +0000 UTC m=+172.297721132 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qpxpv" (UID: "10fa9c2b-e370-400e-9e71-a4617592b411") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:34 crc kubenswrapper[4820]: I0203 12:07:34.276368 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6mwf5\" (UniqueName: \"kubernetes.io/projected/2341b8b4-d207-4c89-8e46-a1b6b787afc8-kube-api-access-6mwf5\") pod \"redhat-operators-pl5wr\" (UID: \"2341b8b4-d207-4c89-8e46-a1b6b787afc8\") " pod="openshift-marketplace/redhat-operators-pl5wr" Feb 03 12:07:34 crc kubenswrapper[4820]: I0203 12:07:34.276857 4820 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-02-03T12:07:33.724409722Z","Handler":null,"Name":""} Feb 03 12:07:34 crc kubenswrapper[4820]: I0203 12:07:34.280967 4820 patch_prober.go:28] interesting pod/router-default-5444994796-h22tk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 03 12:07:34 crc kubenswrapper[4820]: [-]has-synced failed: reason withheld Feb 03 12:07:34 crc kubenswrapper[4820]: [+]process-running ok Feb 03 12:07:34 crc kubenswrapper[4820]: healthz check failed Feb 03 12:07:34 crc kubenswrapper[4820]: I0203 12:07:34.281051 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h22tk" podUID="a227a161-8e53-4817-b7b2-48206c4916fb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 03 12:07:34 crc kubenswrapper[4820]: I0203 12:07:34.753420 4820 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-cs8dg container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.9:8443/healthz\": 
net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 03 12:07:34 crc kubenswrapper[4820]: I0203 12:07:34.753466 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cs8dg" podUID="05797a22-690b-4b36-8b4e-5dcc739f7cad" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.9:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 03 12:07:34 crc kubenswrapper[4820]: I0203 12:07:34.755829 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 12:07:34 crc kubenswrapper[4820]: E0203 12:07:34.756157 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 12:07:35.256142568 +0000 UTC m=+172.779218432 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:34 crc kubenswrapper[4820]: I0203 12:07:34.757594 4820 patch_prober.go:28] interesting pod/console-f9d7485db-tw2nt container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.15:8443/health\": dial tcp 10.217.0.15:8443: connect: connection refused" start-of-body= Feb 03 12:07:34 crc kubenswrapper[4820]: I0203 12:07:34.757792 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-tw2nt" podUID="b06753a3-652a-4acc-b294-3ccaa5b0cb99" containerName="console" probeResult="failure" output="Get \"https://10.217.0.15:8443/health\": dial tcp 10.217.0.15:8443: connect: connection refused" Feb 03 12:07:34 crc kubenswrapper[4820]: I0203 12:07:34.757967 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-9w662" Feb 03 12:07:34 crc kubenswrapper[4820]: I0203 12:07:34.886958 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qpxpv\" (UID: \"10fa9c2b-e370-400e-9e71-a4617592b411\") " pod="openshift-image-registry/image-registry-697d97f7c8-qpxpv" Feb 03 12:07:34 crc kubenswrapper[4820]: E0203 12:07:34.888578 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 12:07:35.388566471 +0000 UTC m=+172.911642335 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qpxpv" (UID: "10fa9c2b-e370-400e-9e71-a4617592b411") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:34 crc kubenswrapper[4820]: I0203 12:07:34.931148 4820 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Feb 03 12:07:34 crc kubenswrapper[4820]: I0203 12:07:34.931204 4820 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Feb 03 12:07:34 crc kubenswrapper[4820]: I0203 12:07:34.939722 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-4gskq" Feb 03 12:07:35 crc kubenswrapper[4820]: I0203 12:07:35.047113 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 12:07:35 crc kubenswrapper[4820]: I0203 12:07:35.128792 4820 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-lbsmw container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.10:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 03 12:07:35 crc kubenswrapper[4820]: I0203 12:07:35.128841 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-lbsmw" podUID="c93c42c7-c9ff-42cc-b604-e36f7a063fcf" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.10:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 03 12:07:35 crc kubenswrapper[4820]: I0203 12:07:35.128912 4820 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-lbsmw container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.10:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 03 12:07:35 crc kubenswrapper[4820]: I0203 12:07:35.128926 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-lbsmw" podUID="c93c42c7-c9ff-42cc-b604-e36f7a063fcf" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.10:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 03 12:07:35 crc kubenswrapper[4820]: I0203 12:07:35.132725 4820 patch_prober.go:28] interesting pod/router-default-5444994796-h22tk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 03 12:07:35 crc kubenswrapper[4820]: 
[-]has-synced failed: reason withheld Feb 03 12:07:35 crc kubenswrapper[4820]: [+]process-running ok Feb 03 12:07:35 crc kubenswrapper[4820]: healthz check failed Feb 03 12:07:35 crc kubenswrapper[4820]: I0203 12:07:35.132768 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h22tk" podUID="a227a161-8e53-4817-b7b2-48206c4916fb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 03 12:07:35 crc kubenswrapper[4820]: I0203 12:07:35.137005 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-k7tp7" podStartSLOduration=152.136980463 podStartE2EDuration="2m32.136980463s" podCreationTimestamp="2026-02-03 12:05:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:07:35.132537727 +0000 UTC m=+172.655613611" watchObservedRunningTime="2026-02-03 12:07:35.136980463 +0000 UTC m=+172.660056327" Feb 03 12:07:35 crc kubenswrapper[4820]: I0203 12:07:35.764554 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 03 12:07:35 crc kubenswrapper[4820]: I0203 12:07:35.764927 4820 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-b6ghj container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.38:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 03 12:07:35 crc kubenswrapper[4820]: I0203 12:07:35.765036 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-b6ghj" podUID="31277b5e-7869-4612-ba40-dcd0a37153fb" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.38:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 03 12:07:36 crc kubenswrapper[4820]: I0203 12:07:35.911656 4820 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-b6ghj container/packageserver namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.38:5443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 03 12:07:36 crc kubenswrapper[4820]: I0203 12:07:35.911781 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-b6ghj" podUID="31277b5e-7869-4612-ba40-dcd0a37153fb" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.38:5443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 03 12:07:36 crc kubenswrapper[4820]: I0203 12:07:36.054608 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qpxpv\" 
(UID: \"10fa9c2b-e370-400e-9e71-a4617592b411\") " pod="openshift-image-registry/image-registry-697d97f7c8-qpxpv" Feb 03 12:07:36 crc kubenswrapper[4820]: I0203 12:07:36.222065 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pl5wr" Feb 03 12:07:36 crc kubenswrapper[4820]: I0203 12:07:36.309767 4820 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 03 12:07:36 crc kubenswrapper[4820]: I0203 12:07:36.309803 4820 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qpxpv\" (UID: \"10fa9c2b-e370-400e-9e71-a4617592b411\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-qpxpv" Feb 03 12:07:36 crc kubenswrapper[4820]: I0203 12:07:36.361573 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cs8dg" Feb 03 12:07:36 crc kubenswrapper[4820]: I0203 12:07:36.363787 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Feb 03 12:07:36 crc kubenswrapper[4820]: E0203 12:07:36.364517 4820 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.222s" Feb 03 12:07:36 crc kubenswrapper[4820]: I0203 12:07:36.364566 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-lt75x"] Feb 03 12:07:36 crc kubenswrapper[4820]: I0203 12:07:36.364788 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-lt75x" podUID="1cd0f80e-f7d5-4cd7-a0de-f1b95c827af8" containerName="controller-manager" containerID="cri-o://f1a30bc906bf3cbb26a87046812f2acf49af38fa613535b15d6e11fd8304f36e" gracePeriod=30 Feb 03 12:07:36 crc kubenswrapper[4820]: I0203 12:07:36.393433 4820 patch_prober.go:28] interesting pod/router-default-5444994796-h22tk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 03 12:07:36 crc kubenswrapper[4820]: [-]has-synced failed: reason withheld Feb 03 12:07:36 crc kubenswrapper[4820]: [+]process-running ok Feb 03 12:07:36 crc kubenswrapper[4820]: healthz check failed Feb 03 12:07:36 crc kubenswrapper[4820]: I0203 12:07:36.393487 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h22tk" podUID="a227a161-8e53-4817-b7b2-48206c4916fb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 03 12:07:36 crc kubenswrapper[4820]: I0203 12:07:36.403480 4820 generic.go:334] "Generic (PLEG): container finished" podID="b9d628ea-493d-4b0c-b4a2-194cef62a08e" containerID="5ac08a8a154c895b89f3ef82fe7d81c2f3220d2db7f95ad75058c6645d9c383f" exitCode=0 Feb 03 12:07:36 crc kubenswrapper[4820]: I0203 12:07:36.403540 4820 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29502000-hcscr" event={"ID":"b9d628ea-493d-4b0c-b4a2-194cef62a08e","Type":"ContainerDied","Data":"5ac08a8a154c895b89f3ef82fe7d81c2f3220d2db7f95ad75058c6645d9c383f"} Feb 03 12:07:36 crc kubenswrapper[4820]: I0203 12:07:36.588032 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/05797a22-690b-4b36-8b4e-5dcc739f7cad-serving-cert\") pod \"05797a22-690b-4b36-8b4e-5dcc739f7cad\" (UID: \"05797a22-690b-4b36-8b4e-5dcc739f7cad\") " Feb 03 12:07:36 crc kubenswrapper[4820]: I0203 12:07:36.588179 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/05797a22-690b-4b36-8b4e-5dcc739f7cad-config\") pod \"05797a22-690b-4b36-8b4e-5dcc739f7cad\" (UID: \"05797a22-690b-4b36-8b4e-5dcc739f7cad\") " Feb 03 12:07:36 crc kubenswrapper[4820]: I0203 12:07:36.588373 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/05797a22-690b-4b36-8b4e-5dcc739f7cad-client-ca\") pod \"05797a22-690b-4b36-8b4e-5dcc739f7cad\" (UID: \"05797a22-690b-4b36-8b4e-5dcc739f7cad\") " Feb 03 12:07:36 crc kubenswrapper[4820]: I0203 12:07:36.588497 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gnw29\" (UniqueName: \"kubernetes.io/projected/05797a22-690b-4b36-8b4e-5dcc739f7cad-kube-api-access-gnw29\") pod \"05797a22-690b-4b36-8b4e-5dcc739f7cad\" (UID: \"05797a22-690b-4b36-8b4e-5dcc739f7cad\") " Feb 03 12:07:36 crc kubenswrapper[4820]: I0203 12:07:36.593585 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/05797a22-690b-4b36-8b4e-5dcc739f7cad-config" (OuterVolumeSpecName: "config") pod "05797a22-690b-4b36-8b4e-5dcc739f7cad" (UID: "05797a22-690b-4b36-8b4e-5dcc739f7cad"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:07:36 crc kubenswrapper[4820]: I0203 12:07:36.594584 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/05797a22-690b-4b36-8b4e-5dcc739f7cad-client-ca" (OuterVolumeSpecName: "client-ca") pod "05797a22-690b-4b36-8b4e-5dcc739f7cad" (UID: "05797a22-690b-4b36-8b4e-5dcc739f7cad"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:07:36 crc kubenswrapper[4820]: I0203 12:07:36.600355 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-wsjsc" event={"ID":"b460558b-ba3e-4543-bb57-debddb0711e7","Type":"ContainerStarted","Data":"44791f0784808e851cd771cc8d4597fd905c9935769f218338e9d79ebb838628"} Feb 03 12:07:36 crc kubenswrapper[4820]: I0203 12:07:36.607080 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cs8dg" event={"ID":"05797a22-690b-4b36-8b4e-5dcc739f7cad","Type":"ContainerDied","Data":"7182fedced2758ecd7ecb9e7da64193d528b8761d3f257c54f2fdc1bd7f1fb6d"} Feb 03 12:07:36 crc kubenswrapper[4820]: I0203 12:07:36.607423 4820 scope.go:117] "RemoveContainer" containerID="8bf43d2afcda5b91937865aa4106f9fd21e0f58f105c00dd9695023a0e8ea599" Feb 03 12:07:36 crc kubenswrapper[4820]: I0203 12:07:36.607636 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cs8dg" Feb 03 12:07:36 crc kubenswrapper[4820]: I0203 12:07:36.692741 4820 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/05797a22-690b-4b36-8b4e-5dcc739f7cad-config\") on node \"crc\" DevicePath \"\"" Feb 03 12:07:36 crc kubenswrapper[4820]: I0203 12:07:36.692786 4820 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/05797a22-690b-4b36-8b4e-5dcc739f7cad-client-ca\") on node \"crc\" DevicePath \"\"" Feb 03 12:07:37 crc kubenswrapper[4820]: I0203 12:07:37.019937 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05797a22-690b-4b36-8b4e-5dcc739f7cad-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "05797a22-690b-4b36-8b4e-5dcc739f7cad" (UID: "05797a22-690b-4b36-8b4e-5dcc739f7cad"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:07:37 crc kubenswrapper[4820]: I0203 12:07:37.042754 4820 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/05797a22-690b-4b36-8b4e-5dcc739f7cad-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 03 12:07:37 crc kubenswrapper[4820]: I0203 12:07:37.138132 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qpxpv\" (UID: \"10fa9c2b-e370-400e-9e71-a4617592b411\") " pod="openshift-image-registry/image-registry-697d97f7c8-qpxpv" Feb 03 12:07:37 crc kubenswrapper[4820]: I0203 12:07:37.175881 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/05797a22-690b-4b36-8b4e-5dcc739f7cad-kube-api-access-gnw29" (OuterVolumeSpecName: "kube-api-access-gnw29") pod "05797a22-690b-4b36-8b4e-5dcc739f7cad" (UID: "05797a22-690b-4b36-8b4e-5dcc739f7cad"). InnerVolumeSpecName "kube-api-access-gnw29". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:07:37 crc kubenswrapper[4820]: I0203 12:07:37.925366 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-qpxpv" Feb 03 12:07:37 crc kubenswrapper[4820]: I0203 12:07:37.937272 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gnw29\" (UniqueName: \"kubernetes.io/projected/05797a22-690b-4b36-8b4e-5dcc739f7cad-kube-api-access-gnw29\") on node \"crc\" DevicePath \"\"" Feb 03 12:07:37 crc kubenswrapper[4820]: I0203 12:07:37.956129 4820 patch_prober.go:28] interesting pod/router-default-5444994796-h22tk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 03 12:07:37 crc kubenswrapper[4820]: [-]has-synced failed: reason withheld Feb 03 12:07:37 crc kubenswrapper[4820]: [+]process-running ok Feb 03 12:07:37 crc kubenswrapper[4820]: healthz check failed Feb 03 12:07:37 crc kubenswrapper[4820]: I0203 12:07:37.956199 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h22tk" podUID="a227a161-8e53-4817-b7b2-48206c4916fb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 03 12:07:38 crc kubenswrapper[4820]: I0203 12:07:38.018852 4820 generic.go:334] "Generic (PLEG): container finished" podID="1cd0f80e-f7d5-4cd7-a0de-f1b95c827af8" containerID="f1a30bc906bf3cbb26a87046812f2acf49af38fa613535b15d6e11fd8304f36e" exitCode=0 Feb 03 12:07:38 crc kubenswrapper[4820]: I0203 12:07:38.018967 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-lt75x" event={"ID":"1cd0f80e-f7d5-4cd7-a0de-f1b95c827af8","Type":"ContainerDied","Data":"f1a30bc906bf3cbb26a87046812f2acf49af38fa613535b15d6e11fd8304f36e"} Feb 03 12:07:39 crc kubenswrapper[4820]: I0203 12:07:38.274004 4820 generic.go:334] "Generic (PLEG): container finished" podID="876c5dc3-b775-45cc-94b6-4339735e9975" containerID="6c4488126847dd3de8d2a9eb16836456561cd827e52a95954ae608cf60a52482" exitCode=0 Feb 03 12:07:39 crc kubenswrapper[4820]: I0203 12:07:38.274061 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-lnc22" event={"ID":"876c5dc3-b775-45cc-94b6-4339735e9975","Type":"ContainerDied","Data":"6c4488126847dd3de8d2a9eb16836456561cd827e52a95954ae608cf60a52482"} Feb 03 12:07:39 crc kubenswrapper[4820]: I0203 12:07:38.289100 4820 patch_prober.go:28] interesting pod/router-default-5444994796-h22tk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 03 12:07:39 crc kubenswrapper[4820]: [-]has-synced failed: reason withheld Feb 03 12:07:39 crc kubenswrapper[4820]: [+]process-running ok Feb 03 12:07:39 crc kubenswrapper[4820]: healthz check failed Feb 03 12:07:39 crc kubenswrapper[4820]: I0203 12:07:38.289147 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h22tk" podUID="a227a161-8e53-4817-b7b2-48206c4916fb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 03 12:07:39 crc kubenswrapper[4820]: I0203 12:07:38.309218 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"7c9dee9e-4581-4a11-8f12-27a64b26cbf9","Type":"ContainerStarted","Data":"02788b1b10c21af4388272e63b0cdfbaede38bad546c400cb9cac405fdb8aadc"} Feb 03 12:07:39 crc 
kubenswrapper[4820]: I0203 12:07:38.376225 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-7vz6k" event={"ID":"6351e457-e601-4889-853c-560646bc4b43","Type":"ContainerStarted","Data":"02f2b20cae5451defe6446a89f1d385ebced48be14f31ef59fce17ae325b8b15"} Feb 03 12:07:39 crc kubenswrapper[4820]: I0203 12:07:39.101317 4820 patch_prober.go:28] interesting pod/router-default-5444994796-h22tk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 03 12:07:39 crc kubenswrapper[4820]: [-]has-synced failed: reason withheld Feb 03 12:07:39 crc kubenswrapper[4820]: [+]process-running ok Feb 03 12:07:39 crc kubenswrapper[4820]: healthz check failed Feb 03 12:07:39 crc kubenswrapper[4820]: I0203 12:07:39.101548 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h22tk" podUID="a227a161-8e53-4817-b7b2-48206c4916fb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 03 12:07:39 crc kubenswrapper[4820]: I0203 12:07:39.271662 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=26.27163718 podStartE2EDuration="26.27163718s" podCreationTimestamp="2026-02-03 12:07:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:07:39.243166603 +0000 UTC m=+176.766242477" watchObservedRunningTime="2026-02-03 12:07:39.27163718 +0000 UTC m=+176.794713054" Feb 03 12:07:39 crc kubenswrapper[4820]: I0203 12:07:39.382441 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5df5f5bb-qqlt7"] Feb 03 12:07:39 crc kubenswrapper[4820]: E0203 12:07:39.382672 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05797a22-690b-4b36-8b4e-5dcc739f7cad" containerName="route-controller-manager" Feb 03 12:07:39 crc kubenswrapper[4820]: I0203 12:07:39.382684 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="05797a22-690b-4b36-8b4e-5dcc739f7cad" containerName="route-controller-manager" Feb 03 12:07:39 crc kubenswrapper[4820]: I0203 12:07:39.382840 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="05797a22-690b-4b36-8b4e-5dcc739f7cad" containerName="route-controller-manager" Feb 03 12:07:39 crc kubenswrapper[4820]: I0203 12:07:39.383258 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5df5f5bb-qqlt7"] Feb 03 12:07:39 crc kubenswrapper[4820]: I0203 12:07:39.383277 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ntpgz"] Feb 03 12:07:39 crc kubenswrapper[4820]: I0203 12:07:39.383294 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5xqrp"] Feb 03 12:07:39 crc kubenswrapper[4820]: I0203 12:07:39.383303 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5vfzj"] Feb 03 12:07:39 crc kubenswrapper[4820]: I0203 12:07:39.383390 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5df5f5bb-qqlt7" Feb 03 12:07:39 crc kubenswrapper[4820]: I0203 12:07:39.390725 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c4859338-3c97-453e-8ef8-db6f786c1172-config\") pod \"route-controller-manager-5df5f5bb-qqlt7\" (UID: \"c4859338-3c97-453e-8ef8-db6f786c1172\") " pod="openshift-route-controller-manager/route-controller-manager-5df5f5bb-qqlt7" Feb 03 12:07:39 crc kubenswrapper[4820]: I0203 12:07:39.390758 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c4859338-3c97-453e-8ef8-db6f786c1172-serving-cert\") pod \"route-controller-manager-5df5f5bb-qqlt7\" (UID: \"c4859338-3c97-453e-8ef8-db6f786c1172\") " pod="openshift-route-controller-manager/route-controller-manager-5df5f5bb-qqlt7" Feb 03 12:07:39 crc kubenswrapper[4820]: I0203 12:07:39.390808 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c4859338-3c97-453e-8ef8-db6f786c1172-client-ca\") pod \"route-controller-manager-5df5f5bb-qqlt7\" (UID: \"c4859338-3c97-453e-8ef8-db6f786c1172\") " pod="openshift-route-controller-manager/route-controller-manager-5df5f5bb-qqlt7" Feb 03 12:07:39 crc kubenswrapper[4820]: I0203 12:07:39.390876 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g4tvb\" (UniqueName: \"kubernetes.io/projected/c4859338-3c97-453e-8ef8-db6f786c1172-kube-api-access-g4tvb\") pod \"route-controller-manager-5df5f5bb-qqlt7\" (UID: \"c4859338-3c97-453e-8ef8-db6f786c1172\") " pod="openshift-route-controller-manager/route-controller-manager-5df5f5bb-qqlt7" Feb 03 12:07:39 crc kubenswrapper[4820]: I0203 12:07:39.392294 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 03 12:07:39 crc kubenswrapper[4820]: I0203 12:07:39.393938 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 03 12:07:39 crc kubenswrapper[4820]: I0203 12:07:39.394132 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 03 12:07:39 crc kubenswrapper[4820]: I0203 12:07:39.394442 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 03 12:07:39 crc kubenswrapper[4820]: I0203 12:07:39.394542 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 03 12:07:39 crc kubenswrapper[4820]: I0203 12:07:39.394659 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 03 12:07:39 crc kubenswrapper[4820]: I0203 12:07:39.441160 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dt8ch"] Feb 03 12:07:39 crc kubenswrapper[4820]: I0203 12:07:39.461178 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5vfzj" event={"ID":"829fef9f-938d-4d61-9584-bf061063c952","Type":"ContainerStarted","Data":"38dccf4ebc2636cc29ae4e2a18f71dd56137163ecf92d1e6d034a31a54c75c28"} Feb 03 
12:07:39 crc kubenswrapper[4820]: I0203 12:07:39.476030 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ntpgz" event={"ID":"f2daa931-03c0-484d-9ea2-a30607c5f034","Type":"ContainerStarted","Data":"c49b3739b18f0b02c507c5a3fb43a838e2d5a165338a7f5ee1a9cdeaf074f967"} Feb 03 12:07:39 crc kubenswrapper[4820]: I0203 12:07:39.489602 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5xqrp" event={"ID":"38d510f8-dde9-46b4-965e-9d2726b5f0d7","Type":"ContainerStarted","Data":"7e80acca71139daf1031234ae3056308522e872ea642047f40c8f7346edbdfe9"} Feb 03 12:07:39 crc kubenswrapper[4820]: I0203 12:07:39.492023 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c4859338-3c97-453e-8ef8-db6f786c1172-client-ca\") pod \"route-controller-manager-5df5f5bb-qqlt7\" (UID: \"c4859338-3c97-453e-8ef8-db6f786c1172\") " pod="openshift-route-controller-manager/route-controller-manager-5df5f5bb-qqlt7" Feb 03 12:07:39 crc kubenswrapper[4820]: I0203 12:07:39.492066 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g4tvb\" (UniqueName: \"kubernetes.io/projected/c4859338-3c97-453e-8ef8-db6f786c1172-kube-api-access-g4tvb\") pod \"route-controller-manager-5df5f5bb-qqlt7\" (UID: \"c4859338-3c97-453e-8ef8-db6f786c1172\") " pod="openshift-route-controller-manager/route-controller-manager-5df5f5bb-qqlt7" Feb 03 12:07:39 crc kubenswrapper[4820]: I0203 12:07:39.492121 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c4859338-3c97-453e-8ef8-db6f786c1172-config\") pod \"route-controller-manager-5df5f5bb-qqlt7\" (UID: \"c4859338-3c97-453e-8ef8-db6f786c1172\") " pod="openshift-route-controller-manager/route-controller-manager-5df5f5bb-qqlt7" Feb 03 12:07:39 crc kubenswrapper[4820]: I0203 12:07:39.492145 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c4859338-3c97-453e-8ef8-db6f786c1172-serving-cert\") pod \"route-controller-manager-5df5f5bb-qqlt7\" (UID: \"c4859338-3c97-453e-8ef8-db6f786c1172\") " pod="openshift-route-controller-manager/route-controller-manager-5df5f5bb-qqlt7" Feb 03 12:07:39 crc kubenswrapper[4820]: I0203 12:07:39.494001 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c4859338-3c97-453e-8ef8-db6f786c1172-client-ca\") pod \"route-controller-manager-5df5f5bb-qqlt7\" (UID: \"c4859338-3c97-453e-8ef8-db6f786c1172\") " pod="openshift-route-controller-manager/route-controller-manager-5df5f5bb-qqlt7" Feb 03 12:07:39 crc kubenswrapper[4820]: I0203 12:07:39.495008 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c4859338-3c97-453e-8ef8-db6f786c1172-config\") pod \"route-controller-manager-5df5f5bb-qqlt7\" (UID: \"c4859338-3c97-453e-8ef8-db6f786c1172\") " pod="openshift-route-controller-manager/route-controller-manager-5df5f5bb-qqlt7" Feb 03 12:07:39 crc kubenswrapper[4820]: I0203 12:07:39.499976 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c4859338-3c97-453e-8ef8-db6f786c1172-serving-cert\") pod \"route-controller-manager-5df5f5bb-qqlt7\" (UID: \"c4859338-3c97-453e-8ef8-db6f786c1172\") " 
pod="openshift-route-controller-manager/route-controller-manager-5df5f5bb-qqlt7" Feb 03 12:07:39 crc kubenswrapper[4820]: I0203 12:07:39.526153 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g4tvb\" (UniqueName: \"kubernetes.io/projected/c4859338-3c97-453e-8ef8-db6f786c1172-kube-api-access-g4tvb\") pod \"route-controller-manager-5df5f5bb-qqlt7\" (UID: \"c4859338-3c97-453e-8ef8-db6f786c1172\") " pod="openshift-route-controller-manager/route-controller-manager-5df5f5bb-qqlt7" Feb 03 12:07:39 crc kubenswrapper[4820]: I0203 12:07:39.579958 4820 patch_prober.go:28] interesting pod/apiserver-76f77b778f-z7vmj container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Feb 03 12:07:39 crc kubenswrapper[4820]: [+]log ok Feb 03 12:07:39 crc kubenswrapper[4820]: [+]etcd ok Feb 03 12:07:39 crc kubenswrapper[4820]: [+]poststarthook/start-apiserver-admission-initializer ok Feb 03 12:07:39 crc kubenswrapper[4820]: [+]poststarthook/generic-apiserver-start-informers ok Feb 03 12:07:39 crc kubenswrapper[4820]: [+]poststarthook/max-in-flight-filter ok Feb 03 12:07:39 crc kubenswrapper[4820]: [+]poststarthook/storage-object-count-tracker-hook ok Feb 03 12:07:39 crc kubenswrapper[4820]: [+]poststarthook/image.openshift.io-apiserver-caches ok Feb 03 12:07:39 crc kubenswrapper[4820]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Feb 03 12:07:39 crc kubenswrapper[4820]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld Feb 03 12:07:39 crc kubenswrapper[4820]: [+]poststarthook/project.openshift.io-projectcache ok Feb 03 12:07:39 crc kubenswrapper[4820]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Feb 03 12:07:39 crc kubenswrapper[4820]: [+]poststarthook/openshift.io-startinformers ok Feb 03 12:07:39 crc kubenswrapper[4820]: [+]poststarthook/openshift.io-restmapperupdater ok Feb 03 12:07:39 crc kubenswrapper[4820]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Feb 03 12:07:39 crc kubenswrapper[4820]: livez check failed Feb 03 12:07:39 crc kubenswrapper[4820]: I0203 12:07:39.580708 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-z7vmj" podUID="4acfe638-6e10-4d68-9cfa-3d1e1d4c1052" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 03 12:07:39 crc kubenswrapper[4820]: I0203 12:07:39.659490 4820 patch_prober.go:28] interesting pod/apiserver-76f77b778f-z7vmj container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Feb 03 12:07:39 crc kubenswrapper[4820]: [+]log ok Feb 03 12:07:39 crc kubenswrapper[4820]: [+]etcd ok Feb 03 12:07:39 crc kubenswrapper[4820]: [+]poststarthook/start-apiserver-admission-initializer ok Feb 03 12:07:39 crc kubenswrapper[4820]: [+]poststarthook/generic-apiserver-start-informers ok Feb 03 12:07:39 crc kubenswrapper[4820]: [+]poststarthook/max-in-flight-filter ok Feb 03 12:07:39 crc kubenswrapper[4820]: [+]poststarthook/storage-object-count-tracker-hook ok Feb 03 12:07:39 crc kubenswrapper[4820]: [+]poststarthook/image.openshift.io-apiserver-caches ok Feb 03 12:07:39 crc kubenswrapper[4820]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Feb 03 12:07:39 crc kubenswrapper[4820]: 
[-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld Feb 03 12:07:39 crc kubenswrapper[4820]: [+]poststarthook/project.openshift.io-projectcache ok Feb 03 12:07:39 crc kubenswrapper[4820]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Feb 03 12:07:39 crc kubenswrapper[4820]: [+]poststarthook/openshift.io-startinformers ok Feb 03 12:07:39 crc kubenswrapper[4820]: [+]poststarthook/openshift.io-restmapperupdater ok Feb 03 12:07:39 crc kubenswrapper[4820]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Feb 03 12:07:39 crc kubenswrapper[4820]: livez check failed Feb 03 12:07:39 crc kubenswrapper[4820]: I0203 12:07:39.659571 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-z7vmj" podUID="4acfe638-6e10-4d68-9cfa-3d1e1d4c1052" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 03 12:07:39 crc kubenswrapper[4820]: I0203 12:07:39.749640 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bl6zg"] Feb 03 12:07:39 crc kubenswrapper[4820]: W0203 12:07:39.762117 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef96ca29_ba6e_42c7_b992_898fb5f7f7b5.slice/crio-54c5703031482d6ab3ce8b6e59eb10dfc6364bf867895619c7939d7dd4a8c250 WatchSource:0}: Error finding container 54c5703031482d6ab3ce8b6e59eb10dfc6364bf867895619c7939d7dd4a8c250: Status 404 returned error can't find the container with id 54c5703031482d6ab3ce8b6e59eb10dfc6364bf867895619c7939d7dd4a8c250 Feb 03 12:07:39 crc kubenswrapper[4820]: I0203 12:07:39.763766 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zrlrv"] Feb 03 12:07:39 crc kubenswrapper[4820]: I0203 12:07:39.810794 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-qpxpv"] Feb 03 12:07:39 crc kubenswrapper[4820]: I0203 12:07:39.908405 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-dvpt2"] Feb 03 12:07:39 crc kubenswrapper[4820]: I0203 12:07:39.932932 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-lt75x" Feb 03 12:07:39 crc kubenswrapper[4820]: I0203 12:07:39.948721 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pl5wr"] Feb 03 12:07:39 crc kubenswrapper[4820]: I0203 12:07:39.970607 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5df5f5bb-qqlt7" Feb 03 12:07:39 crc kubenswrapper[4820]: I0203 12:07:39.998064 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29502000-hcscr" Feb 03 12:07:40 crc kubenswrapper[4820]: I0203 12:07:40.000019 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-52dr7\" (UniqueName: \"kubernetes.io/projected/1cd0f80e-f7d5-4cd7-a0de-f1b95c827af8-kube-api-access-52dr7\") pod \"1cd0f80e-f7d5-4cd7-a0de-f1b95c827af8\" (UID: \"1cd0f80e-f7d5-4cd7-a0de-f1b95c827af8\") " Feb 03 12:07:40 crc kubenswrapper[4820]: I0203 12:07:40.000071 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1cd0f80e-f7d5-4cd7-a0de-f1b95c827af8-config\") pod \"1cd0f80e-f7d5-4cd7-a0de-f1b95c827af8\" (UID: \"1cd0f80e-f7d5-4cd7-a0de-f1b95c827af8\") " Feb 03 12:07:40 crc kubenswrapper[4820]: I0203 12:07:40.000158 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1cd0f80e-f7d5-4cd7-a0de-f1b95c827af8-serving-cert\") pod \"1cd0f80e-f7d5-4cd7-a0de-f1b95c827af8\" (UID: \"1cd0f80e-f7d5-4cd7-a0de-f1b95c827af8\") " Feb 03 12:07:40 crc kubenswrapper[4820]: I0203 12:07:40.000232 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1cd0f80e-f7d5-4cd7-a0de-f1b95c827af8-proxy-ca-bundles\") pod \"1cd0f80e-f7d5-4cd7-a0de-f1b95c827af8\" (UID: \"1cd0f80e-f7d5-4cd7-a0de-f1b95c827af8\") " Feb 03 12:07:40 crc kubenswrapper[4820]: I0203 12:07:40.000270 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1cd0f80e-f7d5-4cd7-a0de-f1b95c827af8-client-ca\") pod \"1cd0f80e-f7d5-4cd7-a0de-f1b95c827af8\" (UID: \"1cd0f80e-f7d5-4cd7-a0de-f1b95c827af8\") " Feb 03 12:07:40 crc kubenswrapper[4820]: I0203 12:07:40.003699 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1cd0f80e-f7d5-4cd7-a0de-f1b95c827af8-config" (OuterVolumeSpecName: "config") pod "1cd0f80e-f7d5-4cd7-a0de-f1b95c827af8" (UID: "1cd0f80e-f7d5-4cd7-a0de-f1b95c827af8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:07:40 crc kubenswrapper[4820]: I0203 12:07:40.006441 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1cd0f80e-f7d5-4cd7-a0de-f1b95c827af8-client-ca" (OuterVolumeSpecName: "client-ca") pod "1cd0f80e-f7d5-4cd7-a0de-f1b95c827af8" (UID: "1cd0f80e-f7d5-4cd7-a0de-f1b95c827af8"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:07:40 crc kubenswrapper[4820]: I0203 12:07:40.008365 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1cd0f80e-f7d5-4cd7-a0de-f1b95c827af8-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "1cd0f80e-f7d5-4cd7-a0de-f1b95c827af8" (UID: "1cd0f80e-f7d5-4cd7-a0de-f1b95c827af8"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:07:40 crc kubenswrapper[4820]: W0203 12:07:40.044274 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2341b8b4_d207_4c89_8e46_a1b6b787afc8.slice/crio-69a9975695d4fde9342cf018a36d3e58ec9cccf10e5dd957ea23268c964cd6ab WatchSource:0}: Error finding container 69a9975695d4fde9342cf018a36d3e58ec9cccf10e5dd957ea23268c964cd6ab: Status 404 returned error can't find the container with id 69a9975695d4fde9342cf018a36d3e58ec9cccf10e5dd957ea23268c964cd6ab Feb 03 12:07:40 crc kubenswrapper[4820]: I0203 12:07:40.044561 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1cd0f80e-f7d5-4cd7-a0de-f1b95c827af8-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1cd0f80e-f7d5-4cd7-a0de-f1b95c827af8" (UID: "1cd0f80e-f7d5-4cd7-a0de-f1b95c827af8"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:07:40 crc kubenswrapper[4820]: I0203 12:07:40.046869 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1cd0f80e-f7d5-4cd7-a0de-f1b95c827af8-kube-api-access-52dr7" (OuterVolumeSpecName: "kube-api-access-52dr7") pod "1cd0f80e-f7d5-4cd7-a0de-f1b95c827af8" (UID: "1cd0f80e-f7d5-4cd7-a0de-f1b95c827af8"). InnerVolumeSpecName "kube-api-access-52dr7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:07:40 crc kubenswrapper[4820]: I0203 12:07:40.097430 4820 patch_prober.go:28] interesting pod/router-default-5444994796-h22tk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 03 12:07:40 crc kubenswrapper[4820]: [-]has-synced failed: reason withheld Feb 03 12:07:40 crc kubenswrapper[4820]: [+]process-running ok Feb 03 12:07:40 crc kubenswrapper[4820]: healthz check failed Feb 03 12:07:40 crc kubenswrapper[4820]: I0203 12:07:40.097521 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h22tk" podUID="a227a161-8e53-4817-b7b2-48206c4916fb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 03 12:07:40 crc kubenswrapper[4820]: I0203 12:07:40.101288 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b9d628ea-493d-4b0c-b4a2-194cef62a08e-config-volume\") pod \"b9d628ea-493d-4b0c-b4a2-194cef62a08e\" (UID: \"b9d628ea-493d-4b0c-b4a2-194cef62a08e\") " Feb 03 12:07:40 crc kubenswrapper[4820]: I0203 12:07:40.101374 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b9d628ea-493d-4b0c-b4a2-194cef62a08e-secret-volume\") pod \"b9d628ea-493d-4b0c-b4a2-194cef62a08e\" (UID: \"b9d628ea-493d-4b0c-b4a2-194cef62a08e\") " Feb 03 12:07:40 crc kubenswrapper[4820]: I0203 12:07:40.101420 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l8mv8\" (UniqueName: \"kubernetes.io/projected/b9d628ea-493d-4b0c-b4a2-194cef62a08e-kube-api-access-l8mv8\") pod \"b9d628ea-493d-4b0c-b4a2-194cef62a08e\" (UID: \"b9d628ea-493d-4b0c-b4a2-194cef62a08e\") " Feb 03 12:07:40 crc kubenswrapper[4820]: I0203 12:07:40.101688 4820 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/1cd0f80e-f7d5-4cd7-a0de-f1b95c827af8-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 03 12:07:40 crc kubenswrapper[4820]: I0203 12:07:40.101709 4820 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1cd0f80e-f7d5-4cd7-a0de-f1b95c827af8-client-ca\") on node \"crc\" DevicePath \"\"" Feb 03 12:07:40 crc kubenswrapper[4820]: I0203 12:07:40.101718 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-52dr7\" (UniqueName: \"kubernetes.io/projected/1cd0f80e-f7d5-4cd7-a0de-f1b95c827af8-kube-api-access-52dr7\") on node \"crc\" DevicePath \"\"" Feb 03 12:07:40 crc kubenswrapper[4820]: I0203 12:07:40.101729 4820 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1cd0f80e-f7d5-4cd7-a0de-f1b95c827af8-config\") on node \"crc\" DevicePath \"\"" Feb 03 12:07:40 crc kubenswrapper[4820]: I0203 12:07:40.101739 4820 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1cd0f80e-f7d5-4cd7-a0de-f1b95c827af8-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 03 12:07:40 crc kubenswrapper[4820]: I0203 12:07:40.102147 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b9d628ea-493d-4b0c-b4a2-194cef62a08e-config-volume" (OuterVolumeSpecName: "config-volume") pod "b9d628ea-493d-4b0c-b4a2-194cef62a08e" (UID: "b9d628ea-493d-4b0c-b4a2-194cef62a08e"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:07:40 crc kubenswrapper[4820]: I0203 12:07:40.202742 4820 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b9d628ea-493d-4b0c-b4a2-194cef62a08e-config-volume\") on node \"crc\" DevicePath \"\"" Feb 03 12:07:40 crc kubenswrapper[4820]: I0203 12:07:40.227077 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b9d628ea-493d-4b0c-b4a2-194cef62a08e-kube-api-access-l8mv8" (OuterVolumeSpecName: "kube-api-access-l8mv8") pod "b9d628ea-493d-4b0c-b4a2-194cef62a08e" (UID: "b9d628ea-493d-4b0c-b4a2-194cef62a08e"). InnerVolumeSpecName "kube-api-access-l8mv8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:07:40 crc kubenswrapper[4820]: I0203 12:07:40.240266 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9d628ea-493d-4b0c-b4a2-194cef62a08e-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "b9d628ea-493d-4b0c-b4a2-194cef62a08e" (UID: "b9d628ea-493d-4b0c-b4a2-194cef62a08e"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:07:40 crc kubenswrapper[4820]: I0203 12:07:40.304241 4820 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b9d628ea-493d-4b0c-b4a2-194cef62a08e-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 03 12:07:40 crc kubenswrapper[4820]: I0203 12:07:40.304293 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l8mv8\" (UniqueName: \"kubernetes.io/projected/b9d628ea-493d-4b0c-b4a2-194cef62a08e-kube-api-access-l8mv8\") on node \"crc\" DevicePath \"\"" Feb 03 12:07:40 crc kubenswrapper[4820]: I0203 12:07:40.508992 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-qpxpv" event={"ID":"10fa9c2b-e370-400e-9e71-a4617592b411","Type":"ContainerStarted","Data":"af10c8cddd8a400daea58e6865ad1efa261abface89491db9bc4d9877f70eb27"} Feb 03 12:07:40 crc kubenswrapper[4820]: I0203 12:07:40.511698 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dt8ch" event={"ID":"682f83dc-ba7f-474f-89d2-6effbcf2806b","Type":"ContainerStarted","Data":"bd95936fafbe8cf887bbb9830a4eda6a1883bf43050fffa7f5caf5768449204b"} Feb 03 12:07:40 crc kubenswrapper[4820]: I0203 12:07:40.517263 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-lnc22" event={"ID":"876c5dc3-b775-45cc-94b6-4339735e9975","Type":"ContainerStarted","Data":"8db834c2526dc6ff63acff418e5cb17e8ce94b387b7a719475355b6d34bfc1d1"} Feb 03 12:07:40 crc kubenswrapper[4820]: I0203 12:07:40.518114 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-lnc22" Feb 03 12:07:40 crc kubenswrapper[4820]: I0203 12:07:40.521428 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bl6zg" event={"ID":"ef96ca29-ba6e-42c7-b992-898fb5f7f7b5","Type":"ContainerStarted","Data":"54c5703031482d6ab3ce8b6e59eb10dfc6364bf867895619c7939d7dd4a8c250"} Feb 03 12:07:40 crc kubenswrapper[4820]: I0203 12:07:40.521634 4820 patch_prober.go:28] interesting pod/downloads-7954f5f757-lnc22 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Feb 03 12:07:40 crc kubenswrapper[4820]: I0203 12:07:40.521674 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-lnc22" podUID="876c5dc3-b775-45cc-94b6-4339735e9975" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Feb 03 12:07:40 crc kubenswrapper[4820]: I0203 12:07:40.526872 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29502000-hcscr" event={"ID":"b9d628ea-493d-4b0c-b4a2-194cef62a08e","Type":"ContainerDied","Data":"59d4d969c3658fe59babb4548184a05b5bd3a27ff5982adc2781c89fb0dbeb94"} Feb 03 12:07:40 crc kubenswrapper[4820]: I0203 12:07:40.526927 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="59d4d969c3658fe59babb4548184a05b5bd3a27ff5982adc2781c89fb0dbeb94" Feb 03 12:07:40 crc kubenswrapper[4820]: I0203 12:07:40.527269 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29502000-hcscr" Feb 03 12:07:40 crc kubenswrapper[4820]: I0203 12:07:40.540159 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zrlrv" event={"ID":"030d5842-d0b7-4e4f-ad63-58848630a1ca","Type":"ContainerStarted","Data":"dd359c458be9926dd64a069d1121401a3026fe5f28e3a5c126f9e5685dd8a4b6"} Feb 03 12:07:40 crc kubenswrapper[4820]: I0203 12:07:40.541328 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pl5wr" event={"ID":"2341b8b4-d207-4c89-8e46-a1b6b787afc8","Type":"ContainerStarted","Data":"69a9975695d4fde9342cf018a36d3e58ec9cccf10e5dd957ea23268c964cd6ab"} Feb 03 12:07:40 crc kubenswrapper[4820]: I0203 12:07:40.542677 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"9d434cee-7d3a-4492-b5f9-071b2527ac8a","Type":"ContainerStarted","Data":"89810e7e5b99ab09919284c9822f2e69a69617417d68c312df4d87d7b73acca7"} Feb 03 12:07:40 crc kubenswrapper[4820]: I0203 12:07:40.544566 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dvpt2" event={"ID":"6fdd485f-526a-4367-ba6d-b68246ed45a0","Type":"ContainerStarted","Data":"bf16f19f08ceb1ca1518d245994ff951a0e49f37e97fc6fd0a5bb7bda1d4e464"} Feb 03 12:07:40 crc kubenswrapper[4820]: I0203 12:07:40.549321 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-lt75x" event={"ID":"1cd0f80e-f7d5-4cd7-a0de-f1b95c827af8","Type":"ContainerDied","Data":"8d4de06c3e43869e3232a1e8d0fccdea2526e5ba77d0841dd667f4fa563cd00b"} Feb 03 12:07:40 crc kubenswrapper[4820]: I0203 12:07:40.549390 4820 scope.go:117] "RemoveContainer" containerID="f1a30bc906bf3cbb26a87046812f2acf49af38fa613535b15d6e11fd8304f36e" Feb 03 12:07:40 crc kubenswrapper[4820]: I0203 12:07:40.549587 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-lt75x" Feb 03 12:07:40 crc kubenswrapper[4820]: I0203 12:07:40.564186 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=21.564151482 podStartE2EDuration="21.564151482s" podCreationTimestamp="2026-02-03 12:07:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:07:40.561655363 +0000 UTC m=+178.084731227" watchObservedRunningTime="2026-02-03 12:07:40.564151482 +0000 UTC m=+178.087227346" Feb 03 12:07:40 crc kubenswrapper[4820]: I0203 12:07:40.941271 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-lt75x"] Feb 03 12:07:40 crc kubenswrapper[4820]: I0203 12:07:40.957409 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-lt75x"] Feb 03 12:07:41 crc kubenswrapper[4820]: I0203 12:07:41.096250 4820 patch_prober.go:28] interesting pod/router-default-5444994796-h22tk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 03 12:07:41 crc kubenswrapper[4820]: [-]has-synced failed: reason withheld Feb 03 12:07:41 crc kubenswrapper[4820]: [+]process-running ok Feb 03 12:07:41 crc kubenswrapper[4820]: healthz check failed Feb 03 12:07:41 crc kubenswrapper[4820]: I0203 12:07:41.096601 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h22tk" podUID="a227a161-8e53-4817-b7b2-48206c4916fb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 03 12:07:41 crc kubenswrapper[4820]: I0203 12:07:41.153856 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1cd0f80e-f7d5-4cd7-a0de-f1b95c827af8" path="/var/lib/kubelet/pods/1cd0f80e-f7d5-4cd7-a0de-f1b95c827af8/volumes" Feb 03 12:07:41 crc kubenswrapper[4820]: I0203 12:07:41.241433 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5bccb6449b-qsmgh"] Feb 03 12:07:41 crc kubenswrapper[4820]: E0203 12:07:41.241903 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9d628ea-493d-4b0c-b4a2-194cef62a08e" containerName="collect-profiles" Feb 03 12:07:41 crc kubenswrapper[4820]: I0203 12:07:41.241923 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9d628ea-493d-4b0c-b4a2-194cef62a08e" containerName="collect-profiles" Feb 03 12:07:41 crc kubenswrapper[4820]: E0203 12:07:41.241942 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1cd0f80e-f7d5-4cd7-a0de-f1b95c827af8" containerName="controller-manager" Feb 03 12:07:41 crc kubenswrapper[4820]: I0203 12:07:41.241949 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="1cd0f80e-f7d5-4cd7-a0de-f1b95c827af8" containerName="controller-manager" Feb 03 12:07:41 crc kubenswrapper[4820]: I0203 12:07:41.242103 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="b9d628ea-493d-4b0c-b4a2-194cef62a08e" containerName="collect-profiles" Feb 03 12:07:41 crc kubenswrapper[4820]: I0203 12:07:41.242118 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="1cd0f80e-f7d5-4cd7-a0de-f1b95c827af8" containerName="controller-manager" Feb 03 12:07:41 
crc kubenswrapper[4820]: I0203 12:07:41.242574 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5bccb6449b-qsmgh" Feb 03 12:07:41 crc kubenswrapper[4820]: I0203 12:07:41.247999 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 03 12:07:41 crc kubenswrapper[4820]: I0203 12:07:41.248208 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 03 12:07:41 crc kubenswrapper[4820]: I0203 12:07:41.248392 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 03 12:07:41 crc kubenswrapper[4820]: I0203 12:07:41.249020 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 03 12:07:41 crc kubenswrapper[4820]: I0203 12:07:41.252706 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 03 12:07:41 crc kubenswrapper[4820]: I0203 12:07:41.253017 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 03 12:07:41 crc kubenswrapper[4820]: I0203 12:07:41.263741 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5bccb6449b-qsmgh"] Feb 03 12:07:41 crc kubenswrapper[4820]: I0203 12:07:41.265515 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 03 12:07:41 crc kubenswrapper[4820]: I0203 12:07:41.514663 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f24f6a36-6778-4419-b51e-2e127ffa351a-client-ca\") pod \"controller-manager-5bccb6449b-qsmgh\" (UID: \"f24f6a36-6778-4419-b51e-2e127ffa351a\") " pod="openshift-controller-manager/controller-manager-5bccb6449b-qsmgh" Feb 03 12:07:41 crc kubenswrapper[4820]: I0203 12:07:41.514717 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jnjlw\" (UniqueName: \"kubernetes.io/projected/f24f6a36-6778-4419-b51e-2e127ffa351a-kube-api-access-jnjlw\") pod \"controller-manager-5bccb6449b-qsmgh\" (UID: \"f24f6a36-6778-4419-b51e-2e127ffa351a\") " pod="openshift-controller-manager/controller-manager-5bccb6449b-qsmgh" Feb 03 12:07:41 crc kubenswrapper[4820]: I0203 12:07:41.514796 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f24f6a36-6778-4419-b51e-2e127ffa351a-serving-cert\") pod \"controller-manager-5bccb6449b-qsmgh\" (UID: \"f24f6a36-6778-4419-b51e-2e127ffa351a\") " pod="openshift-controller-manager/controller-manager-5bccb6449b-qsmgh" Feb 03 12:07:41 crc kubenswrapper[4820]: I0203 12:07:41.514826 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f24f6a36-6778-4419-b51e-2e127ffa351a-proxy-ca-bundles\") pod \"controller-manager-5bccb6449b-qsmgh\" (UID: \"f24f6a36-6778-4419-b51e-2e127ffa351a\") " pod="openshift-controller-manager/controller-manager-5bccb6449b-qsmgh" Feb 03 12:07:41 crc kubenswrapper[4820]: I0203 12:07:41.514855 4820 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f24f6a36-6778-4419-b51e-2e127ffa351a-config\") pod \"controller-manager-5bccb6449b-qsmgh\" (UID: \"f24f6a36-6778-4419-b51e-2e127ffa351a\") " pod="openshift-controller-manager/controller-manager-5bccb6449b-qsmgh" Feb 03 12:07:41 crc kubenswrapper[4820]: I0203 12:07:41.623816 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f24f6a36-6778-4419-b51e-2e127ffa351a-serving-cert\") pod \"controller-manager-5bccb6449b-qsmgh\" (UID: \"f24f6a36-6778-4419-b51e-2e127ffa351a\") " pod="openshift-controller-manager/controller-manager-5bccb6449b-qsmgh" Feb 03 12:07:41 crc kubenswrapper[4820]: I0203 12:07:41.623945 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f24f6a36-6778-4419-b51e-2e127ffa351a-proxy-ca-bundles\") pod \"controller-manager-5bccb6449b-qsmgh\" (UID: \"f24f6a36-6778-4419-b51e-2e127ffa351a\") " pod="openshift-controller-manager/controller-manager-5bccb6449b-qsmgh" Feb 03 12:07:41 crc kubenswrapper[4820]: I0203 12:07:41.623984 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f24f6a36-6778-4419-b51e-2e127ffa351a-config\") pod \"controller-manager-5bccb6449b-qsmgh\" (UID: \"f24f6a36-6778-4419-b51e-2e127ffa351a\") " pod="openshift-controller-manager/controller-manager-5bccb6449b-qsmgh" Feb 03 12:07:41 crc kubenswrapper[4820]: I0203 12:07:41.624048 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f24f6a36-6778-4419-b51e-2e127ffa351a-client-ca\") pod \"controller-manager-5bccb6449b-qsmgh\" (UID: \"f24f6a36-6778-4419-b51e-2e127ffa351a\") " pod="openshift-controller-manager/controller-manager-5bccb6449b-qsmgh" Feb 03 12:07:41 crc kubenswrapper[4820]: I0203 12:07:41.624096 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jnjlw\" (UniqueName: \"kubernetes.io/projected/f24f6a36-6778-4419-b51e-2e127ffa351a-kube-api-access-jnjlw\") pod \"controller-manager-5bccb6449b-qsmgh\" (UID: \"f24f6a36-6778-4419-b51e-2e127ffa351a\") " pod="openshift-controller-manager/controller-manager-5bccb6449b-qsmgh" Feb 03 12:07:41 crc kubenswrapper[4820]: I0203 12:07:41.639547 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f24f6a36-6778-4419-b51e-2e127ffa351a-config\") pod \"controller-manager-5bccb6449b-qsmgh\" (UID: \"f24f6a36-6778-4419-b51e-2e127ffa351a\") " pod="openshift-controller-manager/controller-manager-5bccb6449b-qsmgh" Feb 03 12:07:41 crc kubenswrapper[4820]: I0203 12:07:41.651070 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f24f6a36-6778-4419-b51e-2e127ffa351a-proxy-ca-bundles\") pod \"controller-manager-5bccb6449b-qsmgh\" (UID: \"f24f6a36-6778-4419-b51e-2e127ffa351a\") " pod="openshift-controller-manager/controller-manager-5bccb6449b-qsmgh" Feb 03 12:07:41 crc kubenswrapper[4820]: I0203 12:07:41.653623 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f24f6a36-6778-4419-b51e-2e127ffa351a-client-ca\") pod \"controller-manager-5bccb6449b-qsmgh\" (UID: 
\"f24f6a36-6778-4419-b51e-2e127ffa351a\") " pod="openshift-controller-manager/controller-manager-5bccb6449b-qsmgh" Feb 03 12:07:41 crc kubenswrapper[4820]: I0203 12:07:41.682525 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f24f6a36-6778-4419-b51e-2e127ffa351a-serving-cert\") pod \"controller-manager-5bccb6449b-qsmgh\" (UID: \"f24f6a36-6778-4419-b51e-2e127ffa351a\") " pod="openshift-controller-manager/controller-manager-5bccb6449b-qsmgh" Feb 03 12:07:41 crc kubenswrapper[4820]: I0203 12:07:41.701123 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jnjlw\" (UniqueName: \"kubernetes.io/projected/f24f6a36-6778-4419-b51e-2e127ffa351a-kube-api-access-jnjlw\") pod \"controller-manager-5bccb6449b-qsmgh\" (UID: \"f24f6a36-6778-4419-b51e-2e127ffa351a\") " pod="openshift-controller-manager/controller-manager-5bccb6449b-qsmgh" Feb 03 12:07:41 crc kubenswrapper[4820]: I0203 12:07:41.901589 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5bccb6449b-qsmgh" Feb 03 12:07:42 crc kubenswrapper[4820]: I0203 12:07:42.036060 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zrlrv" event={"ID":"030d5842-d0b7-4e4f-ad63-58848630a1ca","Type":"ContainerStarted","Data":"f1f07f57affb00faa6b0fdf3f3962aad642e5b75c0bf957cc1da28152c8fbf2b"} Feb 03 12:07:42 crc kubenswrapper[4820]: I0203 12:07:42.048109 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-wsjsc" event={"ID":"b460558b-ba3e-4543-bb57-debddb0711e7","Type":"ContainerStarted","Data":"2853ec0f619109c9ea13d13c223f18cdd9de295265df74ebafcbacf076bbf31a"} Feb 03 12:07:42 crc kubenswrapper[4820]: I0203 12:07:42.052479 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5xqrp" event={"ID":"38d510f8-dde9-46b4-965e-9d2726b5f0d7","Type":"ContainerStarted","Data":"3986a59082f562ad33e23e77b2b3defb1c3848dd961c1961387c69070fce690e"} Feb 03 12:07:42 crc kubenswrapper[4820]: I0203 12:07:42.058381 4820 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 03 12:07:42 crc kubenswrapper[4820]: I0203 12:07:42.064372 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5vfzj" event={"ID":"829fef9f-938d-4d61-9584-bf061063c952","Type":"ContainerStarted","Data":"3b100c1b0f5145a074505dbf8afd2a2cea65699c06772d0ff0c8b909c797f3f7"} Feb 03 12:07:42 crc kubenswrapper[4820]: I0203 12:07:42.072181 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bl6zg" event={"ID":"ef96ca29-ba6e-42c7-b992-898fb5f7f7b5","Type":"ContainerStarted","Data":"a94a9983f026ff38b4043d5ab9d2c5ce7684277805e9b23d468e852d606b7983"} Feb 03 12:07:42 crc kubenswrapper[4820]: I0203 12:07:42.079396 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-qpxpv" event={"ID":"10fa9c2b-e370-400e-9e71-a4617592b411","Type":"ContainerStarted","Data":"9746ebaefe29531eb0bdf937c38bb73eaac7c8db11c2bb0ea0458959f35747ca"} Feb 03 12:07:42 crc kubenswrapper[4820]: I0203 12:07:42.079675 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-qpxpv" Feb 03 12:07:42 crc kubenswrapper[4820]: I0203 12:07:42.080758 4820 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dt8ch" event={"ID":"682f83dc-ba7f-474f-89d2-6effbcf2806b","Type":"ContainerStarted","Data":"5a2d00a8406b2f79c8b348d38148cdc25692c7c9d425d6f81bbcddd0551302fc"} Feb 03 12:07:42 crc kubenswrapper[4820]: I0203 12:07:42.084348 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ntpgz" event={"ID":"f2daa931-03c0-484d-9ea2-a30607c5f034","Type":"ContainerStarted","Data":"4a9466d89567b8e8f68c8f4f2ffabd9fa972f539ff8ef33b35c7779e9df5ed30"} Feb 03 12:07:42 crc kubenswrapper[4820]: I0203 12:07:42.089949 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dvpt2" event={"ID":"6fdd485f-526a-4367-ba6d-b68246ed45a0","Type":"ContainerStarted","Data":"2986f7a877dac401806618562c0b2b90cdd4bf46c6974ea4cf74892f8a8f2989"} Feb 03 12:07:42 crc kubenswrapper[4820]: I0203 12:07:42.097854 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-7vz6k" event={"ID":"6351e457-e601-4889-853c-560646bc4b43","Type":"ContainerStarted","Data":"a866e1dc0cdf9d219c08dcf3e56458848af60db2d366b37377c99295e89f231e"} Feb 03 12:07:42 crc kubenswrapper[4820]: I0203 12:07:42.098042 4820 patch_prober.go:28] interesting pod/router-default-5444994796-h22tk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 03 12:07:42 crc kubenswrapper[4820]: [-]has-synced failed: reason withheld Feb 03 12:07:42 crc kubenswrapper[4820]: [+]process-running ok Feb 03 12:07:42 crc kubenswrapper[4820]: healthz check failed Feb 03 12:07:42 crc kubenswrapper[4820]: I0203 12:07:42.098079 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h22tk" podUID="a227a161-8e53-4817-b7b2-48206c4916fb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 03 12:07:42 crc kubenswrapper[4820]: I0203 12:07:42.098726 4820 patch_prober.go:28] interesting pod/downloads-7954f5f757-lnc22 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Feb 03 12:07:42 crc kubenswrapper[4820]: I0203 12:07:42.098752 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-lnc22" podUID="876c5dc3-b775-45cc-94b6-4339735e9975" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Feb 03 12:07:42 crc kubenswrapper[4820]: I0203 12:07:42.187356 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-wsjsc" podStartSLOduration=41.187334936 podStartE2EDuration="41.187334936s" podCreationTimestamp="2026-02-03 12:07:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:07:42.177462541 +0000 UTC m=+179.700538415" watchObservedRunningTime="2026-02-03 12:07:42.187334936 +0000 UTC m=+179.710410800" Feb 03 12:07:42 crc kubenswrapper[4820]: I0203 12:07:42.596751 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-qpxpv" podStartSLOduration=159.59672776 
podStartE2EDuration="2m39.59672776s" podCreationTimestamp="2026-02-03 12:05:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:07:42.572190096 +0000 UTC m=+180.095265970" watchObservedRunningTime="2026-02-03 12:07:42.59672776 +0000 UTC m=+180.119803634" Feb 03 12:07:42 crc kubenswrapper[4820]: I0203 12:07:42.600572 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5df5f5bb-qqlt7"] Feb 03 12:07:43 crc kubenswrapper[4820]: I0203 12:07:43.103459 4820 patch_prober.go:28] interesting pod/router-default-5444994796-h22tk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 03 12:07:43 crc kubenswrapper[4820]: [-]has-synced failed: reason withheld Feb 03 12:07:43 crc kubenswrapper[4820]: [+]process-running ok Feb 03 12:07:43 crc kubenswrapper[4820]: healthz check failed Feb 03 12:07:43 crc kubenswrapper[4820]: I0203 12:07:43.103828 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h22tk" podUID="a227a161-8e53-4817-b7b2-48206c4916fb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 03 12:07:43 crc kubenswrapper[4820]: I0203 12:07:43.110598 4820 generic.go:334] "Generic (PLEG): container finished" podID="ef96ca29-ba6e-42c7-b992-898fb5f7f7b5" containerID="a94a9983f026ff38b4043d5ab9d2c5ce7684277805e9b23d468e852d606b7983" exitCode=0 Feb 03 12:07:43 crc kubenswrapper[4820]: I0203 12:07:43.110677 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bl6zg" event={"ID":"ef96ca29-ba6e-42c7-b992-898fb5f7f7b5","Type":"ContainerDied","Data":"a94a9983f026ff38b4043d5ab9d2c5ce7684277805e9b23d468e852d606b7983"} Feb 03 12:07:43 crc kubenswrapper[4820]: I0203 12:07:43.112620 4820 generic.go:334] "Generic (PLEG): container finished" podID="682f83dc-ba7f-474f-89d2-6effbcf2806b" containerID="5a2d00a8406b2f79c8b348d38148cdc25692c7c9d425d6f81bbcddd0551302fc" exitCode=0 Feb 03 12:07:43 crc kubenswrapper[4820]: I0203 12:07:43.112682 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dt8ch" event={"ID":"682f83dc-ba7f-474f-89d2-6effbcf2806b","Type":"ContainerDied","Data":"5a2d00a8406b2f79c8b348d38148cdc25692c7c9d425d6f81bbcddd0551302fc"} Feb 03 12:07:43 crc kubenswrapper[4820]: I0203 12:07:43.115523 4820 generic.go:334] "Generic (PLEG): container finished" podID="829fef9f-938d-4d61-9584-bf061063c952" containerID="3b100c1b0f5145a074505dbf8afd2a2cea65699c06772d0ff0c8b909c797f3f7" exitCode=0 Feb 03 12:07:43 crc kubenswrapper[4820]: I0203 12:07:43.115580 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5vfzj" event={"ID":"829fef9f-938d-4d61-9584-bf061063c952","Type":"ContainerDied","Data":"3b100c1b0f5145a074505dbf8afd2a2cea65699c06772d0ff0c8b909c797f3f7"} Feb 03 12:07:43 crc kubenswrapper[4820]: I0203 12:07:43.117167 4820 generic.go:334] "Generic (PLEG): container finished" podID="6fdd485f-526a-4367-ba6d-b68246ed45a0" containerID="2986f7a877dac401806618562c0b2b90cdd4bf46c6974ea4cf74892f8a8f2989" exitCode=0 Feb 03 12:07:43 crc kubenswrapper[4820]: I0203 12:07:43.117208 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dvpt2" 
event={"ID":"6fdd485f-526a-4367-ba6d-b68246ed45a0","Type":"ContainerDied","Data":"2986f7a877dac401806618562c0b2b90cdd4bf46c6974ea4cf74892f8a8f2989"} Feb 03 12:07:43 crc kubenswrapper[4820]: I0203 12:07:43.119174 4820 generic.go:334] "Generic (PLEG): container finished" podID="030d5842-d0b7-4e4f-ad63-58848630a1ca" containerID="f1f07f57affb00faa6b0fdf3f3962aad642e5b75c0bf957cc1da28152c8fbf2b" exitCode=0 Feb 03 12:07:43 crc kubenswrapper[4820]: I0203 12:07:43.119305 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zrlrv" event={"ID":"030d5842-d0b7-4e4f-ad63-58848630a1ca","Type":"ContainerDied","Data":"f1f07f57affb00faa6b0fdf3f3962aad642e5b75c0bf957cc1da28152c8fbf2b"} Feb 03 12:07:43 crc kubenswrapper[4820]: I0203 12:07:43.127960 4820 generic.go:334] "Generic (PLEG): container finished" podID="2341b8b4-d207-4c89-8e46-a1b6b787afc8" containerID="1b8f24cedec5a3c4e2bbb8dcffaf4a7058c60e8ea54679ae1d72085d6ceb9181" exitCode=0 Feb 03 12:07:43 crc kubenswrapper[4820]: I0203 12:07:43.128031 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pl5wr" event={"ID":"2341b8b4-d207-4c89-8e46-a1b6b787afc8","Type":"ContainerDied","Data":"1b8f24cedec5a3c4e2bbb8dcffaf4a7058c60e8ea54679ae1d72085d6ceb9181"} Feb 03 12:07:43 crc kubenswrapper[4820]: I0203 12:07:43.136235 4820 generic.go:334] "Generic (PLEG): container finished" podID="38d510f8-dde9-46b4-965e-9d2726b5f0d7" containerID="3986a59082f562ad33e23e77b2b3defb1c3848dd961c1961387c69070fce690e" exitCode=0 Feb 03 12:07:43 crc kubenswrapper[4820]: I0203 12:07:43.136294 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5xqrp" event={"ID":"38d510f8-dde9-46b4-965e-9d2726b5f0d7","Type":"ContainerDied","Data":"3986a59082f562ad33e23e77b2b3defb1c3848dd961c1961387c69070fce690e"} Feb 03 12:07:43 crc kubenswrapper[4820]: I0203 12:07:43.138543 4820 generic.go:334] "Generic (PLEG): container finished" podID="f2daa931-03c0-484d-9ea2-a30607c5f034" containerID="4a9466d89567b8e8f68c8f4f2ffabd9fa972f539ff8ef33b35c7779e9df5ed30" exitCode=0 Feb 03 12:07:43 crc kubenswrapper[4820]: I0203 12:07:43.138622 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ntpgz" event={"ID":"f2daa931-03c0-484d-9ea2-a30607c5f034","Type":"ContainerDied","Data":"4a9466d89567b8e8f68c8f4f2ffabd9fa972f539ff8ef33b35c7779e9df5ed30"} Feb 03 12:07:43 crc kubenswrapper[4820]: I0203 12:07:43.139585 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5df5f5bb-qqlt7" event={"ID":"c4859338-3c97-453e-8ef8-db6f786c1172","Type":"ContainerStarted","Data":"0bc0248fd1d1cb0b695f166174de7d5e9a4bb8e31855b5f7a86eb880a1901f2b"} Feb 03 12:07:43 crc kubenswrapper[4820]: I0203 12:07:43.141391 4820 generic.go:334] "Generic (PLEG): container finished" podID="7c9dee9e-4581-4a11-8f12-27a64b26cbf9" containerID="02788b1b10c21af4388272e63b0cdfbaede38bad546c400cb9cac405fdb8aadc" exitCode=0 Feb 03 12:07:43 crc kubenswrapper[4820]: I0203 12:07:43.141871 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"7c9dee9e-4581-4a11-8f12-27a64b26cbf9","Type":"ContainerDied","Data":"02788b1b10c21af4388272e63b0cdfbaede38bad546c400cb9cac405fdb8aadc"} Feb 03 12:07:43 crc kubenswrapper[4820]: I0203 12:07:43.456970 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-controller-manager/controller-manager-5bccb6449b-qsmgh"] Feb 03 12:07:43 crc kubenswrapper[4820]: I0203 12:07:43.554054 4820 patch_prober.go:28] interesting pod/downloads-7954f5f757-lnc22 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Feb 03 12:07:43 crc kubenswrapper[4820]: I0203 12:07:43.554113 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-lnc22" podUID="876c5dc3-b775-45cc-94b6-4339735e9975" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Feb 03 12:07:43 crc kubenswrapper[4820]: I0203 12:07:43.554925 4820 patch_prober.go:28] interesting pod/downloads-7954f5f757-lnc22 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Feb 03 12:07:43 crc kubenswrapper[4820]: I0203 12:07:43.555131 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-lnc22" podUID="876c5dc3-b775-45cc-94b6-4339735e9975" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Feb 03 12:07:43 crc kubenswrapper[4820]: W0203 12:07:43.693177 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf24f6a36_6778_4419_b51e_2e127ffa351a.slice/crio-a1cf989c77b8b5077526756cf492658ea44aaa7e42517653d0598634c4ba9f9a WatchSource:0}: Error finding container a1cf989c77b8b5077526756cf492658ea44aaa7e42517653d0598634c4ba9f9a: Status 404 returned error can't find the container with id a1cf989c77b8b5077526756cf492658ea44aaa7e42517653d0598634c4ba9f9a Feb 03 12:07:44 crc kubenswrapper[4820]: I0203 12:07:44.192265 4820 patch_prober.go:28] interesting pod/console-f9d7485db-tw2nt container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.15:8443/health\": dial tcp 10.217.0.15:8443: connect: connection refused" start-of-body= Feb 03 12:07:44 crc kubenswrapper[4820]: I0203 12:07:44.192333 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-tw2nt" podUID="b06753a3-652a-4acc-b294-3ccaa5b0cb99" containerName="console" probeResult="failure" output="Get \"https://10.217.0.15:8443/health\": dial tcp 10.217.0.15:8443: connect: connection refused" Feb 03 12:07:44 crc kubenswrapper[4820]: I0203 12:07:44.194966 4820 patch_prober.go:28] interesting pod/router-default-5444994796-h22tk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 03 12:07:44 crc kubenswrapper[4820]: [-]has-synced failed: reason withheld Feb 03 12:07:44 crc kubenswrapper[4820]: [+]process-running ok Feb 03 12:07:44 crc kubenswrapper[4820]: healthz check failed Feb 03 12:07:44 crc kubenswrapper[4820]: I0203 12:07:44.195032 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h22tk" podUID="a227a161-8e53-4817-b7b2-48206c4916fb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 03 12:07:44 crc 
kubenswrapper[4820]: I0203 12:07:44.224060 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5bccb6449b-qsmgh" event={"ID":"f24f6a36-6778-4419-b51e-2e127ffa351a","Type":"ContainerStarted","Data":"a1cf989c77b8b5077526756cf492658ea44aaa7e42517653d0598634c4ba9f9a"} Feb 03 12:07:44 crc kubenswrapper[4820]: I0203 12:07:44.236721 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-z7vmj" Feb 03 12:07:44 crc kubenswrapper[4820]: I0203 12:07:44.243774 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-z7vmj" Feb 03 12:07:44 crc kubenswrapper[4820]: I0203 12:07:44.895997 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-b6ghj" Feb 03 12:07:45 crc kubenswrapper[4820]: I0203 12:07:45.160831 4820 patch_prober.go:28] interesting pod/router-default-5444994796-h22tk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 03 12:07:45 crc kubenswrapper[4820]: [-]has-synced failed: reason withheld Feb 03 12:07:45 crc kubenswrapper[4820]: [+]process-running ok Feb 03 12:07:45 crc kubenswrapper[4820]: healthz check failed Feb 03 12:07:45 crc kubenswrapper[4820]: I0203 12:07:45.160935 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h22tk" podUID="a227a161-8e53-4817-b7b2-48206c4916fb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 03 12:07:46 crc kubenswrapper[4820]: I0203 12:07:46.704009 4820 patch_prober.go:28] interesting pod/router-default-5444994796-h22tk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 03 12:07:46 crc kubenswrapper[4820]: [-]has-synced failed: reason withheld Feb 03 12:07:46 crc kubenswrapper[4820]: [+]process-running ok Feb 03 12:07:46 crc kubenswrapper[4820]: healthz check failed Feb 03 12:07:46 crc kubenswrapper[4820]: I0203 12:07:46.704380 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h22tk" podUID="a227a161-8e53-4817-b7b2-48206c4916fb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 03 12:07:46 crc kubenswrapper[4820]: I0203 12:07:46.773473 4820 generic.go:334] "Generic (PLEG): container finished" podID="9d434cee-7d3a-4492-b5f9-071b2527ac8a" containerID="89810e7e5b99ab09919284c9822f2e69a69617417d68c312df4d87d7b73acca7" exitCode=0 Feb 03 12:07:46 crc kubenswrapper[4820]: I0203 12:07:46.773548 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"9d434cee-7d3a-4492-b5f9-071b2527ac8a","Type":"ContainerDied","Data":"89810e7e5b99ab09919284c9822f2e69a69617417d68c312df4d87d7b73acca7"} Feb 03 12:07:47 crc kubenswrapper[4820]: I0203 12:07:47.278940 4820 patch_prober.go:28] interesting pod/router-default-5444994796-h22tk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 03 12:07:47 crc kubenswrapper[4820]: [-]has-synced failed: reason withheld Feb 03 
12:07:47 crc kubenswrapper[4820]: [+]process-running ok
Feb 03 12:07:47 crc kubenswrapper[4820]: healthz check failed
Feb 03 12:07:47 crc kubenswrapper[4820]: I0203 12:07:47.279020 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h22tk" podUID="a227a161-8e53-4817-b7b2-48206c4916fb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 03 12:07:48 crc kubenswrapper[4820]: I0203 12:07:48.058645 4820 patch_prober.go:28] interesting pod/image-registry-697d97f7c8-qpxpv container/registry namespace/openshift-image-registry: Liveness probe status=failure output="Get \"https://10.217.0.21:5000/healthz\": dial tcp 10.217.0.21:5000: connect: connection refused" start-of-body=
Feb 03 12:07:48 crc kubenswrapper[4820]: I0203 12:07:48.059441 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-image-registry/image-registry-697d97f7c8-qpxpv" podUID="10fa9c2b-e370-400e-9e71-a4617592b411" containerName="registry" probeResult="failure" output="Get \"https://10.217.0.21:5000/healthz\": dial tcp 10.217.0.21:5000: connect: connection refused"
Feb 03 12:07:48 crc kubenswrapper[4820]: I0203 12:07:48.090147 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Feb 03 12:07:48 crc kubenswrapper[4820]: I0203 12:07:48.127739 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"7c9dee9e-4581-4a11-8f12-27a64b26cbf9","Type":"ContainerDied","Data":"5942e09bee3a7311aed8ffbde906beddf250f29a63a2b956818f777b1a247d01"}
Feb 03 12:07:48 crc kubenswrapper[4820]: I0203 12:07:48.127790 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5942e09bee3a7311aed8ffbde906beddf250f29a63a2b956818f777b1a247d01"
Feb 03 12:07:48 crc kubenswrapper[4820]: I0203 12:07:48.127909 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Feb 03 12:07:48 crc kubenswrapper[4820]: I0203 12:07:48.243046 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-7vz6k" event={"ID":"6351e457-e601-4889-853c-560646bc4b43","Type":"ContainerStarted","Data":"6553eefb8639511c375d734dfe499e193be4abbfae88fc6caebe5951c1aa3e0f"}
Feb 03 12:07:48 crc kubenswrapper[4820]: I0203 12:07:48.243273 4820 patch_prober.go:28] interesting pod/router-default-5444994796-h22tk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 03 12:07:48 crc kubenswrapper[4820]: [-]has-synced failed: reason withheld
Feb 03 12:07:48 crc kubenswrapper[4820]: [+]process-running ok
Feb 03 12:07:48 crc kubenswrapper[4820]: healthz check failed
Feb 03 12:07:48 crc kubenswrapper[4820]: I0203 12:07:48.243308 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h22tk" podUID="a227a161-8e53-4817-b7b2-48206c4916fb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 03 12:07:48 crc kubenswrapper[4820]: I0203 12:07:48.250157 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7c9dee9e-4581-4a11-8f12-27a64b26cbf9-kube-api-access\") pod \"7c9dee9e-4581-4a11-8f12-27a64b26cbf9\" (UID: \"7c9dee9e-4581-4a11-8f12-27a64b26cbf9\") "
Feb 03 12:07:48 crc kubenswrapper[4820]: I0203 12:07:48.250259 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7c9dee9e-4581-4a11-8f12-27a64b26cbf9-kubelet-dir\") pod \"7c9dee9e-4581-4a11-8f12-27a64b26cbf9\" (UID: \"7c9dee9e-4581-4a11-8f12-27a64b26cbf9\") "
Feb 03 12:07:48 crc kubenswrapper[4820]: I0203 12:07:48.250654 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7c9dee9e-4581-4a11-8f12-27a64b26cbf9-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "7c9dee9e-4581-4a11-8f12-27a64b26cbf9" (UID: "7c9dee9e-4581-4a11-8f12-27a64b26cbf9"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 03 12:07:48 crc kubenswrapper[4820]: I0203 12:07:48.259240 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5bccb6449b-qsmgh" event={"ID":"f24f6a36-6778-4419-b51e-2e127ffa351a","Type":"ContainerStarted","Data":"473e0aeef5ecfe13fb33b46eaa2fef9a59721fc7765bf96825767a2ec97f9e5b"}
Feb 03 12:07:48 crc kubenswrapper[4820]: I0203 12:07:48.265164 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-5bccb6449b-qsmgh"
Feb 03 12:07:48 crc kubenswrapper[4820]: I0203 12:07:48.276134 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-7vz6k" podStartSLOduration=164.276115178 podStartE2EDuration="2m44.276115178s" podCreationTimestamp="2026-02-03 12:05:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:07:48.275101563 +0000 UTC m=+185.798177437" watchObservedRunningTime="2026-02-03 12:07:48.276115178 +0000 UTC m=+185.799191052"
Feb 03 12:07:48 crc kubenswrapper[4820]: I0203 12:07:48.287023 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c9dee9e-4581-4a11-8f12-27a64b26cbf9-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "7c9dee9e-4581-4a11-8f12-27a64b26cbf9" (UID: "7c9dee9e-4581-4a11-8f12-27a64b26cbf9"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 12:07:49 crc kubenswrapper[4820]: I0203 12:07:49.014254 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5df5f5bb-qqlt7" event={"ID":"c4859338-3c97-453e-8ef8-db6f786c1172","Type":"ContainerStarted","Data":"dec753267a2a019f2ac82078468bc8cb300cd6204209e5963d076ff29717861d"}
Feb 03 12:07:49 crc kubenswrapper[4820]: I0203 12:07:49.015274 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-5df5f5bb-qqlt7"
Feb 03 12:07:50 crc kubenswrapper[4820]: I0203 12:07:49.109358 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7c9dee9e-4581-4a11-8f12-27a64b26cbf9-kube-api-access\") on node \"crc\" DevicePath \"\""
Feb 03 12:07:50 crc kubenswrapper[4820]: I0203 12:07:49.109406 4820 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7c9dee9e-4581-4a11-8f12-27a64b26cbf9-kubelet-dir\") on node \"crc\" DevicePath \"\""
Feb 03 12:07:50 crc kubenswrapper[4820]: I0203 12:07:50.116773 4820 patch_prober.go:28] interesting pod/router-default-5444994796-h22tk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 03 12:07:50 crc kubenswrapper[4820]: [-]has-synced failed: reason withheld
Feb 03 12:07:50 crc kubenswrapper[4820]: [+]process-running ok
Feb 03 12:07:50 crc kubenswrapper[4820]: healthz check failed
Feb 03 12:07:50 crc kubenswrapper[4820]: I0203 12:07:50.116920 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h22tk" podUID="a227a161-8e53-4817-b7b2-48206c4916fb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 03 12:07:50 crc kubenswrapper[4820]: I0203 12:07:50.118225 4820 patch_prober.go:28] interesting pod/controller-manager-5bccb6449b-qsmgh container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.55:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 03 12:07:50 crc kubenswrapper[4820]: I0203 12:07:50.118258 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-5bccb6449b-qsmgh" podUID="f24f6a36-6778-4419-b51e-2e127ffa351a" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.55:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 03 12:07:50 crc kubenswrapper[4820]: I0203 12:07:50.127245 4820 patch_prober.go:28] interesting pod/router-default-5444994796-h22tk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 03 12:07:50 crc kubenswrapper[4820]: [-]has-synced failed: reason withheld
Feb 03 12:07:50 crc kubenswrapper[4820]: [+]process-running ok
Feb 03 12:07:50 crc kubenswrapper[4820]: healthz check failed
Feb 03 12:07:50 crc kubenswrapper[4820]: I0203 12:07:50.127318 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h22tk" podUID="a227a161-8e53-4817-b7b2-48206c4916fb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 03 12:07:50 crc kubenswrapper[4820]: I0203 12:07:50.130597 4820 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-lbsmw container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.10:8443/healthz\": context deadline exceeded" start-of-body=
Feb 03 12:07:50 crc kubenswrapper[4820]: I0203 12:07:50.130752 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-lbsmw" podUID="c93c42c7-c9ff-42cc-b604-e36f7a063fcf" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.10:8443/healthz\": context deadline exceeded"
Feb 03 12:07:50 crc kubenswrapper[4820]: I0203 12:07:50.132668 4820 patch_prober.go:28] interesting pod/route-controller-manager-5df5f5bb-qqlt7 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.54:8443/healthz\": dial tcp 10.217.0.54:8443: i/o timeout" start-of-body=
Feb 03 12:07:50 crc kubenswrapper[4820]: I0203 12:07:50.132823 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-5df5f5bb-qqlt7" podUID="c4859338-3c97-453e-8ef8-db6f786c1172" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.54:8443/healthz\": dial tcp 10.217.0.54:8443: i/o timeout"
Feb 03 12:07:50 crc kubenswrapper[4820]: I0203 12:07:50.134091 4820 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-lbsmw container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.10:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 03 12:07:50 crc kubenswrapper[4820]: I0203 12:07:50.141918 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-lbsmw" podUID="c93c42c7-c9ff-42cc-b604-e36f7a063fcf" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.10:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 03 12:07:51 crc kubenswrapper[4820]: I0203 12:07:51.065600 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5bccb6449b-qsmgh"
Feb 03 12:07:51 crc kubenswrapper[4820]: I0203 12:07:51.088744 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5bccb6449b-qsmgh" podStartSLOduration=15.088725316 podStartE2EDuration="15.088725316s" podCreationTimestamp="2026-02-03 12:07:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:07:50.13727972 +0000 UTC m=+187.660355584" watchObservedRunningTime="2026-02-03 12:07:51.088725316 +0000 UTC m=+188.611801180"
Feb 03 12:07:51 crc kubenswrapper[4820]: I0203 12:07:51.098540 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-5df5f5bb-qqlt7" podStartSLOduration=15.098519629 podStartE2EDuration="15.098519629s" podCreationTimestamp="2026-02-03 12:07:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:07:51.091337788 +0000 UTC m=+188.614413652" watchObservedRunningTime="2026-02-03 12:07:51.098519629 +0000 UTC m=+188.621595493"
Feb 03 12:07:51 crc kubenswrapper[4820]: I0203 12:07:51.108231 4820 patch_prober.go:28] interesting pod/router-default-5444994796-h22tk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 03 12:07:51 crc kubenswrapper[4820]: [-]has-synced failed: reason withheld
Feb 03 12:07:51 crc kubenswrapper[4820]: [+]process-running ok
Feb 03 12:07:51 crc kubenswrapper[4820]: healthz check failed
Feb 03 12:07:51 crc kubenswrapper[4820]: I0203 12:07:51.108319 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h22tk" podUID="a227a161-8e53-4817-b7b2-48206c4916fb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 03 12:07:51 crc kubenswrapper[4820]: I0203 12:07:51.113397 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"]
Feb 03 12:07:51 crc kubenswrapper[4820]: E0203 12:07:51.113855 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c9dee9e-4581-4a11-8f12-27a64b26cbf9" containerName="pruner"
Feb 03 12:07:51 crc kubenswrapper[4820]: I0203 12:07:51.113871 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c9dee9e-4581-4a11-8f12-27a64b26cbf9" containerName="pruner"
Feb 03 12:07:51 crc kubenswrapper[4820]: I0203 12:07:51.114191 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c9dee9e-4581-4a11-8f12-27a64b26cbf9" containerName="pruner"
Feb 03 12:07:51 crc kubenswrapper[4820]: I0203 12:07:51.115094 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Feb 03 12:07:51 crc kubenswrapper[4820]: I0203 12:07:51.132250 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt"
Feb 03 12:07:51 crc kubenswrapper[4820]: I0203 12:07:51.134131 4820 patch_prober.go:28] interesting pod/route-controller-manager-5df5f5bb-qqlt7 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.54:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 03 12:07:51 crc kubenswrapper[4820]: I0203 12:07:51.134173 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-5df5f5bb-qqlt7" podUID="c4859338-3c97-453e-8ef8-db6f786c1172" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.54:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 03 12:07:51 crc kubenswrapper[4820]: I0203 12:07:51.138472 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n"
Feb 03 12:07:51 crc kubenswrapper[4820]: I0203 12:07:51.240714 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b341518a-e00b-45eb-a279-d00da0cd6d13-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"b341518a-e00b-45eb-a279-d00da0cd6d13\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Feb 03 12:07:51 crc kubenswrapper[4820]: I0203 12:07:51.241068 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b341518a-e00b-45eb-a279-d00da0cd6d13-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"b341518a-e00b-45eb-a279-d00da0cd6d13\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Feb 03 12:07:51 crc kubenswrapper[4820]: I0203 12:07:51.351315 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"]
Feb 03 12:07:51 crc kubenswrapper[4820]: I0203 12:07:51.352040 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b341518a-e00b-45eb-a279-d00da0cd6d13-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"b341518a-e00b-45eb-a279-d00da0cd6d13\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Feb 03 12:07:51 crc kubenswrapper[4820]: I0203 12:07:51.352072 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b341518a-e00b-45eb-a279-d00da0cd6d13-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"b341518a-e00b-45eb-a279-d00da0cd6d13\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Feb 03 12:07:51 crc kubenswrapper[4820]: I0203 12:07:51.356817 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b341518a-e00b-45eb-a279-d00da0cd6d13-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"b341518a-e00b-45eb-a279-d00da0cd6d13\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Feb 03 12:07:52 crc kubenswrapper[4820]: I0203 12:07:52.133329 4820 patch_prober.go:28] interesting pod/router-default-5444994796-h22tk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 03 12:07:52 crc kubenswrapper[4820]: [-]has-synced failed: reason withheld
Feb 03 12:07:52 crc kubenswrapper[4820]: [+]process-running ok
Feb 03 12:07:52 crc kubenswrapper[4820]: healthz check failed
Feb 03 12:07:52 crc kubenswrapper[4820]: I0203 12:07:52.133373 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h22tk" podUID="a227a161-8e53-4817-b7b2-48206c4916fb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 03 12:07:52 crc kubenswrapper[4820]: I0203 12:07:52.134327 4820 patch_prober.go:28] interesting pod/route-controller-manager-5df5f5bb-qqlt7 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.54:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 03 12:07:52 crc kubenswrapper[4820]: I0203 12:07:52.134360 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-5df5f5bb-qqlt7" podUID="c4859338-3c97-453e-8ef8-db6f786c1172" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.54:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 03 12:07:52 crc kubenswrapper[4820]: I0203 12:07:52.312210 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b341518a-e00b-45eb-a279-d00da0cd6d13-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"b341518a-e00b-45eb-a279-d00da0cd6d13\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Feb 03 12:07:52 crc kubenswrapper[4820]: I0203 12:07:52.428283 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Feb 03 12:07:53 crc kubenswrapper[4820]: I0203 12:07:53.188728 4820 patch_prober.go:28] interesting pod/route-controller-manager-5df5f5bb-qqlt7 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.54:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 03 12:07:53 crc kubenswrapper[4820]: I0203 12:07:53.189137 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-5df5f5bb-qqlt7" podUID="c4859338-3c97-453e-8ef8-db6f786c1172" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.54:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 03 12:07:53 crc kubenswrapper[4820]: I0203 12:07:53.208566 4820 patch_prober.go:28] interesting pod/router-default-5444994796-h22tk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 03 12:07:53 crc kubenswrapper[4820]: [-]has-synced failed: reason withheld
Feb 03 12:07:53 crc kubenswrapper[4820]: [+]process-running ok
Feb 03 12:07:53 crc kubenswrapper[4820]: healthz check failed
Feb 03 12:07:53 crc kubenswrapper[4820]: I0203 12:07:53.208627 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h22tk" podUID="a227a161-8e53-4817-b7b2-48206c4916fb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 03 12:07:54 crc kubenswrapper[4820]: I0203 12:07:54.115620 4820 patch_prober.go:28] interesting pod/downloads-7954f5f757-lnc22 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body=
Feb 03 12:07:54 crc kubenswrapper[4820]: I0203 12:07:54.115683 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-lnc22" podUID="876c5dc3-b775-45cc-94b6-4339735e9975" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused"
Feb 03 12:07:54 crc kubenswrapper[4820]: I0203 12:07:54.115794 4820 patch_prober.go:28] interesting pod/downloads-7954f5f757-lnc22 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body=
Feb 03 12:07:54 crc kubenswrapper[4820]: I0203 12:07:54.115936 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-lnc22" podUID="876c5dc3-b775-45cc-94b6-4339735e9975" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused"
Feb 03 12:07:54 crc kubenswrapper[4820]: I0203 12:07:54.962586 4820 patch_prober.go:28] interesting pod/router-default-5444994796-h22tk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 03 12:07:54 crc kubenswrapper[4820]: [-]has-synced failed: reason withheld
Feb 03 12:07:54 crc kubenswrapper[4820]: [+]process-running ok
Feb 03 12:07:54 crc kubenswrapper[4820]: healthz check failed
Feb 03 12:07:54 crc kubenswrapper[4820]: I0203 12:07:54.962676 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h22tk" podUID="a227a161-8e53-4817-b7b2-48206c4916fb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 03 12:07:55 crc kubenswrapper[4820]: I0203 12:07:55.339645 4820 patch_prober.go:28] interesting pod/console-operator-58897d9998-sf69z container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.12:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 03 12:07:55 crc kubenswrapper[4820]: I0203 12:07:55.339718 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-sf69z" podUID="29d2a7e9-1fcb-4213-ae6c-753953bfae1a" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.12:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 03 12:07:55 crc kubenswrapper[4820]: I0203 12:07:55.380190 4820 patch_prober.go:28] interesting pod/console-operator-58897d9998-sf69z container/console-operator namespace/openshift-console-operator: Liveness probe status=failure output="Get \"https://10.217.0.12:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 03 12:07:55 crc kubenswrapper[4820]: I0203 12:07:55.380258 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console-operator/console-operator-58897d9998-sf69z" podUID="29d2a7e9-1fcb-4213-ae6c-753953bfae1a" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.12:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 03 12:07:55 crc kubenswrapper[4820]: I0203 12:07:55.380475 4820 patch_prober.go:28] interesting pod/authentication-operator-69f744f599-s55v7 container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 03 12:07:55 crc kubenswrapper[4820]: I0203 12:07:55.380510 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-69f744f599-s55v7" podUID="2778f3aa-c3cf-471b-bc79-b8ce1a1bbfc7" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 03 12:07:55 crc kubenswrapper[4820]: I0203 12:07:55.386537 4820 patch_prober.go:28] interesting pod/router-default-5444994796-h22tk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 03 12:07:55 crc kubenswrapper[4820]: [-]has-synced failed: reason withheld
Feb 03 12:07:55 crc kubenswrapper[4820]: [+]process-running ok
Feb 03 12:07:55 crc kubenswrapper[4820]: healthz check failed
Feb 03 12:07:55 crc kubenswrapper[4820]: I0203 12:07:55.386596 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h22tk" podUID="a227a161-8e53-4817-b7b2-48206c4916fb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 03 12:07:55 crc kubenswrapper[4820]: I0203 12:07:55.418356 4820 patch_prober.go:28] interesting pod/console-f9d7485db-tw2nt container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.15:8443/health\": dial tcp 10.217.0.15:8443: connect: connection refused" start-of-body=
Feb 03 12:07:55 crc kubenswrapper[4820]: I0203 12:07:55.418410 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-tw2nt" podUID="b06753a3-652a-4acc-b294-3ccaa5b0cb99" containerName="console" probeResult="failure" output="Get \"https://10.217.0.15:8443/health\": dial tcp 10.217.0.15:8443: connect: connection refused"
Feb 03 12:07:56 crc kubenswrapper[4820]: I0203 12:07:56.106487 4820 patch_prober.go:28] interesting pod/router-default-5444994796-h22tk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 03 12:07:56 crc kubenswrapper[4820]: [-]has-synced failed: reason withheld
Feb 03 12:07:56 crc kubenswrapper[4820]: [+]process-running ok
Feb 03 12:07:56 crc kubenswrapper[4820]: healthz check failed
Feb 03 12:07:56 crc kubenswrapper[4820]: I0203 12:07:56.108718 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h22tk" podUID="a227a161-8e53-4817-b7b2-48206c4916fb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 03 12:07:56 crc kubenswrapper[4820]: E0203 12:07:56.112874 4820 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.961s"
Feb 03 12:07:56 crc kubenswrapper[4820]: I0203 12:07:56.113066 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-8q9q7"
Feb 03 12:07:57 crc kubenswrapper[4820]: I0203 12:07:56.333983 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5bccb6449b-qsmgh"]
Feb 03 12:07:57 crc kubenswrapper[4820]: I0203 12:07:56.334035 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5df5f5bb-qqlt7"]
Feb 03 12:07:57 crc kubenswrapper[4820]: I0203 12:07:56.334332 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-5df5f5bb-qqlt7" podUID="c4859338-3c97-453e-8ef8-db6f786c1172" containerName="route-controller-manager" containerID="cri-o://dec753267a2a019f2ac82078468bc8cb300cd6204209e5963d076ff29717861d" gracePeriod=30
Feb 03 12:07:57 crc kubenswrapper[4820]: I0203 12:07:56.342169 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-5bccb6449b-qsmgh" podUID="f24f6a36-6778-4419-b51e-2e127ffa351a" containerName="controller-manager" containerID="cri-o://473e0aeef5ecfe13fb33b46eaa2fef9a59721fc7765bf96825767a2ec97f9e5b" gracePeriod=30
Feb 03 12:07:57 crc kubenswrapper[4820]: I0203 12:07:57.255711 4820 patch_prober.go:28] interesting pod/router-default-5444994796-h22tk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 03 12:07:57 crc kubenswrapper[4820]: [-]has-synced failed: reason withheld
Feb 03 12:07:57 crc kubenswrapper[4820]: [+]process-running ok
Feb 03 12:07:57 crc kubenswrapper[4820]: healthz check failed
Feb 03 12:07:57 crc kubenswrapper[4820]: I0203 12:07:57.255790 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h22tk" podUID="a227a161-8e53-4817-b7b2-48206c4916fb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 03 12:07:57 crc kubenswrapper[4820]: I0203 12:07:57.290065 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"]
Feb 03 12:07:57 crc kubenswrapper[4820]: I0203 12:07:57.292312 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc"
Feb 03 12:07:57 crc kubenswrapper[4820]: I0203 12:07:57.385244 4820 patch_prober.go:28] interesting pod/route-controller-manager-5df5f5bb-qqlt7 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.54:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 03 12:07:57 crc kubenswrapper[4820]: I0203 12:07:57.385465 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-5df5f5bb-qqlt7" podUID="c4859338-3c97-453e-8ef8-db6f786c1172" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.54:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 03 12:07:57 crc kubenswrapper[4820]: I0203 12:07:57.430365 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"]
Feb 03 12:07:57 crc kubenswrapper[4820]: I0203 12:07:57.453833 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5f648e21-019c-4ed2-a381-77f0166c5ecc-var-lock\") pod \"installer-9-crc\" (UID: \"5f648e21-019c-4ed2-a381-77f0166c5ecc\") " pod="openshift-kube-apiserver/installer-9-crc"
Feb 03 12:07:57 crc kubenswrapper[4820]: I0203 12:07:57.453907 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5f648e21-019c-4ed2-a381-77f0166c5ecc-kubelet-dir\") pod \"installer-9-crc\" (UID: \"5f648e21-019c-4ed2-a381-77f0166c5ecc\") " pod="openshift-kube-apiserver/installer-9-crc"
Feb 03 12:07:57 crc kubenswrapper[4820]: I0203 12:07:57.460690 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5f648e21-019c-4ed2-a381-77f0166c5ecc-kube-api-access\") pod \"installer-9-crc\" (UID: \"5f648e21-019c-4ed2-a381-77f0166c5ecc\") " pod="openshift-kube-apiserver/installer-9-crc"
Feb 03 12:07:57 crc kubenswrapper[4820]: I0203 12:07:57.585028 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5f648e21-019c-4ed2-a381-77f0166c5ecc-var-lock\") pod \"installer-9-crc\" (UID: \"5f648e21-019c-4ed2-a381-77f0166c5ecc\") " pod="openshift-kube-apiserver/installer-9-crc"
Feb 03 12:07:57 crc kubenswrapper[4820]: I0203 12:07:57.585130 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5f648e21-019c-4ed2-a381-77f0166c5ecc-kubelet-dir\") pod \"installer-9-crc\" (UID: \"5f648e21-019c-4ed2-a381-77f0166c5ecc\") " pod="openshift-kube-apiserver/installer-9-crc"
Feb 03 12:07:57 crc kubenswrapper[4820]: I0203 12:07:57.585429 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5f648e21-019c-4ed2-a381-77f0166c5ecc-kube-api-access\") pod \"installer-9-crc\" (UID: \"5f648e21-019c-4ed2-a381-77f0166c5ecc\") " pod="openshift-kube-apiserver/installer-9-crc"
Feb 03 12:07:57 crc kubenswrapper[4820]: I0203 12:07:57.585469 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5f648e21-019c-4ed2-a381-77f0166c5ecc-var-lock\") pod \"installer-9-crc\" (UID: \"5f648e21-019c-4ed2-a381-77f0166c5ecc\") " pod="openshift-kube-apiserver/installer-9-crc"
Feb 03 12:07:57 crc kubenswrapper[4820]: I0203 12:07:57.585588 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5f648e21-019c-4ed2-a381-77f0166c5ecc-kubelet-dir\") pod \"installer-9-crc\" (UID: \"5f648e21-019c-4ed2-a381-77f0166c5ecc\") " pod="openshift-kube-apiserver/installer-9-crc"
Feb 03 12:07:57 crc kubenswrapper[4820]: I0203 12:07:57.809788 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5f648e21-019c-4ed2-a381-77f0166c5ecc-kube-api-access\") pod \"installer-9-crc\" (UID: \"5f648e21-019c-4ed2-a381-77f0166c5ecc\") " pod="openshift-kube-apiserver/installer-9-crc"
Feb 03 12:07:57 crc kubenswrapper[4820]: I0203 12:07:57.993762 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-qpxpv"
Feb 03 12:07:58 crc kubenswrapper[4820]: I0203 12:07:58.393094 4820 patch_prober.go:28] interesting pod/router-default-5444994796-h22tk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 03 12:07:58 crc kubenswrapper[4820]: [-]has-synced failed: reason withheld
Feb 03 12:07:58 crc kubenswrapper[4820]: [+]process-running ok
Feb 03 12:07:58 crc kubenswrapper[4820]: healthz check failed
Feb 03 12:07:58 crc kubenswrapper[4820]: I0203 12:07:58.393145 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h22tk" podUID="a227a161-8e53-4817-b7b2-48206c4916fb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 03 12:07:58 crc kubenswrapper[4820]: I0203 12:07:58.399117 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc"
Feb 03 12:07:58 crc kubenswrapper[4820]: I0203 12:07:58.455139 4820 generic.go:334] "Generic (PLEG): container finished" podID="f24f6a36-6778-4419-b51e-2e127ffa351a" containerID="473e0aeef5ecfe13fb33b46eaa2fef9a59721fc7765bf96825767a2ec97f9e5b" exitCode=0
Feb 03 12:07:58 crc kubenswrapper[4820]: I0203 12:07:58.455192 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5bccb6449b-qsmgh" event={"ID":"f24f6a36-6778-4419-b51e-2e127ffa351a","Type":"ContainerDied","Data":"473e0aeef5ecfe13fb33b46eaa2fef9a59721fc7765bf96825767a2ec97f9e5b"}
Feb 03 12:07:58 crc kubenswrapper[4820]: I0203 12:07:58.462388 4820 generic.go:334] "Generic (PLEG): container finished" podID="c4859338-3c97-453e-8ef8-db6f786c1172" containerID="dec753267a2a019f2ac82078468bc8cb300cd6204209e5963d076ff29717861d" exitCode=0
Feb 03 12:07:58 crc kubenswrapper[4820]: I0203 12:07:58.462470 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5df5f5bb-qqlt7" event={"ID":"c4859338-3c97-453e-8ef8-db6f786c1172","Type":"ContainerDied","Data":"dec753267a2a019f2ac82078468bc8cb300cd6204209e5963d076ff29717861d"}
Feb 03 12:07:59 crc kubenswrapper[4820]: I0203 12:07:59.045747 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Feb 03 12:07:59 crc kubenswrapper[4820]: I0203 12:07:59.218685 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9d434cee-7d3a-4492-b5f9-071b2527ac8a-kube-api-access\") pod \"9d434cee-7d3a-4492-b5f9-071b2527ac8a\" (UID: \"9d434cee-7d3a-4492-b5f9-071b2527ac8a\") "
Feb 03 12:07:59 crc kubenswrapper[4820]: I0203 12:07:59.218825 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9d434cee-7d3a-4492-b5f9-071b2527ac8a-kubelet-dir\") pod \"9d434cee-7d3a-4492-b5f9-071b2527ac8a\" (UID: \"9d434cee-7d3a-4492-b5f9-071b2527ac8a\") "
Feb 03 12:07:59 crc kubenswrapper[4820]: I0203 12:07:59.220757 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9d434cee-7d3a-4492-b5f9-071b2527ac8a-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "9d434cee-7d3a-4492-b5f9-071b2527ac8a" (UID: "9d434cee-7d3a-4492-b5f9-071b2527ac8a"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 03 12:07:59 crc kubenswrapper[4820]: I0203 12:07:59.227616 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d434cee-7d3a-4492-b5f9-071b2527ac8a-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "9d434cee-7d3a-4492-b5f9-071b2527ac8a" (UID: "9d434cee-7d3a-4492-b5f9-071b2527ac8a"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 12:07:59 crc kubenswrapper[4820]: I0203 12:07:59.242059 4820 patch_prober.go:28] interesting pod/router-default-5444994796-h22tk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 03 12:07:59 crc kubenswrapper[4820]: [-]has-synced failed: reason withheld
Feb 03 12:07:59 crc kubenswrapper[4820]: [+]process-running ok
Feb 03 12:07:59 crc kubenswrapper[4820]: healthz check failed
Feb 03 12:07:59 crc kubenswrapper[4820]: I0203 12:07:59.242125 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h22tk" podUID="a227a161-8e53-4817-b7b2-48206c4916fb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 03 12:07:59 crc kubenswrapper[4820]: I0203 12:07:59.356355 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9d434cee-7d3a-4492-b5f9-071b2527ac8a-kube-api-access\") on node \"crc\" DevicePath \"\""
Feb 03 12:07:59 crc kubenswrapper[4820]: I0203 12:07:59.356392 4820 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9d434cee-7d3a-4492-b5f9-071b2527ac8a-kubelet-dir\") on node \"crc\" DevicePath \"\""
Feb 03 12:07:59 crc kubenswrapper[4820]: I0203 12:07:59.480903 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"9d434cee-7d3a-4492-b5f9-071b2527ac8a","Type":"ContainerDied","Data":"61e7c93e442de2d50a26a54c1875564dc3e8b8df6a9c0f8e6f3be1aa24a6ba9c"}
Feb 03 12:07:59 crc kubenswrapper[4820]: I0203 12:07:59.481212 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="61e7c93e442de2d50a26a54c1875564dc3e8b8df6a9c0f8e6f3be1aa24a6ba9c"
Feb 03 12:07:59 crc kubenswrapper[4820]: I0203 12:07:59.481185 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Feb 03 12:07:59 crc kubenswrapper[4820]: I0203 12:07:59.691119 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"]
Feb 03 12:07:59 crc kubenswrapper[4820]: I0203 12:07:59.986179 4820 patch_prober.go:28] interesting pod/route-controller-manager-5df5f5bb-qqlt7 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.54:8443/healthz\": dial tcp 10.217.0.54:8443: connect: connection refused" start-of-body=
Feb 03 12:07:59 crc kubenswrapper[4820]: I0203 12:07:59.986228 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-5df5f5bb-qqlt7" podUID="c4859338-3c97-453e-8ef8-db6f786c1172" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.54:8443/healthz\": dial tcp 10.217.0.54:8443: connect: connection refused"
Feb 03 12:08:00 crc kubenswrapper[4820]: I0203 12:08:00.100440 4820 patch_prober.go:28] interesting pod/router-default-5444994796-h22tk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 03 12:08:00 crc kubenswrapper[4820]: [-]has-synced failed: reason withheld
Feb 03 12:08:00 crc kubenswrapper[4820]: [+]process-running ok
Feb 03 12:08:00 crc kubenswrapper[4820]: healthz check failed
Feb 03 12:08:00 crc kubenswrapper[4820]: I0203 12:08:00.100492 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h22tk" podUID="a227a161-8e53-4817-b7b2-48206c4916fb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 03 12:08:00 crc kubenswrapper[4820]: I0203 12:08:00.598258 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"b341518a-e00b-45eb-a279-d00da0cd6d13","Type":"ContainerStarted","Data":"89ec8f6423c5cf61c9ee9d81f2821d4bdcfefff22bb363cc73edd4ed69e3ab59"}
Feb 03 12:08:00 crc kubenswrapper[4820]: I0203 12:08:00.601380 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5bccb6449b-qsmgh"
Feb 03 12:08:00 crc kubenswrapper[4820]: I0203 12:08:00.603993 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5bccb6449b-qsmgh" event={"ID":"f24f6a36-6778-4419-b51e-2e127ffa351a","Type":"ContainerDied","Data":"a1cf989c77b8b5077526756cf492658ea44aaa7e42517653d0598634c4ba9f9a"}
Feb 03 12:08:00 crc kubenswrapper[4820]: I0203 12:08:00.604025 4820 scope.go:117] "RemoveContainer" containerID="473e0aeef5ecfe13fb33b46eaa2fef9a59721fc7765bf96825767a2ec97f9e5b"
Feb 03 12:08:00 crc kubenswrapper[4820]: I0203 12:08:00.713676 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f24f6a36-6778-4419-b51e-2e127ffa351a-config\") pod \"f24f6a36-6778-4419-b51e-2e127ffa351a\" (UID: \"f24f6a36-6778-4419-b51e-2e127ffa351a\") "
Feb 03 12:08:00 crc kubenswrapper[4820]: I0203 12:08:00.713724 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f24f6a36-6778-4419-b51e-2e127ffa351a-serving-cert\") pod \"f24f6a36-6778-4419-b51e-2e127ffa351a\" (UID: \"f24f6a36-6778-4419-b51e-2e127ffa351a\") "
Feb 03 12:08:00 crc kubenswrapper[4820]: I0203 12:08:00.713774 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jnjlw\" (UniqueName: \"kubernetes.io/projected/f24f6a36-6778-4419-b51e-2e127ffa351a-kube-api-access-jnjlw\") pod \"f24f6a36-6778-4419-b51e-2e127ffa351a\" (UID: \"f24f6a36-6778-4419-b51e-2e127ffa351a\") "
Feb 03 12:08:00 crc kubenswrapper[4820]: I0203 12:08:00.713851 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f24f6a36-6778-4419-b51e-2e127ffa351a-proxy-ca-bundles\") pod \"f24f6a36-6778-4419-b51e-2e127ffa351a\" (UID: \"f24f6a36-6778-4419-b51e-2e127ffa351a\") "
Feb 03 12:08:00 crc kubenswrapper[4820]: I0203 12:08:00.713909 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f24f6a36-6778-4419-b51e-2e127ffa351a-client-ca\") pod \"f24f6a36-6778-4419-b51e-2e127ffa351a\" (UID: \"f24f6a36-6778-4419-b51e-2e127ffa351a\") "
Feb 03 12:08:00 crc kubenswrapper[4820]: I0203 12:08:00.714843 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f24f6a36-6778-4419-b51e-2e127ffa351a-config" (OuterVolumeSpecName: "config") pod "f24f6a36-6778-4419-b51e-2e127ffa351a" (UID: "f24f6a36-6778-4419-b51e-2e127ffa351a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 03 12:08:00 crc kubenswrapper[4820]: I0203 12:08:00.715211 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f24f6a36-6778-4419-b51e-2e127ffa351a-client-ca" (OuterVolumeSpecName: "client-ca") pod "f24f6a36-6778-4419-b51e-2e127ffa351a" (UID: "f24f6a36-6778-4419-b51e-2e127ffa351a"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 03 12:08:00 crc kubenswrapper[4820]: I0203 12:08:00.715376 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f24f6a36-6778-4419-b51e-2e127ffa351a-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "f24f6a36-6778-4419-b51e-2e127ffa351a" (UID: "f24f6a36-6778-4419-b51e-2e127ffa351a"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 03 12:08:00 crc kubenswrapper[4820]: I0203 12:08:00.816443 4820 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f24f6a36-6778-4419-b51e-2e127ffa351a-config\") on node \"crc\" DevicePath \"\""
Feb 03 12:08:00 crc kubenswrapper[4820]: I0203 12:08:00.816473 4820 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f24f6a36-6778-4419-b51e-2e127ffa351a-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Feb 03 12:08:00 crc kubenswrapper[4820]: I0203 12:08:00.816483 4820 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f24f6a36-6778-4419-b51e-2e127ffa351a-client-ca\") on node \"crc\" DevicePath \"\""
Feb 03 12:08:01 crc kubenswrapper[4820]: I0203 12:08:01.079250 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"]
Feb 03 12:08:01 crc kubenswrapper[4820]: I0203 12:08:01.220615 4820 patch_prober.go:28] interesting pod/router-default-5444994796-h22tk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 03 12:08:01 crc kubenswrapper[4820]: [-]has-synced failed: reason withheld
Feb 03 12:08:01 crc kubenswrapper[4820]: [+]process-running ok
Feb 03 12:08:01 crc kubenswrapper[4820]: healthz check failed
Feb 03 12:08:01 crc kubenswrapper[4820]: I0203 12:08:01.221062 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h22tk" podUID="a227a161-8e53-4817-b7b2-48206c4916fb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 03 12:08:01 crc kubenswrapper[4820]: I0203 12:08:01.221645 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f24f6a36-6778-4419-b51e-2e127ffa351a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f24f6a36-6778-4419-b51e-2e127ffa351a" (UID: "f24f6a36-6778-4419-b51e-2e127ffa351a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 12:08:01 crc kubenswrapper[4820]: I0203 12:08:01.222001 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f24f6a36-6778-4419-b51e-2e127ffa351a-kube-api-access-jnjlw" (OuterVolumeSpecName: "kube-api-access-jnjlw") pod "f24f6a36-6778-4419-b51e-2e127ffa351a" (UID: "f24f6a36-6778-4419-b51e-2e127ffa351a"). InnerVolumeSpecName "kube-api-access-jnjlw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 12:08:01 crc kubenswrapper[4820]: I0203 12:08:01.224297 4820 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f24f6a36-6778-4419-b51e-2e127ffa351a-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 03 12:08:01 crc kubenswrapper[4820]: I0203 12:08:01.224468 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jnjlw\" (UniqueName: \"kubernetes.io/projected/f24f6a36-6778-4419-b51e-2e127ffa351a-kube-api-access-jnjlw\") on node \"crc\" DevicePath \"\""
Feb 03 12:08:01 crc kubenswrapper[4820]: I0203 12:08:01.433656 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qj7xr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 03 12:08:01 crc kubenswrapper[4820]: I0203 12:08:01.433716 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 03 12:08:01 crc kubenswrapper[4820]: I0203 12:08:01.439773 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 03 12:08:01 crc kubenswrapper[4820]: I0203 12:08:01.628940 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5df5f5bb-qqlt7"
Feb 03 12:08:01 crc kubenswrapper[4820]: I0203 12:08:01.636204 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5bccb6449b-qsmgh"
Feb 03 12:08:01 crc kubenswrapper[4820]: I0203 12:08:01.654551 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5df5f5bb-qqlt7" event={"ID":"c4859338-3c97-453e-8ef8-db6f786c1172","Type":"ContainerDied","Data":"0bc0248fd1d1cb0b695f166174de7d5e9a4bb8e31855b5f7a86eb880a1901f2b"}
Feb 03 12:08:01 crc kubenswrapper[4820]: I0203 12:08:01.654618 4820 scope.go:117] "RemoveContainer" containerID="dec753267a2a019f2ac82078468bc8cb300cd6204209e5963d076ff29717861d"
Feb 03 12:08:01 crc kubenswrapper[4820]: I0203 12:08:01.654745 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5df5f5bb-qqlt7"
Feb 03 12:08:01 crc kubenswrapper[4820]: I0203 12:08:01.657490 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"5f648e21-019c-4ed2-a381-77f0166c5ecc","Type":"ContainerStarted","Data":"f8be106aab2d2d002c1becf3ee64e718a4d85322369274bdd44cdc77b20b7ef2"}
Feb 03 12:08:01 crc kubenswrapper[4820]: I0203 12:08:01.960911 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5bccb6449b-qsmgh"]
Feb 03 12:08:01 crc kubenswrapper[4820]: I0203 12:08:01.972069 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-5bccb6449b-qsmgh"]
Feb 03 12:08:01 crc kubenswrapper[4820]: I0203 12:08:01.980579 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-557d47bcf4-ztmcn"]
Feb 03 12:08:01 crc kubenswrapper[4820]: E0203 12:08:01.980983 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d434cee-7d3a-4492-b5f9-071b2527ac8a" containerName="pruner"
Feb 03 12:08:01 crc kubenswrapper[4820]: I0203 12:08:01.981006 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d434cee-7d3a-4492-b5f9-071b2527ac8a" containerName="pruner"
Feb 03 12:08:01 crc kubenswrapper[4820]: E0203 12:08:01.981026 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4859338-3c97-453e-8ef8-db6f786c1172" containerName="route-controller-manager"
Feb 03 12:08:01 crc kubenswrapper[4820]: I0203 12:08:01.981034 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4859338-3c97-453e-8ef8-db6f786c1172" containerName="route-controller-manager"
Feb 03 12:08:01 crc kubenswrapper[4820]: E0203 12:08:01.981043 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f24f6a36-6778-4419-b51e-2e127ffa351a" containerName="controller-manager"
Feb 03 12:08:01 crc kubenswrapper[4820]: I0203 12:08:01.981052 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="f24f6a36-6778-4419-b51e-2e127ffa351a" containerName="controller-manager"
Feb 03 12:08:01 crc kubenswrapper[4820]: I0203 12:08:01.981290 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4859338-3c97-453e-8ef8-db6f786c1172" containerName="route-controller-manager"
Feb 03 12:08:01 crc kubenswrapper[4820]: I0203 12:08:01.981312 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d434cee-7d3a-4492-b5f9-071b2527ac8a" containerName="pruner"
Feb 03 12:08:01 crc kubenswrapper[4820]: I0203 12:08:01.981328 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="f24f6a36-6778-4419-b51e-2e127ffa351a" containerName="controller-manager"
Feb 03 12:08:01 crc kubenswrapper[4820]: I0203 12:08:01.982262 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-557d47bcf4-ztmcn"
Feb 03 12:08:01 crc kubenswrapper[4820]: I0203 12:08:01.984578 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-645448985d-vdjc6"]
Feb 03 12:08:01 crc kubenswrapper[4820]: I0203 12:08:01.985344 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-645448985d-vdjc6"
Feb 03 12:08:01 crc kubenswrapper[4820]: I0203 12:08:01.988518 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Feb 03 12:08:01 crc kubenswrapper[4820]: I0203 12:08:01.988533 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Feb 03 12:08:01 crc kubenswrapper[4820]: I0203 12:08:01.988526 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Feb 03 12:08:01 crc kubenswrapper[4820]: I0203 12:08:01.989029 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Feb 03 12:08:01 crc kubenswrapper[4820]: I0203 12:08:01.989216 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Feb 03 12:08:01 crc kubenswrapper[4820]: I0203 12:08:01.993480 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Feb 03 12:08:01 crc kubenswrapper[4820]: I0203 12:08:01.997708 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-645448985d-vdjc6"]
Feb 03 12:08:02 crc kubenswrapper[4820]: I0203 12:08:02.004491 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Feb 03 12:08:02 crc kubenswrapper[4820]: I0203 12:08:02.006145 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-557d47bcf4-ztmcn"]
Feb 03 12:08:02 crc kubenswrapper[4820]: I0203 12:08:02.011515 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c4859338-3c97-453e-8ef8-db6f786c1172-client-ca\") pod \"c4859338-3c97-453e-8ef8-db6f786c1172\" (UID: \"c4859338-3c97-453e-8ef8-db6f786c1172\") "
Feb 03 12:08:02 crc kubenswrapper[4820]: I0203 12:08:02.011587 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c4859338-3c97-453e-8ef8-db6f786c1172-serving-cert\") pod \"c4859338-3c97-453e-8ef8-db6f786c1172\" (UID: \"c4859338-3c97-453e-8ef8-db6f786c1172\") "
Feb 03 12:08:02 crc kubenswrapper[4820]: I0203 12:08:02.011635 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c4859338-3c97-453e-8ef8-db6f786c1172-config\") pod \"c4859338-3c97-453e-8ef8-db6f786c1172\" (UID: \"c4859338-3c97-453e-8ef8-db6f786c1172\") "
Feb 03 12:08:02 crc kubenswrapper[4820]: I0203 12:08:02.011713 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g4tvb\" (UniqueName: \"kubernetes.io/projected/c4859338-3c97-453e-8ef8-db6f786c1172-kube-api-access-g4tvb\") pod \"c4859338-3c97-453e-8ef8-db6f786c1172\" (UID: \"c4859338-3c97-453e-8ef8-db6f786c1172\") "
Feb 03 12:08:02 crc kubenswrapper[4820]: I0203 12:08:02.014128 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c4859338-3c97-453e-8ef8-db6f786c1172-client-ca" (OuterVolumeSpecName: "client-ca") pod "c4859338-3c97-453e-8ef8-db6f786c1172" (UID: "c4859338-3c97-453e-8ef8-db6f786c1172"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 03 12:08:02 crc kubenswrapper[4820]: I0203 12:08:02.014309 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c4859338-3c97-453e-8ef8-db6f786c1172-config" (OuterVolumeSpecName: "config") pod "c4859338-3c97-453e-8ef8-db6f786c1172" (UID: "c4859338-3c97-453e-8ef8-db6f786c1172"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 03 12:08:02 crc kubenswrapper[4820]: I0203 12:08:02.015950 4820 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c4859338-3c97-453e-8ef8-db6f786c1172-client-ca\") on node \"crc\" DevicePath \"\""
Feb 03 12:08:02 crc kubenswrapper[4820]: I0203 12:08:02.015991 4820 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c4859338-3c97-453e-8ef8-db6f786c1172-config\") on node \"crc\" DevicePath \"\""
Feb 03 12:08:02 crc kubenswrapper[4820]: I0203 12:08:02.023160 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4859338-3c97-453e-8ef8-db6f786c1172-kube-api-access-g4tvb" (OuterVolumeSpecName: "kube-api-access-g4tvb") pod "c4859338-3c97-453e-8ef8-db6f786c1172" (UID: "c4859338-3c97-453e-8ef8-db6f786c1172"). InnerVolumeSpecName "kube-api-access-g4tvb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 12:08:02 crc kubenswrapper[4820]: I0203 12:08:02.032454 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4859338-3c97-453e-8ef8-db6f786c1172-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "c4859338-3c97-453e-8ef8-db6f786c1172" (UID: "c4859338-3c97-453e-8ef8-db6f786c1172"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 12:08:02 crc kubenswrapper[4820]: I0203 12:08:02.171650 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a7735ae-474f-4c7e-8d71-bb6f3e06ab05-config\") pod \"controller-manager-645448985d-vdjc6\" (UID: \"5a7735ae-474f-4c7e-8d71-bb6f3e06ab05\") " pod="openshift-controller-manager/controller-manager-645448985d-vdjc6"
Feb 03 12:08:02 crc kubenswrapper[4820]: I0203 12:08:02.171873 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21113ff0-f43a-4138-9bbe-485e6e54d9a9-config\") pod \"route-controller-manager-557d47bcf4-ztmcn\" (UID: \"21113ff0-f43a-4138-9bbe-485e6e54d9a9\") " pod="openshift-route-controller-manager/route-controller-manager-557d47bcf4-ztmcn"
Feb 03 12:08:02 crc kubenswrapper[4820]: I0203 12:08:02.171929 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/21113ff0-f43a-4138-9bbe-485e6e54d9a9-serving-cert\") pod \"route-controller-manager-557d47bcf4-ztmcn\" (UID: \"21113ff0-f43a-4138-9bbe-485e6e54d9a9\") " pod="openshift-route-controller-manager/route-controller-manager-557d47bcf4-ztmcn"
Feb 03 12:08:02 crc kubenswrapper[4820]: I0203 12:08:02.171967 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5a7735ae-474f-4c7e-8d71-bb6f3e06ab05-proxy-ca-bundles\") pod \"controller-manager-645448985d-vdjc6\" (UID: \"5a7735ae-474f-4c7e-8d71-bb6f3e06ab05\") " pod="openshift-controller-manager/controller-manager-645448985d-vdjc6"
Feb 03 12:08:02 crc kubenswrapper[4820]: I0203 12:08:02.171988 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5a7735ae-474f-4c7e-8d71-bb6f3e06ab05-serving-cert\") pod \"controller-manager-645448985d-vdjc6\" (UID: \"5a7735ae-474f-4c7e-8d71-bb6f3e06ab05\") " pod="openshift-controller-manager/controller-manager-645448985d-vdjc6"
Feb 03 12:08:02 crc kubenswrapper[4820]: I0203 12:08:02.172098 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/21113ff0-f43a-4138-9bbe-485e6e54d9a9-client-ca\") pod \"route-controller-manager-557d47bcf4-ztmcn\" (UID: \"21113ff0-f43a-4138-9bbe-485e6e54d9a9\") " pod="openshift-route-controller-manager/route-controller-manager-557d47bcf4-ztmcn"
Feb 03 12:08:02 crc kubenswrapper[4820]: I0203 12:08:02.172614 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g49nk\" (UniqueName: \"kubernetes.io/projected/21113ff0-f43a-4138-9bbe-485e6e54d9a9-kube-api-access-g49nk\") pod \"route-controller-manager-557d47bcf4-ztmcn\" (UID: \"21113ff0-f43a-4138-9bbe-485e6e54d9a9\") " pod="openshift-route-controller-manager/route-controller-manager-557d47bcf4-ztmcn"
Feb 03 12:08:02 crc kubenswrapper[4820]: I0203 12:08:02.172644 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5a7735ae-474f-4c7e-8d71-bb6f3e06ab05-client-ca\") pod \"controller-manager-645448985d-vdjc6\" (UID: \"5a7735ae-474f-4c7e-8d71-bb6f3e06ab05\") " pod="openshift-controller-manager/controller-manager-645448985d-vdjc6"
Feb 03 12:08:02 crc kubenswrapper[4820]: I0203 12:08:02.172680 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vvb46\" (UniqueName: \"kubernetes.io/projected/5a7735ae-474f-4c7e-8d71-bb6f3e06ab05-kube-api-access-vvb46\") pod \"controller-manager-645448985d-vdjc6\" (UID: \"5a7735ae-474f-4c7e-8d71-bb6f3e06ab05\") " pod="openshift-controller-manager/controller-manager-645448985d-vdjc6"
Feb 03 12:08:02 crc kubenswrapper[4820]: I0203 12:08:02.172724 4820 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c4859338-3c97-453e-8ef8-db6f786c1172-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 03 12:08:02 crc kubenswrapper[4820]: I0203 12:08:02.172735 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g4tvb\" (UniqueName: \"kubernetes.io/projected/c4859338-3c97-453e-8ef8-db6f786c1172-kube-api-access-g4tvb\") on node \"crc\" DevicePath \"\""
Feb 03 12:08:02 crc kubenswrapper[4820]: I0203 12:08:02.173933 4820 patch_prober.go:28] interesting pod/router-default-5444994796-h22tk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 03 12:08:02 crc kubenswrapper[4820]: [-]has-synced failed: reason withheld
Feb 03 12:08:02 crc kubenswrapper[4820]: [+]process-running ok
Feb 03 12:08:02 crc kubenswrapper[4820]: healthz check failed
Feb 03 12:08:02 crc kubenswrapper[4820]: I0203 12:08:02.173967 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h22tk" podUID="a227a161-8e53-4817-b7b2-48206c4916fb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 03 12:08:02 crc kubenswrapper[4820]: I0203 12:08:02.308272 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5a7735ae-474f-4c7e-8d71-bb6f3e06ab05-proxy-ca-bundles\") pod \"controller-manager-645448985d-vdjc6\" (UID: \"5a7735ae-474f-4c7e-8d71-bb6f3e06ab05\") " pod="openshift-controller-manager/controller-manager-645448985d-vdjc6"
Feb 03 12:08:02 crc kubenswrapper[4820]: I0203 12:08:02.308316 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5a7735ae-474f-4c7e-8d71-bb6f3e06ab05-serving-cert\") pod \"controller-manager-645448985d-vdjc6\" (UID: \"5a7735ae-474f-4c7e-8d71-bb6f3e06ab05\") " pod="openshift-controller-manager/controller-manager-645448985d-vdjc6"
Feb 03 12:08:02 crc kubenswrapper[4820]: I0203 12:08:02.308338 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/21113ff0-f43a-4138-9bbe-485e6e54d9a9-client-ca\") pod \"route-controller-manager-557d47bcf4-ztmcn\" (UID: \"21113ff0-f43a-4138-9bbe-485e6e54d9a9\") " pod="openshift-route-controller-manager/route-controller-manager-557d47bcf4-ztmcn"
Feb 03 12:08:02 crc kubenswrapper[4820]: I0203 12:08:02.308366 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g49nk\" (UniqueName: \"kubernetes.io/projected/21113ff0-f43a-4138-9bbe-485e6e54d9a9-kube-api-access-g49nk\") pod \"route-controller-manager-557d47bcf4-ztmcn\" (UID: \"21113ff0-f43a-4138-9bbe-485e6e54d9a9\") " 
pod="openshift-route-controller-manager/route-controller-manager-557d47bcf4-ztmcn" Feb 03 12:08:02 crc kubenswrapper[4820]: I0203 12:08:02.308391 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5a7735ae-474f-4c7e-8d71-bb6f3e06ab05-client-ca\") pod \"controller-manager-645448985d-vdjc6\" (UID: \"5a7735ae-474f-4c7e-8d71-bb6f3e06ab05\") " pod="openshift-controller-manager/controller-manager-645448985d-vdjc6" Feb 03 12:08:02 crc kubenswrapper[4820]: I0203 12:08:02.308424 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vvb46\" (UniqueName: \"kubernetes.io/projected/5a7735ae-474f-4c7e-8d71-bb6f3e06ab05-kube-api-access-vvb46\") pod \"controller-manager-645448985d-vdjc6\" (UID: \"5a7735ae-474f-4c7e-8d71-bb6f3e06ab05\") " pod="openshift-controller-manager/controller-manager-645448985d-vdjc6" Feb 03 12:08:02 crc kubenswrapper[4820]: I0203 12:08:02.308454 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a7735ae-474f-4c7e-8d71-bb6f3e06ab05-config\") pod \"controller-manager-645448985d-vdjc6\" (UID: \"5a7735ae-474f-4c7e-8d71-bb6f3e06ab05\") " pod="openshift-controller-manager/controller-manager-645448985d-vdjc6" Feb 03 12:08:02 crc kubenswrapper[4820]: I0203 12:08:02.308518 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21113ff0-f43a-4138-9bbe-485e6e54d9a9-config\") pod \"route-controller-manager-557d47bcf4-ztmcn\" (UID: \"21113ff0-f43a-4138-9bbe-485e6e54d9a9\") " pod="openshift-route-controller-manager/route-controller-manager-557d47bcf4-ztmcn" Feb 03 12:08:02 crc kubenswrapper[4820]: I0203 12:08:02.308545 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/21113ff0-f43a-4138-9bbe-485e6e54d9a9-serving-cert\") pod \"route-controller-manager-557d47bcf4-ztmcn\" (UID: \"21113ff0-f43a-4138-9bbe-485e6e54d9a9\") " pod="openshift-route-controller-manager/route-controller-manager-557d47bcf4-ztmcn" Feb 03 12:08:02 crc kubenswrapper[4820]: I0203 12:08:02.321855 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5a7735ae-474f-4c7e-8d71-bb6f3e06ab05-proxy-ca-bundles\") pod \"controller-manager-645448985d-vdjc6\" (UID: \"5a7735ae-474f-4c7e-8d71-bb6f3e06ab05\") " pod="openshift-controller-manager/controller-manager-645448985d-vdjc6" Feb 03 12:08:02 crc kubenswrapper[4820]: I0203 12:08:02.322058 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5a7735ae-474f-4c7e-8d71-bb6f3e06ab05-client-ca\") pod \"controller-manager-645448985d-vdjc6\" (UID: \"5a7735ae-474f-4c7e-8d71-bb6f3e06ab05\") " pod="openshift-controller-manager/controller-manager-645448985d-vdjc6" Feb 03 12:08:02 crc kubenswrapper[4820]: I0203 12:08:02.322839 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/21113ff0-f43a-4138-9bbe-485e6e54d9a9-client-ca\") pod \"route-controller-manager-557d47bcf4-ztmcn\" (UID: \"21113ff0-f43a-4138-9bbe-485e6e54d9a9\") " pod="openshift-route-controller-manager/route-controller-manager-557d47bcf4-ztmcn" Feb 03 12:08:02 crc kubenswrapper[4820]: I0203 12:08:02.325078 4820 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21113ff0-f43a-4138-9bbe-485e6e54d9a9-config\") pod \"route-controller-manager-557d47bcf4-ztmcn\" (UID: \"21113ff0-f43a-4138-9bbe-485e6e54d9a9\") " pod="openshift-route-controller-manager/route-controller-manager-557d47bcf4-ztmcn" Feb 03 12:08:02 crc kubenswrapper[4820]: I0203 12:08:02.328733 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5a7735ae-474f-4c7e-8d71-bb6f3e06ab05-serving-cert\") pod \"controller-manager-645448985d-vdjc6\" (UID: \"5a7735ae-474f-4c7e-8d71-bb6f3e06ab05\") " pod="openshift-controller-manager/controller-manager-645448985d-vdjc6" Feb 03 12:08:02 crc kubenswrapper[4820]: I0203 12:08:02.339256 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/21113ff0-f43a-4138-9bbe-485e6e54d9a9-serving-cert\") pod \"route-controller-manager-557d47bcf4-ztmcn\" (UID: \"21113ff0-f43a-4138-9bbe-485e6e54d9a9\") " pod="openshift-route-controller-manager/route-controller-manager-557d47bcf4-ztmcn" Feb 03 12:08:02 crc kubenswrapper[4820]: I0203 12:08:02.341176 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a7735ae-474f-4c7e-8d71-bb6f3e06ab05-config\") pod \"controller-manager-645448985d-vdjc6\" (UID: \"5a7735ae-474f-4c7e-8d71-bb6f3e06ab05\") " pod="openshift-controller-manager/controller-manager-645448985d-vdjc6" Feb 03 12:08:02 crc kubenswrapper[4820]: I0203 12:08:02.346865 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g49nk\" (UniqueName: \"kubernetes.io/projected/21113ff0-f43a-4138-9bbe-485e6e54d9a9-kube-api-access-g49nk\") pod \"route-controller-manager-557d47bcf4-ztmcn\" (UID: \"21113ff0-f43a-4138-9bbe-485e6e54d9a9\") " pod="openshift-route-controller-manager/route-controller-manager-557d47bcf4-ztmcn" Feb 03 12:08:02 crc kubenswrapper[4820]: I0203 12:08:02.347461 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vvb46\" (UniqueName: \"kubernetes.io/projected/5a7735ae-474f-4c7e-8d71-bb6f3e06ab05-kube-api-access-vvb46\") pod \"controller-manager-645448985d-vdjc6\" (UID: \"5a7735ae-474f-4c7e-8d71-bb6f3e06ab05\") " pod="openshift-controller-manager/controller-manager-645448985d-vdjc6" Feb 03 12:08:02 crc kubenswrapper[4820]: I0203 12:08:02.364936 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-557d47bcf4-ztmcn" Feb 03 12:08:02 crc kubenswrapper[4820]: I0203 12:08:02.372642 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-645448985d-vdjc6" Feb 03 12:08:02 crc kubenswrapper[4820]: I0203 12:08:02.603498 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5df5f5bb-qqlt7"] Feb 03 12:08:02 crc kubenswrapper[4820]: I0203 12:08:02.938148 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5df5f5bb-qqlt7"] Feb 03 12:08:02 crc kubenswrapper[4820]: I0203 12:08:02.938234 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"b341518a-e00b-45eb-a279-d00da0cd6d13","Type":"ContainerStarted","Data":"bb3f78c7eff1c8d59388ea451315c45a7c4e69bedfd875191b02f95fc5c10937"} Feb 03 12:08:03 crc kubenswrapper[4820]: I0203 12:08:02.960625 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=11.960606906 podStartE2EDuration="11.960606906s" podCreationTimestamp="2026-02-03 12:07:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:08:02.957167885 +0000 UTC m=+200.480243759" watchObservedRunningTime="2026-02-03 12:08:02.960606906 +0000 UTC m=+200.483682770" Feb 03 12:08:03 crc kubenswrapper[4820]: I0203 12:08:03.175986 4820 patch_prober.go:28] interesting pod/router-default-5444994796-h22tk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 03 12:08:03 crc kubenswrapper[4820]: [-]has-synced failed: reason withheld Feb 03 12:08:03 crc kubenswrapper[4820]: [+]process-running ok Feb 03 12:08:03 crc kubenswrapper[4820]: healthz check failed Feb 03 12:08:03 crc kubenswrapper[4820]: I0203 12:08:03.176038 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h22tk" podUID="a227a161-8e53-4817-b7b2-48206c4916fb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 03 12:08:03 crc kubenswrapper[4820]: I0203 12:08:03.180144 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c4859338-3c97-453e-8ef8-db6f786c1172" path="/var/lib/kubelet/pods/c4859338-3c97-453e-8ef8-db6f786c1172/volumes" Feb 03 12:08:03 crc kubenswrapper[4820]: I0203 12:08:03.181048 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f24f6a36-6778-4419-b51e-2e127ffa351a" path="/var/lib/kubelet/pods/f24f6a36-6778-4419-b51e-2e127ffa351a/volumes" Feb 03 12:08:03 crc kubenswrapper[4820]: I0203 12:08:03.608917 4820 patch_prober.go:28] interesting pod/downloads-7954f5f757-lnc22 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Feb 03 12:08:03 crc kubenswrapper[4820]: I0203 12:08:03.609480 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-lnc22" podUID="876c5dc3-b775-45cc-94b6-4339735e9975" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Feb 03 12:08:03 crc kubenswrapper[4820]: I0203 12:08:03.611469 4820 patch_prober.go:28] interesting 
pod/downloads-7954f5f757-lnc22 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Feb 03 12:08:03 crc kubenswrapper[4820]: I0203 12:08:03.611531 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-lnc22" podUID="876c5dc3-b775-45cc-94b6-4339735e9975" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Feb 03 12:08:03 crc kubenswrapper[4820]: I0203 12:08:03.611590 4820 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console/downloads-7954f5f757-lnc22" Feb 03 12:08:03 crc kubenswrapper[4820]: I0203 12:08:03.612437 4820 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="download-server" containerStatusID={"Type":"cri-o","ID":"8db834c2526dc6ff63acff418e5cb17e8ce94b387b7a719475355b6d34bfc1d1"} pod="openshift-console/downloads-7954f5f757-lnc22" containerMessage="Container download-server failed liveness probe, will be restarted" Feb 03 12:08:03 crc kubenswrapper[4820]: I0203 12:08:03.612807 4820 patch_prober.go:28] interesting pod/downloads-7954f5f757-lnc22 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Feb 03 12:08:03 crc kubenswrapper[4820]: I0203 12:08:03.612833 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-lnc22" podUID="876c5dc3-b775-45cc-94b6-4339735e9975" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Feb 03 12:08:03 crc kubenswrapper[4820]: I0203 12:08:03.613232 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/downloads-7954f5f757-lnc22" podUID="876c5dc3-b775-45cc-94b6-4339735e9975" containerName="download-server" containerID="cri-o://8db834c2526dc6ff63acff418e5cb17e8ce94b387b7a719475355b6d34bfc1d1" gracePeriod=2 Feb 03 12:08:04 crc kubenswrapper[4820]: I0203 12:08:04.004125 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-557d47bcf4-ztmcn"] Feb 03 12:08:04 crc kubenswrapper[4820]: I0203 12:08:04.013322 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-645448985d-vdjc6"] Feb 03 12:08:04 crc kubenswrapper[4820]: W0203 12:08:04.057177 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod21113ff0_f43a_4138_9bbe_485e6e54d9a9.slice/crio-769d352f60201cebf6792c31b1b16320ca39e1b555891cb114cdc0ca1b78a1bb WatchSource:0}: Error finding container 769d352f60201cebf6792c31b1b16320ca39e1b555891cb114cdc0ca1b78a1bb: Status 404 returned error can't find the container with id 769d352f60201cebf6792c31b1b16320ca39e1b555891cb114cdc0ca1b78a1bb Feb 03 12:08:04 crc kubenswrapper[4820]: I0203 12:08:04.097432 4820 patch_prober.go:28] interesting pod/router-default-5444994796-h22tk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 03 12:08:04 crc kubenswrapper[4820]: 
[-]has-synced failed: reason withheld Feb 03 12:08:04 crc kubenswrapper[4820]: [+]process-running ok Feb 03 12:08:04 crc kubenswrapper[4820]: healthz check failed Feb 03 12:08:04 crc kubenswrapper[4820]: I0203 12:08:04.097502 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h22tk" podUID="a227a161-8e53-4817-b7b2-48206c4916fb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 03 12:08:04 crc kubenswrapper[4820]: I0203 12:08:04.289168 4820 patch_prober.go:28] interesting pod/console-f9d7485db-tw2nt container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.15:8443/health\": dial tcp 10.217.0.15:8443: connect: connection refused" start-of-body= Feb 03 12:08:04 crc kubenswrapper[4820]: I0203 12:08:04.289662 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-tw2nt" podUID="b06753a3-652a-4acc-b294-3ccaa5b0cb99" containerName="console" probeResult="failure" output="Get \"https://10.217.0.15:8443/health\": dial tcp 10.217.0.15:8443: connect: connection refused" Feb 03 12:08:04 crc kubenswrapper[4820]: I0203 12:08:04.841776 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-645448985d-vdjc6" event={"ID":"5a7735ae-474f-4c7e-8d71-bb6f3e06ab05","Type":"ContainerStarted","Data":"a47eda5cbea503786bb31b47c9cb16c2185fd7f5d7a61210de2dd8b260182d00"} Feb 03 12:08:04 crc kubenswrapper[4820]: I0203 12:08:04.844788 4820 generic.go:334] "Generic (PLEG): container finished" podID="876c5dc3-b775-45cc-94b6-4339735e9975" containerID="8db834c2526dc6ff63acff418e5cb17e8ce94b387b7a719475355b6d34bfc1d1" exitCode=0 Feb 03 12:08:04 crc kubenswrapper[4820]: I0203 12:08:04.844864 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-lnc22" event={"ID":"876c5dc3-b775-45cc-94b6-4339735e9975","Type":"ContainerDied","Data":"8db834c2526dc6ff63acff418e5cb17e8ce94b387b7a719475355b6d34bfc1d1"} Feb 03 12:08:04 crc kubenswrapper[4820]: I0203 12:08:04.844962 4820 scope.go:117] "RemoveContainer" containerID="6c4488126847dd3de8d2a9eb16836456561cd827e52a95954ae608cf60a52482" Feb 03 12:08:04 crc kubenswrapper[4820]: I0203 12:08:04.847730 4820 generic.go:334] "Generic (PLEG): container finished" podID="b341518a-e00b-45eb-a279-d00da0cd6d13" containerID="bb3f78c7eff1c8d59388ea451315c45a7c4e69bedfd875191b02f95fc5c10937" exitCode=0 Feb 03 12:08:04 crc kubenswrapper[4820]: I0203 12:08:04.847876 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"b341518a-e00b-45eb-a279-d00da0cd6d13","Type":"ContainerDied","Data":"bb3f78c7eff1c8d59388ea451315c45a7c4e69bedfd875191b02f95fc5c10937"} Feb 03 12:08:04 crc kubenswrapper[4820]: I0203 12:08:04.849732 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-557d47bcf4-ztmcn" event={"ID":"21113ff0-f43a-4138-9bbe-485e6e54d9a9","Type":"ContainerStarted","Data":"769d352f60201cebf6792c31b1b16320ca39e1b555891cb114cdc0ca1b78a1bb"} Feb 03 12:08:05 crc kubenswrapper[4820]: I0203 12:08:05.249788 4820 patch_prober.go:28] interesting pod/router-default-5444994796-h22tk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 03 12:08:05 crc kubenswrapper[4820]: [-]has-synced 
failed: reason withheld Feb 03 12:08:05 crc kubenswrapper[4820]: [+]process-running ok Feb 03 12:08:05 crc kubenswrapper[4820]: healthz check failed Feb 03 12:08:05 crc kubenswrapper[4820]: I0203 12:08:05.250251 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h22tk" podUID="a227a161-8e53-4817-b7b2-48206c4916fb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 03 12:08:06 crc kubenswrapper[4820]: I0203 12:08:06.102075 4820 patch_prober.go:28] interesting pod/router-default-5444994796-h22tk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 03 12:08:06 crc kubenswrapper[4820]: [-]has-synced failed: reason withheld Feb 03 12:08:06 crc kubenswrapper[4820]: [+]process-running ok Feb 03 12:08:06 crc kubenswrapper[4820]: healthz check failed Feb 03 12:08:06 crc kubenswrapper[4820]: I0203 12:08:06.102413 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h22tk" podUID="a227a161-8e53-4817-b7b2-48206c4916fb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 03 12:08:06 crc kubenswrapper[4820]: I0203 12:08:06.143598 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-645448985d-vdjc6" event={"ID":"5a7735ae-474f-4c7e-8d71-bb6f3e06ab05","Type":"ContainerStarted","Data":"84a8cfea8877c064fe516848d18a880005f2324d29b6ce26da7f90ed55b78bdd"} Feb 03 12:08:06 crc kubenswrapper[4820]: I0203 12:08:06.144452 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-645448985d-vdjc6" Feb 03 12:08:06 crc kubenswrapper[4820]: I0203 12:08:06.152317 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-645448985d-vdjc6" Feb 03 12:08:06 crc kubenswrapper[4820]: I0203 12:08:06.179858 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-645448985d-vdjc6" podStartSLOduration=13.179826746 podStartE2EDuration="13.179826746s" podCreationTimestamp="2026-02-03 12:07:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:08:06.170264578 +0000 UTC m=+203.693340452" watchObservedRunningTime="2026-02-03 12:08:06.179826746 +0000 UTC m=+203.702902610" Feb 03 12:08:06 crc kubenswrapper[4820]: I0203 12:08:06.202722 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-lnc22" event={"ID":"876c5dc3-b775-45cc-94b6-4339735e9975","Type":"ContainerStarted","Data":"0421ee1506a1a903427646b265d2645490c9bf4f584f7bfabeb5c2a9c107061b"} Feb 03 12:08:06 crc kubenswrapper[4820]: I0203 12:08:06.205448 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-lnc22" Feb 03 12:08:06 crc kubenswrapper[4820]: I0203 12:08:06.213749 4820 patch_prober.go:28] interesting pod/downloads-7954f5f757-lnc22 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Feb 03 12:08:06 crc kubenswrapper[4820]: I0203 12:08:06.213805 4820 
prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-lnc22" podUID="876c5dc3-b775-45cc-94b6-4339735e9975" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Feb 03 12:08:06 crc kubenswrapper[4820]: I0203 12:08:06.228332 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-557d47bcf4-ztmcn" event={"ID":"21113ff0-f43a-4138-9bbe-485e6e54d9a9","Type":"ContainerStarted","Data":"2cb33ae2ed073b4048d6eac76b01ca311717471bc247854867f53b9eeba0892a"} Feb 03 12:08:06 crc kubenswrapper[4820]: I0203 12:08:06.228754 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-557d47bcf4-ztmcn" Feb 03 12:08:06 crc kubenswrapper[4820]: I0203 12:08:06.295947 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-557d47bcf4-ztmcn" podStartSLOduration=12.295912589 podStartE2EDuration="12.295912589s" podCreationTimestamp="2026-02-03 12:07:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:08:06.285307937 +0000 UTC m=+203.808383821" watchObservedRunningTime="2026-02-03 12:08:06.295912589 +0000 UTC m=+203.818988453" Feb 03 12:08:06 crc kubenswrapper[4820]: I0203 12:08:06.590475 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-557d47bcf4-ztmcn" Feb 03 12:08:07 crc kubenswrapper[4820]: I0203 12:08:07.259010 4820 patch_prober.go:28] interesting pod/router-default-5444994796-h22tk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 03 12:08:07 crc kubenswrapper[4820]: [-]has-synced failed: reason withheld Feb 03 12:08:07 crc kubenswrapper[4820]: [+]process-running ok Feb 03 12:08:07 crc kubenswrapper[4820]: healthz check failed Feb 03 12:08:07 crc kubenswrapper[4820]: I0203 12:08:07.259321 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h22tk" podUID="a227a161-8e53-4817-b7b2-48206c4916fb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 03 12:08:07 crc kubenswrapper[4820]: I0203 12:08:07.309641 4820 patch_prober.go:28] interesting pod/downloads-7954f5f757-lnc22 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Feb 03 12:08:07 crc kubenswrapper[4820]: I0203 12:08:07.309710 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-lnc22" podUID="876c5dc3-b775-45cc-94b6-4339735e9975" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Feb 03 12:08:07 crc kubenswrapper[4820]: I0203 12:08:07.316008 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"5f648e21-019c-4ed2-a381-77f0166c5ecc","Type":"ContainerStarted","Data":"ce3e9a508bff06901b788ecd9affe72a177590e39feef00893cc727fa7f21f49"} Feb 03 
12:08:07 crc kubenswrapper[4820]: I0203 12:08:07.557341 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=10.557316892 podStartE2EDuration="10.557316892s" podCreationTimestamp="2026-02-03 12:07:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:08:07.55509369 +0000 UTC m=+205.078169564" watchObservedRunningTime="2026-02-03 12:08:07.557316892 +0000 UTC m=+205.080392756" Feb 03 12:08:08 crc kubenswrapper[4820]: I0203 12:08:08.096855 4820 patch_prober.go:28] interesting pod/router-default-5444994796-h22tk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 03 12:08:08 crc kubenswrapper[4820]: [-]has-synced failed: reason withheld Feb 03 12:08:08 crc kubenswrapper[4820]: [+]process-running ok Feb 03 12:08:08 crc kubenswrapper[4820]: healthz check failed Feb 03 12:08:08 crc kubenswrapper[4820]: I0203 12:08:08.097149 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h22tk" podUID="a227a161-8e53-4817-b7b2-48206c4916fb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 03 12:08:08 crc kubenswrapper[4820]: I0203 12:08:08.323634 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"b341518a-e00b-45eb-a279-d00da0cd6d13","Type":"ContainerDied","Data":"89ec8f6423c5cf61c9ee9d81f2821d4bdcfefff22bb363cc73edd4ed69e3ab59"} Feb 03 12:08:08 crc kubenswrapper[4820]: I0203 12:08:08.323665 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="89ec8f6423c5cf61c9ee9d81f2821d4bdcfefff22bb363cc73edd4ed69e3ab59" Feb 03 12:08:08 crc kubenswrapper[4820]: I0203 12:08:08.324812 4820 patch_prober.go:28] interesting pod/downloads-7954f5f757-lnc22 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Feb 03 12:08:08 crc kubenswrapper[4820]: I0203 12:08:08.324838 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-lnc22" podUID="876c5dc3-b775-45cc-94b6-4339735e9975" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Feb 03 12:08:08 crc kubenswrapper[4820]: I0203 12:08:08.421588 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 03 12:08:08 crc kubenswrapper[4820]: I0203 12:08:08.670305 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b341518a-e00b-45eb-a279-d00da0cd6d13-kubelet-dir\") pod \"b341518a-e00b-45eb-a279-d00da0cd6d13\" (UID: \"b341518a-e00b-45eb-a279-d00da0cd6d13\") " Feb 03 12:08:08 crc kubenswrapper[4820]: I0203 12:08:08.670371 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b341518a-e00b-45eb-a279-d00da0cd6d13-kube-api-access\") pod \"b341518a-e00b-45eb-a279-d00da0cd6d13\" (UID: \"b341518a-e00b-45eb-a279-d00da0cd6d13\") " Feb 03 12:08:08 crc kubenswrapper[4820]: I0203 12:08:08.671008 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b341518a-e00b-45eb-a279-d00da0cd6d13-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "b341518a-e00b-45eb-a279-d00da0cd6d13" (UID: "b341518a-e00b-45eb-a279-d00da0cd6d13"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 03 12:08:09 crc kubenswrapper[4820]: I0203 12:08:09.418574 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b341518a-e00b-45eb-a279-d00da0cd6d13-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "b341518a-e00b-45eb-a279-d00da0cd6d13" (UID: "b341518a-e00b-45eb-a279-d00da0cd6d13"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:08:09 crc kubenswrapper[4820]: I0203 12:08:09.453189 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b341518a-e00b-45eb-a279-d00da0cd6d13-kube-api-access\") pod \"b341518a-e00b-45eb-a279-d00da0cd6d13\" (UID: \"b341518a-e00b-45eb-a279-d00da0cd6d13\") " Feb 03 12:08:09 crc kubenswrapper[4820]: I0203 12:08:09.455808 4820 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b341518a-e00b-45eb-a279-d00da0cd6d13-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 03 12:08:09 crc kubenswrapper[4820]: W0203 12:08:09.464003 4820 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/b341518a-e00b-45eb-a279-d00da0cd6d13/volumes/kubernetes.io~projected/kube-api-access Feb 03 12:08:09 crc kubenswrapper[4820]: I0203 12:08:09.464070 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b341518a-e00b-45eb-a279-d00da0cd6d13-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "b341518a-e00b-45eb-a279-d00da0cd6d13" (UID: "b341518a-e00b-45eb-a279-d00da0cd6d13"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:08:09 crc kubenswrapper[4820]: I0203 12:08:09.470063 4820 patch_prober.go:28] interesting pod/router-default-5444994796-h22tk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 03 12:08:09 crc kubenswrapper[4820]: [-]has-synced failed: reason withheld Feb 03 12:08:09 crc kubenswrapper[4820]: [+]process-running ok Feb 03 12:08:09 crc kubenswrapper[4820]: healthz check failed Feb 03 12:08:09 crc kubenswrapper[4820]: I0203 12:08:09.470120 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h22tk" podUID="a227a161-8e53-4817-b7b2-48206c4916fb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 03 12:08:10 crc kubenswrapper[4820]: I0203 12:08:09.503313 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 03 12:08:10 crc kubenswrapper[4820]: I0203 12:08:10.052933 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b341518a-e00b-45eb-a279-d00da0cd6d13-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 03 12:08:10 crc kubenswrapper[4820]: I0203 12:08:10.054369 4820 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","burstable","pod05797a22-690b-4b36-8b4e-5dcc739f7cad"] err="unable to destroy cgroup paths for cgroup [kubepods burstable pod05797a22-690b-4b36-8b4e-5dcc739f7cad] : Timed out while waiting for systemd to remove kubepods-burstable-pod05797a22_690b_4b36_8b4e_5dcc739f7cad.slice" Feb 03 12:08:10 crc kubenswrapper[4820]: I0203 12:08:10.059805 4820 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","burstable","pod05797a22-690b-4b36-8b4e-5dcc739f7cad"] err="unable to destroy cgroup paths for cgroup [kubepods burstable pod05797a22-690b-4b36-8b4e-5dcc739f7cad] : Timed out while waiting for systemd to remove kubepods-burstable-pod05797a22_690b_4b36_8b4e_5dcc739f7cad.slice" Feb 03 12:08:10 crc kubenswrapper[4820]: E0203 12:08:10.059873 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to delete cgroup paths for [kubepods burstable pod05797a22-690b-4b36-8b4e-5dcc739f7cad] : unable to destroy cgroup paths for cgroup [kubepods burstable pod05797a22-690b-4b36-8b4e-5dcc739f7cad] : Timed out while waiting for systemd to remove kubepods-burstable-pod05797a22_690b_4b36_8b4e_5dcc739f7cad.slice" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cs8dg" podUID="05797a22-690b-4b36-8b4e-5dcc739f7cad" Feb 03 12:08:10 crc kubenswrapper[4820]: I0203 12:08:10.235185 4820 patch_prober.go:28] interesting pod/router-default-5444994796-h22tk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 03 12:08:10 crc kubenswrapper[4820]: [-]has-synced failed: reason withheld Feb 03 12:08:10 crc kubenswrapper[4820]: [+]process-running ok Feb 03 12:08:10 crc kubenswrapper[4820]: healthz check failed Feb 03 12:08:10 crc kubenswrapper[4820]: I0203 12:08:10.235255 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h22tk" podUID="a227a161-8e53-4817-b7b2-48206c4916fb" containerName="router" 
probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 03 12:08:11 crc kubenswrapper[4820]: I0203 12:08:11.039097 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cs8dg" Feb 03 12:08:11 crc kubenswrapper[4820]: I0203 12:08:11.835877 4820 patch_prober.go:28] interesting pod/router-default-5444994796-h22tk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 03 12:08:11 crc kubenswrapper[4820]: [-]has-synced failed: reason withheld Feb 03 12:08:11 crc kubenswrapper[4820]: [+]process-running ok Feb 03 12:08:11 crc kubenswrapper[4820]: healthz check failed Feb 03 12:08:11 crc kubenswrapper[4820]: I0203 12:08:11.835977 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h22tk" podUID="a227a161-8e53-4817-b7b2-48206c4916fb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 03 12:08:12 crc kubenswrapper[4820]: I0203 12:08:12.106119 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-cs8dg"] Feb 03 12:08:13 crc kubenswrapper[4820]: I0203 12:08:13.504980 4820 patch_prober.go:28] interesting pod/router-default-5444994796-h22tk container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 03 12:08:13 crc kubenswrapper[4820]: I0203 12:08:13.505031 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h22tk" podUID="a227a161-8e53-4817-b7b2-48206c4916fb" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 03 12:08:13 crc kubenswrapper[4820]: I0203 12:08:13.557006 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-cs8dg"] Feb 03 12:08:13 crc kubenswrapper[4820]: I0203 12:08:13.584809 4820 patch_prober.go:28] interesting pod/downloads-7954f5f757-lnc22 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Feb 03 12:08:13 crc kubenswrapper[4820]: I0203 12:08:13.584809 4820 patch_prober.go:28] interesting pod/downloads-7954f5f757-lnc22 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Feb 03 12:08:13 crc kubenswrapper[4820]: I0203 12:08:13.584862 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-lnc22" podUID="876c5dc3-b775-45cc-94b6-4339735e9975" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Feb 03 12:08:13 crc kubenswrapper[4820]: I0203 12:08:13.584860 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-lnc22" podUID="876c5dc3-b775-45cc-94b6-4339735e9975" containerName="download-server" 
probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Feb 03 12:08:13 crc kubenswrapper[4820]: I0203 12:08:13.774234 4820 patch_prober.go:28] interesting pod/router-default-5444994796-h22tk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 03 12:08:13 crc kubenswrapper[4820]: [-]has-synced failed: reason withheld Feb 03 12:08:13 crc kubenswrapper[4820]: [+]process-running ok Feb 03 12:08:13 crc kubenswrapper[4820]: healthz check failed Feb 03 12:08:13 crc kubenswrapper[4820]: I0203 12:08:13.774320 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h22tk" podUID="a227a161-8e53-4817-b7b2-48206c4916fb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 03 12:08:14 crc kubenswrapper[4820]: I0203 12:08:14.097680 4820 patch_prober.go:28] interesting pod/router-default-5444994796-h22tk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 03 12:08:14 crc kubenswrapper[4820]: [-]has-synced failed: reason withheld Feb 03 12:08:14 crc kubenswrapper[4820]: [+]process-running ok Feb 03 12:08:14 crc kubenswrapper[4820]: healthz check failed Feb 03 12:08:14 crc kubenswrapper[4820]: I0203 12:08:14.097736 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h22tk" podUID="a227a161-8e53-4817-b7b2-48206c4916fb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 03 12:08:14 crc kubenswrapper[4820]: I0203 12:08:14.189130 4820 patch_prober.go:28] interesting pod/console-f9d7485db-tw2nt container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.15:8443/health\": dial tcp 10.217.0.15:8443: connect: connection refused" start-of-body= Feb 03 12:08:14 crc kubenswrapper[4820]: I0203 12:08:14.189213 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-tw2nt" podUID="b06753a3-652a-4acc-b294-3ccaa5b0cb99" containerName="console" probeResult="failure" output="Get \"https://10.217.0.15:8443/health\": dial tcp 10.217.0.15:8443: connect: connection refused" Feb 03 12:08:15 crc kubenswrapper[4820]: I0203 12:08:15.667504 4820 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-4gskq container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.34:6443/healthz\": context deadline exceeded" start-of-body= Feb 03 12:08:15 crc kubenswrapper[4820]: I0203 12:08:15.668001 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-4gskq" podUID="0e90d586-2aaf-4f58-acc2-eba29dc0cd2e" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.34:6443/healthz\": context deadline exceeded" Feb 03 12:08:15 crc kubenswrapper[4820]: I0203 12:08:15.667759 4820 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-4gskq container/oauth-openshift namespace/openshift-authentication: Liveness probe status=failure output="Get \"https://10.217.0.34:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 03 
12:08:15 crc kubenswrapper[4820]: I0203 12:08:15.683586 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication/oauth-openshift-558db77b4-4gskq" podUID="0e90d586-2aaf-4f58-acc2-eba29dc0cd2e" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.34:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 03 12:08:15 crc kubenswrapper[4820]: I0203 12:08:15.768793 4820 patch_prober.go:28] interesting pod/router-default-5444994796-h22tk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 03 12:08:15 crc kubenswrapper[4820]: [-]has-synced failed: reason withheld Feb 03 12:08:15 crc kubenswrapper[4820]: [+]process-running ok Feb 03 12:08:15 crc kubenswrapper[4820]: healthz check failed Feb 03 12:08:15 crc kubenswrapper[4820]: I0203 12:08:15.768849 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h22tk" podUID="a227a161-8e53-4817-b7b2-48206c4916fb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 03 12:08:15 crc kubenswrapper[4820]: I0203 12:08:15.806812 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="05797a22-690b-4b36-8b4e-5dcc739f7cad" path="/var/lib/kubelet/pods/05797a22-690b-4b36-8b4e-5dcc739f7cad/volumes" Feb 03 12:08:16 crc kubenswrapper[4820]: I0203 12:08:16.664830 4820 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-c7gsf container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.19:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 03 12:08:16 crc kubenswrapper[4820]: I0203 12:08:16.665184 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-c7gsf" podUID="52996a75-b03e-40f5-a587-2c1476910cd4" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.19:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 03 12:08:16 crc kubenswrapper[4820]: I0203 12:08:16.670482 4820 patch_prober.go:28] interesting pod/router-default-5444994796-h22tk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 03 12:08:16 crc kubenswrapper[4820]: [-]has-synced failed: reason withheld Feb 03 12:08:16 crc kubenswrapper[4820]: [+]process-running ok Feb 03 12:08:16 crc kubenswrapper[4820]: healthz check failed Feb 03 12:08:16 crc kubenswrapper[4820]: I0203 12:08:16.670524 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h22tk" podUID="a227a161-8e53-4817-b7b2-48206c4916fb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 03 12:08:16 crc kubenswrapper[4820]: I0203 12:08:16.684918 4820 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-8rb2x container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.24:8443/healthz\": net/http: request canceled while waiting for connection 
(Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 03 12:08:16 crc kubenswrapper[4820]: I0203 12:08:16.684968 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8rb2x" podUID="d45799eb-72ff-43ba-9ca3-4eb5d44bc3a5" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.24:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 03 12:08:16 crc kubenswrapper[4820]: I0203 12:08:16.689868 4820 patch_prober.go:28] interesting pod/package-server-manager-789f6589d5-8q9q7 container/package-server-manager namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"http://10.217.0.32:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 03 12:08:16 crc kubenswrapper[4820]: I0203 12:08:16.689939 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-8q9q7" podUID="577cff0c-0386-467f-8a44-314a922051e2" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.217.0.32:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 03 12:08:16 crc kubenswrapper[4820]: I0203 12:08:16.732717 4820 patch_prober.go:28] interesting pod/package-server-manager-789f6589d5-8q9q7 container/package-server-manager namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"http://10.217.0.32:8080/healthz\": dial tcp 10.217.0.32:8080: i/o timeout (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 03 12:08:16 crc kubenswrapper[4820]: I0203 12:08:16.732816 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-8q9q7" podUID="577cff0c-0386-467f-8a44-314a922051e2" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.217.0.32:8080/healthz\": dial tcp 10.217.0.32:8080: i/o timeout (Client.Timeout exceeded while awaiting headers)" Feb 03 12:08:16 crc kubenswrapper[4820]: I0203 12:08:16.732946 4820 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-8rb2x container/catalog-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.24:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 03 12:08:16 crc kubenswrapper[4820]: I0203 12:08:16.733047 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8rb2x" podUID="d45799eb-72ff-43ba-9ca3-4eb5d44bc3a5" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.24:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 03 12:08:16 crc kubenswrapper[4820]: I0203 12:08:16.771448 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="hostpath-provisioner/csi-hostpathplugin-wsjsc" podUID="b460558b-ba3e-4543-bb57-debddb0711e7" containerName="hostpath-provisioner" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 03 12:08:17 crc kubenswrapper[4820]: I0203 12:08:17.229445 4820 patch_prober.go:28] interesting pod/router-default-5444994796-h22tk container/router namespace/openshift-ingress: Startup probe status=failure 
output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 03 12:08:17 crc kubenswrapper[4820]: [-]has-synced failed: reason withheld Feb 03 12:08:17 crc kubenswrapper[4820]: [+]process-running ok Feb 03 12:08:17 crc kubenswrapper[4820]: healthz check failed Feb 03 12:08:17 crc kubenswrapper[4820]: I0203 12:08:17.229856 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h22tk" podUID="a227a161-8e53-4817-b7b2-48206c4916fb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 03 12:08:19 crc kubenswrapper[4820]: I0203 12:08:19.894016 4820 patch_prober.go:28] interesting pod/router-default-5444994796-h22tk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 03 12:08:19 crc kubenswrapper[4820]: [-]has-synced failed: reason withheld Feb 03 12:08:19 crc kubenswrapper[4820]: [+]process-running ok Feb 03 12:08:19 crc kubenswrapper[4820]: healthz check failed Feb 03 12:08:19 crc kubenswrapper[4820]: I0203 12:08:19.894088 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h22tk" podUID="a227a161-8e53-4817-b7b2-48206c4916fb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 03 12:08:20 crc kubenswrapper[4820]: I0203 12:08:20.110648 4820 patch_prober.go:28] interesting pod/router-default-5444994796-h22tk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 03 12:08:20 crc kubenswrapper[4820]: [+]has-synced ok Feb 03 12:08:20 crc kubenswrapper[4820]: [+]process-running ok Feb 03 12:08:20 crc kubenswrapper[4820]: healthz check failed Feb 03 12:08:20 crc kubenswrapper[4820]: I0203 12:08:20.111830 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h22tk" podUID="a227a161-8e53-4817-b7b2-48206c4916fb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 03 12:08:21 crc kubenswrapper[4820]: I0203 12:08:21.101519 4820 patch_prober.go:28] interesting pod/router-default-5444994796-h22tk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 03 12:08:21 crc kubenswrapper[4820]: [+]has-synced ok Feb 03 12:08:21 crc kubenswrapper[4820]: [+]process-running ok Feb 03 12:08:21 crc kubenswrapper[4820]: healthz check failed Feb 03 12:08:21 crc kubenswrapper[4820]: I0203 12:08:21.101652 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h22tk" podUID="a227a161-8e53-4817-b7b2-48206c4916fb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 03 12:08:22 crc kubenswrapper[4820]: I0203 12:08:22.121800 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-h22tk" Feb 03 12:08:22 crc kubenswrapper[4820]: I0203 12:08:22.126575 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-h22tk" Feb 03 12:08:23 crc kubenswrapper[4820]: I0203 12:08:23.559652 4820 patch_prober.go:28] interesting 
pod/downloads-7954f5f757-lnc22 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Feb 03 12:08:23 crc kubenswrapper[4820]: I0203 12:08:23.560039 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-lnc22" podUID="876c5dc3-b775-45cc-94b6-4339735e9975" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Feb 03 12:08:23 crc kubenswrapper[4820]: I0203 12:08:23.561852 4820 patch_prober.go:28] interesting pod/downloads-7954f5f757-lnc22 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Feb 03 12:08:23 crc kubenswrapper[4820]: I0203 12:08:23.561914 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-lnc22" podUID="876c5dc3-b775-45cc-94b6-4339735e9975" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Feb 03 12:08:24 crc kubenswrapper[4820]: I0203 12:08:24.338620 4820 patch_prober.go:28] interesting pod/console-f9d7485db-tw2nt container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.15:8443/health\": dial tcp 10.217.0.15:8443: connect: connection refused" start-of-body= Feb 03 12:08:24 crc kubenswrapper[4820]: I0203 12:08:24.338825 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-tw2nt" podUID="b06753a3-652a-4acc-b294-3ccaa5b0cb99" containerName="console" probeResult="failure" output="Get \"https://10.217.0.15:8443/health\": dial tcp 10.217.0.15:8443: connect: connection refused" Feb 03 12:08:31 crc kubenswrapper[4820]: I0203 12:08:31.458379 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qj7xr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 12:08:31 crc kubenswrapper[4820]: I0203 12:08:31.459194 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 12:08:31 crc kubenswrapper[4820]: I0203 12:08:31.459257 4820 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" Feb 03 12:08:31 crc kubenswrapper[4820]: I0203 12:08:31.460140 4820 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7e3612cc50c75efc4721ade62e3caf8a6cbcb41c49183b50da3639d7fce97b9d"} pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 03 12:08:31 crc kubenswrapper[4820]: I0203 12:08:31.460207 4820 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" containerName="machine-config-daemon" containerID="cri-o://7e3612cc50c75efc4721ade62e3caf8a6cbcb41c49183b50da3639d7fce97b9d" gracePeriod=600 Feb 03 12:08:32 crc kubenswrapper[4820]: I0203 12:08:32.936516 4820 generic.go:334] "Generic (PLEG): container finished" podID="2c02def6-29f2-448e-80ec-0c8ee290f053" containerID="7e3612cc50c75efc4721ade62e3caf8a6cbcb41c49183b50da3639d7fce97b9d" exitCode=0 Feb 03 12:08:32 crc kubenswrapper[4820]: I0203 12:08:32.936585 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" event={"ID":"2c02def6-29f2-448e-80ec-0c8ee290f053","Type":"ContainerDied","Data":"7e3612cc50c75efc4721ade62e3caf8a6cbcb41c49183b50da3639d7fce97b9d"} Feb 03 12:08:33 crc kubenswrapper[4820]: I0203 12:08:33.554435 4820 patch_prober.go:28] interesting pod/downloads-7954f5f757-lnc22 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Feb 03 12:08:33 crc kubenswrapper[4820]: I0203 12:08:33.554880 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-lnc22" podUID="876c5dc3-b775-45cc-94b6-4339735e9975" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Feb 03 12:08:33 crc kubenswrapper[4820]: I0203 12:08:33.554956 4820 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console/downloads-7954f5f757-lnc22" Feb 03 12:08:33 crc kubenswrapper[4820]: I0203 12:08:33.555880 4820 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="download-server" containerStatusID={"Type":"cri-o","ID":"0421ee1506a1a903427646b265d2645490c9bf4f584f7bfabeb5c2a9c107061b"} pod="openshift-console/downloads-7954f5f757-lnc22" containerMessage="Container download-server failed liveness probe, will be restarted" Feb 03 12:08:33 crc kubenswrapper[4820]: I0203 12:08:33.555944 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/downloads-7954f5f757-lnc22" podUID="876c5dc3-b775-45cc-94b6-4339735e9975" containerName="download-server" containerID="cri-o://0421ee1506a1a903427646b265d2645490c9bf4f584f7bfabeb5c2a9c107061b" gracePeriod=2 Feb 03 12:08:33 crc kubenswrapper[4820]: I0203 12:08:33.559087 4820 patch_prober.go:28] interesting pod/downloads-7954f5f757-lnc22 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Feb 03 12:08:33 crc kubenswrapper[4820]: I0203 12:08:33.559128 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-lnc22" podUID="876c5dc3-b775-45cc-94b6-4339735e9975" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Feb 03 12:08:33 crc kubenswrapper[4820]: I0203 12:08:33.559503 4820 patch_prober.go:28] interesting pod/downloads-7954f5f757-lnc22 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Feb 03 12:08:33 crc 
kubenswrapper[4820]: I0203 12:08:33.559529 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-lnc22" podUID="876c5dc3-b775-45cc-94b6-4339735e9975" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Feb 03 12:08:34 crc kubenswrapper[4820]: I0203 12:08:34.329086 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-tw2nt" Feb 03 12:08:34 crc kubenswrapper[4820]: I0203 12:08:34.338093 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-tw2nt" Feb 03 12:08:35 crc kubenswrapper[4820]: I0203 12:08:35.149509 4820 generic.go:334] "Generic (PLEG): container finished" podID="876c5dc3-b775-45cc-94b6-4339735e9975" containerID="0421ee1506a1a903427646b265d2645490c9bf4f584f7bfabeb5c2a9c107061b" exitCode=0 Feb 03 12:08:35 crc kubenswrapper[4820]: I0203 12:08:35.156512 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-lnc22" event={"ID":"876c5dc3-b775-45cc-94b6-4339735e9975","Type":"ContainerDied","Data":"0421ee1506a1a903427646b265d2645490c9bf4f584f7bfabeb5c2a9c107061b"} Feb 03 12:08:35 crc kubenswrapper[4820]: I0203 12:08:35.156587 4820 scope.go:117] "RemoveContainer" containerID="8db834c2526dc6ff63acff418e5cb17e8ce94b387b7a719475355b6d34bfc1d1" Feb 03 12:08:43 crc kubenswrapper[4820]: I0203 12:08:43.555740 4820 patch_prober.go:28] interesting pod/downloads-7954f5f757-lnc22 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Feb 03 12:08:43 crc kubenswrapper[4820]: I0203 12:08:43.556332 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-lnc22" podUID="876c5dc3-b775-45cc-94b6-4339735e9975" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Feb 03 12:08:46 crc kubenswrapper[4820]: I0203 12:08:46.581634 4820 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 03 12:08:46 crc kubenswrapper[4820]: I0203 12:08:46.582198 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://48852428653b274f524077be71e20c4271b365d821b8f3235df2d2b41a9e8af8" gracePeriod=15 Feb 03 12:08:46 crc kubenswrapper[4820]: I0203 12:08:46.582339 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://672f46bf03b8e74586edeb641cb45d9d7d8d4389c0b2cbb7a8eec43d6b4d801b" gracePeriod=15 Feb 03 12:08:46 crc kubenswrapper[4820]: I0203 12:08:46.582389 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://60992833ab89aebd2446f960ab6ae3565f9a17f4e86921d3b2bc68adc9704640" gracePeriod=15 Feb 03 12:08:46 crc kubenswrapper[4820]: I0203 12:08:46.582431 4820 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://09641bd2ac341aeecdba4211412070d2ee33f42eb670fe75ab88b9a58931c289" gracePeriod=15 Feb 03 12:08:46 crc kubenswrapper[4820]: I0203 12:08:46.582461 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://74f9a4c19a0ffe2d3fc0a77b57873721a29f0a1f70d578ba2f68bcceb68ccce2" gracePeriod=15 Feb 03 12:08:46 crc kubenswrapper[4820]: I0203 12:08:46.582830 4820 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 03 12:08:46 crc kubenswrapper[4820]: E0203 12:08:46.583094 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b341518a-e00b-45eb-a279-d00da0cd6d13" containerName="pruner" Feb 03 12:08:46 crc kubenswrapper[4820]: I0203 12:08:46.583121 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="b341518a-e00b-45eb-a279-d00da0cd6d13" containerName="pruner" Feb 03 12:08:46 crc kubenswrapper[4820]: E0203 12:08:46.583136 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 03 12:08:46 crc kubenswrapper[4820]: I0203 12:08:46.583151 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 03 12:08:46 crc kubenswrapper[4820]: E0203 12:08:46.583163 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 03 12:08:46 crc kubenswrapper[4820]: I0203 12:08:46.583172 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 03 12:08:46 crc kubenswrapper[4820]: E0203 12:08:46.583195 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 03 12:08:46 crc kubenswrapper[4820]: I0203 12:08:46.583201 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 03 12:08:46 crc kubenswrapper[4820]: E0203 12:08:46.583212 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 03 12:08:46 crc kubenswrapper[4820]: I0203 12:08:46.583219 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 03 12:08:46 crc kubenswrapper[4820]: E0203 12:08:46.583236 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Feb 03 12:08:46 crc kubenswrapper[4820]: I0203 12:08:46.583242 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Feb 03 12:08:46 crc kubenswrapper[4820]: E0203 12:08:46.583261 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 03 12:08:46 crc kubenswrapper[4820]: I0203 12:08:46.583270 4820 
state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 03 12:08:46 crc kubenswrapper[4820]: E0203 12:08:46.583282 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 03 12:08:46 crc kubenswrapper[4820]: I0203 12:08:46.583292 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 03 12:08:46 crc kubenswrapper[4820]: I0203 12:08:46.583448 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 03 12:08:46 crc kubenswrapper[4820]: I0203 12:08:46.583475 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="b341518a-e00b-45eb-a279-d00da0cd6d13" containerName="pruner" Feb 03 12:08:46 crc kubenswrapper[4820]: I0203 12:08:46.583487 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 03 12:08:46 crc kubenswrapper[4820]: I0203 12:08:46.583503 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 03 12:08:46 crc kubenswrapper[4820]: I0203 12:08:46.583520 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 03 12:08:46 crc kubenswrapper[4820]: I0203 12:08:46.583536 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 03 12:08:46 crc kubenswrapper[4820]: I0203 12:08:46.583550 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 03 12:08:46 crc kubenswrapper[4820]: I0203 12:08:46.583568 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 03 12:08:46 crc kubenswrapper[4820]: E0203 12:08:46.583732 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 03 12:08:46 crc kubenswrapper[4820]: I0203 12:08:46.583745 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 03 12:08:46 crc kubenswrapper[4820]: I0203 12:08:46.585393 4820 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 03 12:08:46 crc kubenswrapper[4820]: I0203 12:08:46.586052 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 03 12:08:46 crc kubenswrapper[4820]: I0203 12:08:46.659796 4820 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="f4b27818a5e8e43d0dc095d08835c792" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" Feb 03 12:08:46 crc kubenswrapper[4820]: I0203 12:08:46.725155 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 03 12:08:46 crc kubenswrapper[4820]: I0203 12:08:46.725272 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 03 12:08:46 crc kubenswrapper[4820]: I0203 12:08:46.725352 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 03 12:08:46 crc kubenswrapper[4820]: I0203 12:08:46.725465 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 03 12:08:46 crc kubenswrapper[4820]: I0203 12:08:46.725597 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 03 12:08:46 crc kubenswrapper[4820]: I0203 12:08:46.725618 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 03 12:08:46 crc kubenswrapper[4820]: I0203 12:08:46.725646 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 03 12:08:46 crc kubenswrapper[4820]: I0203 12:08:46.725726 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: 
\"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 03 12:08:46 crc kubenswrapper[4820]: I0203 12:08:46.827835 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 03 12:08:46 crc kubenswrapper[4820]: I0203 12:08:46.827865 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 03 12:08:46 crc kubenswrapper[4820]: I0203 12:08:46.827962 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 03 12:08:46 crc kubenswrapper[4820]: I0203 12:08:46.828006 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 03 12:08:46 crc kubenswrapper[4820]: I0203 12:08:46.828111 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 03 12:08:46 crc kubenswrapper[4820]: I0203 12:08:46.828128 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 03 12:08:46 crc kubenswrapper[4820]: I0203 12:08:46.828157 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 03 12:08:46 crc kubenswrapper[4820]: I0203 12:08:46.828192 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 03 12:08:46 crc kubenswrapper[4820]: I0203 12:08:46.828141 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 03 12:08:46 crc kubenswrapper[4820]: I0203 12:08:46.828252 4820 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 03 12:08:46 crc kubenswrapper[4820]: I0203 12:08:46.828246 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 03 12:08:46 crc kubenswrapper[4820]: I0203 12:08:46.828275 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 03 12:08:46 crc kubenswrapper[4820]: I0203 12:08:46.828341 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 03 12:08:46 crc kubenswrapper[4820]: I0203 12:08:46.828312 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 03 12:08:46 crc kubenswrapper[4820]: I0203 12:08:46.828386 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 03 12:08:46 crc kubenswrapper[4820]: I0203 12:08:46.828414 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 03 12:08:46 crc kubenswrapper[4820]: I0203 12:08:46.879317 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 03 12:08:46 crc kubenswrapper[4820]: I0203 12:08:46.881910 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 03 12:08:46 crc kubenswrapper[4820]: I0203 12:08:46.883090 4820 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="74f9a4c19a0ffe2d3fc0a77b57873721a29f0a1f70d578ba2f68bcceb68ccce2" exitCode=2 Feb 03 12:08:47 crc kubenswrapper[4820]: I0203 12:08:47.903992 4820 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 03 12:08:47 crc kubenswrapper[4820]: I0203 12:08:47.905808 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 03 12:08:47 crc kubenswrapper[4820]: I0203 12:08:47.906741 4820 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="672f46bf03b8e74586edeb641cb45d9d7d8d4389c0b2cbb7a8eec43d6b4d801b" exitCode=0 Feb 03 12:08:47 crc kubenswrapper[4820]: I0203 12:08:47.906767 4820 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="60992833ab89aebd2446f960ab6ae3565f9a17f4e86921d3b2bc68adc9704640" exitCode=0 Feb 03 12:08:49 crc kubenswrapper[4820]: I0203 12:08:49.093021 4820 generic.go:334] "Generic (PLEG): container finished" podID="5f648e21-019c-4ed2-a381-77f0166c5ecc" containerID="ce3e9a508bff06901b788ecd9affe72a177590e39feef00893cc727fa7f21f49" exitCode=0 Feb 03 12:08:49 crc kubenswrapper[4820]: I0203 12:08:49.093119 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"5f648e21-019c-4ed2-a381-77f0166c5ecc","Type":"ContainerDied","Data":"ce3e9a508bff06901b788ecd9affe72a177590e39feef00893cc727fa7f21f49"} Feb 03 12:08:49 crc kubenswrapper[4820]: I0203 12:08:49.094192 4820 status_manager.go:851] "Failed to get status for pod" podUID="5f648e21-019c-4ed2-a381-77f0166c5ecc" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 03 12:08:49 crc kubenswrapper[4820]: I0203 12:08:49.096728 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 03 12:08:49 crc kubenswrapper[4820]: I0203 12:08:49.099618 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 03 12:08:49 crc kubenswrapper[4820]: I0203 12:08:49.100673 4820 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="09641bd2ac341aeecdba4211412070d2ee33f42eb670fe75ab88b9a58931c289" exitCode=0 Feb 03 12:08:50 crc kubenswrapper[4820]: I0203 12:08:50.120490 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 03 12:08:50 crc kubenswrapper[4820]: I0203 12:08:50.124689 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 03 12:08:50 crc kubenswrapper[4820]: I0203 12:08:50.125444 4820 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="48852428653b274f524077be71e20c4271b365d821b8f3235df2d2b41a9e8af8" exitCode=0 Feb 03 12:08:51 crc kubenswrapper[4820]: E0203 12:08:51.991019 4820 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.147:6443: connect: 
connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 03 12:08:51 crc kubenswrapper[4820]: I0203 12:08:51.991598 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 03 12:08:53 crc kubenswrapper[4820]: I0203 12:08:53.255431 4820 status_manager.go:851] "Failed to get status for pod" podUID="5f648e21-019c-4ed2-a381-77f0166c5ecc" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 03 12:08:53 crc kubenswrapper[4820]: I0203 12:08:53.554155 4820 patch_prober.go:28] interesting pod/downloads-7954f5f757-lnc22 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Feb 03 12:08:53 crc kubenswrapper[4820]: I0203 12:08:53.554203 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-lnc22" podUID="876c5dc3-b775-45cc-94b6-4339735e9975" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Feb 03 12:08:54 crc kubenswrapper[4820]: E0203 12:08:54.734628 4820 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 03 12:08:54 crc kubenswrapper[4820]: E0203 12:08:54.736138 4820 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 03 12:08:54 crc kubenswrapper[4820]: E0203 12:08:54.736610 4820 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 03 12:08:54 crc kubenswrapper[4820]: E0203 12:08:54.736947 4820 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 03 12:08:54 crc kubenswrapper[4820]: E0203 12:08:54.737258 4820 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 03 12:08:54 crc kubenswrapper[4820]: I0203 12:08:54.737288 4820 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Feb 03 12:08:54 crc kubenswrapper[4820]: E0203 12:08:54.737579 4820 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.147:6443: connect: connection refused" interval="200ms" Feb 03 12:08:54 crc kubenswrapper[4820]: E0203 12:08:54.939968 4820 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.147:6443: connect: connection refused" interval="400ms" Feb 03 12:08:55 crc kubenswrapper[4820]: E0203 12:08:55.341333 4820 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.147:6443: connect: connection refused" interval="800ms" Feb 03 12:08:56 crc kubenswrapper[4820]: E0203 12:08:56.142846 4820 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.147:6443: connect: connection refused" interval="1.6s" Feb 03 12:08:57 crc kubenswrapper[4820]: E0203 12:08:57.743791 4820 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.147:6443: connect: connection refused" interval="3.2s" Feb 03 12:08:59 crc kubenswrapper[4820]: E0203 12:08:59.593573 4820 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/events/machine-config-daemon-qj7xr.1890bb0a01019d8e\": dial tcp 38.102.83.147:6443: connect: connection refused" event="&Event{ObjectMeta:{machine-config-daemon-qj7xr.1890bb0a01019d8e openshift-machine-config-operator 26704 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:machine-config-daemon-qj7xr,UID:2c02def6-29f2-448e-80ec-0c8ee290f053,APIVersion:v1,ResourceVersion:26697,FieldPath:spec.containers{machine-config-daemon},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-03 12:05:05 +0000 UTC,LastTimestamp:2026-02-03 12:08:59.59306906 +0000 UTC m=+257.116144924,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 03 12:08:59 crc kubenswrapper[4820]: I0203 12:08:59.600975 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"5f648e21-019c-4ed2-a381-77f0166c5ecc","Type":"ContainerDied","Data":"f8be106aab2d2d002c1becf3ee64e718a4d85322369274bdd44cdc77b20b7ef2"} Feb 03 12:08:59 crc kubenswrapper[4820]: I0203 12:08:59.601019 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f8be106aab2d2d002c1becf3ee64e718a4d85322369274bdd44cdc77b20b7ef2" Feb 03 12:08:59 crc kubenswrapper[4820]: I0203 12:08:59.639983 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 03 12:08:59 crc kubenswrapper[4820]: I0203 12:08:59.640571 4820 status_manager.go:851] "Failed to get status for pod" podUID="5f648e21-019c-4ed2-a381-77f0166c5ecc" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 03 12:08:59 crc kubenswrapper[4820]: I0203 12:08:59.807477 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5f648e21-019c-4ed2-a381-77f0166c5ecc-kube-api-access\") pod \"5f648e21-019c-4ed2-a381-77f0166c5ecc\" (UID: \"5f648e21-019c-4ed2-a381-77f0166c5ecc\") " Feb 03 12:08:59 crc kubenswrapper[4820]: I0203 12:08:59.807543 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5f648e21-019c-4ed2-a381-77f0166c5ecc-var-lock\") pod \"5f648e21-019c-4ed2-a381-77f0166c5ecc\" (UID: \"5f648e21-019c-4ed2-a381-77f0166c5ecc\") " Feb 03 12:08:59 crc kubenswrapper[4820]: I0203 12:08:59.807569 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5f648e21-019c-4ed2-a381-77f0166c5ecc-kubelet-dir\") pod \"5f648e21-019c-4ed2-a381-77f0166c5ecc\" (UID: \"5f648e21-019c-4ed2-a381-77f0166c5ecc\") " Feb 03 12:08:59 crc kubenswrapper[4820]: I0203 12:08:59.807770 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5f648e21-019c-4ed2-a381-77f0166c5ecc-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "5f648e21-019c-4ed2-a381-77f0166c5ecc" (UID: "5f648e21-019c-4ed2-a381-77f0166c5ecc"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 03 12:08:59 crc kubenswrapper[4820]: I0203 12:08:59.807785 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5f648e21-019c-4ed2-a381-77f0166c5ecc-var-lock" (OuterVolumeSpecName: "var-lock") pod "5f648e21-019c-4ed2-a381-77f0166c5ecc" (UID: "5f648e21-019c-4ed2-a381-77f0166c5ecc"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 03 12:08:59 crc kubenswrapper[4820]: I0203 12:08:59.814251 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f648e21-019c-4ed2-a381-77f0166c5ecc-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "5f648e21-019c-4ed2-a381-77f0166c5ecc" (UID: "5f648e21-019c-4ed2-a381-77f0166c5ecc"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:08:59 crc kubenswrapper[4820]: I0203 12:08:59.909073 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5f648e21-019c-4ed2-a381-77f0166c5ecc-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 03 12:08:59 crc kubenswrapper[4820]: I0203 12:08:59.909115 4820 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5f648e21-019c-4ed2-a381-77f0166c5ecc-var-lock\") on node \"crc\" DevicePath \"\"" Feb 03 12:08:59 crc kubenswrapper[4820]: I0203 12:08:59.909124 4820 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5f648e21-019c-4ed2-a381-77f0166c5ecc-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 03 12:09:00 crc kubenswrapper[4820]: I0203 12:09:00.609527 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 03 12:09:00 crc kubenswrapper[4820]: I0203 12:09:00.609784 4820 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="06e5365791987f1586b93dfe4db8bfe19415d8c400d6c0d75f312cc0f62f7661" exitCode=1 Feb 03 12:09:00 crc kubenswrapper[4820]: I0203 12:09:00.609849 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 03 12:09:00 crc kubenswrapper[4820]: I0203 12:09:00.609896 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"06e5365791987f1586b93dfe4db8bfe19415d8c400d6c0d75f312cc0f62f7661"} Feb 03 12:09:00 crc kubenswrapper[4820]: I0203 12:09:00.610588 4820 scope.go:117] "RemoveContainer" containerID="06e5365791987f1586b93dfe4db8bfe19415d8c400d6c0d75f312cc0f62f7661" Feb 03 12:09:00 crc kubenswrapper[4820]: I0203 12:09:00.611253 4820 status_manager.go:851] "Failed to get status for pod" podUID="5f648e21-019c-4ed2-a381-77f0166c5ecc" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 03 12:09:00 crc kubenswrapper[4820]: I0203 12:09:00.611622 4820 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 03 12:09:00 crc kubenswrapper[4820]: I0203 12:09:00.622536 4820 status_manager.go:851] "Failed to get status for pod" podUID="5f648e21-019c-4ed2-a381-77f0166c5ecc" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 03 12:09:00 crc kubenswrapper[4820]: I0203 12:09:00.623089 4820 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 03 12:09:00 crc kubenswrapper[4820]: E0203 12:09:00.944879 4820 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.147:6443: connect: connection refused" interval="6.4s" Feb 03 12:09:03 crc kubenswrapper[4820]: I0203 12:09:03.145593 4820 status_manager.go:851] "Failed to get status for pod" podUID="5f648e21-019c-4ed2-a381-77f0166c5ecc" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 03 12:09:03 crc kubenswrapper[4820]: I0203 12:09:03.146467 4820 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 03 12:09:03 crc kubenswrapper[4820]: E0203 12:09:03.159130 4820 desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC openshift-image-registry/crc-image-registry-storage: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/persistentvolumeclaims/crc-image-registry-storage\": dial tcp 38.102.83.147:6443: connect: connection refused" pod="openshift-image-registry/image-registry-697d97f7c8-qpxpv" volumeName="registry-storage" Feb 03 12:09:03 crc kubenswrapper[4820]: I0203 12:09:03.554197 4820 patch_prober.go:28] interesting pod/downloads-7954f5f757-lnc22 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Feb 03 12:09:03 crc kubenswrapper[4820]: I0203 12:09:03.554255 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-lnc22" podUID="876c5dc3-b775-45cc-94b6-4339735e9975" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Feb 03 12:09:03 crc kubenswrapper[4820]: I0203 12:09:03.886959 4820 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 03 12:09:05 crc kubenswrapper[4820]: I0203 12:09:05.985259 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 03 12:09:07 crc kubenswrapper[4820]: E0203 12:09:07.345833 4820 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.147:6443: connect: connection refused" interval="7s" Feb 03 12:09:07 crc kubenswrapper[4820]: E0203 12:09:07.751689 4820 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/events/machine-config-daemon-qj7xr.1890bb0a01019d8e\": dial tcp 38.102.83.147:6443: connect: connection refused" event="&Event{ObjectMeta:{machine-config-daemon-qj7xr.1890bb0a01019d8e openshift-machine-config-operator 26704 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:machine-config-daemon-qj7xr,UID:2c02def6-29f2-448e-80ec-0c8ee290f053,APIVersion:v1,ResourceVersion:26697,FieldPath:spec.containers{machine-config-daemon},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-03 12:05:05 +0000 UTC,LastTimestamp:2026-02-03 12:08:59.59306906 +0000 UTC m=+257.116144924,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 03 12:09:09 crc kubenswrapper[4820]: I0203 12:09:09.698011 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 03 12:09:09 crc kubenswrapper[4820]: E0203 12:09:09.960232 4820 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Feb 03 12:09:09 crc kubenswrapper[4820]: E0203 12:09:09.960418 4820 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-q2psp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-zrlrv_openshift-marketplace(030d5842-d0b7-4e4f-ad63-58848630a1ca): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 03 12:09:09 crc 
kubenswrapper[4820]: E0203 12:09:09.961627 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-zrlrv" podUID="030d5842-d0b7-4e4f-ad63-58848630a1ca"
Feb 03 12:09:10 crc kubenswrapper[4820]: I0203 12:09:10.864600 4820 status_manager.go:851] "Failed to get status for pod" podUID="030d5842-d0b7-4e4f-ad63-58848630a1ca" pod="openshift-marketplace/redhat-operators-zrlrv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-zrlrv\": dial tcp 38.102.83.147:6443: connect: connection refused"
Feb 03 12:09:10 crc kubenswrapper[4820]: I0203 12:09:10.864870 4820 status_manager.go:851] "Failed to get status for pod" podUID="5f648e21-019c-4ed2-a381-77f0166c5ecc" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.147:6443: connect: connection refused"
Feb 03 12:09:10 crc kubenswrapper[4820]: I0203 12:09:10.865133 4820 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.147:6443: connect: connection refused"
Feb 03 12:09:11 crc kubenswrapper[4820]: E0203 12:09:11.325741 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-zrlrv" podUID="030d5842-d0b7-4e4f-ad63-58848630a1ca"
Feb 03 12:09:11 crc kubenswrapper[4820]: E0203 12:09:11.385162 4820 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18"
Feb 03 12:09:11 crc kubenswrapper[4820]: E0203 12:09:11.385344 4820 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kb8lb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-dvpt2_openshift-marketplace(6fdd485f-526a-4367-ba6d-b68246ed45a0): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Feb 03 12:09:11 crc kubenswrapper[4820]: E0203 12:09:11.386545 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-dvpt2" podUID="6fdd485f-526a-4367-ba6d-b68246ed45a0"
Feb 03 12:09:11 crc kubenswrapper[4820]: I0203 12:09:11.868677 4820 status_manager.go:851] "Failed to get status for pod" podUID="030d5842-d0b7-4e4f-ad63-58848630a1ca" pod="openshift-marketplace/redhat-operators-zrlrv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-zrlrv\": dial tcp 38.102.83.147:6443: connect: connection refused"
Feb 03 12:09:11 crc kubenswrapper[4820]: I0203 12:09:11.869147 4820 status_manager.go:851] "Failed to get status for pod" podUID="5f648e21-019c-4ed2-a381-77f0166c5ecc" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.147:6443: connect: connection refused"
Feb 03 12:09:11 crc kubenswrapper[4820]: I0203 12:09:11.870046 4820 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.147:6443: connect: connection refused"
Feb 03 12:09:11 crc kubenswrapper[4820]: I0203 12:09:11.870349 4820 status_manager.go:851] "Failed to get status for pod" podUID="6fdd485f-526a-4367-ba6d-b68246ed45a0" pod="openshift-marketplace/redhat-marketplace-dvpt2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-dvpt2\": dial tcp 38.102.83.147:6443: connect: connection refused"
Feb 03 12:09:13 crc kubenswrapper[4820]: E0203 12:09:13.014611 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-dvpt2" podUID="6fdd485f-526a-4367-ba6d-b68246ed45a0"
Feb 03 12:09:13 crc kubenswrapper[4820]: I0203 12:09:13.073084 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log"
Feb 03 12:09:13 crc kubenswrapper[4820]: I0203 12:09:13.074406 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log"
Feb 03 12:09:13 crc kubenswrapper[4820]: I0203 12:09:13.075362 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 03 12:09:13 crc kubenswrapper[4820]: I0203 12:09:13.076369 4820 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.147:6443: connect: connection refused"
Feb 03 12:09:13 crc kubenswrapper[4820]: I0203 12:09:13.076788 4820 status_manager.go:851] "Failed to get status for pod" podUID="6fdd485f-526a-4367-ba6d-b68246ed45a0" pod="openshift-marketplace/redhat-marketplace-dvpt2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-dvpt2\": dial tcp 38.102.83.147:6443: connect: connection refused"
Feb 03 12:09:13 crc kubenswrapper[4820]: I0203 12:09:13.077204 4820 status_manager.go:851] "Failed to get status for pod" podUID="030d5842-d0b7-4e4f-ad63-58848630a1ca" pod="openshift-marketplace/redhat-operators-zrlrv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-zrlrv\": dial tcp 38.102.83.147:6443: connect: connection refused"
Feb 03 12:09:13 crc kubenswrapper[4820]: I0203 12:09:13.077477 4820 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.147:6443: connect: connection refused"
Feb 03 12:09:13 crc kubenswrapper[4820]: I0203 12:09:13.077736 4820 status_manager.go:851] "Failed to get status for pod" podUID="5f648e21-019c-4ed2-a381-77f0166c5ecc" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.147:6443: connect: connection refused"
Feb 03 12:09:13 crc kubenswrapper[4820]: E0203 12:09:13.099987 4820 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18"
Feb 03 12:09:13 crc kubenswrapper[4820]: E0203 12:09:13.100129 4820 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-k6rdz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-dt8ch_openshift-marketplace(682f83dc-ba7f-474f-89d2-6effbcf2806b): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Feb 03 12:09:13 crc kubenswrapper[4820]: E0203 12:09:13.101312 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-dt8ch" podUID="682f83dc-ba7f-474f-89d2-6effbcf2806b"
Feb 03 12:09:13 crc kubenswrapper[4820]: E0203 12:09:13.116168 4820 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18"
Feb 03 12:09:13 crc kubenswrapper[4820]: E0203 12:09:13.116322 4820 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fl98q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-bl6zg_openshift-marketplace(ef96ca29-ba6e-42c7-b992-898fb5f7f7b5): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Feb 03 12:09:13 crc kubenswrapper[4820]: E0203 12:09:13.117608 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-bl6zg" podUID="ef96ca29-ba6e-42c7-b992-898fb5f7f7b5"
Feb 03 12:09:13 crc kubenswrapper[4820]: E0203 12:09:13.125307 4820 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18"
Feb 03 12:09:13 crc kubenswrapper[4820]: E0203 12:09:13.125433 4820 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-544nm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-ntpgz_openshift-marketplace(f2daa931-03c0-484d-9ea2-a30607c5f034): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Feb 03 12:09:13 crc kubenswrapper[4820]: E0203 12:09:13.127511 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-ntpgz" podUID="f2daa931-03c0-484d-9ea2-a30607c5f034"
Feb 03 12:09:13 crc kubenswrapper[4820]: I0203 12:09:13.144359 4820 status_manager.go:851] "Failed to get status for pod" podUID="030d5842-d0b7-4e4f-ad63-58848630a1ca" pod="openshift-marketplace/redhat-operators-zrlrv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-zrlrv\": dial tcp 38.102.83.147:6443: connect: connection refused"
Feb 03 12:09:13 crc kubenswrapper[4820]: I0203 12:09:13.145020 4820 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.147:6443: connect: connection refused"
Feb 03 12:09:13 crc kubenswrapper[4820]: I0203 12:09:13.145392 4820 status_manager.go:851] "Failed to get status for pod" podUID="5f648e21-019c-4ed2-a381-77f0166c5ecc" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.147:6443: connect: connection refused"
Feb 03 12:09:13 crc kubenswrapper[4820]: I0203 12:09:13.145624 4820 status_manager.go:851] "Failed to get status for pod" podUID="6fdd485f-526a-4367-ba6d-b68246ed45a0" pod="openshift-marketplace/redhat-marketplace-dvpt2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-dvpt2\": dial tcp 38.102.83.147:6443: connect: connection refused"
Feb 03 12:09:13 crc kubenswrapper[4820]: I0203 12:09:13.145877 4820 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.147:6443: connect: connection refused"
Feb 03 12:09:13 crc kubenswrapper[4820]: I0203 12:09:13.183219 4820 scope.go:117] "RemoveContainer" containerID="38554e01896341101d88815c0d8cd45a0114da19790e02a53b5ba3a91ee70e37"
Feb 03 12:09:13 crc kubenswrapper[4820]: I0203 12:09:13.227389 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") "
Feb 03 12:09:13 crc kubenswrapper[4820]: I0203 12:09:13.227540 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 03 12:09:13 crc kubenswrapper[4820]: I0203 12:09:13.227781 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") "
Feb 03 12:09:13 crc kubenswrapper[4820]: I0203 12:09:13.227883 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") "
Feb 03 12:09:13 crc kubenswrapper[4820]: I0203 12:09:13.227881 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 03 12:09:13 crc kubenswrapper[4820]: I0203 12:09:13.227924 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 03 12:09:13 crc kubenswrapper[4820]: I0203 12:09:13.228403 4820 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\""
Feb 03 12:09:13 crc kubenswrapper[4820]: I0203 12:09:13.228430 4820 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\""
Feb 03 12:09:13 crc kubenswrapper[4820]: I0203 12:09:13.228443 4820 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\""
Feb 03 12:09:13 crc kubenswrapper[4820]: W0203 12:09:13.269190 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-707c6ce4ebb6e9f8d77675bfd03aca17f77b98bbb8155ca680610cda11210b80 WatchSource:0}: Error finding container 707c6ce4ebb6e9f8d77675bfd03aca17f77b98bbb8155ca680610cda11210b80: Status 404 returned error can't find the container with id 707c6ce4ebb6e9f8d77675bfd03aca17f77b98bbb8155ca680610cda11210b80
Feb 03 12:09:13 crc kubenswrapper[4820]: I0203 12:09:13.553869 4820 patch_prober.go:28] interesting pod/downloads-7954f5f757-lnc22 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body=
Feb 03 12:09:13 crc kubenswrapper[4820]: I0203 12:09:13.554197 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-lnc22" podUID="876c5dc3-b775-45cc-94b6-4339735e9975" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused"
Feb 03 12:09:13 crc kubenswrapper[4820]: I0203 12:09:13.881507 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5xqrp" event={"ID":"38d510f8-dde9-46b4-965e-9d2726b5f0d7","Type":"ContainerStarted","Data":"c8c5d6571927fff3f81a8cbd8943359663af34ac678f3d57daac16e996ac8918"}
Feb 03 12:09:13 crc kubenswrapper[4820]: I0203 12:09:13.882217 4820 status_manager.go:851] "Failed to get status for pod" podUID="030d5842-d0b7-4e4f-ad63-58848630a1ca" pod="openshift-marketplace/redhat-operators-zrlrv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-zrlrv\": dial tcp 38.102.83.147:6443: connect: connection refused"
Feb 03 12:09:13 crc kubenswrapper[4820]: I0203 12:09:13.882645 4820 status_manager.go:851] "Failed to get status for pod" podUID="5f648e21-019c-4ed2-a381-77f0166c5ecc" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.147:6443: connect: connection refused"
Feb 03 12:09:13 crc kubenswrapper[4820]: I0203 12:09:13.882962 4820 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.147:6443: connect: connection refused"
Feb 03 12:09:13 crc kubenswrapper[4820]: I0203 12:09:13.883214 4820 status_manager.go:851] "Failed to get status for pod" podUID="6fdd485f-526a-4367-ba6d-b68246ed45a0" pod="openshift-marketplace/redhat-marketplace-dvpt2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-dvpt2\": dial tcp 38.102.83.147:6443: connect: connection refused"
Feb 03 12:09:13 crc kubenswrapper[4820]: I0203 12:09:13.883733 4820 status_manager.go:851] "Failed to get status for pod" podUID="38d510f8-dde9-46b4-965e-9d2726b5f0d7" pod="openshift-marketplace/community-operators-5xqrp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-5xqrp\": dial tcp 38.102.83.147:6443: connect: connection refused"
Feb 03 12:09:13 crc kubenswrapper[4820]: I0203 12:09:13.885515 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log"
Feb 03 12:09:13 crc kubenswrapper[4820]: I0203 12:09:13.886687 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"5739e0f9e6cbaf81d74275e50c4c284990e04796d7a646007300f47f75171890"}
Feb 03 12:09:13 crc kubenswrapper[4820]: I0203 12:09:13.886814 4820 status_manager.go:851] "Failed to get status for pod" podUID="030d5842-d0b7-4e4f-ad63-58848630a1ca" pod="openshift-marketplace/redhat-operators-zrlrv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-zrlrv\": dial tcp 38.102.83.147:6443: connect: connection refused"
Feb 03 12:09:13 crc kubenswrapper[4820]: I0203 12:09:13.887273 4820 status_manager.go:851] "Failed to get status for pod" podUID="5f648e21-019c-4ed2-a381-77f0166c5ecc" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.147:6443: connect: connection refused"
Feb 03 12:09:13 crc kubenswrapper[4820]: I0203 12:09:13.888075 4820 status_manager.go:851] "Failed to get status for pod" podUID="6fdd485f-526a-4367-ba6d-b68246ed45a0" pod="openshift-marketplace/redhat-marketplace-dvpt2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-dvpt2\": dial tcp 38.102.83.147:6443: connect: connection refused"
Feb 03 12:09:13 crc kubenswrapper[4820]: I0203 12:09:13.889249 4820 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.147:6443: connect: connection refused"
Feb 03 12:09:13 crc kubenswrapper[4820]: I0203 12:09:13.890066 4820 status_manager.go:851] "Failed to get status for pod" podUID="38d510f8-dde9-46b4-965e-9d2726b5f0d7" pod="openshift-marketplace/community-operators-5xqrp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-5xqrp\": dial tcp 38.102.83.147:6443: connect: connection refused"
containerID="672f46bf03b8e74586edeb641cb45d9d7d8d4389c0b2cbb7a8eec43d6b4d801b" Feb 03 12:09:13 crc kubenswrapper[4820]: I0203 12:09:13.891178 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 03 12:09:14 crc kubenswrapper[4820]: I0203 12:09:14.027500 4820 status_manager.go:851] "Failed to get status for pod" podUID="5f648e21-019c-4ed2-a381-77f0166c5ecc" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 03 12:09:14 crc kubenswrapper[4820]: I0203 12:09:14.028113 4820 status_manager.go:851] "Failed to get status for pod" podUID="6fdd485f-526a-4367-ba6d-b68246ed45a0" pod="openshift-marketplace/redhat-marketplace-dvpt2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-dvpt2\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 03 12:09:14 crc kubenswrapper[4820]: I0203 12:09:14.028720 4820 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 03 12:09:14 crc kubenswrapper[4820]: I0203 12:09:14.029400 4820 status_manager.go:851] "Failed to get status for pod" podUID="38d510f8-dde9-46b4-965e-9d2726b5f0d7" pod="openshift-marketplace/community-operators-5xqrp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-5xqrp\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 03 12:09:14 crc kubenswrapper[4820]: I0203 12:09:14.030227 4820 status_manager.go:851] "Failed to get status for pod" podUID="030d5842-d0b7-4e4f-ad63-58848630a1ca" pod="openshift-marketplace/redhat-operators-zrlrv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-zrlrv\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 03 12:09:14 crc kubenswrapper[4820]: I0203 12:09:14.030622 4820 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 03 12:09:14 crc kubenswrapper[4820]: I0203 12:09:14.048572 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-lnc22" event={"ID":"876c5dc3-b775-45cc-94b6-4339735e9975","Type":"ContainerStarted","Data":"56aef8c90b0a1ab28039fb85f961579b2cf433b0c4e7f3fee00ce087be14448e"} Feb 03 12:09:14 crc kubenswrapper[4820]: I0203 12:09:14.049023 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-lnc22" Feb 03 12:09:14 crc kubenswrapper[4820]: I0203 12:09:14.049079 4820 patch_prober.go:28] interesting pod/downloads-7954f5f757-lnc22 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Feb 03 12:09:14 crc kubenswrapper[4820]: I0203 12:09:14.049115 4820 
prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-lnc22" podUID="876c5dc3-b775-45cc-94b6-4339735e9975" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Feb 03 12:09:14 crc kubenswrapper[4820]: I0203 12:09:14.049946 4820 status_manager.go:851] "Failed to get status for pod" podUID="876c5dc3-b775-45cc-94b6-4339735e9975" pod="openshift-console/downloads-7954f5f757-lnc22" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-7954f5f757-lnc22\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 03 12:09:14 crc kubenswrapper[4820]: I0203 12:09:14.055494 4820 status_manager.go:851] "Failed to get status for pod" podUID="030d5842-d0b7-4e4f-ad63-58848630a1ca" pod="openshift-marketplace/redhat-operators-zrlrv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-zrlrv\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 03 12:09:14 crc kubenswrapper[4820]: I0203 12:09:14.055817 4820 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 03 12:09:14 crc kubenswrapper[4820]: I0203 12:09:14.056020 4820 status_manager.go:851] "Failed to get status for pod" podUID="5f648e21-019c-4ed2-a381-77f0166c5ecc" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 03 12:09:14 crc kubenswrapper[4820]: I0203 12:09:14.056178 4820 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 03 12:09:14 crc kubenswrapper[4820]: I0203 12:09:14.056338 4820 status_manager.go:851] "Failed to get status for pod" podUID="6fdd485f-526a-4367-ba6d-b68246ed45a0" pod="openshift-marketplace/redhat-marketplace-dvpt2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-dvpt2\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 03 12:09:14 crc kubenswrapper[4820]: I0203 12:09:14.056520 4820 status_manager.go:851] "Failed to get status for pod" podUID="38d510f8-dde9-46b4-965e-9d2726b5f0d7" pod="openshift-marketplace/community-operators-5xqrp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-5xqrp\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 03 12:09:14 crc kubenswrapper[4820]: I0203 12:09:14.057066 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"f1804402597e3d17ee8eb64d90ed036a33a9dc26a6a1a7b3b14474fbae6bf1a8"} Feb 03 12:09:14 crc kubenswrapper[4820]: I0203 12:09:14.057111 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"707c6ce4ebb6e9f8d77675bfd03aca17f77b98bbb8155ca680610cda11210b80"} Feb 03 12:09:14 crc kubenswrapper[4820]: E0203 12:09:14.066365 4820 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.147:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 03 12:09:14 crc kubenswrapper[4820]: I0203 12:09:14.066370 4820 status_manager.go:851] "Failed to get status for pod" podUID="876c5dc3-b775-45cc-94b6-4339735e9975" pod="openshift-console/downloads-7954f5f757-lnc22" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-7954f5f757-lnc22\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 03 12:09:14 crc kubenswrapper[4820]: I0203 12:09:14.066953 4820 status_manager.go:851] "Failed to get status for pod" podUID="030d5842-d0b7-4e4f-ad63-58848630a1ca" pod="openshift-marketplace/redhat-operators-zrlrv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-zrlrv\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 03 12:09:14 crc kubenswrapper[4820]: I0203 12:09:14.067123 4820 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 03 12:09:14 crc kubenswrapper[4820]: I0203 12:09:14.067291 4820 status_manager.go:851] "Failed to get status for pod" podUID="5f648e21-019c-4ed2-a381-77f0166c5ecc" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 03 12:09:14 crc kubenswrapper[4820]: I0203 12:09:14.072055 4820 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 03 12:09:14 crc kubenswrapper[4820]: I0203 12:09:14.072549 4820 status_manager.go:851] "Failed to get status for pod" podUID="6fdd485f-526a-4367-ba6d-b68246ed45a0" pod="openshift-marketplace/redhat-marketplace-dvpt2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-dvpt2\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 03 12:09:14 crc kubenswrapper[4820]: I0203 12:09:14.072967 4820 status_manager.go:851] "Failed to get status for pod" podUID="38d510f8-dde9-46b4-965e-9d2726b5f0d7" pod="openshift-marketplace/community-operators-5xqrp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-5xqrp\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 03 12:09:14 crc kubenswrapper[4820]: I0203 12:09:14.077024 4820 status_manager.go:851] "Failed to get status for pod" podUID="5f648e21-019c-4ed2-a381-77f0166c5ecc" pod="openshift-kube-apiserver/installer-9-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 03 12:09:14 crc kubenswrapper[4820]: I0203 12:09:14.077229 4820 status_manager.go:851] "Failed to get status for pod" podUID="6fdd485f-526a-4367-ba6d-b68246ed45a0" pod="openshift-marketplace/redhat-marketplace-dvpt2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-dvpt2\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 03 12:09:14 crc kubenswrapper[4820]: I0203 12:09:14.077402 4820 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 03 12:09:14 crc kubenswrapper[4820]: I0203 12:09:14.077562 4820 status_manager.go:851] "Failed to get status for pod" podUID="38d510f8-dde9-46b4-965e-9d2726b5f0d7" pod="openshift-marketplace/community-operators-5xqrp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-5xqrp\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 03 12:09:14 crc kubenswrapper[4820]: I0203 12:09:14.077716 4820 status_manager.go:851] "Failed to get status for pod" podUID="876c5dc3-b775-45cc-94b6-4339735e9975" pod="openshift-console/downloads-7954f5f757-lnc22" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-7954f5f757-lnc22\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 03 12:09:14 crc kubenswrapper[4820]: I0203 12:09:14.077865 4820 status_manager.go:851] "Failed to get status for pod" podUID="030d5842-d0b7-4e4f-ad63-58848630a1ca" pod="openshift-marketplace/redhat-operators-zrlrv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-zrlrv\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 03 12:09:14 crc kubenswrapper[4820]: I0203 12:09:14.078088 4820 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 03 12:09:14 crc kubenswrapper[4820]: I0203 12:09:14.289588 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pl5wr" event={"ID":"2341b8b4-d207-4c89-8e46-a1b6b787afc8","Type":"ContainerStarted","Data":"69618bb83b381d85f3796db35424c0e893b61f0ed5bf0899ab50a285b23b25a1"} Feb 03 12:09:14 crc kubenswrapper[4820]: I0203 12:09:14.291146 4820 status_manager.go:851] "Failed to get status for pod" podUID="876c5dc3-b775-45cc-94b6-4339735e9975" pod="openshift-console/downloads-7954f5f757-lnc22" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-7954f5f757-lnc22\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 03 12:09:14 crc kubenswrapper[4820]: I0203 12:09:14.291347 4820 status_manager.go:851] "Failed to get status for pod" podUID="030d5842-d0b7-4e4f-ad63-58848630a1ca" pod="openshift-marketplace/redhat-operators-zrlrv" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-zrlrv\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 03 12:09:14 crc kubenswrapper[4820]: I0203 12:09:14.296754 4820 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 03 12:09:14 crc kubenswrapper[4820]: I0203 12:09:14.297390 4820 status_manager.go:851] "Failed to get status for pod" podUID="5f648e21-019c-4ed2-a381-77f0166c5ecc" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 03 12:09:14 crc kubenswrapper[4820]: I0203 12:09:14.297731 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5vfzj" event={"ID":"829fef9f-938d-4d61-9584-bf061063c952","Type":"ContainerStarted","Data":"4ae1535472b9e25c48a491b772148ec9aa3f2ffbfa3bf03f701721a0bdb7d923"} Feb 03 12:09:14 crc kubenswrapper[4820]: I0203 12:09:14.299933 4820 status_manager.go:851] "Failed to get status for pod" podUID="6fdd485f-526a-4367-ba6d-b68246ed45a0" pod="openshift-marketplace/redhat-marketplace-dvpt2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-dvpt2\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 03 12:09:14 crc kubenswrapper[4820]: I0203 12:09:14.302266 4820 status_manager.go:851] "Failed to get status for pod" podUID="2341b8b4-d207-4c89-8e46-a1b6b787afc8" pod="openshift-marketplace/redhat-operators-pl5wr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-pl5wr\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 03 12:09:14 crc kubenswrapper[4820]: I0203 12:09:14.303230 4820 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 03 12:09:14 crc kubenswrapper[4820]: I0203 12:09:14.304835 4820 scope.go:117] "RemoveContainer" containerID="60992833ab89aebd2446f960ab6ae3565f9a17f4e86921d3b2bc68adc9704640" Feb 03 12:09:14 crc kubenswrapper[4820]: I0203 12:09:14.309391 4820 status_manager.go:851] "Failed to get status for pod" podUID="38d510f8-dde9-46b4-965e-9d2726b5f0d7" pod="openshift-marketplace/community-operators-5xqrp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-5xqrp\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 03 12:09:14 crc kubenswrapper[4820]: I0203 12:09:14.310144 4820 status_manager.go:851] "Failed to get status for pod" podUID="876c5dc3-b775-45cc-94b6-4339735e9975" pod="openshift-console/downloads-7954f5f757-lnc22" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-7954f5f757-lnc22\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 03 12:09:14 crc kubenswrapper[4820]: I0203 12:09:14.310527 4820 status_manager.go:851] "Failed to get status for 
pod" podUID="030d5842-d0b7-4e4f-ad63-58848630a1ca" pod="openshift-marketplace/redhat-operators-zrlrv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-zrlrv\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 03 12:09:14 crc kubenswrapper[4820]: I0203 12:09:14.310795 4820 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 03 12:09:14 crc kubenswrapper[4820]: I0203 12:09:14.310958 4820 status_manager.go:851] "Failed to get status for pod" podUID="5f648e21-019c-4ed2-a381-77f0166c5ecc" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 03 12:09:14 crc kubenswrapper[4820]: I0203 12:09:14.311227 4820 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 03 12:09:14 crc kubenswrapper[4820]: I0203 12:09:14.311415 4820 status_manager.go:851] "Failed to get status for pod" podUID="6fdd485f-526a-4367-ba6d-b68246ed45a0" pod="openshift-marketplace/redhat-marketplace-dvpt2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-dvpt2\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 03 12:09:14 crc kubenswrapper[4820]: I0203 12:09:14.311563 4820 status_manager.go:851] "Failed to get status for pod" podUID="2341b8b4-d207-4c89-8e46-a1b6b787afc8" pod="openshift-marketplace/redhat-operators-pl5wr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-pl5wr\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 03 12:09:14 crc kubenswrapper[4820]: I0203 12:09:14.311704 4820 status_manager.go:851] "Failed to get status for pod" podUID="38d510f8-dde9-46b4-965e-9d2726b5f0d7" pod="openshift-marketplace/community-operators-5xqrp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-5xqrp\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 03 12:09:14 crc kubenswrapper[4820]: I0203 12:09:14.311840 4820 status_manager.go:851] "Failed to get status for pod" podUID="829fef9f-938d-4d61-9584-bf061063c952" pod="openshift-marketplace/community-operators-5vfzj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-5vfzj\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 03 12:09:14 crc kubenswrapper[4820]: I0203 12:09:14.321010 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" event={"ID":"2c02def6-29f2-448e-80ec-0c8ee290f053","Type":"ContainerStarted","Data":"4e6a324869d2f58d634802d3f06668e5da2b1da1808292287787329971cfd4aa"} Feb 03 12:09:14 crc kubenswrapper[4820]: I0203 12:09:14.321559 4820 status_manager.go:851] "Failed to get status for pod" 
podUID="876c5dc3-b775-45cc-94b6-4339735e9975" pod="openshift-console/downloads-7954f5f757-lnc22" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-7954f5f757-lnc22\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 03 12:09:14 crc kubenswrapper[4820]: I0203 12:09:14.321724 4820 status_manager.go:851] "Failed to get status for pod" podUID="030d5842-d0b7-4e4f-ad63-58848630a1ca" pod="openshift-marketplace/redhat-operators-zrlrv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-zrlrv\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 03 12:09:14 crc kubenswrapper[4820]: I0203 12:09:14.321867 4820 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 03 12:09:14 crc kubenswrapper[4820]: I0203 12:09:14.322674 4820 status_manager.go:851] "Failed to get status for pod" podUID="f2daa931-03c0-484d-9ea2-a30607c5f034" pod="openshift-marketplace/certified-operators-ntpgz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-ntpgz\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 03 12:09:14 crc kubenswrapper[4820]: I0203 12:09:14.322949 4820 status_manager.go:851] "Failed to get status for pod" podUID="5f648e21-019c-4ed2-a381-77f0166c5ecc" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 03 12:09:14 crc kubenswrapper[4820]: I0203 12:09:14.323382 4820 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 03 12:09:14 crc kubenswrapper[4820]: I0203 12:09:14.323530 4820 status_manager.go:851] "Failed to get status for pod" podUID="6fdd485f-526a-4367-ba6d-b68246ed45a0" pod="openshift-marketplace/redhat-marketplace-dvpt2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-dvpt2\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 03 12:09:14 crc kubenswrapper[4820]: I0203 12:09:14.323668 4820 status_manager.go:851] "Failed to get status for pod" podUID="2341b8b4-d207-4c89-8e46-a1b6b787afc8" pod="openshift-marketplace/redhat-operators-pl5wr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-pl5wr\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 03 12:09:14 crc kubenswrapper[4820]: I0203 12:09:14.323818 4820 status_manager.go:851] "Failed to get status for pod" podUID="38d510f8-dde9-46b4-965e-9d2726b5f0d7" pod="openshift-marketplace/community-operators-5xqrp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-5xqrp\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 03 12:09:14 crc kubenswrapper[4820]: I0203 12:09:14.323976 4820 status_manager.go:851] "Failed to get status for 
pod" podUID="ef96ca29-ba6e-42c7-b992-898fb5f7f7b5" pod="openshift-marketplace/redhat-marketplace-bl6zg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-bl6zg\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 03 12:09:14 crc kubenswrapper[4820]: I0203 12:09:14.324109 4820 status_manager.go:851] "Failed to get status for pod" podUID="829fef9f-938d-4d61-9584-bf061063c952" pod="openshift-marketplace/community-operators-5vfzj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-5vfzj\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 03 12:09:14 crc kubenswrapper[4820]: I0203 12:09:14.324366 4820 status_manager.go:851] "Failed to get status for pod" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-daemon-qj7xr\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 03 12:09:14 crc kubenswrapper[4820]: I0203 12:09:14.324501 4820 status_manager.go:851] "Failed to get status for pod" podUID="38d510f8-dde9-46b4-965e-9d2726b5f0d7" pod="openshift-marketplace/community-operators-5xqrp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-5xqrp\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 03 12:09:14 crc kubenswrapper[4820]: I0203 12:09:14.324659 4820 status_manager.go:851] "Failed to get status for pod" podUID="ef96ca29-ba6e-42c7-b992-898fb5f7f7b5" pod="openshift-marketplace/redhat-marketplace-bl6zg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-bl6zg\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 03 12:09:14 crc kubenswrapper[4820]: I0203 12:09:14.324828 4820 status_manager.go:851] "Failed to get status for pod" podUID="829fef9f-938d-4d61-9584-bf061063c952" pod="openshift-marketplace/community-operators-5vfzj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-5vfzj\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 03 12:09:14 crc kubenswrapper[4820]: I0203 12:09:14.324983 4820 status_manager.go:851] "Failed to get status for pod" podUID="682f83dc-ba7f-474f-89d2-6effbcf2806b" pod="openshift-marketplace/certified-operators-dt8ch" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-dt8ch\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 03 12:09:14 crc kubenswrapper[4820]: I0203 12:09:14.325121 4820 status_manager.go:851] "Failed to get status for pod" podUID="876c5dc3-b775-45cc-94b6-4339735e9975" pod="openshift-console/downloads-7954f5f757-lnc22" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-7954f5f757-lnc22\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 03 12:09:14 crc kubenswrapper[4820]: I0203 12:09:14.325253 4820 status_manager.go:851] "Failed to get status for pod" podUID="030d5842-d0b7-4e4f-ad63-58848630a1ca" pod="openshift-marketplace/redhat-operators-zrlrv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-zrlrv\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 03 12:09:14 crc kubenswrapper[4820]: I0203 12:09:14.325462 4820 
status_manager.go:851] "Failed to get status for pod" podUID="f2daa931-03c0-484d-9ea2-a30607c5f034" pod="openshift-marketplace/certified-operators-ntpgz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-ntpgz\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 03 12:09:14 crc kubenswrapper[4820]: I0203 12:09:14.325598 4820 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 03 12:09:14 crc kubenswrapper[4820]: I0203 12:09:14.325725 4820 status_manager.go:851] "Failed to get status for pod" podUID="5f648e21-019c-4ed2-a381-77f0166c5ecc" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 03 12:09:14 crc kubenswrapper[4820]: I0203 12:09:14.325855 4820 status_manager.go:851] "Failed to get status for pod" podUID="6fdd485f-526a-4367-ba6d-b68246ed45a0" pod="openshift-marketplace/redhat-marketplace-dvpt2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-dvpt2\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 03 12:09:14 crc kubenswrapper[4820]: I0203 12:09:14.326031 4820 status_manager.go:851] "Failed to get status for pod" podUID="2341b8b4-d207-4c89-8e46-a1b6b787afc8" pod="openshift-marketplace/redhat-operators-pl5wr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-pl5wr\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 03 12:09:14 crc kubenswrapper[4820]: I0203 12:09:14.326201 4820 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 03 12:09:14 crc kubenswrapper[4820]: E0203 12:09:14.336145 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-ntpgz" podUID="f2daa931-03c0-484d-9ea2-a30607c5f034" Feb 03 12:09:14 crc kubenswrapper[4820]: E0203 12:09:14.336254 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-dt8ch" podUID="682f83dc-ba7f-474f-89d2-6effbcf2806b" Feb 03 12:09:14 crc kubenswrapper[4820]: I0203 12:09:14.336462 4820 scope.go:117] "RemoveContainer" containerID="09641bd2ac341aeecdba4211412070d2ee33f42eb670fe75ab88b9a58931c289" Feb 03 12:09:14 crc kubenswrapper[4820]: E0203 12:09:14.336646 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image 
\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-bl6zg" podUID="ef96ca29-ba6e-42c7-b992-898fb5f7f7b5" Feb 03 12:09:14 crc kubenswrapper[4820]: E0203 12:09:14.347445 4820 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.147:6443: connect: connection refused" interval="7s" Feb 03 12:09:14 crc kubenswrapper[4820]: I0203 12:09:14.468329 4820 scope.go:117] "RemoveContainer" containerID="74f9a4c19a0ffe2d3fc0a77b57873721a29f0a1f70d578ba2f68bcceb68ccce2" Feb 03 12:09:14 crc kubenswrapper[4820]: I0203 12:09:14.483142 4820 scope.go:117] "RemoveContainer" containerID="48852428653b274f524077be71e20c4271b365d821b8f3235df2d2b41a9e8af8" Feb 03 12:09:14 crc kubenswrapper[4820]: I0203 12:09:14.508702 4820 scope.go:117] "RemoveContainer" containerID="4b34fc810cf94c235cea364cd2441c73f56597b91b0a19b8e02498db5895a3f9" Feb 03 12:09:15 crc kubenswrapper[4820]: I0203 12:09:15.258847 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Feb 03 12:09:15 crc kubenswrapper[4820]: I0203 12:09:15.330046 4820 generic.go:334] "Generic (PLEG): container finished" podID="38d510f8-dde9-46b4-965e-9d2726b5f0d7" containerID="c8c5d6571927fff3f81a8cbd8943359663af34ac678f3d57daac16e996ac8918" exitCode=0 Feb 03 12:09:15 crc kubenswrapper[4820]: I0203 12:09:15.330089 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5xqrp" event={"ID":"38d510f8-dde9-46b4-965e-9d2726b5f0d7","Type":"ContainerDied","Data":"c8c5d6571927fff3f81a8cbd8943359663af34ac678f3d57daac16e996ac8918"} Feb 03 12:09:15 crc kubenswrapper[4820]: I0203 12:09:15.331308 4820 status_manager.go:851] "Failed to get status for pod" podUID="682f83dc-ba7f-474f-89d2-6effbcf2806b" pod="openshift-marketplace/certified-operators-dt8ch" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-dt8ch\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 03 12:09:15 crc kubenswrapper[4820]: I0203 12:09:15.331587 4820 status_manager.go:851] "Failed to get status for pod" podUID="876c5dc3-b775-45cc-94b6-4339735e9975" pod="openshift-console/downloads-7954f5f757-lnc22" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-7954f5f757-lnc22\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 03 12:09:15 crc kubenswrapper[4820]: I0203 12:09:15.331676 4820 generic.go:334] "Generic (PLEG): container finished" podID="829fef9f-938d-4d61-9584-bf061063c952" containerID="4ae1535472b9e25c48a491b772148ec9aa3f2ffbfa3bf03f701721a0bdb7d923" exitCode=0 Feb 03 12:09:15 crc kubenswrapper[4820]: I0203 12:09:15.331703 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5vfzj" event={"ID":"829fef9f-938d-4d61-9584-bf061063c952","Type":"ContainerDied","Data":"4ae1535472b9e25c48a491b772148ec9aa3f2ffbfa3bf03f701721a0bdb7d923"} Feb 03 12:09:15 crc kubenswrapper[4820]: I0203 12:09:15.332223 4820 patch_prober.go:28] interesting pod/downloads-7954f5f757-lnc22 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Feb 03 
12:09:15 crc kubenswrapper[4820]: I0203 12:09:15.332281 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-lnc22" podUID="876c5dc3-b775-45cc-94b6-4339735e9975" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Feb 03 12:09:15 crc kubenswrapper[4820]: I0203 12:09:15.332636 4820 status_manager.go:851] "Failed to get status for pod" podUID="030d5842-d0b7-4e4f-ad63-58848630a1ca" pod="openshift-marketplace/redhat-operators-zrlrv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-zrlrv\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 03 12:09:15 crc kubenswrapper[4820]: I0203 12:09:15.333474 4820 status_manager.go:851] "Failed to get status for pod" podUID="f2daa931-03c0-484d-9ea2-a30607c5f034" pod="openshift-marketplace/certified-operators-ntpgz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-ntpgz\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 03 12:09:15 crc kubenswrapper[4820]: I0203 12:09:15.333744 4820 status_manager.go:851] "Failed to get status for pod" podUID="5f648e21-019c-4ed2-a381-77f0166c5ecc" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 03 12:09:15 crc kubenswrapper[4820]: I0203 12:09:15.334120 4820 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 03 12:09:15 crc kubenswrapper[4820]: I0203 12:09:15.334372 4820 status_manager.go:851] "Failed to get status for pod" podUID="6fdd485f-526a-4367-ba6d-b68246ed45a0" pod="openshift-marketplace/redhat-marketplace-dvpt2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-dvpt2\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 03 12:09:15 crc kubenswrapper[4820]: I0203 12:09:15.334650 4820 status_manager.go:851] "Failed to get status for pod" podUID="2341b8b4-d207-4c89-8e46-a1b6b787afc8" pod="openshift-marketplace/redhat-operators-pl5wr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-pl5wr\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 03 12:09:15 crc kubenswrapper[4820]: I0203 12:09:15.334916 4820 status_manager.go:851] "Failed to get status for pod" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-daemon-qj7xr\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 03 12:09:15 crc kubenswrapper[4820]: I0203 12:09:15.335193 4820 status_manager.go:851] "Failed to get status for pod" podUID="38d510f8-dde9-46b4-965e-9d2726b5f0d7" pod="openshift-marketplace/community-operators-5xqrp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-5xqrp\": dial tcp 38.102.83.147:6443: connect: 
connection refused" Feb 03 12:09:15 crc kubenswrapper[4820]: I0203 12:09:15.336089 4820 status_manager.go:851] "Failed to get status for pod" podUID="ef96ca29-ba6e-42c7-b992-898fb5f7f7b5" pod="openshift-marketplace/redhat-marketplace-bl6zg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-bl6zg\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 03 12:09:15 crc kubenswrapper[4820]: I0203 12:09:15.336334 4820 status_manager.go:851] "Failed to get status for pod" podUID="829fef9f-938d-4d61-9584-bf061063c952" pod="openshift-marketplace/community-operators-5vfzj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-5vfzj\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 03 12:09:15 crc kubenswrapper[4820]: I0203 12:09:15.336649 4820 status_manager.go:851] "Failed to get status for pod" podUID="38d510f8-dde9-46b4-965e-9d2726b5f0d7" pod="openshift-marketplace/community-operators-5xqrp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-5xqrp\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 03 12:09:15 crc kubenswrapper[4820]: I0203 12:09:15.336867 4820 status_manager.go:851] "Failed to get status for pod" podUID="ef96ca29-ba6e-42c7-b992-898fb5f7f7b5" pod="openshift-marketplace/redhat-marketplace-bl6zg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-bl6zg\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 03 12:09:15 crc kubenswrapper[4820]: I0203 12:09:15.337125 4820 status_manager.go:851] "Failed to get status for pod" podUID="829fef9f-938d-4d61-9584-bf061063c952" pod="openshift-marketplace/community-operators-5vfzj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-5vfzj\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 03 12:09:15 crc kubenswrapper[4820]: I0203 12:09:15.337382 4820 status_manager.go:851] "Failed to get status for pod" podUID="682f83dc-ba7f-474f-89d2-6effbcf2806b" pod="openshift-marketplace/certified-operators-dt8ch" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-dt8ch\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 03 12:09:15 crc kubenswrapper[4820]: I0203 12:09:15.337684 4820 status_manager.go:851] "Failed to get status for pod" podUID="876c5dc3-b775-45cc-94b6-4339735e9975" pod="openshift-console/downloads-7954f5f757-lnc22" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-7954f5f757-lnc22\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 03 12:09:15 crc kubenswrapper[4820]: I0203 12:09:15.338167 4820 status_manager.go:851] "Failed to get status for pod" podUID="030d5842-d0b7-4e4f-ad63-58848630a1ca" pod="openshift-marketplace/redhat-operators-zrlrv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-zrlrv\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 03 12:09:15 crc kubenswrapper[4820]: I0203 12:09:15.338392 4820 status_manager.go:851] "Failed to get status for pod" podUID="f2daa931-03c0-484d-9ea2-a30607c5f034" pod="openshift-marketplace/certified-operators-ntpgz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-ntpgz\": dial tcp 
38.102.83.147:6443: connect: connection refused" Feb 03 12:09:15 crc kubenswrapper[4820]: I0203 12:09:15.338606 4820 status_manager.go:851] "Failed to get status for pod" podUID="5f648e21-019c-4ed2-a381-77f0166c5ecc" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 03 12:09:15 crc kubenswrapper[4820]: I0203 12:09:15.338796 4820 status_manager.go:851] "Failed to get status for pod" podUID="6fdd485f-526a-4367-ba6d-b68246ed45a0" pod="openshift-marketplace/redhat-marketplace-dvpt2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-dvpt2\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 03 12:09:15 crc kubenswrapper[4820]: I0203 12:09:15.339160 4820 status_manager.go:851] "Failed to get status for pod" podUID="2341b8b4-d207-4c89-8e46-a1b6b787afc8" pod="openshift-marketplace/redhat-operators-pl5wr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-pl5wr\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 03 12:09:15 crc kubenswrapper[4820]: I0203 12:09:15.339350 4820 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 03 12:09:15 crc kubenswrapper[4820]: I0203 12:09:15.339584 4820 status_manager.go:851] "Failed to get status for pod" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-daemon-qj7xr\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 03 12:09:15 crc kubenswrapper[4820]: I0203 12:09:15.935176 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 03 12:09:16 crc kubenswrapper[4820]: I0203 12:09:16.141794 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 03 12:09:16 crc kubenswrapper[4820]: I0203 12:09:16.143600 4820 status_manager.go:851] "Failed to get status for pod" podUID="682f83dc-ba7f-474f-89d2-6effbcf2806b" pod="openshift-marketplace/certified-operators-dt8ch" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-dt8ch\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 03 12:09:16 crc kubenswrapper[4820]: I0203 12:09:16.144166 4820 status_manager.go:851] "Failed to get status for pod" podUID="876c5dc3-b775-45cc-94b6-4339735e9975" pod="openshift-console/downloads-7954f5f757-lnc22" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-7954f5f757-lnc22\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 03 12:09:16 crc kubenswrapper[4820]: I0203 12:09:16.144709 4820 status_manager.go:851] "Failed to get status for pod" podUID="030d5842-d0b7-4e4f-ad63-58848630a1ca" pod="openshift-marketplace/redhat-operators-zrlrv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-zrlrv\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 03 12:09:16 crc kubenswrapper[4820]: I0203 12:09:16.145071 4820 status_manager.go:851] "Failed to get status for pod" podUID="f2daa931-03c0-484d-9ea2-a30607c5f034" pod="openshift-marketplace/certified-operators-ntpgz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-ntpgz\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 03 12:09:16 crc kubenswrapper[4820]: I0203 12:09:16.145373 4820 status_manager.go:851] "Failed to get status for pod" podUID="5f648e21-019c-4ed2-a381-77f0166c5ecc" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 03 12:09:16 crc kubenswrapper[4820]: I0203 12:09:16.145779 4820 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 03 12:09:16 crc kubenswrapper[4820]: I0203 12:09:16.146051 4820 status_manager.go:851] "Failed to get status for pod" podUID="6fdd485f-526a-4367-ba6d-b68246ed45a0" pod="openshift-marketplace/redhat-marketplace-dvpt2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-dvpt2\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 03 12:09:16 crc kubenswrapper[4820]: I0203 12:09:16.146329 4820 status_manager.go:851] "Failed to get status for pod" podUID="2341b8b4-d207-4c89-8e46-a1b6b787afc8" pod="openshift-marketplace/redhat-operators-pl5wr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-pl5wr\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 03 12:09:16 crc kubenswrapper[4820]: I0203 12:09:16.146651 4820 status_manager.go:851] "Failed to get status for pod" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-daemon-qj7xr\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 03 12:09:16 crc kubenswrapper[4820]: I0203 12:09:16.147006 4820 status_manager.go:851] "Failed to get status for pod" podUID="38d510f8-dde9-46b4-965e-9d2726b5f0d7" pod="openshift-marketplace/community-operators-5xqrp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-5xqrp\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 03 12:09:16 crc kubenswrapper[4820]: I0203 12:09:16.147354 4820 status_manager.go:851] "Failed to get status for pod" podUID="ef96ca29-ba6e-42c7-b992-898fb5f7f7b5" pod="openshift-marketplace/redhat-marketplace-bl6zg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-bl6zg\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 03 12:09:16 crc kubenswrapper[4820]: I0203 12:09:16.147682 4820 status_manager.go:851] "Failed to get status for pod" podUID="829fef9f-938d-4d61-9584-bf061063c952" pod="openshift-marketplace/community-operators-5vfzj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-5vfzj\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 03 12:09:16 crc kubenswrapper[4820]: I0203 12:09:16.222541 4820 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="6d140e30-6304-49be-a1a3-2d6b23f9aef3" Feb 03 12:09:16 crc kubenswrapper[4820]: I0203 12:09:16.222583 4820 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="6d140e30-6304-49be-a1a3-2d6b23f9aef3" Feb 03 12:09:16 crc kubenswrapper[4820]: E0203 12:09:16.223323 4820 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 03 12:09:16 crc kubenswrapper[4820]: I0203 12:09:16.223843 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 03 12:09:16 crc kubenswrapper[4820]: W0203 12:09:16.244407 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-8841264004fca106e833eef43ffe7b6778635564f18c8582bae095eb232553d2 WatchSource:0}: Error finding container 8841264004fca106e833eef43ffe7b6778635564f18c8582bae095eb232553d2: Status 404 returned error can't find the container with id 8841264004fca106e833eef43ffe7b6778635564f18c8582bae095eb232553d2 Feb 03 12:09:16 crc kubenswrapper[4820]: I0203 12:09:16.344283 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"8841264004fca106e833eef43ffe7b6778635564f18c8582bae095eb232553d2"} Feb 03 12:09:16 crc kubenswrapper[4820]: I0203 12:09:16.349003 4820 generic.go:334] "Generic (PLEG): container finished" podID="2341b8b4-d207-4c89-8e46-a1b6b787afc8" containerID="69618bb83b381d85f3796db35424c0e893b61f0ed5bf0899ab50a285b23b25a1" exitCode=0 Feb 03 12:09:16 crc kubenswrapper[4820]: I0203 12:09:16.349087 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pl5wr" event={"ID":"2341b8b4-d207-4c89-8e46-a1b6b787afc8","Type":"ContainerDied","Data":"69618bb83b381d85f3796db35424c0e893b61f0ed5bf0899ab50a285b23b25a1"} Feb 03 12:09:16 crc kubenswrapper[4820]: I0203 12:09:16.350466 4820 status_manager.go:851] "Failed to get status for pod" podUID="876c5dc3-b775-45cc-94b6-4339735e9975" pod="openshift-console/downloads-7954f5f757-lnc22" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-7954f5f757-lnc22\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 03 12:09:16 crc kubenswrapper[4820]: I0203 12:09:16.350755 4820 status_manager.go:851] "Failed to get status for pod" podUID="030d5842-d0b7-4e4f-ad63-58848630a1ca" pod="openshift-marketplace/redhat-operators-zrlrv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-zrlrv\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 03 12:09:16 crc kubenswrapper[4820]: I0203 12:09:16.350999 4820 status_manager.go:851] "Failed to get status for pod" podUID="f2daa931-03c0-484d-9ea2-a30607c5f034" pod="openshift-marketplace/certified-operators-ntpgz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-ntpgz\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 03 12:09:16 crc kubenswrapper[4820]: I0203 12:09:16.351270 4820 status_manager.go:851] "Failed to get status for pod" podUID="5f648e21-019c-4ed2-a381-77f0166c5ecc" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 03 12:09:16 crc kubenswrapper[4820]: I0203 12:09:16.351966 4820 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 03 12:09:16 crc kubenswrapper[4820]: I0203 12:09:16.352291 
4820 status_manager.go:851] "Failed to get status for pod" podUID="6fdd485f-526a-4367-ba6d-b68246ed45a0" pod="openshift-marketplace/redhat-marketplace-dvpt2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-dvpt2\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 03 12:09:16 crc kubenswrapper[4820]: I0203 12:09:16.352628 4820 status_manager.go:851] "Failed to get status for pod" podUID="2341b8b4-d207-4c89-8e46-a1b6b787afc8" pod="openshift-marketplace/redhat-operators-pl5wr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-pl5wr\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 03 12:09:16 crc kubenswrapper[4820]: I0203 12:09:16.352819 4820 status_manager.go:851] "Failed to get status for pod" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-daemon-qj7xr\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 03 12:09:16 crc kubenswrapper[4820]: I0203 12:09:16.353046 4820 status_manager.go:851] "Failed to get status for pod" podUID="38d510f8-dde9-46b4-965e-9d2726b5f0d7" pod="openshift-marketplace/community-operators-5xqrp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-5xqrp\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 03 12:09:16 crc kubenswrapper[4820]: I0203 12:09:16.353202 4820 status_manager.go:851] "Failed to get status for pod" podUID="ef96ca29-ba6e-42c7-b992-898fb5f7f7b5" pod="openshift-marketplace/redhat-marketplace-bl6zg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-bl6zg\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 03 12:09:16 crc kubenswrapper[4820]: I0203 12:09:16.353349 4820 status_manager.go:851] "Failed to get status for pod" podUID="829fef9f-938d-4d61-9584-bf061063c952" pod="openshift-marketplace/community-operators-5vfzj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-5vfzj\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 03 12:09:16 crc kubenswrapper[4820]: I0203 12:09:16.353503 4820 status_manager.go:851] "Failed to get status for pod" podUID="682f83dc-ba7f-474f-89d2-6effbcf2806b" pod="openshift-marketplace/certified-operators-dt8ch" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-dt8ch\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 03 12:09:17 crc kubenswrapper[4820]: I0203 12:09:17.357037 4820 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="b46388578e73a431a306f4c7961efd09dabcee11bf25be1e1633e3e53b406c74" exitCode=0 Feb 03 12:09:17 crc kubenswrapper[4820]: I0203 12:09:17.357137 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"b46388578e73a431a306f4c7961efd09dabcee11bf25be1e1633e3e53b406c74"} Feb 03 12:09:17 crc kubenswrapper[4820]: I0203 12:09:17.357332 4820 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="6d140e30-6304-49be-a1a3-2d6b23f9aef3" Feb 03 12:09:17 crc kubenswrapper[4820]: 
I0203 12:09:17.357356 4820 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="6d140e30-6304-49be-a1a3-2d6b23f9aef3" Feb 03 12:09:17 crc kubenswrapper[4820]: E0203 12:09:17.357720 4820 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 03 12:09:17 crc kubenswrapper[4820]: I0203 12:09:17.357747 4820 status_manager.go:851] "Failed to get status for pod" podUID="682f83dc-ba7f-474f-89d2-6effbcf2806b" pod="openshift-marketplace/certified-operators-dt8ch" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-dt8ch\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 03 12:09:17 crc kubenswrapper[4820]: I0203 12:09:17.357982 4820 status_manager.go:851] "Failed to get status for pod" podUID="876c5dc3-b775-45cc-94b6-4339735e9975" pod="openshift-console/downloads-7954f5f757-lnc22" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-7954f5f757-lnc22\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 03 12:09:17 crc kubenswrapper[4820]: I0203 12:09:17.358273 4820 status_manager.go:851] "Failed to get status for pod" podUID="030d5842-d0b7-4e4f-ad63-58848630a1ca" pod="openshift-marketplace/redhat-operators-zrlrv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-zrlrv\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 03 12:09:17 crc kubenswrapper[4820]: I0203 12:09:17.358627 4820 status_manager.go:851] "Failed to get status for pod" podUID="f2daa931-03c0-484d-9ea2-a30607c5f034" pod="openshift-marketplace/certified-operators-ntpgz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-ntpgz\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 03 12:09:17 crc kubenswrapper[4820]: I0203 12:09:17.359125 4820 status_manager.go:851] "Failed to get status for pod" podUID="5f648e21-019c-4ed2-a381-77f0166c5ecc" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 03 12:09:17 crc kubenswrapper[4820]: I0203 12:09:17.359393 4820 status_manager.go:851] "Failed to get status for pod" podUID="2341b8b4-d207-4c89-8e46-a1b6b787afc8" pod="openshift-marketplace/redhat-operators-pl5wr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-pl5wr\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 03 12:09:17 crc kubenswrapper[4820]: I0203 12:09:17.359676 4820 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 03 12:09:17 crc kubenswrapper[4820]: I0203 12:09:17.360013 4820 status_manager.go:851] "Failed to get status for pod" podUID="6fdd485f-526a-4367-ba6d-b68246ed45a0" pod="openshift-marketplace/redhat-marketplace-dvpt2" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-dvpt2\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 03 12:09:17 crc kubenswrapper[4820]: I0203 12:09:17.360352 4820 status_manager.go:851] "Failed to get status for pod" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-daemon-qj7xr\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 03 12:09:17 crc kubenswrapper[4820]: I0203 12:09:17.360670 4820 status_manager.go:851] "Failed to get status for pod" podUID="38d510f8-dde9-46b4-965e-9d2726b5f0d7" pod="openshift-marketplace/community-operators-5xqrp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-5xqrp\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 03 12:09:17 crc kubenswrapper[4820]: I0203 12:09:17.360981 4820 status_manager.go:851] "Failed to get status for pod" podUID="ef96ca29-ba6e-42c7-b992-898fb5f7f7b5" pod="openshift-marketplace/redhat-marketplace-bl6zg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-bl6zg\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 03 12:09:17 crc kubenswrapper[4820]: I0203 12:09:17.361190 4820 status_manager.go:851] "Failed to get status for pod" podUID="829fef9f-938d-4d61-9584-bf061063c952" pod="openshift-marketplace/community-operators-5vfzj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-5vfzj\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 03 12:09:17 crc kubenswrapper[4820]: E0203 12:09:17.753497 4820 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/events/machine-config-daemon-qj7xr.1890bb0a01019d8e\": dial tcp 38.102.83.147:6443: connect: connection refused" event="&Event{ObjectMeta:{machine-config-daemon-qj7xr.1890bb0a01019d8e openshift-machine-config-operator 26704 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:machine-config-daemon-qj7xr,UID:2c02def6-29f2-448e-80ec-0c8ee290f053,APIVersion:v1,ResourceVersion:26697,FieldPath:spec.containers{machine-config-daemon},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-03 12:05:05 +0000 UTC,LastTimestamp:2026-02-03 12:08:59.59306906 +0000 UTC m=+257.116144924,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 03 12:09:18 crc kubenswrapper[4820]: I0203 12:09:18.372512 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"5c70834072e48d9eecec072193abd6b3f05f0816ead4c8b71e860edfc863abc7"} Feb 03 12:09:19 crc kubenswrapper[4820]: I0203 12:09:19.697561 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 03 12:09:19 crc kubenswrapper[4820]: I0203 12:09:19.707333 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 03 12:09:22 crc kubenswrapper[4820]: I0203 12:09:22.818545 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"29489ef3068c699354e49ec2037bcfcdcd2d4bf0f8df595bdb815d627fa7f85d"} Feb 03 12:09:22 crc kubenswrapper[4820]: I0203 12:09:22.823173 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5xqrp" event={"ID":"38d510f8-dde9-46b4-965e-9d2726b5f0d7","Type":"ContainerStarted","Data":"0732f8fdf7726b60c3240c92179891bbfb723153fcfd43a82cfa0903ecd438cd"} Feb 03 12:09:23 crc kubenswrapper[4820]: I0203 12:09:23.622555 4820 patch_prober.go:28] interesting pod/downloads-7954f5f757-lnc22 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Feb 03 12:09:23 crc kubenswrapper[4820]: I0203 12:09:23.622630 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-lnc22" podUID="876c5dc3-b775-45cc-94b6-4339735e9975" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Feb 03 12:09:23 crc kubenswrapper[4820]: I0203 12:09:23.622763 4820 patch_prober.go:28] interesting pod/downloads-7954f5f757-lnc22 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Feb 03 12:09:23 crc kubenswrapper[4820]: I0203 12:09:23.622817 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-lnc22" podUID="876c5dc3-b775-45cc-94b6-4339735e9975" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Feb 03 12:09:26 crc kubenswrapper[4820]: I0203 12:09:26.570713 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 03 12:09:31 crc kubenswrapper[4820]: I0203 12:09:31.382008 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"7fccc9b8ad58d9bfe8bb6a46bb99ca1556e24556fba4ac50edceabb7e0da1e1e"} Feb 03 12:09:31 crc kubenswrapper[4820]: I0203 12:09:31.387312 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pl5wr" event={"ID":"2341b8b4-d207-4c89-8e46-a1b6b787afc8","Type":"ContainerStarted","Data":"2ec41e34503e4bfda2198a5f685ad5041bf0ce52d86e197bfddb451d4eed0af9"} Feb 03 12:09:31 crc kubenswrapper[4820]: I0203 12:09:31.391314 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5vfzj" event={"ID":"829fef9f-938d-4d61-9584-bf061063c952","Type":"ContainerStarted","Data":"6082b020c5d798741abb1c8e79f0e32b6622898883ade6085aa745b9601b3b45"} Feb 03 12:09:31 crc kubenswrapper[4820]: I0203 12:09:31.806918 4820 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-5xqrp" Feb 03 12:09:31 crc kubenswrapper[4820]: I0203 12:09:31.807209 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-5xqrp" Feb 03 12:09:32 crc kubenswrapper[4820]: I0203 12:09:32.400437 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"1bebfb297f34808f629046536334c43ce7145858a59a4e479ef086d52112a859"} Feb 03 12:09:33 crc kubenswrapper[4820]: I0203 12:09:33.506278 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-5xqrp" podUID="38d510f8-dde9-46b4-965e-9d2726b5f0d7" containerName="registry-server" probeResult="failure" output=< Feb 03 12:09:33 crc kubenswrapper[4820]: timeout: failed to connect service ":50051" within 1s Feb 03 12:09:33 crc kubenswrapper[4820]: > Feb 03 12:09:33 crc kubenswrapper[4820]: I0203 12:09:33.557156 4820 patch_prober.go:28] interesting pod/downloads-7954f5f757-lnc22 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Feb 03 12:09:33 crc kubenswrapper[4820]: I0203 12:09:33.557822 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-lnc22" podUID="876c5dc3-b775-45cc-94b6-4339735e9975" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Feb 03 12:09:33 crc kubenswrapper[4820]: I0203 12:09:33.557156 4820 patch_prober.go:28] interesting pod/downloads-7954f5f757-lnc22 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Feb 03 12:09:33 crc kubenswrapper[4820]: I0203 12:09:33.557979 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-lnc22" podUID="876c5dc3-b775-45cc-94b6-4339735e9975" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Feb 03 12:09:36 crc kubenswrapper[4820]: I0203 12:09:36.223261 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-pl5wr" Feb 03 12:09:36 crc kubenswrapper[4820]: I0203 12:09:36.223634 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-pl5wr" Feb 03 12:09:36 crc kubenswrapper[4820]: I0203 12:09:36.658727 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zrlrv" event={"ID":"030d5842-d0b7-4e4f-ad63-58848630a1ca","Type":"ContainerStarted","Data":"effafa3bb8851cb0fcb76799b62176931f6658f87961f4c27d50530cb7486ee7"} Feb 03 12:09:36 crc kubenswrapper[4820]: I0203 12:09:36.663809 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"508bd4284cf19550ca1c736c5775f94be3d71b5479a3e57c44dc9d187116067f"} Feb 03 12:09:36 crc kubenswrapper[4820]: I0203 12:09:36.664090 4820 kubelet.go:1909] "Trying to delete 
pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="6d140e30-6304-49be-a1a3-2d6b23f9aef3" Feb 03 12:09:36 crc kubenswrapper[4820]: I0203 12:09:36.664110 4820 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="6d140e30-6304-49be-a1a3-2d6b23f9aef3" Feb 03 12:09:36 crc kubenswrapper[4820]: I0203 12:09:36.664271 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 03 12:09:36 crc kubenswrapper[4820]: I0203 12:09:36.668119 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dt8ch" event={"ID":"682f83dc-ba7f-474f-89d2-6effbcf2806b","Type":"ContainerStarted","Data":"e8fbfe00d2ccdfdefdb859087592fbc79365fb5c290acc49e933b2513079ee01"} Feb 03 12:09:36 crc kubenswrapper[4820]: I0203 12:09:36.674848 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dvpt2" event={"ID":"6fdd485f-526a-4367-ba6d-b68246ed45a0","Type":"ContainerStarted","Data":"08e06c7932f94ab3ab1e5b0ff1ab752e934934e65f4e865fc4e0f662dc6117b1"} Feb 03 12:09:36 crc kubenswrapper[4820]: I0203 12:09:36.686692 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ntpgz" event={"ID":"f2daa931-03c0-484d-9ea2-a30607c5f034","Type":"ContainerStarted","Data":"032b9bf0325e916458f372bb1f0f3f746cb0809ac855b0d988f5977599b90be7"} Feb 03 12:09:36 crc kubenswrapper[4820]: I0203 12:09:36.688673 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bl6zg" event={"ID":"ef96ca29-ba6e-42c7-b992-898fb5f7f7b5","Type":"ContainerStarted","Data":"a0766260437e3d831ddb87f05105ecbe2c5b49fdd749003e315d19288e803a38"} Feb 03 12:09:36 crc kubenswrapper[4820]: I0203 12:09:36.700116 4820 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 03 12:09:37 crc kubenswrapper[4820]: I0203 12:09:37.011814 4820 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="e98803e8-b647-4f4c-93aa-fde8d0b37435" Feb 03 12:09:37 crc kubenswrapper[4820]: I0203 12:09:37.675458 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-pl5wr" podUID="2341b8b4-d207-4c89-8e46-a1b6b787afc8" containerName="registry-server" probeResult="failure" output=< Feb 03 12:09:37 crc kubenswrapper[4820]: timeout: failed to connect service ":50051" within 1s Feb 03 12:09:37 crc kubenswrapper[4820]: > Feb 03 12:09:37 crc kubenswrapper[4820]: I0203 12:09:37.783361 4820 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="6d140e30-6304-49be-a1a3-2d6b23f9aef3" Feb 03 12:09:37 crc kubenswrapper[4820]: I0203 12:09:37.783406 4820 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="6d140e30-6304-49be-a1a3-2d6b23f9aef3" Feb 03 12:09:41 crc kubenswrapper[4820]: I0203 12:09:41.268230 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-5vfzj" Feb 03 12:09:41 crc kubenswrapper[4820]: I0203 12:09:41.268661 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-5vfzj" Feb 03 12:09:41 crc kubenswrapper[4820]: I0203 12:09:41.268673 4820 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 03 12:09:41 crc kubenswrapper[4820]: I0203 12:09:41.268685 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 03 12:09:41 crc kubenswrapper[4820]: I0203 12:09:41.269077 4820 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="6d140e30-6304-49be-a1a3-2d6b23f9aef3" Feb 03 12:09:41 crc kubenswrapper[4820]: I0203 12:09:41.269093 4820 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="6d140e30-6304-49be-a1a3-2d6b23f9aef3" Feb 03 12:09:41 crc kubenswrapper[4820]: I0203 12:09:41.269537 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 03 12:09:41 crc kubenswrapper[4820]: I0203 12:09:41.295078 4820 generic.go:334] "Generic (PLEG): container finished" podID="6fdd485f-526a-4367-ba6d-b68246ed45a0" containerID="08e06c7932f94ab3ab1e5b0ff1ab752e934934e65f4e865fc4e0f662dc6117b1" exitCode=0 Feb 03 12:09:41 crc kubenswrapper[4820]: I0203 12:09:41.295145 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dvpt2" event={"ID":"6fdd485f-526a-4367-ba6d-b68246ed45a0","Type":"ContainerDied","Data":"08e06c7932f94ab3ab1e5b0ff1ab752e934934e65f4e865fc4e0f662dc6117b1"} Feb 03 12:09:42 crc kubenswrapper[4820]: I0203 12:09:42.050032 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-5vfzj" Feb 03 12:09:42 crc kubenswrapper[4820]: I0203 12:09:42.051948 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-5xqrp" Feb 03 12:09:42 crc kubenswrapper[4820]: I0203 12:09:42.289148 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-5xqrp" Feb 03 12:09:42 crc kubenswrapper[4820]: I0203 12:09:42.302044 4820 generic.go:334] "Generic (PLEG): container finished" podID="682f83dc-ba7f-474f-89d2-6effbcf2806b" containerID="e8fbfe00d2ccdfdefdb859087592fbc79365fb5c290acc49e933b2513079ee01" exitCode=0 Feb 03 12:09:42 crc kubenswrapper[4820]: I0203 12:09:42.302127 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dt8ch" event={"ID":"682f83dc-ba7f-474f-89d2-6effbcf2806b","Type":"ContainerDied","Data":"e8fbfe00d2ccdfdefdb859087592fbc79365fb5c290acc49e933b2513079ee01"} Feb 03 12:09:42 crc kubenswrapper[4820]: I0203 12:09:42.306811 4820 generic.go:334] "Generic (PLEG): container finished" podID="f2daa931-03c0-484d-9ea2-a30607c5f034" containerID="032b9bf0325e916458f372bb1f0f3f746cb0809ac855b0d988f5977599b90be7" exitCode=0 Feb 03 12:09:42 crc kubenswrapper[4820]: I0203 12:09:42.306969 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ntpgz" event={"ID":"f2daa931-03c0-484d-9ea2-a30607c5f034","Type":"ContainerDied","Data":"032b9bf0325e916458f372bb1f0f3f746cb0809ac855b0d988f5977599b90be7"} Feb 03 12:09:42 crc kubenswrapper[4820]: I0203 12:09:42.311835 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bl6zg" event={"ID":"ef96ca29-ba6e-42c7-b992-898fb5f7f7b5","Type":"ContainerDied","Data":"a0766260437e3d831ddb87f05105ecbe2c5b49fdd749003e315d19288e803a38"} Feb 03 12:09:42 crc kubenswrapper[4820]: 
I0203 12:09:42.311796 4820 generic.go:334] "Generic (PLEG): container finished" podID="ef96ca29-ba6e-42c7-b992-898fb5f7f7b5" containerID="a0766260437e3d831ddb87f05105ecbe2c5b49fdd749003e315d19288e803a38" exitCode=0 Feb 03 12:09:42 crc kubenswrapper[4820]: I0203 12:09:42.313441 4820 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="6d140e30-6304-49be-a1a3-2d6b23f9aef3" Feb 03 12:09:42 crc kubenswrapper[4820]: I0203 12:09:42.313465 4820 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="6d140e30-6304-49be-a1a3-2d6b23f9aef3" Feb 03 12:09:42 crc kubenswrapper[4820]: I0203 12:09:42.317689 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 03 12:09:42 crc kubenswrapper[4820]: I0203 12:09:42.365194 4820 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="e98803e8-b647-4f4c-93aa-fde8d0b37435" Feb 03 12:09:42 crc kubenswrapper[4820]: I0203 12:09:42.371994 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-5vfzj" Feb 03 12:09:42 crc kubenswrapper[4820]: I0203 12:09:42.950473 4820 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Feb 03 12:09:43 crc kubenswrapper[4820]: I0203 12:09:43.332290 4820 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="6d140e30-6304-49be-a1a3-2d6b23f9aef3" Feb 03 12:09:43 crc kubenswrapper[4820]: I0203 12:09:43.332588 4820 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="6d140e30-6304-49be-a1a3-2d6b23f9aef3" Feb 03 12:09:43 crc kubenswrapper[4820]: I0203 12:09:43.554654 4820 patch_prober.go:28] interesting pod/downloads-7954f5f757-lnc22 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Feb 03 12:09:43 crc kubenswrapper[4820]: I0203 12:09:43.554791 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-lnc22" podUID="876c5dc3-b775-45cc-94b6-4339735e9975" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Feb 03 12:09:43 crc kubenswrapper[4820]: I0203 12:09:43.555389 4820 patch_prober.go:28] interesting pod/downloads-7954f5f757-lnc22 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Feb 03 12:09:43 crc kubenswrapper[4820]: I0203 12:09:43.555636 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-lnc22" podUID="876c5dc3-b775-45cc-94b6-4339735e9975" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Feb 03 12:09:43 crc kubenswrapper[4820]: I0203 12:09:43.556205 4820 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console/downloads-7954f5f757-lnc22" Feb 03 12:09:43 crc kubenswrapper[4820]: I0203 12:09:43.557159 4820 
patch_prober.go:28] interesting pod/downloads-7954f5f757-lnc22 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Feb 03 12:09:43 crc kubenswrapper[4820]: I0203 12:09:43.557219 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-lnc22" podUID="876c5dc3-b775-45cc-94b6-4339735e9975" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Feb 03 12:09:43 crc kubenswrapper[4820]: I0203 12:09:43.557802 4820 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="download-server" containerStatusID={"Type":"cri-o","ID":"56aef8c90b0a1ab28039fb85f961579b2cf433b0c4e7f3fee00ce087be14448e"} pod="openshift-console/downloads-7954f5f757-lnc22" containerMessage="Container download-server failed liveness probe, will be restarted" Feb 03 12:09:43 crc kubenswrapper[4820]: I0203 12:09:43.558091 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/downloads-7954f5f757-lnc22" podUID="876c5dc3-b775-45cc-94b6-4339735e9975" containerName="download-server" containerID="cri-o://56aef8c90b0a1ab28039fb85f961579b2cf433b0c4e7f3fee00ce087be14448e" gracePeriod=2 Feb 03 12:09:44 crc kubenswrapper[4820]: I0203 12:09:44.345477 4820 generic.go:334] "Generic (PLEG): container finished" podID="876c5dc3-b775-45cc-94b6-4339735e9975" containerID="56aef8c90b0a1ab28039fb85f961579b2cf433b0c4e7f3fee00ce087be14448e" exitCode=0 Feb 03 12:09:44 crc kubenswrapper[4820]: I0203 12:09:44.345525 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-lnc22" event={"ID":"876c5dc3-b775-45cc-94b6-4339735e9975","Type":"ContainerDied","Data":"56aef8c90b0a1ab28039fb85f961579b2cf433b0c4e7f3fee00ce087be14448e"} Feb 03 12:09:44 crc kubenswrapper[4820]: I0203 12:09:44.345557 4820 scope.go:117] "RemoveContainer" containerID="0421ee1506a1a903427646b265d2645490c9bf4f584f7bfabeb5c2a9c107061b" Feb 03 12:09:45 crc kubenswrapper[4820]: I0203 12:09:45.356259 4820 generic.go:334] "Generic (PLEG): container finished" podID="030d5842-d0b7-4e4f-ad63-58848630a1ca" containerID="effafa3bb8851cb0fcb76799b62176931f6658f87961f4c27d50530cb7486ee7" exitCode=0 Feb 03 12:09:45 crc kubenswrapper[4820]: I0203 12:09:45.356570 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zrlrv" event={"ID":"030d5842-d0b7-4e4f-ad63-58848630a1ca","Type":"ContainerDied","Data":"effafa3bb8851cb0fcb76799b62176931f6658f87961f4c27d50530cb7486ee7"} Feb 03 12:09:46 crc kubenswrapper[4820]: I0203 12:09:46.229383 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 03 12:09:46 crc kubenswrapper[4820]: I0203 12:09:46.230379 4820 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="6d140e30-6304-49be-a1a3-2d6b23f9aef3" Feb 03 12:09:46 crc kubenswrapper[4820]: I0203 12:09:46.230679 4820 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="6d140e30-6304-49be-a1a3-2d6b23f9aef3" Feb 03 12:09:46 crc kubenswrapper[4820]: I0203 12:09:46.362985 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-lnc22" 
event={"ID":"876c5dc3-b775-45cc-94b6-4339735e9975","Type":"ContainerStarted","Data":"dd898d17a538730e2cbb68e350ac8b3216294c7ad01ab42718b1660a024fe87f"} Feb 03 12:09:46 crc kubenswrapper[4820]: I0203 12:09:46.363636 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-lnc22" Feb 03 12:09:46 crc kubenswrapper[4820]: I0203 12:09:46.363526 4820 patch_prober.go:28] interesting pod/downloads-7954f5f757-lnc22 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Feb 03 12:09:46 crc kubenswrapper[4820]: I0203 12:09:46.363875 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-lnc22" podUID="876c5dc3-b775-45cc-94b6-4339735e9975" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Feb 03 12:09:46 crc kubenswrapper[4820]: I0203 12:09:46.365363 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dvpt2" event={"ID":"6fdd485f-526a-4367-ba6d-b68246ed45a0","Type":"ContainerStarted","Data":"f5f00dc439199ae0966e48a82dd93983c914990d5c9d5fc70ddd207e282b1aa9"} Feb 03 12:09:47 crc kubenswrapper[4820]: I0203 12:09:47.337269 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-pl5wr" podUID="2341b8b4-d207-4c89-8e46-a1b6b787afc8" containerName="registry-server" probeResult="failure" output=< Feb 03 12:09:47 crc kubenswrapper[4820]: timeout: failed to connect service ":50051" within 1s Feb 03 12:09:47 crc kubenswrapper[4820]: > Feb 03 12:09:47 crc kubenswrapper[4820]: I0203 12:09:47.377087 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bl6zg" event={"ID":"ef96ca29-ba6e-42c7-b992-898fb5f7f7b5","Type":"ContainerStarted","Data":"9f417572a56b2fbb043f515ec88bc97aee5b8687fb92182b04068795cb737d03"} Feb 03 12:09:47 crc kubenswrapper[4820]: I0203 12:09:47.377558 4820 patch_prober.go:28] interesting pod/downloads-7954f5f757-lnc22 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Feb 03 12:09:47 crc kubenswrapper[4820]: I0203 12:09:47.377612 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-lnc22" podUID="876c5dc3-b775-45cc-94b6-4339735e9975" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Feb 03 12:09:48 crc kubenswrapper[4820]: I0203 12:09:48.383195 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ntpgz" event={"ID":"f2daa931-03c0-484d-9ea2-a30607c5f034","Type":"ContainerStarted","Data":"f5a067401a3ecc8dfcc7db0c46f16d23ff5d49eb882b82123e9b6adbe2ebcc12"} Feb 03 12:09:48 crc kubenswrapper[4820]: I0203 12:09:48.385032 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dt8ch" event={"ID":"682f83dc-ba7f-474f-89d2-6effbcf2806b","Type":"ContainerStarted","Data":"bcdc8760bc1bc68380a3300773bc16d7c49530853d2ccff845beffc0b30b51ff"} Feb 03 12:09:50 crc kubenswrapper[4820]: I0203 12:09:50.741846 4820 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Feb 03 12:09:50 crc kubenswrapper[4820]: I0203 12:09:50.780285 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zrlrv" event={"ID":"030d5842-d0b7-4e4f-ad63-58848630a1ca","Type":"ContainerStarted","Data":"46bffd8733841c34dde692c1bb14efc701beac022c68cd732fbfcf87846086e0"} Feb 03 12:09:51 crc kubenswrapper[4820]: I0203 12:09:51.357143 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-dt8ch" Feb 03 12:09:51 crc kubenswrapper[4820]: I0203 12:09:51.357190 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-dt8ch" Feb 03 12:09:51 crc kubenswrapper[4820]: I0203 12:09:51.877241 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-ntpgz" Feb 03 12:09:51 crc kubenswrapper[4820]: I0203 12:09:51.877313 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-ntpgz" Feb 03 12:09:51 crc kubenswrapper[4820]: I0203 12:09:51.877333 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-dvpt2" Feb 03 12:09:51 crc kubenswrapper[4820]: I0203 12:09:51.877353 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-dvpt2" Feb 03 12:09:51 crc kubenswrapper[4820]: I0203 12:09:51.907298 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-bl6zg" Feb 03 12:09:51 crc kubenswrapper[4820]: I0203 12:09:51.907641 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-bl6zg" Feb 03 12:09:51 crc kubenswrapper[4820]: I0203 12:09:51.972802 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-dvpt2" Feb 03 12:09:52 crc kubenswrapper[4820]: I0203 12:09:52.195152 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-dvpt2" Feb 03 12:09:52 crc kubenswrapper[4820]: I0203 12:09:52.197314 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Feb 03 12:09:52 crc kubenswrapper[4820]: I0203 12:09:52.402337 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-dt8ch" podUID="682f83dc-ba7f-474f-89d2-6effbcf2806b" containerName="registry-server" probeResult="failure" output=< Feb 03 12:09:52 crc kubenswrapper[4820]: timeout: failed to connect service ":50051" within 1s Feb 03 12:09:52 crc kubenswrapper[4820]: > Feb 03 12:09:52 crc kubenswrapper[4820]: I0203 12:09:52.735376 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Feb 03 12:09:53 crc kubenswrapper[4820]: I0203 12:09:53.115528 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-bl6zg" podUID="ef96ca29-ba6e-42c7-b992-898fb5f7f7b5" containerName="registry-server" probeResult="failure" output=< Feb 03 12:09:53 crc kubenswrapper[4820]: timeout: failed to connect service ":50051" within 1s Feb 03 12:09:53 crc kubenswrapper[4820]: > Feb 03 12:09:53 crc kubenswrapper[4820]: I0203 
12:09:53.124853 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-ntpgz" podUID="f2daa931-03c0-484d-9ea2-a30607c5f034" containerName="registry-server" probeResult="failure" output=< Feb 03 12:09:53 crc kubenswrapper[4820]: timeout: failed to connect service ":50051" within 1s Feb 03 12:09:53 crc kubenswrapper[4820]: > Feb 03 12:09:53 crc kubenswrapper[4820]: I0203 12:09:53.171127 4820 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="e98803e8-b647-4f4c-93aa-fde8d0b37435" Feb 03 12:09:53 crc kubenswrapper[4820]: I0203 12:09:53.704291 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-zrlrv" Feb 03 12:09:53 crc kubenswrapper[4820]: I0203 12:09:53.704369 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-zrlrv" Feb 03 12:09:53 crc kubenswrapper[4820]: I0203 12:09:53.706581 4820 patch_prober.go:28] interesting pod/downloads-7954f5f757-lnc22 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Feb 03 12:09:53 crc kubenswrapper[4820]: I0203 12:09:53.706643 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-lnc22" podUID="876c5dc3-b775-45cc-94b6-4339735e9975" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Feb 03 12:09:53 crc kubenswrapper[4820]: I0203 12:09:53.706776 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 03 12:09:53 crc kubenswrapper[4820]: I0203 12:09:53.706813 4820 patch_prober.go:28] interesting pod/downloads-7954f5f757-lnc22 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Feb 03 12:09:53 crc kubenswrapper[4820]: I0203 12:09:53.706860 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-lnc22" podUID="876c5dc3-b775-45cc-94b6-4339735e9975" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Feb 03 12:09:54 crc kubenswrapper[4820]: I0203 12:09:54.719960 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Feb 03 12:09:54 crc kubenswrapper[4820]: I0203 12:09:54.722952 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Feb 03 12:09:55 crc kubenswrapper[4820]: I0203 12:09:55.098871 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 03 12:09:55 crc kubenswrapper[4820]: I0203 12:09:55.099136 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Feb 03 12:09:55 crc kubenswrapper[4820]: I0203 12:09:55.106119 4820 reflector.go:368] Caches populated for *v1.Secret from 
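The repeated 'timeout: failed to connect service ":50051" within 1s' output above is the characteristic failure message of grpc_health_probe invoked as an exec startup probe against a registry-server's gRPC port. Below is a minimal sketch of such a probe using the k8s.io/api/core/v1 types; the command is inferred from the message pattern, and the period and failure threshold are illustrative assumptions, not the values actually deployed on this cluster.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// Sketch of a startup probe like the ones failing above: an exec probe
	// that runs grpc_health_probe against the registry-server gRPC endpoint.
	startup := corev1.Probe{
		ProbeHandler: corev1.ProbeHandler{
			Exec: &corev1.ExecAction{
				Command: []string{"grpc_health_probe", "-addr=:50051"},
			},
		},
		TimeoutSeconds:   1,  // matches the "within 1s" in the probe output
		PeriodSeconds:    10, // assumed for illustration
		FailureThreshold: 15, // assumed: the container is restarted only after this many misses
	}
	fmt.Printf("startupProbe: %+v\n", startup)
}

Consistent with a startup probe still inside its failure budget, the same pods later report probe="startup" status="started" and probe="readiness" status="ready" in the records that follow, so these early timeouts are transient.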
object-"openshift-ingress"/"router-metrics-certs-default" Feb 03 12:09:55 crc kubenswrapper[4820]: I0203 12:09:55.106248 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-zrlrv" podUID="030d5842-d0b7-4e4f-ad63-58848630a1ca" containerName="registry-server" probeResult="failure" output=< Feb 03 12:09:55 crc kubenswrapper[4820]: timeout: failed to connect service ":50051" within 1s Feb 03 12:09:55 crc kubenswrapper[4820]: > Feb 03 12:09:55 crc kubenswrapper[4820]: I0203 12:09:55.106387 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Feb 03 12:09:55 crc kubenswrapper[4820]: I0203 12:09:55.110951 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Feb 03 12:09:55 crc kubenswrapper[4820]: I0203 12:09:55.790608 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 03 12:09:56 crc kubenswrapper[4820]: I0203 12:09:56.044569 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Feb 03 12:09:56 crc kubenswrapper[4820]: I0203 12:09:56.686009 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Feb 03 12:09:56 crc kubenswrapper[4820]: I0203 12:09:56.686229 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Feb 03 12:09:56 crc kubenswrapper[4820]: I0203 12:09:56.686327 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 03 12:09:56 crc kubenswrapper[4820]: I0203 12:09:56.689103 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Feb 03 12:09:56 crc kubenswrapper[4820]: I0203 12:09:56.690614 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Feb 03 12:09:56 crc kubenswrapper[4820]: I0203 12:09:56.692312 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Feb 03 12:09:56 crc kubenswrapper[4820]: I0203 12:09:56.725345 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 03 12:09:56 crc kubenswrapper[4820]: I0203 12:09:56.759754 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-pl5wr" Feb 03 12:09:56 crc kubenswrapper[4820]: I0203 12:09:56.813075 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-pl5wr" Feb 03 12:09:57 crc kubenswrapper[4820]: I0203 12:09:56.935682 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 03 12:09:57 crc kubenswrapper[4820]: I0203 12:09:56.994535 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Feb 03 12:09:57 crc kubenswrapper[4820]: I0203 12:09:57.013543 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Feb 03 12:09:57 crc 
kubenswrapper[4820]: I0203 12:09:57.371341 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Feb 03 12:09:57 crc kubenswrapper[4820]: I0203 12:09:57.372643 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Feb 03 12:09:57 crc kubenswrapper[4820]: I0203 12:09:57.400367 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 03 12:09:57 crc kubenswrapper[4820]: I0203 12:09:57.468120 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Feb 03 12:09:57 crc kubenswrapper[4820]: I0203 12:09:57.770395 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Feb 03 12:09:57 crc kubenswrapper[4820]: I0203 12:09:57.810632 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Feb 03 12:09:58 crc kubenswrapper[4820]: I0203 12:09:58.153366 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Feb 03 12:09:58 crc kubenswrapper[4820]: I0203 12:09:58.176070 4820 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 03 12:09:58 crc kubenswrapper[4820]: I0203 12:09:58.633343 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Feb 03 12:09:58 crc kubenswrapper[4820]: I0203 12:09:58.920680 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Feb 03 12:09:59 crc kubenswrapper[4820]: I0203 12:09:59.091115 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 03 12:09:59 crc kubenswrapper[4820]: I0203 12:09:59.140785 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Feb 03 12:09:59 crc kubenswrapper[4820]: I0203 12:09:59.209852 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Feb 03 12:09:59 crc kubenswrapper[4820]: I0203 12:09:59.308252 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Feb 03 12:09:59 crc kubenswrapper[4820]: I0203 12:09:59.444157 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 03 12:09:59 crc kubenswrapper[4820]: I0203 12:09:59.567429 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Feb 03 12:09:59 crc kubenswrapper[4820]: I0203 12:09:59.936414 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Feb 03 12:09:59 crc kubenswrapper[4820]: I0203 12:09:59.936761 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Feb 03 12:09:59 crc kubenswrapper[4820]: I0203 12:09:59.936943 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 03 12:09:59 crc 
kubenswrapper[4820]: I0203 12:09:59.937115 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Feb 03 12:09:59 crc kubenswrapper[4820]: I0203 12:09:59.941420 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Feb 03 12:09:59 crc kubenswrapper[4820]: I0203 12:09:59.968765 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 03 12:10:00 crc kubenswrapper[4820]: I0203 12:10:00.776401 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 03 12:10:00 crc kubenswrapper[4820]: I0203 12:10:00.778006 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Feb 03 12:10:00 crc kubenswrapper[4820]: I0203 12:10:00.791928 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Feb 03 12:10:00 crc kubenswrapper[4820]: I0203 12:10:00.857061 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Feb 03 12:10:00 crc kubenswrapper[4820]: I0203 12:10:00.981333 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Feb 03 12:10:00 crc kubenswrapper[4820]: I0203 12:10:00.986286 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 03 12:10:02 crc kubenswrapper[4820]: I0203 12:10:02.045971 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Feb 03 12:10:02 crc kubenswrapper[4820]: I0203 12:10:02.046049 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 03 12:10:02 crc kubenswrapper[4820]: I0203 12:10:02.045984 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 03 12:10:02 crc kubenswrapper[4820]: I0203 12:10:02.046114 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Feb 03 12:10:02 crc kubenswrapper[4820]: I0203 12:10:02.046288 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Feb 03 12:10:02 crc kubenswrapper[4820]: I0203 12:10:02.046311 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Feb 03 12:10:02 crc kubenswrapper[4820]: I0203 12:10:02.046504 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Feb 03 12:10:02 crc kubenswrapper[4820]: I0203 12:10:02.046563 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Feb 03 12:10:02 crc kubenswrapper[4820]: I0203 12:10:02.046595 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Feb 03 12:10:02 crc kubenswrapper[4820]: I0203 12:10:02.046700 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Feb 03 12:10:02 crc 
kubenswrapper[4820]: I0203 12:10:02.046792 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 03 12:10:02 crc kubenswrapper[4820]: I0203 12:10:02.064251 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 03 12:10:02 crc kubenswrapper[4820]: I0203 12:10:02.069334 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 03 12:10:02 crc kubenswrapper[4820]: I0203 12:10:02.079578 4820 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 03 12:10:02 crc kubenswrapper[4820]: I0203 12:10:02.125503 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-dt8ch" Feb 03 12:10:02 crc kubenswrapper[4820]: I0203 12:10:02.127846 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-bl6zg" Feb 03 12:10:02 crc kubenswrapper[4820]: I0203 12:10:02.131830 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-ntpgz" Feb 03 12:10:02 crc kubenswrapper[4820]: I0203 12:10:02.148048 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Feb 03 12:10:02 crc kubenswrapper[4820]: I0203 12:10:02.171432 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-dt8ch" Feb 03 12:10:02 crc kubenswrapper[4820]: I0203 12:10:02.176092 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-bl6zg" Feb 03 12:10:02 crc kubenswrapper[4820]: I0203 12:10:02.176791 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-ntpgz" Feb 03 12:10:02 crc kubenswrapper[4820]: I0203 12:10:02.204784 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Feb 03 12:10:02 crc kubenswrapper[4820]: I0203 12:10:02.421678 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Feb 03 12:10:02 crc kubenswrapper[4820]: I0203 12:10:02.461453 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Feb 03 12:10:02 crc kubenswrapper[4820]: I0203 12:10:02.469546 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Feb 03 12:10:02 crc kubenswrapper[4820]: I0203 12:10:02.498783 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Feb 03 12:10:02 crc kubenswrapper[4820]: I0203 12:10:02.953587 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 03 12:10:02 crc kubenswrapper[4820]: I0203 12:10:02.956454 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 03 12:10:02 crc kubenswrapper[4820]: I0203 12:10:02.960509 4820 generic.go:334] "Generic (PLEG): container finished" podID="92dde085-8a2b-4c9f-947f-441ea67b8622" 
containerID="67b885316ec9f5e784fc1adc076ae1f874aad7366377cb7270df56b6acafe0e1" exitCode=0 Feb 03 12:10:02 crc kubenswrapper[4820]: I0203 12:10:02.960551 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-9w662" event={"ID":"92dde085-8a2b-4c9f-947f-441ea67b8622","Type":"ContainerDied","Data":"67b885316ec9f5e784fc1adc076ae1f874aad7366377cb7270df56b6acafe0e1"} Feb 03 12:10:02 crc kubenswrapper[4820]: I0203 12:10:02.961672 4820 scope.go:117] "RemoveContainer" containerID="67b885316ec9f5e784fc1adc076ae1f874aad7366377cb7270df56b6acafe0e1" Feb 03 12:10:03 crc kubenswrapper[4820]: I0203 12:10:03.495605 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Feb 03 12:10:03 crc kubenswrapper[4820]: I0203 12:10:03.496272 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Feb 03 12:10:03 crc kubenswrapper[4820]: I0203 12:10:03.496533 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Feb 03 12:10:03 crc kubenswrapper[4820]: I0203 12:10:03.497354 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Feb 03 12:10:03 crc kubenswrapper[4820]: I0203 12:10:03.497502 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Feb 03 12:10:03 crc kubenswrapper[4820]: I0203 12:10:03.530639 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Feb 03 12:10:03 crc kubenswrapper[4820]: I0203 12:10:03.554237 4820 patch_prober.go:28] interesting pod/downloads-7954f5f757-lnc22 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Feb 03 12:10:03 crc kubenswrapper[4820]: I0203 12:10:03.554289 4820 patch_prober.go:28] interesting pod/downloads-7954f5f757-lnc22 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Feb 03 12:10:03 crc kubenswrapper[4820]: I0203 12:10:03.554406 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-lnc22" podUID="876c5dc3-b775-45cc-94b6-4339735e9975" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Feb 03 12:10:03 crc kubenswrapper[4820]: I0203 12:10:03.554296 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-lnc22" podUID="876c5dc3-b775-45cc-94b6-4339735e9975" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Feb 03 12:10:03 crc kubenswrapper[4820]: I0203 12:10:03.601138 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Feb 03 12:10:03 crc kubenswrapper[4820]: I0203 12:10:03.642529 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Feb 03 12:10:03 crc kubenswrapper[4820]: I0203 12:10:03.660942 4820 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Feb 03 12:10:03 crc kubenswrapper[4820]: I0203 12:10:03.951099 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Feb 03 12:10:03 crc kubenswrapper[4820]: I0203 12:10:03.951410 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 03 12:10:03 crc kubenswrapper[4820]: I0203 12:10:03.951589 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Feb 03 12:10:03 crc kubenswrapper[4820]: I0203 12:10:03.967769 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-9w662" event={"ID":"92dde085-8a2b-4c9f-947f-441ea67b8622","Type":"ContainerStarted","Data":"280f023759f9ef7ae8dddf1f214830aff16da4836086e4cee77b773efd3b347b"} Feb 03 12:10:03 crc kubenswrapper[4820]: I0203 12:10:03.969601 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-9w662" Feb 03 12:10:03 crc kubenswrapper[4820]: I0203 12:10:03.969729 4820 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-9w662 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.36:8080/healthz\": dial tcp 10.217.0.36:8080: connect: connection refused" start-of-body= Feb 03 12:10:03 crc kubenswrapper[4820]: I0203 12:10:03.969768 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-9w662" podUID="92dde085-8a2b-4c9f-947f-441ea67b8622" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.36:8080/healthz\": dial tcp 10.217.0.36:8080: connect: connection refused" Feb 03 12:10:04 crc kubenswrapper[4820]: I0203 12:10:04.019655 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Feb 03 12:10:04 crc kubenswrapper[4820]: I0203 12:10:04.056019 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 03 12:10:04 crc kubenswrapper[4820]: I0203 12:10:04.815872 4820 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-9w662 container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.217.0.36:8080/healthz\": dial tcp 10.217.0.36:8080: connect: connection refused" start-of-body= Feb 03 12:10:04 crc kubenswrapper[4820]: I0203 12:10:04.815959 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/marketplace-operator-79b997595-9w662" podUID="92dde085-8a2b-4c9f-947f-441ea67b8622" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.36:8080/healthz\": dial tcp 10.217.0.36:8080: connect: connection refused" Feb 03 12:10:04 crc kubenswrapper[4820]: I0203 12:10:04.816293 4820 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-9w662 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.36:8080/healthz\": dial tcp 10.217.0.36:8080: connect: connection refused" start-of-body= Feb 03 12:10:04 crc kubenswrapper[4820]: I0203 12:10:04.816366 4820 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-marketplace/marketplace-operator-79b997595-9w662" podUID="92dde085-8a2b-4c9f-947f-441ea67b8622" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.36:8080/healthz\": dial tcp 10.217.0.36:8080: connect: connection refused" Feb 03 12:10:04 crc kubenswrapper[4820]: I0203 12:10:04.826448 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Feb 03 12:10:04 crc kubenswrapper[4820]: I0203 12:10:04.827157 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Feb 03 12:10:04 crc kubenswrapper[4820]: I0203 12:10:04.836645 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Feb 03 12:10:04 crc kubenswrapper[4820]: I0203 12:10:04.842629 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 03 12:10:04 crc kubenswrapper[4820]: I0203 12:10:04.842789 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Feb 03 12:10:04 crc kubenswrapper[4820]: I0203 12:10:04.842919 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 03 12:10:04 crc kubenswrapper[4820]: I0203 12:10:04.852230 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-zrlrv" podUID="030d5842-d0b7-4e4f-ad63-58848630a1ca" containerName="registry-server" probeResult="failure" output=< Feb 03 12:10:04 crc kubenswrapper[4820]: timeout: failed to connect service ":50051" within 1s Feb 03 12:10:04 crc kubenswrapper[4820]: > Feb 03 12:10:04 crc kubenswrapper[4820]: I0203 12:10:04.853899 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Feb 03 12:10:04 crc kubenswrapper[4820]: I0203 12:10:04.863634 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 03 12:10:04 crc kubenswrapper[4820]: I0203 12:10:04.938846 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Feb 03 12:10:05 crc kubenswrapper[4820]: I0203 12:10:05.134245 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 03 12:10:05 crc kubenswrapper[4820]: I0203 12:10:05.138527 4820 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-9w662 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.36:8080/healthz\": dial tcp 10.217.0.36:8080: connect: connection refused" start-of-body= Feb 03 12:10:05 crc kubenswrapper[4820]: I0203 12:10:05.138580 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-9w662" podUID="92dde085-8a2b-4c9f-947f-441ea67b8622" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.36:8080/healthz\": dial tcp 10.217.0.36:8080: connect: connection refused" Feb 03 12:10:05 crc kubenswrapper[4820]: I0203 12:10:05.186250 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Feb 03 12:10:05 crc kubenswrapper[4820]: I0203 12:10:05.509065 4820 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 03 12:10:05 crc kubenswrapper[4820]: I0203 12:10:05.558963 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 03 12:10:05 crc kubenswrapper[4820]: I0203 12:10:05.625190 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Feb 03 12:10:05 crc kubenswrapper[4820]: I0203 12:10:05.848704 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Feb 03 12:10:06 crc kubenswrapper[4820]: I0203 12:10:06.013433 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Feb 03 12:10:06 crc kubenswrapper[4820]: I0203 12:10:06.170378 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Feb 03 12:10:06 crc kubenswrapper[4820]: I0203 12:10:06.262937 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Feb 03 12:10:06 crc kubenswrapper[4820]: I0203 12:10:06.264144 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 03 12:10:06 crc kubenswrapper[4820]: I0203 12:10:06.336187 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Feb 03 12:10:06 crc kubenswrapper[4820]: I0203 12:10:06.451320 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Feb 03 12:10:06 crc kubenswrapper[4820]: I0203 12:10:06.545427 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 03 12:10:06 crc kubenswrapper[4820]: I0203 12:10:06.548348 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 03 12:10:06 crc kubenswrapper[4820]: I0203 12:10:06.841848 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 03 12:10:06 crc kubenswrapper[4820]: I0203 12:10:06.954005 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Feb 03 12:10:07 crc kubenswrapper[4820]: I0203 12:10:07.809397 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 03 12:10:07 crc kubenswrapper[4820]: I0203 12:10:07.869170 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Feb 03 12:10:07 crc kubenswrapper[4820]: I0203 12:10:07.872483 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Feb 03 12:10:07 crc kubenswrapper[4820]: I0203 12:10:07.927651 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Feb 03 12:10:08 crc kubenswrapper[4820]: I0203 12:10:08.183148 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Feb 03 12:10:08 crc kubenswrapper[4820]: I0203 12:10:08.215145 4820 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 03 12:10:08 crc kubenswrapper[4820]: I0203 12:10:08.327706 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Feb 03 12:10:08 crc kubenswrapper[4820]: I0203 12:10:08.973732 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Feb 03 12:10:08 crc kubenswrapper[4820]: I0203 12:10:08.976202 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Feb 03 12:10:08 crc kubenswrapper[4820]: I0203 12:10:08.998800 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Feb 03 12:10:08 crc kubenswrapper[4820]: I0203 12:10:08.998828 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Feb 03 12:10:08 crc kubenswrapper[4820]: I0203 12:10:08.999081 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Feb 03 12:10:08 crc kubenswrapper[4820]: I0203 12:10:08.999249 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Feb 03 12:10:09 crc kubenswrapper[4820]: I0203 12:10:09.120028 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Feb 03 12:10:09 crc kubenswrapper[4820]: I0203 12:10:09.133143 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 03 12:10:09 crc kubenswrapper[4820]: I0203 12:10:09.154664 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Feb 03 12:10:09 crc kubenswrapper[4820]: I0203 12:10:09.405343 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Feb 03 12:10:11 crc kubenswrapper[4820]: I0203 12:10:11.295507 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 03 12:10:11 crc kubenswrapper[4820]: I0203 12:10:11.307334 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 03 12:10:11 crc kubenswrapper[4820]: I0203 12:10:11.307526 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Feb 03 12:10:11 crc kubenswrapper[4820]: I0203 12:10:11.323887 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Feb 03 12:10:11 crc kubenswrapper[4820]: I0203 12:10:11.326064 4820 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-lbsmw container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.10:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 03 12:10:11 crc kubenswrapper[4820]: I0203 12:10:11.326108 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-lbsmw" podUID="c93c42c7-c9ff-42cc-b604-e36f7a063fcf" containerName="openshift-config-operator" 
probeResult="failure" output="Get \"https://10.217.0.10:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 03 12:10:11 crc kubenswrapper[4820]: I0203 12:10:11.330494 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Feb 03 12:10:11 crc kubenswrapper[4820]: I0203 12:10:11.331180 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Feb 03 12:10:11 crc kubenswrapper[4820]: I0203 12:10:11.341963 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Feb 03 12:10:11 crc kubenswrapper[4820]: I0203 12:10:11.344444 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Feb 03 12:10:11 crc kubenswrapper[4820]: I0203 12:10:11.344732 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Feb 03 12:10:11 crc kubenswrapper[4820]: I0203 12:10:11.345021 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Feb 03 12:10:11 crc kubenswrapper[4820]: I0203 12:10:11.349025 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 03 12:10:11 crc kubenswrapper[4820]: I0203 12:10:11.439517 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Feb 03 12:10:11 crc kubenswrapper[4820]: I0203 12:10:11.641337 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 03 12:10:11 crc kubenswrapper[4820]: I0203 12:10:11.685832 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 03 12:10:11 crc kubenswrapper[4820]: I0203 12:10:11.747138 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Feb 03 12:10:11 crc kubenswrapper[4820]: I0203 12:10:11.748539 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Feb 03 12:10:11 crc kubenswrapper[4820]: I0203 12:10:11.775679 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Feb 03 12:10:11 crc kubenswrapper[4820]: I0203 12:10:11.785934 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Feb 03 12:10:11 crc kubenswrapper[4820]: I0203 12:10:11.885099 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 03 12:10:12 crc kubenswrapper[4820]: I0203 12:10:12.967425 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Feb 03 12:10:12 crc kubenswrapper[4820]: I0203 12:10:12.968519 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Feb 03 12:10:12 crc kubenswrapper[4820]: I0203 12:10:12.969027 4820 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Feb 03 12:10:12 crc kubenswrapper[4820]: I0203 12:10:12.969070 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Feb 03 12:10:12 crc kubenswrapper[4820]: I0203 12:10:12.977577 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 03 12:10:12 crc kubenswrapper[4820]: I0203 12:10:12.978125 4820 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 03 12:10:12 crc kubenswrapper[4820]: I0203 12:10:12.978439 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Feb 03 12:10:12 crc kubenswrapper[4820]: I0203 12:10:12.978821 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 03 12:10:12 crc kubenswrapper[4820]: I0203 12:10:12.979172 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Feb 03 12:10:12 crc kubenswrapper[4820]: I0203 12:10:12.994812 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Feb 03 12:10:13 crc kubenswrapper[4820]: I0203 12:10:13.038313 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Feb 03 12:10:13 crc kubenswrapper[4820]: I0203 12:10:13.103236 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Feb 03 12:10:13 crc kubenswrapper[4820]: I0203 12:10:13.183364 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 03 12:10:13 crc kubenswrapper[4820]: I0203 12:10:13.316386 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Feb 03 12:10:13 crc kubenswrapper[4820]: I0203 12:10:13.334507 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Feb 03 12:10:13 crc kubenswrapper[4820]: I0203 12:10:13.409786 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 03 12:10:13 crc kubenswrapper[4820]: I0203 12:10:13.554167 4820 patch_prober.go:28] interesting pod/downloads-7954f5f757-lnc22 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Feb 03 12:10:13 crc kubenswrapper[4820]: I0203 12:10:13.554228 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-lnc22" podUID="876c5dc3-b775-45cc-94b6-4339735e9975" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Feb 03 12:10:13 crc kubenswrapper[4820]: I0203 12:10:13.554272 4820 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console/downloads-7954f5f757-lnc22" Feb 03 12:10:13 crc kubenswrapper[4820]: I0203 12:10:13.554349 4820 patch_prober.go:28] interesting pod/downloads-7954f5f757-lnc22 container/download-server namespace/openshift-console: Readiness probe 
status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Feb 03 12:10:13 crc kubenswrapper[4820]: I0203 12:10:13.554434 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-lnc22" podUID="876c5dc3-b775-45cc-94b6-4339735e9975" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Feb 03 12:10:13 crc kubenswrapper[4820]: I0203 12:10:13.554870 4820 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="download-server" containerStatusID={"Type":"cri-o","ID":"dd898d17a538730e2cbb68e350ac8b3216294c7ad01ab42718b1660a024fe87f"} pod="openshift-console/downloads-7954f5f757-lnc22" containerMessage="Container download-server failed liveness probe, will be restarted" Feb 03 12:10:13 crc kubenswrapper[4820]: I0203 12:10:13.554929 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/downloads-7954f5f757-lnc22" podUID="876c5dc3-b775-45cc-94b6-4339735e9975" containerName="download-server" containerID="cri-o://dd898d17a538730e2cbb68e350ac8b3216294c7ad01ab42718b1660a024fe87f" gracePeriod=2 Feb 03 12:10:13 crc kubenswrapper[4820]: I0203 12:10:13.555012 4820 patch_prober.go:28] interesting pod/downloads-7954f5f757-lnc22 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Feb 03 12:10:13 crc kubenswrapper[4820]: I0203 12:10:13.555080 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-lnc22" podUID="876c5dc3-b775-45cc-94b6-4339735e9975" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Feb 03 12:10:13 crc kubenswrapper[4820]: I0203 12:10:13.607857 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Feb 03 12:10:13 crc kubenswrapper[4820]: I0203 12:10:13.609463 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Feb 03 12:10:13 crc kubenswrapper[4820]: I0203 12:10:13.769318 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Feb 03 12:10:13 crc kubenswrapper[4820]: I0203 12:10:13.769603 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Feb 03 12:10:13 crc kubenswrapper[4820]: I0203 12:10:13.769756 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Feb 03 12:10:13 crc kubenswrapper[4820]: I0203 12:10:13.774569 4820 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 03 12:10:13 crc kubenswrapper[4820]: I0203 12:10:13.794924 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Feb 03 12:10:13 crc kubenswrapper[4820]: E0203 12:10:13.796081 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"download-server\" with CrashLoopBackOff: \"back-off 40s restarting failed container=download-server 
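The records above show the full liveness-failure path for download-server: the probe fails, the kubelet records "Container download-server failed liveness probe, will be restarted", kills the container with a 2s grace period, and then defers the restart with CrashLoopBackOff. The "back-off 40s" figure is consistent with the commonly documented kubelet crash-loop schedule (10s initial delay, doubled after each consecutive crash, capped at 5m), which would make this the container's third consecutive restart. A small sketch of that schedule, assuming those defaults rather than quoting kubelet source:

package main

import (
	"fmt"
	"time"
)

// crashLoopDelay models the commonly documented kubelet restart back-off:
// 10s initial delay, doubled per consecutive crash, capped at 5 minutes.
// This is an illustration of the schedule, not kubelet source code.
func crashLoopDelay(restarts int) time.Duration {
	delay := 10 * time.Second
	for i := 0; i < restarts; i++ {
		delay *= 2
		if delay >= 5*time.Minute {
			return 5 * time.Minute
		}
	}
	return delay
}

func main() {
	for r := 0; r <= 5; r++ {
		fmt.Printf("consecutive crash %d -> back-off %s\n", r, crashLoopDelay(r))
	}
	// consecutive crash 2 prints "back-off 40s", matching the error above.
}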
Feb 03 12:10:13 crc kubenswrapper[4820]: I0203 12:10:13.818253 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-zrlrv"
Feb 03 12:10:13 crc kubenswrapper[4820]: I0203 12:10:13.861516 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-zrlrv"
Feb 03 12:10:13 crc kubenswrapper[4820]: I0203 12:10:13.914153 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv"
Feb 03 12:10:13 crc kubenswrapper[4820]: I0203 12:10:13.975278 4820 generic.go:334] "Generic (PLEG): container finished" podID="876c5dc3-b775-45cc-94b6-4339735e9975" containerID="dd898d17a538730e2cbb68e350ac8b3216294c7ad01ab42718b1660a024fe87f" exitCode=0
Feb 03 12:10:13 crc kubenswrapper[4820]: I0203 12:10:13.975343 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-lnc22" event={"ID":"876c5dc3-b775-45cc-94b6-4339735e9975","Type":"ContainerDied","Data":"dd898d17a538730e2cbb68e350ac8b3216294c7ad01ab42718b1660a024fe87f"}
Feb 03 12:10:13 crc kubenswrapper[4820]: I0203 12:10:13.975435 4820 scope.go:117] "RemoveContainer" containerID="56aef8c90b0a1ab28039fb85f961579b2cf433b0c4e7f3fee00ce087be14448e"
Feb 03 12:10:13 crc kubenswrapper[4820]: I0203 12:10:13.976060 4820 scope.go:117] "RemoveContainer" containerID="dd898d17a538730e2cbb68e350ac8b3216294c7ad01ab42718b1660a024fe87f"
Feb 03 12:10:13 crc kubenswrapper[4820]: E0203 12:10:13.976342 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"download-server\" with CrashLoopBackOff: \"back-off 40s restarting failed container=download-server pod=downloads-7954f5f757-lnc22_openshift-console(876c5dc3-b775-45cc-94b6-4339735e9975)\"" pod="openshift-console/downloads-7954f5f757-lnc22" podUID="876c5dc3-b775-45cc-94b6-4339735e9975"
Feb 03 12:10:14 crc kubenswrapper[4820]: I0203 12:10:14.003936 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Feb 03 12:10:14 crc kubenswrapper[4820]: I0203 12:10:14.124501 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w"
Feb 03 12:10:14 crc kubenswrapper[4820]: I0203 12:10:14.233268 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Feb 03 12:10:14 crc kubenswrapper[4820]: I0203 12:10:14.332491 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls"
Feb 03 12:10:14 crc kubenswrapper[4820]: I0203 12:10:14.363397 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt"
Feb 03 12:10:14 crc kubenswrapper[4820]: I0203 12:10:14.421386 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Feb 03 12:10:14 crc kubenswrapper[4820]: I0203 12:10:14.537237 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Feb 03 12:10:14 crc kubenswrapper[4820]: I0203 12:10:14.558422 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Feb 03 12:10:14 crc kubenswrapper[4820]: I0203 12:10:14.639514 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Feb 03 12:10:14 crc kubenswrapper[4820]: I0203 12:10:14.727898 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-9w662"
Feb 03 12:10:14 crc kubenswrapper[4820]: I0203 12:10:14.827195 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt"
Feb 03 12:10:14 crc kubenswrapper[4820]: I0203 12:10:14.835478 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4"
Feb 03 12:10:14 crc kubenswrapper[4820]: I0203 12:10:14.851408 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets"
Feb 03 12:10:15 crc kubenswrapper[4820]: I0203 12:10:15.012796 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key"
Feb 03 12:10:15 crc kubenswrapper[4820]: I0203 12:10:15.033712 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Feb 03 12:10:15 crc kubenswrapper[4820]: I0203 12:10:15.604965 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Feb 03 12:10:15 crc kubenswrapper[4820]: I0203 12:10:15.631543 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd"
Feb 03 12:10:15 crc kubenswrapper[4820]: I0203 12:10:15.792092 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx"
Feb 03 12:10:15 crc kubenswrapper[4820]: I0203 12:10:15.802156 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert"
Feb 03 12:10:15 crc kubenswrapper[4820]: I0203 12:10:15.832789 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Feb 03 12:10:15 crc kubenswrapper[4820]: I0203 12:10:15.879604 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Feb 03 12:10:15 crc kubenswrapper[4820]: I0203 12:10:15.879823 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt"
Feb 03 12:10:15 crc kubenswrapper[4820]: I0203 12:10:15.900703 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Feb 03 12:10:16 crc kubenswrapper[4820]: I0203 12:10:16.016317 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Feb 03 12:10:16 crc kubenswrapper[4820]: I0203 12:10:16.089513 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn"
Feb 03 12:10:16 crc kubenswrapper[4820]: I0203 12:10:16.361368 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt"
Feb 03 12:10:16 crc kubenswrapper[4820]: I0203 12:10:16.388746 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle"
Feb 03 12:10:16 crc kubenswrapper[4820]: I0203 12:10:16.389441 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt"
Feb 03 12:10:16 crc kubenswrapper[4820]: I0203 12:10:16.463788 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl"
Feb 03 12:10:16 crc kubenswrapper[4820]: I0203 12:10:16.518301 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Feb 03 12:10:16 crc kubenswrapper[4820]: I0203 12:10:16.650300 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg"
Feb 03 12:10:16 crc kubenswrapper[4820]: I0203 12:10:16.836034 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Feb 03 12:10:17 crc kubenswrapper[4820]: I0203 12:10:17.137950 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Feb 03 12:10:17 crc kubenswrapper[4820]: I0203 12:10:17.236372 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Feb 03 12:10:17 crc kubenswrapper[4820]: I0203 12:10:17.322854 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt"
Feb 03 12:10:17 crc kubenswrapper[4820]: I0203 12:10:17.381282 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Feb 03 12:10:17 crc kubenswrapper[4820]: I0203 12:10:17.571854 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Feb 03 12:10:17 crc kubenswrapper[4820]: I0203 12:10:17.619659 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d"
Feb 03 12:10:17 crc kubenswrapper[4820]: I0203 12:10:17.637357 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default"
Feb 03 12:10:17 crc kubenswrapper[4820]: I0203 12:10:17.702602 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Feb 03 12:10:17 crc kubenswrapper[4820]: I0203 12:10:17.770185 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Feb 03 12:10:17 crc kubenswrapper[4820]: I0203 12:10:17.819985 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Feb 03 12:10:17 crc kubenswrapper[4820]: I0203 12:10:17.939136 4820 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Feb 03 12:10:17 crc kubenswrapper[4820]: I0203 12:10:17.940442 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-dt8ch" podStartSLOduration=45.019736638 podStartE2EDuration="2m50.940424769s" podCreationTimestamp="2026-02-03 12:07:27 +0000 UTC" firstStartedPulling="2026-02-03 12:07:42.08415309 +0000 UTC m=+179.607228954" lastFinishedPulling="2026-02-03 12:09:48.004841221 +0000 UTC m=+305.527917085" observedRunningTime="2026-02-03 12:09:48.528528795 +0000 UTC m=+306.051604669" watchObservedRunningTime="2026-02-03 12:10:17.940424769 +0000 UTC m=+335.463500633"
Feb 03 12:10:17 crc kubenswrapper[4820]: I0203 12:10:17.942008 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-zrlrv" podStartSLOduration=41.68746386 podStartE2EDuration="2m46.94199815s" podCreationTimestamp="2026-02-03 12:07:31 +0000 UTC" firstStartedPulling="2026-02-03 12:07:43.121542541 +0000 UTC m=+180.644618405" lastFinishedPulling="2026-02-03 12:09:48.376076831 +0000 UTC m=+305.899152695" observedRunningTime="2026-02-03 12:09:50.819750315 +0000 UTC m=+308.342826189" watchObservedRunningTime="2026-02-03 12:10:17.94199815 +0000 UTC m=+335.465074014"
Feb 03 12:10:17 crc kubenswrapper[4820]: I0203 12:10:17.944616 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-ntpgz" podStartSLOduration=44.328358994 podStartE2EDuration="2m49.944596717s" podCreationTimestamp="2026-02-03 12:07:28 +0000 UTC" firstStartedPulling="2026-02-03 12:07:42.086450664 +0000 UTC m=+179.609526528" lastFinishedPulling="2026-02-03 12:09:47.702688387 +0000 UTC m=+305.225764251" observedRunningTime="2026-02-03 12:09:48.403283737 +0000 UTC m=+305.926359611" watchObservedRunningTime="2026-02-03 12:10:17.944596717 +0000 UTC m=+335.467672581"
Feb 03 12:10:17 crc kubenswrapper[4820]: I0203 12:10:17.944768 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-dvpt2" podStartSLOduration=45.86784818 podStartE2EDuration="2m48.944760744s" podCreationTimestamp="2026-02-03 12:07:29 +0000 UTC" firstStartedPulling="2026-02-03 12:07:42.09126666 +0000 UTC m=+179.614342524" lastFinishedPulling="2026-02-03 12:09:45.168179224 +0000 UTC m=+302.691255088" observedRunningTime="2026-02-03 12:09:46.40705416 +0000 UTC m=+303.930130024" watchObservedRunningTime="2026-02-03 12:10:17.944760744 +0000 UTC m=+335.467836608"
Feb 03 12:10:17 crc kubenswrapper[4820]: I0203 12:10:17.945311 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-5xqrp" podStartSLOduration=72.689956655 podStartE2EDuration="2m50.945305188s" podCreationTimestamp="2026-02-03 12:07:27 +0000 UTC" firstStartedPulling="2026-02-03 12:07:42.058037868 +0000 UTC m=+179.581113732" lastFinishedPulling="2026-02-03 12:09:20.313386411 +0000 UTC m=+277.836462265" observedRunningTime="2026-02-03 12:09:30.283925274 +0000 UTC m=+287.807001148" watchObservedRunningTime="2026-02-03 12:10:17.945305188 +0000 UTC m=+335.468381052"
Feb 03 12:10:17 crc kubenswrapper[4820]: I0203 12:10:17.945574 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-bl6zg" podStartSLOduration=43.736293727 podStartE2EDuration="2m47.945569501s" podCreationTimestamp="2026-02-03 12:07:30 +0000 UTC" firstStartedPulling="2026-02-03 12:07:42.075076024 +0000 UTC m=+179.598151888" lastFinishedPulling="2026-02-03 12:09:46.284351798 +0000 UTC m=+303.807427662" observedRunningTime="2026-02-03 12:09:48.500295564 +0000 UTC m=+306.023371448" watchObservedRunningTime="2026-02-03 12:10:17.945569501 +0000 UTC m=+335.468645365"
Feb 03 12:10:17 crc kubenswrapper[4820]: I0203 12:10:17.946311 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-5vfzj" podStartSLOduration=66.112501908 podStartE2EDuration="2m50.946306363s" podCreationTimestamp="2026-02-03 12:07:27 +0000 UTC" firstStartedPulling="2026-02-03 12:07:42.072119644 +0000 UTC m=+179.595195508" lastFinishedPulling="2026-02-03 12:09:26.905924099 +0000 UTC m=+284.428999963" observedRunningTime="2026-02-03 12:09:31.44025926 +0000 UTC m=+288.963335134" watchObservedRunningTime="2026-02-03 12:10:17.946306363 +0000 UTC m=+335.469382227"
Feb 03 12:10:17 crc kubenswrapper[4820]: I0203 12:10:17.946435 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-pl5wr" podStartSLOduration=63.283616191 podStartE2EDuration="2m46.946425919s" podCreationTimestamp="2026-02-03 12:07:31 +0000 UTC" firstStartedPulling="2026-02-03 12:07:43.133030204 +0000 UTC m=+180.656106068" lastFinishedPulling="2026-02-03 12:09:26.795839932 +0000 UTC m=+284.318915796" observedRunningTime="2026-02-03 12:09:31.418675164 +0000 UTC m=+288.941751048" watchObservedRunningTime="2026-02-03 12:10:17.946425919 +0000 UTC m=+335.469501783"
Feb 03 12:10:17 crc kubenswrapper[4820]: I0203 12:10:17.949182 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Feb 03 12:10:17 crc kubenswrapper[4820]: I0203 12:10:17.949274 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Feb 03 12:10:17 crc kubenswrapper[4820]: I0203 12:10:17.982572 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=41.982537431 podStartE2EDuration="41.982537431s" podCreationTimestamp="2026-02-03 12:09:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:10:17.977626261 +0000 UTC m=+335.500702155" watchObservedRunningTime="2026-02-03 12:10:17.982537431 +0000 UTC m=+335.505613295"
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-5vfzj" podStartSLOduration=66.112501908 podStartE2EDuration="2m50.946306363s" podCreationTimestamp="2026-02-03 12:07:27 +0000 UTC" firstStartedPulling="2026-02-03 12:07:42.072119644 +0000 UTC m=+179.595195508" lastFinishedPulling="2026-02-03 12:09:26.905924099 +0000 UTC m=+284.428999963" observedRunningTime="2026-02-03 12:09:31.44025926 +0000 UTC m=+288.963335134" watchObservedRunningTime="2026-02-03 12:10:17.946306363 +0000 UTC m=+335.469382227" Feb 03 12:10:17 crc kubenswrapper[4820]: I0203 12:10:17.946435 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-pl5wr" podStartSLOduration=63.283616191 podStartE2EDuration="2m46.946425919s" podCreationTimestamp="2026-02-03 12:07:31 +0000 UTC" firstStartedPulling="2026-02-03 12:07:43.133030204 +0000 UTC m=+180.656106068" lastFinishedPulling="2026-02-03 12:09:26.795839932 +0000 UTC m=+284.318915796" observedRunningTime="2026-02-03 12:09:31.418675164 +0000 UTC m=+288.941751048" watchObservedRunningTime="2026-02-03 12:10:17.946425919 +0000 UTC m=+335.469501783" Feb 03 12:10:17 crc kubenswrapper[4820]: I0203 12:10:17.949182 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 03 12:10:17 crc kubenswrapper[4820]: I0203 12:10:17.949274 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 03 12:10:17 crc kubenswrapper[4820]: I0203 12:10:17.982572 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=41.982537431 podStartE2EDuration="41.982537431s" podCreationTimestamp="2026-02-03 12:09:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:10:17.977626261 +0000 UTC m=+335.500702155" watchObservedRunningTime="2026-02-03 12:10:17.982537431 +0000 UTC m=+335.505613295" Feb 03 12:10:18 crc kubenswrapper[4820]: I0203 12:10:18.049741 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Feb 03 12:10:18 crc kubenswrapper[4820]: I0203 12:10:18.058404 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Feb 03 12:10:18 crc kubenswrapper[4820]: I0203 12:10:18.150038 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Feb 03 12:10:18 crc kubenswrapper[4820]: I0203 12:10:18.203630 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Feb 03 12:10:18 crc kubenswrapper[4820]: I0203 12:10:18.240093 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 03 12:10:18 crc kubenswrapper[4820]: I0203 12:10:18.448543 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 03 12:10:18 crc kubenswrapper[4820]: I0203 12:10:18.459271 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Feb 03 12:10:18 crc kubenswrapper[4820]: I0203 12:10:18.529806 4820 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Feb 03 12:10:18 crc kubenswrapper[4820]: I0203 12:10:18.838050 4820 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Feb 03 12:10:19 crc kubenswrapper[4820]: I0203 12:10:19.651329 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Feb 03 12:10:19 crc kubenswrapper[4820]: I0203 12:10:19.652194 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Feb 03 12:10:19 crc kubenswrapper[4820]: I0203 12:10:19.659262 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 03 12:10:19 crc kubenswrapper[4820]: I0203 12:10:19.665173 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Feb 03 12:10:19 crc kubenswrapper[4820]: I0203 12:10:19.665493 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Feb 03 12:10:19 crc kubenswrapper[4820]: I0203 12:10:19.670734 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Feb 03 12:10:19 crc kubenswrapper[4820]: I0203 12:10:19.672213 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 03 12:10:19 crc kubenswrapper[4820]: I0203 12:10:19.677367 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 03 12:10:19 crc kubenswrapper[4820]: I0203 12:10:19.687302 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Feb 03 12:10:19 crc kubenswrapper[4820]: I0203 12:10:19.790524 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Feb 03 12:10:19 crc kubenswrapper[4820]: I0203 12:10:19.984394 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 03 12:10:20 crc kubenswrapper[4820]: I0203 12:10:20.249219 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Feb 03 12:10:20 crc kubenswrapper[4820]: I0203 12:10:20.627216 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Feb 03 12:10:20 crc kubenswrapper[4820]: I0203 12:10:20.654660 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Feb 03 12:10:21 crc kubenswrapper[4820]: I0203 12:10:21.157661 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 03 12:10:21 crc kubenswrapper[4820]: I0203 12:10:21.383406 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 03 12:10:21 crc kubenswrapper[4820]: I0203 12:10:21.766910 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 03 12:10:22 crc kubenswrapper[4820]: I0203 12:10:22.018159 4820 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Feb 03 12:10:22 crc kubenswrapper[4820]: I0203 12:10:22.108237 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 03 12:10:23 crc kubenswrapper[4820]: I0203 12:10:23.346596 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Feb 03 12:10:23 crc kubenswrapper[4820]: I0203 12:10:23.347051 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Feb 03 12:10:23 crc kubenswrapper[4820]: I0203 12:10:23.376588 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=2.37656925 podStartE2EDuration="2.37656925s" podCreationTimestamp="2026-02-03 12:10:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:10:23.376194363 +0000 UTC m=+340.899270237" watchObservedRunningTime="2026-02-03 12:10:23.37656925 +0000 UTC m=+340.899645104" Feb 03 12:10:23 crc kubenswrapper[4820]: I0203 12:10:23.410003 4820 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 03 12:10:23 crc kubenswrapper[4820]: I0203 12:10:23.410570 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://f1804402597e3d17ee8eb64d90ed036a33a9dc26a6a1a7b3b14474fbae6bf1a8" gracePeriod=5 Feb 03 12:10:23 crc kubenswrapper[4820]: I0203 12:10:23.629429 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 03 12:10:24 crc kubenswrapper[4820]: I0203 12:10:24.142940 4820 scope.go:117] "RemoveContainer" containerID="dd898d17a538730e2cbb68e350ac8b3216294c7ad01ab42718b1660a024fe87f" Feb 03 12:10:24 crc kubenswrapper[4820]: E0203 12:10:24.143226 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"download-server\" with CrashLoopBackOff: \"back-off 40s restarting failed container=download-server pod=downloads-7954f5f757-lnc22_openshift-console(876c5dc3-b775-45cc-94b6-4339735e9975)\"" pod="openshift-console/downloads-7954f5f757-lnc22" podUID="876c5dc3-b775-45cc-94b6-4339735e9975" Feb 03 12:10:24 crc kubenswrapper[4820]: I0203 12:10:24.316452 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 03 12:10:24 crc kubenswrapper[4820]: I0203 12:10:24.570555 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Feb 03 12:10:26 crc kubenswrapper[4820]: I0203 12:10:26.009049 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Feb 03 12:10:28 crc kubenswrapper[4820]: I0203 12:10:28.887947 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Feb 03 12:10:28 crc kubenswrapper[4820]: I0203 12:10:28.888338 4820 generic.go:334] "Generic (PLEG): 
container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="f1804402597e3d17ee8eb64d90ed036a33a9dc26a6a1a7b3b14474fbae6bf1a8" exitCode=137 Feb 03 12:10:29 crc kubenswrapper[4820]: I0203 12:10:29.036471 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Feb 03 12:10:29 crc kubenswrapper[4820]: I0203 12:10:29.036546 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 03 12:10:29 crc kubenswrapper[4820]: I0203 12:10:29.150380 4820 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="" Feb 03 12:10:29 crc kubenswrapper[4820]: I0203 12:10:29.152118 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 03 12:10:29 crc kubenswrapper[4820]: I0203 12:10:29.152237 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 03 12:10:29 crc kubenswrapper[4820]: I0203 12:10:29.152262 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 03 12:10:29 crc kubenswrapper[4820]: I0203 12:10:29.152288 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 03 12:10:29 crc kubenswrapper[4820]: I0203 12:10:29.152306 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 03 12:10:29 crc kubenswrapper[4820]: I0203 12:10:29.152352 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 03 12:10:29 crc kubenswrapper[4820]: I0203 12:10:29.152380 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 03 12:10:29 crc kubenswrapper[4820]: I0203 12:10:29.152434 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 03 12:10:29 crc kubenswrapper[4820]: I0203 12:10:29.152626 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 03 12:10:29 crc kubenswrapper[4820]: I0203 12:10:29.153024 4820 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Feb 03 12:10:29 crc kubenswrapper[4820]: I0203 12:10:29.153048 4820 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 03 12:10:29 crc kubenswrapper[4820]: I0203 12:10:29.153059 4820 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Feb 03 12:10:29 crc kubenswrapper[4820]: I0203 12:10:29.153068 4820 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Feb 03 12:10:29 crc kubenswrapper[4820]: I0203 12:10:29.161486 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 03 12:10:29 crc kubenswrapper[4820]: I0203 12:10:29.187106 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 03 12:10:29 crc kubenswrapper[4820]: I0203 12:10:29.187142 4820 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="55fc5599-22a7-4d06-bfea-aa1f9754f306" Feb 03 12:10:29 crc kubenswrapper[4820]: I0203 12:10:29.187167 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 03 12:10:29 crc kubenswrapper[4820]: I0203 12:10:29.187178 4820 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="55fc5599-22a7-4d06-bfea-aa1f9754f306" Feb 03 12:10:29 crc kubenswrapper[4820]: I0203 12:10:29.254725 4820 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 03 12:10:29 crc kubenswrapper[4820]: I0203 12:10:29.902859 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Feb 03 12:10:29 crc kubenswrapper[4820]: I0203 12:10:29.903076 4820 scope.go:117] "RemoveContainer" containerID="f1804402597e3d17ee8eb64d90ed036a33a9dc26a6a1a7b3b14474fbae6bf1a8" Feb 03 12:10:29 crc kubenswrapper[4820]: I0203 12:10:29.903390 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 03 12:10:31 crc kubenswrapper[4820]: I0203 12:10:31.151472 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Feb 03 12:10:35 crc kubenswrapper[4820]: I0203 12:10:35.142748 4820 scope.go:117] "RemoveContainer" containerID="dd898d17a538730e2cbb68e350ac8b3216294c7ad01ab42718b1660a024fe87f" Feb 03 12:10:35 crc kubenswrapper[4820]: E0203 12:10:35.143354 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"download-server\" with CrashLoopBackOff: \"back-off 40s restarting failed container=download-server pod=downloads-7954f5f757-lnc22_openshift-console(876c5dc3-b775-45cc-94b6-4339735e9975)\"" pod="openshift-console/downloads-7954f5f757-lnc22" podUID="876c5dc3-b775-45cc-94b6-4339735e9975" Feb 03 12:10:36 crc kubenswrapper[4820]: I0203 12:10:36.711381 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-645448985d-vdjc6"] Feb 03 12:10:36 crc kubenswrapper[4820]: I0203 12:10:36.712308 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-645448985d-vdjc6" podUID="5a7735ae-474f-4c7e-8d71-bb6f3e06ab05" containerName="controller-manager" containerID="cri-o://84a8cfea8877c064fe516848d18a880005f2324d29b6ce26da7f90ed55b78bdd" gracePeriod=30 Feb 03 12:10:36 crc kubenswrapper[4820]: I0203 12:10:36.810454 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-557d47bcf4-ztmcn"] Feb 03 12:10:36 crc kubenswrapper[4820]: I0203 12:10:36.810703 
4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-557d47bcf4-ztmcn" podUID="21113ff0-f43a-4138-9bbe-485e6e54d9a9" containerName="route-controller-manager" containerID="cri-o://2cb33ae2ed073b4048d6eac76b01ca311717471bc247854867f53b9eeba0892a" gracePeriod=30 Feb 03 12:10:37 crc kubenswrapper[4820]: I0203 12:10:37.059552 4820 generic.go:334] "Generic (PLEG): container finished" podID="21113ff0-f43a-4138-9bbe-485e6e54d9a9" containerID="2cb33ae2ed073b4048d6eac76b01ca311717471bc247854867f53b9eeba0892a" exitCode=0 Feb 03 12:10:37 crc kubenswrapper[4820]: I0203 12:10:37.059650 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-557d47bcf4-ztmcn" event={"ID":"21113ff0-f43a-4138-9bbe-485e6e54d9a9","Type":"ContainerDied","Data":"2cb33ae2ed073b4048d6eac76b01ca311717471bc247854867f53b9eeba0892a"} Feb 03 12:10:37 crc kubenswrapper[4820]: I0203 12:10:37.066096 4820 generic.go:334] "Generic (PLEG): container finished" podID="5a7735ae-474f-4c7e-8d71-bb6f3e06ab05" containerID="84a8cfea8877c064fe516848d18a880005f2324d29b6ce26da7f90ed55b78bdd" exitCode=0 Feb 03 12:10:37 crc kubenswrapper[4820]: I0203 12:10:37.066151 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-645448985d-vdjc6" event={"ID":"5a7735ae-474f-4c7e-8d71-bb6f3e06ab05","Type":"ContainerDied","Data":"84a8cfea8877c064fe516848d18a880005f2324d29b6ce26da7f90ed55b78bdd"} Feb 03 12:10:37 crc kubenswrapper[4820]: I0203 12:10:37.066183 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-645448985d-vdjc6" event={"ID":"5a7735ae-474f-4c7e-8d71-bb6f3e06ab05","Type":"ContainerDied","Data":"a47eda5cbea503786bb31b47c9cb16c2185fd7f5d7a61210de2dd8b260182d00"} Feb 03 12:10:37 crc kubenswrapper[4820]: I0203 12:10:37.066196 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a47eda5cbea503786bb31b47c9cb16c2185fd7f5d7a61210de2dd8b260182d00" Feb 03 12:10:37 crc kubenswrapper[4820]: I0203 12:10:37.069765 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-645448985d-vdjc6" Feb 03 12:10:37 crc kubenswrapper[4820]: I0203 12:10:37.115366 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vvb46\" (UniqueName: \"kubernetes.io/projected/5a7735ae-474f-4c7e-8d71-bb6f3e06ab05-kube-api-access-vvb46\") pod \"5a7735ae-474f-4c7e-8d71-bb6f3e06ab05\" (UID: \"5a7735ae-474f-4c7e-8d71-bb6f3e06ab05\") " Feb 03 12:10:37 crc kubenswrapper[4820]: I0203 12:10:37.115422 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5a7735ae-474f-4c7e-8d71-bb6f3e06ab05-client-ca\") pod \"5a7735ae-474f-4c7e-8d71-bb6f3e06ab05\" (UID: \"5a7735ae-474f-4c7e-8d71-bb6f3e06ab05\") " Feb 03 12:10:37 crc kubenswrapper[4820]: I0203 12:10:37.115472 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5a7735ae-474f-4c7e-8d71-bb6f3e06ab05-serving-cert\") pod \"5a7735ae-474f-4c7e-8d71-bb6f3e06ab05\" (UID: \"5a7735ae-474f-4c7e-8d71-bb6f3e06ab05\") " Feb 03 12:10:37 crc kubenswrapper[4820]: I0203 12:10:37.115500 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5a7735ae-474f-4c7e-8d71-bb6f3e06ab05-proxy-ca-bundles\") pod \"5a7735ae-474f-4c7e-8d71-bb6f3e06ab05\" (UID: \"5a7735ae-474f-4c7e-8d71-bb6f3e06ab05\") " Feb 03 12:10:37 crc kubenswrapper[4820]: I0203 12:10:37.115544 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a7735ae-474f-4c7e-8d71-bb6f3e06ab05-config\") pod \"5a7735ae-474f-4c7e-8d71-bb6f3e06ab05\" (UID: \"5a7735ae-474f-4c7e-8d71-bb6f3e06ab05\") " Feb 03 12:10:37 crc kubenswrapper[4820]: I0203 12:10:37.116413 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5a7735ae-474f-4c7e-8d71-bb6f3e06ab05-config" (OuterVolumeSpecName: "config") pod "5a7735ae-474f-4c7e-8d71-bb6f3e06ab05" (UID: "5a7735ae-474f-4c7e-8d71-bb6f3e06ab05"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:10:37 crc kubenswrapper[4820]: I0203 12:10:37.116750 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5a7735ae-474f-4c7e-8d71-bb6f3e06ab05-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "5a7735ae-474f-4c7e-8d71-bb6f3e06ab05" (UID: "5a7735ae-474f-4c7e-8d71-bb6f3e06ab05"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:10:37 crc kubenswrapper[4820]: I0203 12:10:37.116967 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5a7735ae-474f-4c7e-8d71-bb6f3e06ab05-client-ca" (OuterVolumeSpecName: "client-ca") pod "5a7735ae-474f-4c7e-8d71-bb6f3e06ab05" (UID: "5a7735ae-474f-4c7e-8d71-bb6f3e06ab05"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:10:37 crc kubenswrapper[4820]: I0203 12:10:37.121864 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a7735ae-474f-4c7e-8d71-bb6f3e06ab05-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5a7735ae-474f-4c7e-8d71-bb6f3e06ab05" (UID: "5a7735ae-474f-4c7e-8d71-bb6f3e06ab05"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:10:37 crc kubenswrapper[4820]: I0203 12:10:37.121997 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a7735ae-474f-4c7e-8d71-bb6f3e06ab05-kube-api-access-vvb46" (OuterVolumeSpecName: "kube-api-access-vvb46") pod "5a7735ae-474f-4c7e-8d71-bb6f3e06ab05" (UID: "5a7735ae-474f-4c7e-8d71-bb6f3e06ab05"). InnerVolumeSpecName "kube-api-access-vvb46". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:10:37 crc kubenswrapper[4820]: I0203 12:10:37.219056 4820 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5a7735ae-474f-4c7e-8d71-bb6f3e06ab05-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 03 12:10:37 crc kubenswrapper[4820]: I0203 12:10:37.219117 4820 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5a7735ae-474f-4c7e-8d71-bb6f3e06ab05-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 03 12:10:37 crc kubenswrapper[4820]: I0203 12:10:37.219141 4820 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a7735ae-474f-4c7e-8d71-bb6f3e06ab05-config\") on node \"crc\" DevicePath \"\"" Feb 03 12:10:37 crc kubenswrapper[4820]: I0203 12:10:37.219155 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vvb46\" (UniqueName: \"kubernetes.io/projected/5a7735ae-474f-4c7e-8d71-bb6f3e06ab05-kube-api-access-vvb46\") on node \"crc\" DevicePath \"\"" Feb 03 12:10:37 crc kubenswrapper[4820]: I0203 12:10:37.219172 4820 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5a7735ae-474f-4c7e-8d71-bb6f3e06ab05-client-ca\") on node \"crc\" DevicePath \"\"" Feb 03 12:10:37 crc kubenswrapper[4820]: I0203 12:10:37.338375 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-557d47bcf4-ztmcn" Feb 03 12:10:37 crc kubenswrapper[4820]: I0203 12:10:37.470544 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21113ff0-f43a-4138-9bbe-485e6e54d9a9-config\") pod \"21113ff0-f43a-4138-9bbe-485e6e54d9a9\" (UID: \"21113ff0-f43a-4138-9bbe-485e6e54d9a9\") " Feb 03 12:10:37 crc kubenswrapper[4820]: I0203 12:10:37.470604 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g49nk\" (UniqueName: \"kubernetes.io/projected/21113ff0-f43a-4138-9bbe-485e6e54d9a9-kube-api-access-g49nk\") pod \"21113ff0-f43a-4138-9bbe-485e6e54d9a9\" (UID: \"21113ff0-f43a-4138-9bbe-485e6e54d9a9\") " Feb 03 12:10:37 crc kubenswrapper[4820]: I0203 12:10:37.470646 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/21113ff0-f43a-4138-9bbe-485e6e54d9a9-serving-cert\") pod \"21113ff0-f43a-4138-9bbe-485e6e54d9a9\" (UID: \"21113ff0-f43a-4138-9bbe-485e6e54d9a9\") " Feb 03 12:10:37 crc kubenswrapper[4820]: I0203 12:10:37.470743 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/21113ff0-f43a-4138-9bbe-485e6e54d9a9-client-ca\") pod \"21113ff0-f43a-4138-9bbe-485e6e54d9a9\" (UID: \"21113ff0-f43a-4138-9bbe-485e6e54d9a9\") " Feb 03 12:10:37 crc kubenswrapper[4820]: I0203 12:10:37.471554 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/21113ff0-f43a-4138-9bbe-485e6e54d9a9-config" (OuterVolumeSpecName: "config") pod "21113ff0-f43a-4138-9bbe-485e6e54d9a9" (UID: "21113ff0-f43a-4138-9bbe-485e6e54d9a9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:10:37 crc kubenswrapper[4820]: I0203 12:10:37.471604 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/21113ff0-f43a-4138-9bbe-485e6e54d9a9-client-ca" (OuterVolumeSpecName: "client-ca") pod "21113ff0-f43a-4138-9bbe-485e6e54d9a9" (UID: "21113ff0-f43a-4138-9bbe-485e6e54d9a9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:10:37 crc kubenswrapper[4820]: I0203 12:10:37.474640 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21113ff0-f43a-4138-9bbe-485e6e54d9a9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "21113ff0-f43a-4138-9bbe-485e6e54d9a9" (UID: "21113ff0-f43a-4138-9bbe-485e6e54d9a9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:10:37 crc kubenswrapper[4820]: I0203 12:10:37.474772 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/21113ff0-f43a-4138-9bbe-485e6e54d9a9-kube-api-access-g49nk" (OuterVolumeSpecName: "kube-api-access-g49nk") pod "21113ff0-f43a-4138-9bbe-485e6e54d9a9" (UID: "21113ff0-f43a-4138-9bbe-485e6e54d9a9"). InnerVolumeSpecName "kube-api-access-g49nk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:10:37 crc kubenswrapper[4820]: I0203 12:10:37.571828 4820 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/21113ff0-f43a-4138-9bbe-485e6e54d9a9-client-ca\") on node \"crc\" DevicePath \"\"" Feb 03 12:10:37 crc kubenswrapper[4820]: I0203 12:10:37.571876 4820 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21113ff0-f43a-4138-9bbe-485e6e54d9a9-config\") on node \"crc\" DevicePath \"\"" Feb 03 12:10:37 crc kubenswrapper[4820]: I0203 12:10:37.571902 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g49nk\" (UniqueName: \"kubernetes.io/projected/21113ff0-f43a-4138-9bbe-485e6e54d9a9-kube-api-access-g49nk\") on node \"crc\" DevicePath \"\"" Feb 03 12:10:37 crc kubenswrapper[4820]: I0203 12:10:37.571915 4820 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/21113ff0-f43a-4138-9bbe-485e6e54d9a9-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 03 12:10:38 crc kubenswrapper[4820]: I0203 12:10:38.073114 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-645448985d-vdjc6" Feb 03 12:10:38 crc kubenswrapper[4820]: I0203 12:10:38.073171 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-557d47bcf4-ztmcn" Feb 03 12:10:38 crc kubenswrapper[4820]: I0203 12:10:38.073114 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-557d47bcf4-ztmcn" event={"ID":"21113ff0-f43a-4138-9bbe-485e6e54d9a9","Type":"ContainerDied","Data":"769d352f60201cebf6792c31b1b16320ca39e1b555891cb114cdc0ca1b78a1bb"} Feb 03 12:10:38 crc kubenswrapper[4820]: I0203 12:10:38.073255 4820 scope.go:117] "RemoveContainer" containerID="2cb33ae2ed073b4048d6eac76b01ca311717471bc247854867f53b9eeba0892a" Feb 03 12:10:38 crc kubenswrapper[4820]: I0203 12:10:38.092976 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-645448985d-vdjc6"] Feb 03 12:10:38 crc kubenswrapper[4820]: I0203 12:10:38.098570 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-645448985d-vdjc6"] Feb 03 12:10:38 crc kubenswrapper[4820]: I0203 12:10:38.107380 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-557d47bcf4-ztmcn"] Feb 03 12:10:38 crc kubenswrapper[4820]: I0203 12:10:38.110343 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-557d47bcf4-ztmcn"] Feb 03 12:10:38 crc kubenswrapper[4820]: I0203 12:10:38.706534 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-ff9dcc5bb-dw69v"] Feb 03 12:10:38 crc kubenswrapper[4820]: E0203 12:10:38.707459 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f648e21-019c-4ed2-a381-77f0166c5ecc" containerName="installer" Feb 03 12:10:38 crc kubenswrapper[4820]: I0203 12:10:38.707586 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f648e21-019c-4ed2-a381-77f0166c5ecc" containerName="installer" Feb 03 12:10:38 crc kubenswrapper[4820]: E0203 12:10:38.707673 4820 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="5a7735ae-474f-4c7e-8d71-bb6f3e06ab05" containerName="controller-manager" Feb 03 12:10:38 crc kubenswrapper[4820]: I0203 12:10:38.707766 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a7735ae-474f-4c7e-8d71-bb6f3e06ab05" containerName="controller-manager" Feb 03 12:10:38 crc kubenswrapper[4820]: E0203 12:10:38.707849 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 03 12:10:38 crc kubenswrapper[4820]: I0203 12:10:38.707941 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 03 12:10:38 crc kubenswrapper[4820]: E0203 12:10:38.708064 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21113ff0-f43a-4138-9bbe-485e6e54d9a9" containerName="route-controller-manager" Feb 03 12:10:38 crc kubenswrapper[4820]: I0203 12:10:38.708141 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="21113ff0-f43a-4138-9bbe-485e6e54d9a9" containerName="route-controller-manager" Feb 03 12:10:38 crc kubenswrapper[4820]: I0203 12:10:38.708351 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 03 12:10:38 crc kubenswrapper[4820]: I0203 12:10:38.708438 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f648e21-019c-4ed2-a381-77f0166c5ecc" containerName="installer" Feb 03 12:10:38 crc kubenswrapper[4820]: I0203 12:10:38.708540 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a7735ae-474f-4c7e-8d71-bb6f3e06ab05" containerName="controller-manager" Feb 03 12:10:38 crc kubenswrapper[4820]: I0203 12:10:38.708628 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="21113ff0-f43a-4138-9bbe-485e6e54d9a9" containerName="route-controller-manager" Feb 03 12:10:38 crc kubenswrapper[4820]: I0203 12:10:38.709172 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-ff9dcc5bb-dw69v" Feb 03 12:10:38 crc kubenswrapper[4820]: I0203 12:10:38.711134 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-54c7cdf5ff-22665"] Feb 03 12:10:38 crc kubenswrapper[4820]: I0203 12:10:38.712546 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 03 12:10:38 crc kubenswrapper[4820]: I0203 12:10:38.712684 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 03 12:10:38 crc kubenswrapper[4820]: I0203 12:10:38.712858 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 03 12:10:38 crc kubenswrapper[4820]: I0203 12:10:38.712573 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 03 12:10:38 crc kubenswrapper[4820]: I0203 12:10:38.712616 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 03 12:10:38 crc kubenswrapper[4820]: I0203 12:10:38.713022 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-54c7cdf5ff-22665" Feb 03 12:10:38 crc kubenswrapper[4820]: I0203 12:10:38.719200 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 03 12:10:38 crc kubenswrapper[4820]: I0203 12:10:38.719222 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 03 12:10:38 crc kubenswrapper[4820]: I0203 12:10:38.719281 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 03 12:10:38 crc kubenswrapper[4820]: I0203 12:10:38.719223 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 03 12:10:38 crc kubenswrapper[4820]: I0203 12:10:38.720604 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 03 12:10:38 crc kubenswrapper[4820]: I0203 12:10:38.720620 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 03 12:10:38 crc kubenswrapper[4820]: I0203 12:10:38.723403 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-54c7cdf5ff-22665"] Feb 03 12:10:38 crc kubenswrapper[4820]: I0203 12:10:38.724596 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 03 12:10:38 crc kubenswrapper[4820]: I0203 12:10:38.724750 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 03 12:10:38 crc kubenswrapper[4820]: I0203 12:10:38.726472 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-ff9dcc5bb-dw69v"] Feb 03 12:10:38 crc kubenswrapper[4820]: I0203 12:10:38.885573 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0a5433a0-94fe-4da3-ad26-8c1d8d92ba40-serving-cert\") pod \"controller-manager-54c7cdf5ff-22665\" (UID: \"0a5433a0-94fe-4da3-ad26-8c1d8d92ba40\") " pod="openshift-controller-manager/controller-manager-54c7cdf5ff-22665" Feb 03 12:10:38 crc kubenswrapper[4820]: I0203 12:10:38.885623 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d3eeafbf-8728-42aa-9d35-6db4a3556524-serving-cert\") pod \"route-controller-manager-ff9dcc5bb-dw69v\" (UID: \"d3eeafbf-8728-42aa-9d35-6db4a3556524\") " pod="openshift-route-controller-manager/route-controller-manager-ff9dcc5bb-dw69v" Feb 03 12:10:38 crc kubenswrapper[4820]: I0203 12:10:38.885645 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d3eeafbf-8728-42aa-9d35-6db4a3556524-client-ca\") pod \"route-controller-manager-ff9dcc5bb-dw69v\" (UID: \"d3eeafbf-8728-42aa-9d35-6db4a3556524\") " pod="openshift-route-controller-manager/route-controller-manager-ff9dcc5bb-dw69v" Feb 03 12:10:38 crc kubenswrapper[4820]: I0203 12:10:38.885673 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bmdv7\" (UniqueName: 
\"kubernetes.io/projected/d3eeafbf-8728-42aa-9d35-6db4a3556524-kube-api-access-bmdv7\") pod \"route-controller-manager-ff9dcc5bb-dw69v\" (UID: \"d3eeafbf-8728-42aa-9d35-6db4a3556524\") " pod="openshift-route-controller-manager/route-controller-manager-ff9dcc5bb-dw69v" Feb 03 12:10:38 crc kubenswrapper[4820]: I0203 12:10:38.885695 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jw6g5\" (UniqueName: \"kubernetes.io/projected/0a5433a0-94fe-4da3-ad26-8c1d8d92ba40-kube-api-access-jw6g5\") pod \"controller-manager-54c7cdf5ff-22665\" (UID: \"0a5433a0-94fe-4da3-ad26-8c1d8d92ba40\") " pod="openshift-controller-manager/controller-manager-54c7cdf5ff-22665" Feb 03 12:10:38 crc kubenswrapper[4820]: I0203 12:10:38.885769 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0a5433a0-94fe-4da3-ad26-8c1d8d92ba40-config\") pod \"controller-manager-54c7cdf5ff-22665\" (UID: \"0a5433a0-94fe-4da3-ad26-8c1d8d92ba40\") " pod="openshift-controller-manager/controller-manager-54c7cdf5ff-22665" Feb 03 12:10:38 crc kubenswrapper[4820]: I0203 12:10:38.885799 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d3eeafbf-8728-42aa-9d35-6db4a3556524-config\") pod \"route-controller-manager-ff9dcc5bb-dw69v\" (UID: \"d3eeafbf-8728-42aa-9d35-6db4a3556524\") " pod="openshift-route-controller-manager/route-controller-manager-ff9dcc5bb-dw69v" Feb 03 12:10:38 crc kubenswrapper[4820]: I0203 12:10:38.885815 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0a5433a0-94fe-4da3-ad26-8c1d8d92ba40-client-ca\") pod \"controller-manager-54c7cdf5ff-22665\" (UID: \"0a5433a0-94fe-4da3-ad26-8c1d8d92ba40\") " pod="openshift-controller-manager/controller-manager-54c7cdf5ff-22665" Feb 03 12:10:38 crc kubenswrapper[4820]: I0203 12:10:38.885864 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0a5433a0-94fe-4da3-ad26-8c1d8d92ba40-proxy-ca-bundles\") pod \"controller-manager-54c7cdf5ff-22665\" (UID: \"0a5433a0-94fe-4da3-ad26-8c1d8d92ba40\") " pod="openshift-controller-manager/controller-manager-54c7cdf5ff-22665" Feb 03 12:10:38 crc kubenswrapper[4820]: I0203 12:10:38.986753 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0a5433a0-94fe-4da3-ad26-8c1d8d92ba40-proxy-ca-bundles\") pod \"controller-manager-54c7cdf5ff-22665\" (UID: \"0a5433a0-94fe-4da3-ad26-8c1d8d92ba40\") " pod="openshift-controller-manager/controller-manager-54c7cdf5ff-22665" Feb 03 12:10:38 crc kubenswrapper[4820]: I0203 12:10:38.986845 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0a5433a0-94fe-4da3-ad26-8c1d8d92ba40-serving-cert\") pod \"controller-manager-54c7cdf5ff-22665\" (UID: \"0a5433a0-94fe-4da3-ad26-8c1d8d92ba40\") " pod="openshift-controller-manager/controller-manager-54c7cdf5ff-22665" Feb 03 12:10:38 crc kubenswrapper[4820]: I0203 12:10:38.986880 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/d3eeafbf-8728-42aa-9d35-6db4a3556524-serving-cert\") pod \"route-controller-manager-ff9dcc5bb-dw69v\" (UID: \"d3eeafbf-8728-42aa-9d35-6db4a3556524\") " pod="openshift-route-controller-manager/route-controller-manager-ff9dcc5bb-dw69v" Feb 03 12:10:38 crc kubenswrapper[4820]: I0203 12:10:38.986924 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d3eeafbf-8728-42aa-9d35-6db4a3556524-client-ca\") pod \"route-controller-manager-ff9dcc5bb-dw69v\" (UID: \"d3eeafbf-8728-42aa-9d35-6db4a3556524\") " pod="openshift-route-controller-manager/route-controller-manager-ff9dcc5bb-dw69v" Feb 03 12:10:38 crc kubenswrapper[4820]: I0203 12:10:38.986958 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bmdv7\" (UniqueName: \"kubernetes.io/projected/d3eeafbf-8728-42aa-9d35-6db4a3556524-kube-api-access-bmdv7\") pod \"route-controller-manager-ff9dcc5bb-dw69v\" (UID: \"d3eeafbf-8728-42aa-9d35-6db4a3556524\") " pod="openshift-route-controller-manager/route-controller-manager-ff9dcc5bb-dw69v" Feb 03 12:10:38 crc kubenswrapper[4820]: I0203 12:10:38.986989 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jw6g5\" (UniqueName: \"kubernetes.io/projected/0a5433a0-94fe-4da3-ad26-8c1d8d92ba40-kube-api-access-jw6g5\") pod \"controller-manager-54c7cdf5ff-22665\" (UID: \"0a5433a0-94fe-4da3-ad26-8c1d8d92ba40\") " pod="openshift-controller-manager/controller-manager-54c7cdf5ff-22665" Feb 03 12:10:38 crc kubenswrapper[4820]: I0203 12:10:38.987036 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0a5433a0-94fe-4da3-ad26-8c1d8d92ba40-config\") pod \"controller-manager-54c7cdf5ff-22665\" (UID: \"0a5433a0-94fe-4da3-ad26-8c1d8d92ba40\") " pod="openshift-controller-manager/controller-manager-54c7cdf5ff-22665" Feb 03 12:10:38 crc kubenswrapper[4820]: I0203 12:10:38.987060 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d3eeafbf-8728-42aa-9d35-6db4a3556524-config\") pod \"route-controller-manager-ff9dcc5bb-dw69v\" (UID: \"d3eeafbf-8728-42aa-9d35-6db4a3556524\") " pod="openshift-route-controller-manager/route-controller-manager-ff9dcc5bb-dw69v" Feb 03 12:10:38 crc kubenswrapper[4820]: I0203 12:10:38.987082 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0a5433a0-94fe-4da3-ad26-8c1d8d92ba40-client-ca\") pod \"controller-manager-54c7cdf5ff-22665\" (UID: \"0a5433a0-94fe-4da3-ad26-8c1d8d92ba40\") " pod="openshift-controller-manager/controller-manager-54c7cdf5ff-22665" Feb 03 12:10:38 crc kubenswrapper[4820]: I0203 12:10:38.988132 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d3eeafbf-8728-42aa-9d35-6db4a3556524-client-ca\") pod \"route-controller-manager-ff9dcc5bb-dw69v\" (UID: \"d3eeafbf-8728-42aa-9d35-6db4a3556524\") " pod="openshift-route-controller-manager/route-controller-manager-ff9dcc5bb-dw69v" Feb 03 12:10:38 crc kubenswrapper[4820]: I0203 12:10:38.988227 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0a5433a0-94fe-4da3-ad26-8c1d8d92ba40-client-ca\") pod \"controller-manager-54c7cdf5ff-22665\" (UID: 
\"0a5433a0-94fe-4da3-ad26-8c1d8d92ba40\") " pod="openshift-controller-manager/controller-manager-54c7cdf5ff-22665" Feb 03 12:10:38 crc kubenswrapper[4820]: I0203 12:10:38.988723 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d3eeafbf-8728-42aa-9d35-6db4a3556524-config\") pod \"route-controller-manager-ff9dcc5bb-dw69v\" (UID: \"d3eeafbf-8728-42aa-9d35-6db4a3556524\") " pod="openshift-route-controller-manager/route-controller-manager-ff9dcc5bb-dw69v" Feb 03 12:10:38 crc kubenswrapper[4820]: I0203 12:10:38.988859 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0a5433a0-94fe-4da3-ad26-8c1d8d92ba40-proxy-ca-bundles\") pod \"controller-manager-54c7cdf5ff-22665\" (UID: \"0a5433a0-94fe-4da3-ad26-8c1d8d92ba40\") " pod="openshift-controller-manager/controller-manager-54c7cdf5ff-22665" Feb 03 12:10:38 crc kubenswrapper[4820]: I0203 12:10:38.989077 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0a5433a0-94fe-4da3-ad26-8c1d8d92ba40-config\") pod \"controller-manager-54c7cdf5ff-22665\" (UID: \"0a5433a0-94fe-4da3-ad26-8c1d8d92ba40\") " pod="openshift-controller-manager/controller-manager-54c7cdf5ff-22665" Feb 03 12:10:38 crc kubenswrapper[4820]: I0203 12:10:38.991616 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0a5433a0-94fe-4da3-ad26-8c1d8d92ba40-serving-cert\") pod \"controller-manager-54c7cdf5ff-22665\" (UID: \"0a5433a0-94fe-4da3-ad26-8c1d8d92ba40\") " pod="openshift-controller-manager/controller-manager-54c7cdf5ff-22665" Feb 03 12:10:38 crc kubenswrapper[4820]: I0203 12:10:38.992666 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d3eeafbf-8728-42aa-9d35-6db4a3556524-serving-cert\") pod \"route-controller-manager-ff9dcc5bb-dw69v\" (UID: \"d3eeafbf-8728-42aa-9d35-6db4a3556524\") " pod="openshift-route-controller-manager/route-controller-manager-ff9dcc5bb-dw69v" Feb 03 12:10:39 crc kubenswrapper[4820]: I0203 12:10:39.005541 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bmdv7\" (UniqueName: \"kubernetes.io/projected/d3eeafbf-8728-42aa-9d35-6db4a3556524-kube-api-access-bmdv7\") pod \"route-controller-manager-ff9dcc5bb-dw69v\" (UID: \"d3eeafbf-8728-42aa-9d35-6db4a3556524\") " pod="openshift-route-controller-manager/route-controller-manager-ff9dcc5bb-dw69v" Feb 03 12:10:39 crc kubenswrapper[4820]: I0203 12:10:39.005961 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jw6g5\" (UniqueName: \"kubernetes.io/projected/0a5433a0-94fe-4da3-ad26-8c1d8d92ba40-kube-api-access-jw6g5\") pod \"controller-manager-54c7cdf5ff-22665\" (UID: \"0a5433a0-94fe-4da3-ad26-8c1d8d92ba40\") " pod="openshift-controller-manager/controller-manager-54c7cdf5ff-22665" Feb 03 12:10:39 crc kubenswrapper[4820]: I0203 12:10:39.033994 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-ff9dcc5bb-dw69v" Feb 03 12:10:39 crc kubenswrapper[4820]: I0203 12:10:39.039221 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-54c7cdf5ff-22665" Feb 03 12:10:39 crc kubenswrapper[4820]: I0203 12:10:39.155199 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="21113ff0-f43a-4138-9bbe-485e6e54d9a9" path="/var/lib/kubelet/pods/21113ff0-f43a-4138-9bbe-485e6e54d9a9/volumes" Feb 03 12:10:39 crc kubenswrapper[4820]: I0203 12:10:39.156464 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5a7735ae-474f-4c7e-8d71-bb6f3e06ab05" path="/var/lib/kubelet/pods/5a7735ae-474f-4c7e-8d71-bb6f3e06ab05/volumes" Feb 03 12:10:39 crc kubenswrapper[4820]: I0203 12:10:39.236553 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-54c7cdf5ff-22665"] Feb 03 12:10:39 crc kubenswrapper[4820]: I0203 12:10:39.290116 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-ff9dcc5bb-dw69v"] Feb 03 12:10:39 crc kubenswrapper[4820]: W0203 12:10:39.296578 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd3eeafbf_8728_42aa_9d35_6db4a3556524.slice/crio-54fa124180ec8a1e0246523e5f354f904835abf2ac67ea10c9503568083e1a70 WatchSource:0}: Error finding container 54fa124180ec8a1e0246523e5f354f904835abf2ac67ea10c9503568083e1a70: Status 404 returned error can't find the container with id 54fa124180ec8a1e0246523e5f354f904835abf2ac67ea10c9503568083e1a70 Feb 03 12:10:39 crc kubenswrapper[4820]: I0203 12:10:39.881933 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Feb 03 12:10:40 crc kubenswrapper[4820]: I0203 12:10:40.088790 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-ff9dcc5bb-dw69v" event={"ID":"d3eeafbf-8728-42aa-9d35-6db4a3556524","Type":"ContainerStarted","Data":"0f85efa606b4a013e33077a77790a972776a7a7b0e53a6001b6a205c44237b48"} Feb 03 12:10:40 crc kubenswrapper[4820]: I0203 12:10:40.088844 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-ff9dcc5bb-dw69v" event={"ID":"d3eeafbf-8728-42aa-9d35-6db4a3556524","Type":"ContainerStarted","Data":"54fa124180ec8a1e0246523e5f354f904835abf2ac67ea10c9503568083e1a70"} Feb 03 12:10:40 crc kubenswrapper[4820]: I0203 12:10:40.089109 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-ff9dcc5bb-dw69v" Feb 03 12:10:40 crc kubenswrapper[4820]: I0203 12:10:40.091436 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-54c7cdf5ff-22665" event={"ID":"0a5433a0-94fe-4da3-ad26-8c1d8d92ba40","Type":"ContainerStarted","Data":"cfb63909b3bbc2b11abc5faac309cff44319dc60178489ce3890e2c36cd01062"} Feb 03 12:10:40 crc kubenswrapper[4820]: I0203 12:10:40.091483 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-54c7cdf5ff-22665" event={"ID":"0a5433a0-94fe-4da3-ad26-8c1d8d92ba40","Type":"ContainerStarted","Data":"15182ca9744096ccbf9921cfd767ed331f43ba6c4c205365f2654cfccd6e7eae"} Feb 03 12:10:40 crc kubenswrapper[4820]: I0203 12:10:40.091632 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-54c7cdf5ff-22665" Feb 03 12:10:40 crc kubenswrapper[4820]: I0203 
12:10:40.094927 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-ff9dcc5bb-dw69v" Feb 03 12:10:40 crc kubenswrapper[4820]: I0203 12:10:40.096238 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-54c7cdf5ff-22665" Feb 03 12:10:40 crc kubenswrapper[4820]: I0203 12:10:40.109455 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-ff9dcc5bb-dw69v" podStartSLOduration=4.109440555 podStartE2EDuration="4.109440555s" podCreationTimestamp="2026-02-03 12:10:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:10:40.106338906 +0000 UTC m=+357.629414780" watchObservedRunningTime="2026-02-03 12:10:40.109440555 +0000 UTC m=+357.632516409" Feb 03 12:10:40 crc kubenswrapper[4820]: I0203 12:10:40.143104 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-54c7cdf5ff-22665" podStartSLOduration=4.143088706 podStartE2EDuration="4.143088706s" podCreationTimestamp="2026-02-03 12:10:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:10:40.140377974 +0000 UTC m=+357.663453838" watchObservedRunningTime="2026-02-03 12:10:40.143088706 +0000 UTC m=+357.666164570" Feb 03 12:10:44 crc kubenswrapper[4820]: I0203 12:10:44.511151 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Feb 03 12:10:48 crc kubenswrapper[4820]: I0203 12:10:48.143430 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Feb 03 12:10:48 crc kubenswrapper[4820]: I0203 12:10:48.147906 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Feb 03 12:10:49 crc kubenswrapper[4820]: I0203 12:10:49.259007 4820 scope.go:117] "RemoveContainer" containerID="dd898d17a538730e2cbb68e350ac8b3216294c7ad01ab42718b1660a024fe87f" Feb 03 12:10:49 crc kubenswrapper[4820]: E0203 12:10:49.259439 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"download-server\" with CrashLoopBackOff: \"back-off 40s restarting failed container=download-server pod=downloads-7954f5f757-lnc22_openshift-console(876c5dc3-b775-45cc-94b6-4339735e9975)\"" pod="openshift-console/downloads-7954f5f757-lnc22" podUID="876c5dc3-b775-45cc-94b6-4339735e9975" Feb 03 12:10:49 crc kubenswrapper[4820]: I0203 12:10:49.569400 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 03 12:10:49 crc kubenswrapper[4820]: I0203 12:10:49.625680 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-54c7cdf5ff-22665"] Feb 03 12:10:49 crc kubenswrapper[4820]: I0203 12:10:49.626012 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-54c7cdf5ff-22665" podUID="0a5433a0-94fe-4da3-ad26-8c1d8d92ba40" containerName="controller-manager" containerID="cri-o://cfb63909b3bbc2b11abc5faac309cff44319dc60178489ce3890e2c36cd01062" gracePeriod=30 Feb 03 12:10:49 crc 
kubenswrapper[4820]: I0203 12:10:49.646144 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-ff9dcc5bb-dw69v"] Feb 03 12:10:49 crc kubenswrapper[4820]: I0203 12:10:49.646438 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-ff9dcc5bb-dw69v" podUID="d3eeafbf-8728-42aa-9d35-6db4a3556524" containerName="route-controller-manager" containerID="cri-o://0f85efa606b4a013e33077a77790a972776a7a7b0e53a6001b6a205c44237b48" gracePeriod=30 Feb 03 12:10:50 crc kubenswrapper[4820]: I0203 12:10:50.375967 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-54c7cdf5ff-22665" Feb 03 12:10:50 crc kubenswrapper[4820]: I0203 12:10:50.377238 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-ff9dcc5bb-dw69v" Feb 03 12:10:50 crc kubenswrapper[4820]: I0203 12:10:50.377353 4820 generic.go:334] "Generic (PLEG): container finished" podID="d3eeafbf-8728-42aa-9d35-6db4a3556524" containerID="0f85efa606b4a013e33077a77790a972776a7a7b0e53a6001b6a205c44237b48" exitCode=0 Feb 03 12:10:50 crc kubenswrapper[4820]: I0203 12:10:50.377391 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-ff9dcc5bb-dw69v" event={"ID":"d3eeafbf-8728-42aa-9d35-6db4a3556524","Type":"ContainerDied","Data":"0f85efa606b4a013e33077a77790a972776a7a7b0e53a6001b6a205c44237b48"} Feb 03 12:10:50 crc kubenswrapper[4820]: I0203 12:10:50.377414 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-ff9dcc5bb-dw69v" event={"ID":"d3eeafbf-8728-42aa-9d35-6db4a3556524","Type":"ContainerDied","Data":"54fa124180ec8a1e0246523e5f354f904835abf2ac67ea10c9503568083e1a70"} Feb 03 12:10:50 crc kubenswrapper[4820]: I0203 12:10:50.377429 4820 scope.go:117] "RemoveContainer" containerID="0f85efa606b4a013e33077a77790a972776a7a7b0e53a6001b6a205c44237b48" Feb 03 12:10:50 crc kubenswrapper[4820]: I0203 12:10:50.379230 4820 generic.go:334] "Generic (PLEG): container finished" podID="0a5433a0-94fe-4da3-ad26-8c1d8d92ba40" containerID="cfb63909b3bbc2b11abc5faac309cff44319dc60178489ce3890e2c36cd01062" exitCode=0 Feb 03 12:10:50 crc kubenswrapper[4820]: I0203 12:10:50.379267 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-54c7cdf5ff-22665" event={"ID":"0a5433a0-94fe-4da3-ad26-8c1d8d92ba40","Type":"ContainerDied","Data":"cfb63909b3bbc2b11abc5faac309cff44319dc60178489ce3890e2c36cd01062"} Feb 03 12:10:50 crc kubenswrapper[4820]: I0203 12:10:50.379286 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-54c7cdf5ff-22665" event={"ID":"0a5433a0-94fe-4da3-ad26-8c1d8d92ba40","Type":"ContainerDied","Data":"15182ca9744096ccbf9921cfd767ed331f43ba6c4c205365f2654cfccd6e7eae"} Feb 03 12:10:50 crc kubenswrapper[4820]: I0203 12:10:50.379323 4820 util.go:48] "No ready sandbox for pod can be found. 
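
gracePeriod=30 in the "Killing container with a grace period" entries means the runtime delivers SIGTERM and escalates to SIGKILL only if the process outlives the 30-second window; the exitCode=0 results just above show both managers shut down cleanly well within it. A Unix-only illustration of the escalation pattern (hypothetical PID; CRI-O performs this for real):

    package main

    import (
        "fmt"
        "os"
        "syscall"
        "time"
    )

    // stopWithGrace sends SIGTERM to pid, waits up to grace for it to exit,
    // then escalates to SIGKILL. Illustrative only.
    func stopWithGrace(pid int, grace time.Duration) error {
        proc, err := os.FindProcess(pid)
        if err != nil {
            return err
        }
        if err := proc.Signal(syscall.SIGTERM); err != nil {
            return err
        }
        deadline := time.Now().Add(grace)
        for time.Now().Before(deadline) {
            // Signal 0 probes for existence without delivering a signal.
            if err := proc.Signal(syscall.Signal(0)); err != nil {
                return nil // process is gone: clean shutdown
            }
            time.Sleep(200 * time.Millisecond)
        }
        fmt.Println("grace period expired, escalating")
        return proc.Signal(syscall.SIGKILL)
    }

    func main() {
        _ = stopWithGrace(12345, 30*time.Second) // hypothetical PID
    }
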
Need to start a new one" pod="openshift-controller-manager/controller-manager-54c7cdf5ff-22665" Feb 03 12:10:50 crc kubenswrapper[4820]: I0203 12:10:50.403421 4820 scope.go:117] "RemoveContainer" containerID="0f85efa606b4a013e33077a77790a972776a7a7b0e53a6001b6a205c44237b48" Feb 03 12:10:50 crc kubenswrapper[4820]: E0203 12:10:50.403801 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0f85efa606b4a013e33077a77790a972776a7a7b0e53a6001b6a205c44237b48\": container with ID starting with 0f85efa606b4a013e33077a77790a972776a7a7b0e53a6001b6a205c44237b48 not found: ID does not exist" containerID="0f85efa606b4a013e33077a77790a972776a7a7b0e53a6001b6a205c44237b48" Feb 03 12:10:50 crc kubenswrapper[4820]: I0203 12:10:50.403837 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f85efa606b4a013e33077a77790a972776a7a7b0e53a6001b6a205c44237b48"} err="failed to get container status \"0f85efa606b4a013e33077a77790a972776a7a7b0e53a6001b6a205c44237b48\": rpc error: code = NotFound desc = could not find container \"0f85efa606b4a013e33077a77790a972776a7a7b0e53a6001b6a205c44237b48\": container with ID starting with 0f85efa606b4a013e33077a77790a972776a7a7b0e53a6001b6a205c44237b48 not found: ID does not exist" Feb 03 12:10:50 crc kubenswrapper[4820]: I0203 12:10:50.403862 4820 scope.go:117] "RemoveContainer" containerID="cfb63909b3bbc2b11abc5faac309cff44319dc60178489ce3890e2c36cd01062" Feb 03 12:10:50 crc kubenswrapper[4820]: I0203 12:10:50.424593 4820 scope.go:117] "RemoveContainer" containerID="cfb63909b3bbc2b11abc5faac309cff44319dc60178489ce3890e2c36cd01062" Feb 03 12:10:50 crc kubenswrapper[4820]: E0203 12:10:50.425909 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cfb63909b3bbc2b11abc5faac309cff44319dc60178489ce3890e2c36cd01062\": container with ID starting with cfb63909b3bbc2b11abc5faac309cff44319dc60178489ce3890e2c36cd01062 not found: ID does not exist" containerID="cfb63909b3bbc2b11abc5faac309cff44319dc60178489ce3890e2c36cd01062" Feb 03 12:10:50 crc kubenswrapper[4820]: I0203 12:10:50.425964 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cfb63909b3bbc2b11abc5faac309cff44319dc60178489ce3890e2c36cd01062"} err="failed to get container status \"cfb63909b3bbc2b11abc5faac309cff44319dc60178489ce3890e2c36cd01062\": rpc error: code = NotFound desc = could not find container \"cfb63909b3bbc2b11abc5faac309cff44319dc60178489ce3890e2c36cd01062\": container with ID starting with cfb63909b3bbc2b11abc5faac309cff44319dc60178489ce3890e2c36cd01062 not found: ID does not exist" Feb 03 12:10:50 crc kubenswrapper[4820]: I0203 12:10:50.464725 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jw6g5\" (UniqueName: \"kubernetes.io/projected/0a5433a0-94fe-4da3-ad26-8c1d8d92ba40-kube-api-access-jw6g5\") pod \"0a5433a0-94fe-4da3-ad26-8c1d8d92ba40\" (UID: \"0a5433a0-94fe-4da3-ad26-8c1d8d92ba40\") " Feb 03 12:10:50 crc kubenswrapper[4820]: I0203 12:10:50.464789 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0a5433a0-94fe-4da3-ad26-8c1d8d92ba40-serving-cert\") pod \"0a5433a0-94fe-4da3-ad26-8c1d8d92ba40\" (UID: \"0a5433a0-94fe-4da3-ad26-8c1d8d92ba40\") " Feb 03 12:10:50 crc kubenswrapper[4820]: I0203 12:10:50.464851 4820 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d3eeafbf-8728-42aa-9d35-6db4a3556524-config\") pod \"d3eeafbf-8728-42aa-9d35-6db4a3556524\" (UID: \"d3eeafbf-8728-42aa-9d35-6db4a3556524\") " Feb 03 12:10:50 crc kubenswrapper[4820]: I0203 12:10:50.464868 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0a5433a0-94fe-4da3-ad26-8c1d8d92ba40-proxy-ca-bundles\") pod \"0a5433a0-94fe-4da3-ad26-8c1d8d92ba40\" (UID: \"0a5433a0-94fe-4da3-ad26-8c1d8d92ba40\") " Feb 03 12:10:50 crc kubenswrapper[4820]: I0203 12:10:50.464901 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0a5433a0-94fe-4da3-ad26-8c1d8d92ba40-config\") pod \"0a5433a0-94fe-4da3-ad26-8c1d8d92ba40\" (UID: \"0a5433a0-94fe-4da3-ad26-8c1d8d92ba40\") " Feb 03 12:10:50 crc kubenswrapper[4820]: I0203 12:10:50.464935 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d3eeafbf-8728-42aa-9d35-6db4a3556524-serving-cert\") pod \"d3eeafbf-8728-42aa-9d35-6db4a3556524\" (UID: \"d3eeafbf-8728-42aa-9d35-6db4a3556524\") " Feb 03 12:10:50 crc kubenswrapper[4820]: I0203 12:10:50.464951 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bmdv7\" (UniqueName: \"kubernetes.io/projected/d3eeafbf-8728-42aa-9d35-6db4a3556524-kube-api-access-bmdv7\") pod \"d3eeafbf-8728-42aa-9d35-6db4a3556524\" (UID: \"d3eeafbf-8728-42aa-9d35-6db4a3556524\") " Feb 03 12:10:50 crc kubenswrapper[4820]: I0203 12:10:50.465011 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d3eeafbf-8728-42aa-9d35-6db4a3556524-client-ca\") pod \"d3eeafbf-8728-42aa-9d35-6db4a3556524\" (UID: \"d3eeafbf-8728-42aa-9d35-6db4a3556524\") " Feb 03 12:10:50 crc kubenswrapper[4820]: I0203 12:10:50.465031 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0a5433a0-94fe-4da3-ad26-8c1d8d92ba40-client-ca\") pod \"0a5433a0-94fe-4da3-ad26-8c1d8d92ba40\" (UID: \"0a5433a0-94fe-4da3-ad26-8c1d8d92ba40\") " Feb 03 12:10:50 crc kubenswrapper[4820]: I0203 12:10:50.465550 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0a5433a0-94fe-4da3-ad26-8c1d8d92ba40-client-ca" (OuterVolumeSpecName: "client-ca") pod "0a5433a0-94fe-4da3-ad26-8c1d8d92ba40" (UID: "0a5433a0-94fe-4da3-ad26-8c1d8d92ba40"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:10:50 crc kubenswrapper[4820]: I0203 12:10:50.465713 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0a5433a0-94fe-4da3-ad26-8c1d8d92ba40-config" (OuterVolumeSpecName: "config") pod "0a5433a0-94fe-4da3-ad26-8c1d8d92ba40" (UID: "0a5433a0-94fe-4da3-ad26-8c1d8d92ba40"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:10:50 crc kubenswrapper[4820]: I0203 12:10:50.466225 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d3eeafbf-8728-42aa-9d35-6db4a3556524-client-ca" (OuterVolumeSpecName: "client-ca") pod "d3eeafbf-8728-42aa-9d35-6db4a3556524" (UID: "d3eeafbf-8728-42aa-9d35-6db4a3556524"). 
InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:10:50 crc kubenswrapper[4820]: I0203 12:10:50.466237 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d3eeafbf-8728-42aa-9d35-6db4a3556524-config" (OuterVolumeSpecName: "config") pod "d3eeafbf-8728-42aa-9d35-6db4a3556524" (UID: "d3eeafbf-8728-42aa-9d35-6db4a3556524"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:10:50 crc kubenswrapper[4820]: I0203 12:10:50.466498 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0a5433a0-94fe-4da3-ad26-8c1d8d92ba40-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "0a5433a0-94fe-4da3-ad26-8c1d8d92ba40" (UID: "0a5433a0-94fe-4da3-ad26-8c1d8d92ba40"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:10:50 crc kubenswrapper[4820]: I0203 12:10:50.471660 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3eeafbf-8728-42aa-9d35-6db4a3556524-kube-api-access-bmdv7" (OuterVolumeSpecName: "kube-api-access-bmdv7") pod "d3eeafbf-8728-42aa-9d35-6db4a3556524" (UID: "d3eeafbf-8728-42aa-9d35-6db4a3556524"). InnerVolumeSpecName "kube-api-access-bmdv7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:10:50 crc kubenswrapper[4820]: I0203 12:10:50.472144 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a5433a0-94fe-4da3-ad26-8c1d8d92ba40-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0a5433a0-94fe-4da3-ad26-8c1d8d92ba40" (UID: "0a5433a0-94fe-4da3-ad26-8c1d8d92ba40"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:10:50 crc kubenswrapper[4820]: I0203 12:10:50.475579 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3eeafbf-8728-42aa-9d35-6db4a3556524-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d3eeafbf-8728-42aa-9d35-6db4a3556524" (UID: "d3eeafbf-8728-42aa-9d35-6db4a3556524"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:10:50 crc kubenswrapper[4820]: I0203 12:10:50.478795 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a5433a0-94fe-4da3-ad26-8c1d8d92ba40-kube-api-access-jw6g5" (OuterVolumeSpecName: "kube-api-access-jw6g5") pod "0a5433a0-94fe-4da3-ad26-8c1d8d92ba40" (UID: "0a5433a0-94fe-4da3-ad26-8c1d8d92ba40"). InnerVolumeSpecName "kube-api-access-jw6g5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:10:50 crc kubenswrapper[4820]: I0203 12:10:50.565630 4820 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d3eeafbf-8728-42aa-9d35-6db4a3556524-config\") on node \"crc\" DevicePath \"\"" Feb 03 12:10:50 crc kubenswrapper[4820]: I0203 12:10:50.565661 4820 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0a5433a0-94fe-4da3-ad26-8c1d8d92ba40-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 03 12:10:50 crc kubenswrapper[4820]: I0203 12:10:50.565673 4820 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0a5433a0-94fe-4da3-ad26-8c1d8d92ba40-config\") on node \"crc\" DevicePath \"\"" Feb 03 12:10:50 crc kubenswrapper[4820]: I0203 12:10:50.565681 4820 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d3eeafbf-8728-42aa-9d35-6db4a3556524-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 03 12:10:50 crc kubenswrapper[4820]: I0203 12:10:50.565693 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bmdv7\" (UniqueName: \"kubernetes.io/projected/d3eeafbf-8728-42aa-9d35-6db4a3556524-kube-api-access-bmdv7\") on node \"crc\" DevicePath \"\"" Feb 03 12:10:50 crc kubenswrapper[4820]: I0203 12:10:50.565701 4820 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d3eeafbf-8728-42aa-9d35-6db4a3556524-client-ca\") on node \"crc\" DevicePath \"\"" Feb 03 12:10:50 crc kubenswrapper[4820]: I0203 12:10:50.565710 4820 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0a5433a0-94fe-4da3-ad26-8c1d8d92ba40-client-ca\") on node \"crc\" DevicePath \"\"" Feb 03 12:10:50 crc kubenswrapper[4820]: I0203 12:10:50.565719 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jw6g5\" (UniqueName: \"kubernetes.io/projected/0a5433a0-94fe-4da3-ad26-8c1d8d92ba40-kube-api-access-jw6g5\") on node \"crc\" DevicePath \"\"" Feb 03 12:10:50 crc kubenswrapper[4820]: I0203 12:10:50.565726 4820 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0a5433a0-94fe-4da3-ad26-8c1d8d92ba40-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 03 12:10:50 crc kubenswrapper[4820]: I0203 12:10:50.710699 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-54c7cdf5ff-22665"] Feb 03 12:10:50 crc kubenswrapper[4820]: I0203 12:10:50.716139 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-54c7cdf5ff-22665"] Feb 03 12:10:51 crc kubenswrapper[4820]: I0203 12:10:51.152037 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0a5433a0-94fe-4da3-ad26-8c1d8d92ba40" path="/var/lib/kubelet/pods/0a5433a0-94fe-4da3-ad26-8c1d8d92ba40/volumes" Feb 03 12:10:51 crc kubenswrapper[4820]: I0203 12:10:51.227420 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-68d76787c5-vfjkg"] Feb 03 12:10:51 crc kubenswrapper[4820]: E0203 12:10:51.227696 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3eeafbf-8728-42aa-9d35-6db4a3556524" containerName="route-controller-manager" Feb 03 12:10:51 crc kubenswrapper[4820]: I0203 12:10:51.227709 4820 
state_mem.go:107] "Deleted CPUSet assignment" podUID="d3eeafbf-8728-42aa-9d35-6db4a3556524" containerName="route-controller-manager" Feb 03 12:10:51 crc kubenswrapper[4820]: E0203 12:10:51.227726 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a5433a0-94fe-4da3-ad26-8c1d8d92ba40" containerName="controller-manager" Feb 03 12:10:51 crc kubenswrapper[4820]: I0203 12:10:51.227734 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a5433a0-94fe-4da3-ad26-8c1d8d92ba40" containerName="controller-manager" Feb 03 12:10:51 crc kubenswrapper[4820]: I0203 12:10:51.227834 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a5433a0-94fe-4da3-ad26-8c1d8d92ba40" containerName="controller-manager" Feb 03 12:10:51 crc kubenswrapper[4820]: I0203 12:10:51.227843 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3eeafbf-8728-42aa-9d35-6db4a3556524" containerName="route-controller-manager" Feb 03 12:10:51 crc kubenswrapper[4820]: I0203 12:10:51.228257 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-68d76787c5-vfjkg" Feb 03 12:10:51 crc kubenswrapper[4820]: I0203 12:10:51.230956 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7c49665d77-7g6k9"] Feb 03 12:10:51 crc kubenswrapper[4820]: I0203 12:10:51.232294 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7c49665d77-7g6k9" Feb 03 12:10:51 crc kubenswrapper[4820]: I0203 12:10:51.237634 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 03 12:10:51 crc kubenswrapper[4820]: I0203 12:10:51.237868 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 03 12:10:51 crc kubenswrapper[4820]: I0203 12:10:51.238044 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 03 12:10:51 crc kubenswrapper[4820]: I0203 12:10:51.239109 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 03 12:10:51 crc kubenswrapper[4820]: I0203 12:10:51.240284 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 03 12:10:51 crc kubenswrapper[4820]: I0203 12:10:51.240469 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 03 12:10:51 crc kubenswrapper[4820]: I0203 12:10:51.243908 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-68d76787c5-vfjkg"] Feb 03 12:10:51 crc kubenswrapper[4820]: I0203 12:10:51.251393 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7c49665d77-7g6k9"] Feb 03 12:10:51 crc kubenswrapper[4820]: I0203 12:10:51.259218 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 03 12:10:51 crc kubenswrapper[4820]: I0203 12:10:51.279034 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eaaf0fe9-db47-4119-8b3f-70fcaf1c426a-serving-cert\") pod 
\"route-controller-manager-68d76787c5-vfjkg\" (UID: \"eaaf0fe9-db47-4119-8b3f-70fcaf1c426a\") " pod="openshift-route-controller-manager/route-controller-manager-68d76787c5-vfjkg" Feb 03 12:10:51 crc kubenswrapper[4820]: I0203 12:10:51.279087 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d8ec3df1-d558-4411-85b6-c4fa5ce13ec5-proxy-ca-bundles\") pod \"controller-manager-7c49665d77-7g6k9\" (UID: \"d8ec3df1-d558-4411-85b6-c4fa5ce13ec5\") " pod="openshift-controller-manager/controller-manager-7c49665d77-7g6k9" Feb 03 12:10:51 crc kubenswrapper[4820]: I0203 12:10:51.279131 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d8ec3df1-d558-4411-85b6-c4fa5ce13ec5-serving-cert\") pod \"controller-manager-7c49665d77-7g6k9\" (UID: \"d8ec3df1-d558-4411-85b6-c4fa5ce13ec5\") " pod="openshift-controller-manager/controller-manager-7c49665d77-7g6k9" Feb 03 12:10:51 crc kubenswrapper[4820]: I0203 12:10:51.279151 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jrljk\" (UniqueName: \"kubernetes.io/projected/d8ec3df1-d558-4411-85b6-c4fa5ce13ec5-kube-api-access-jrljk\") pod \"controller-manager-7c49665d77-7g6k9\" (UID: \"d8ec3df1-d558-4411-85b6-c4fa5ce13ec5\") " pod="openshift-controller-manager/controller-manager-7c49665d77-7g6k9" Feb 03 12:10:51 crc kubenswrapper[4820]: I0203 12:10:51.279178 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/eaaf0fe9-db47-4119-8b3f-70fcaf1c426a-client-ca\") pod \"route-controller-manager-68d76787c5-vfjkg\" (UID: \"eaaf0fe9-db47-4119-8b3f-70fcaf1c426a\") " pod="openshift-route-controller-manager/route-controller-manager-68d76787c5-vfjkg" Feb 03 12:10:51 crc kubenswrapper[4820]: I0203 12:10:51.279208 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eaaf0fe9-db47-4119-8b3f-70fcaf1c426a-config\") pod \"route-controller-manager-68d76787c5-vfjkg\" (UID: \"eaaf0fe9-db47-4119-8b3f-70fcaf1c426a\") " pod="openshift-route-controller-manager/route-controller-manager-68d76787c5-vfjkg" Feb 03 12:10:51 crc kubenswrapper[4820]: I0203 12:10:51.284073 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qw6mn\" (UniqueName: \"kubernetes.io/projected/eaaf0fe9-db47-4119-8b3f-70fcaf1c426a-kube-api-access-qw6mn\") pod \"route-controller-manager-68d76787c5-vfjkg\" (UID: \"eaaf0fe9-db47-4119-8b3f-70fcaf1c426a\") " pod="openshift-route-controller-manager/route-controller-manager-68d76787c5-vfjkg" Feb 03 12:10:51 crc kubenswrapper[4820]: I0203 12:10:51.284191 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d8ec3df1-d558-4411-85b6-c4fa5ce13ec5-config\") pod \"controller-manager-7c49665d77-7g6k9\" (UID: \"d8ec3df1-d558-4411-85b6-c4fa5ce13ec5\") " pod="openshift-controller-manager/controller-manager-7c49665d77-7g6k9" Feb 03 12:10:51 crc kubenswrapper[4820]: I0203 12:10:51.284242 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/d8ec3df1-d558-4411-85b6-c4fa5ce13ec5-client-ca\") pod \"controller-manager-7c49665d77-7g6k9\" (UID: \"d8ec3df1-d558-4411-85b6-c4fa5ce13ec5\") " pod="openshift-controller-manager/controller-manager-7c49665d77-7g6k9" Feb 03 12:10:51 crc kubenswrapper[4820]: I0203 12:10:51.386000 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-ff9dcc5bb-dw69v" Feb 03 12:10:51 crc kubenswrapper[4820]: I0203 12:10:51.386574 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qw6mn\" (UniqueName: \"kubernetes.io/projected/eaaf0fe9-db47-4119-8b3f-70fcaf1c426a-kube-api-access-qw6mn\") pod \"route-controller-manager-68d76787c5-vfjkg\" (UID: \"eaaf0fe9-db47-4119-8b3f-70fcaf1c426a\") " pod="openshift-route-controller-manager/route-controller-manager-68d76787c5-vfjkg" Feb 03 12:10:51 crc kubenswrapper[4820]: I0203 12:10:51.386635 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d8ec3df1-d558-4411-85b6-c4fa5ce13ec5-config\") pod \"controller-manager-7c49665d77-7g6k9\" (UID: \"d8ec3df1-d558-4411-85b6-c4fa5ce13ec5\") " pod="openshift-controller-manager/controller-manager-7c49665d77-7g6k9" Feb 03 12:10:51 crc kubenswrapper[4820]: I0203 12:10:51.386666 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d8ec3df1-d558-4411-85b6-c4fa5ce13ec5-client-ca\") pod \"controller-manager-7c49665d77-7g6k9\" (UID: \"d8ec3df1-d558-4411-85b6-c4fa5ce13ec5\") " pod="openshift-controller-manager/controller-manager-7c49665d77-7g6k9" Feb 03 12:10:51 crc kubenswrapper[4820]: I0203 12:10:51.386692 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eaaf0fe9-db47-4119-8b3f-70fcaf1c426a-serving-cert\") pod \"route-controller-manager-68d76787c5-vfjkg\" (UID: \"eaaf0fe9-db47-4119-8b3f-70fcaf1c426a\") " pod="openshift-route-controller-manager/route-controller-manager-68d76787c5-vfjkg" Feb 03 12:10:51 crc kubenswrapper[4820]: I0203 12:10:51.386712 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d8ec3df1-d558-4411-85b6-c4fa5ce13ec5-proxy-ca-bundles\") pod \"controller-manager-7c49665d77-7g6k9\" (UID: \"d8ec3df1-d558-4411-85b6-c4fa5ce13ec5\") " pod="openshift-controller-manager/controller-manager-7c49665d77-7g6k9" Feb 03 12:10:51 crc kubenswrapper[4820]: I0203 12:10:51.386743 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d8ec3df1-d558-4411-85b6-c4fa5ce13ec5-serving-cert\") pod \"controller-manager-7c49665d77-7g6k9\" (UID: \"d8ec3df1-d558-4411-85b6-c4fa5ce13ec5\") " pod="openshift-controller-manager/controller-manager-7c49665d77-7g6k9" Feb 03 12:10:51 crc kubenswrapper[4820]: I0203 12:10:51.386759 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jrljk\" (UniqueName: \"kubernetes.io/projected/d8ec3df1-d558-4411-85b6-c4fa5ce13ec5-kube-api-access-jrljk\") pod \"controller-manager-7c49665d77-7g6k9\" (UID: \"d8ec3df1-d558-4411-85b6-c4fa5ce13ec5\") " pod="openshift-controller-manager/controller-manager-7c49665d77-7g6k9" Feb 03 12:10:51 crc kubenswrapper[4820]: I0203 12:10:51.386782 4820 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/eaaf0fe9-db47-4119-8b3f-70fcaf1c426a-client-ca\") pod \"route-controller-manager-68d76787c5-vfjkg\" (UID: \"eaaf0fe9-db47-4119-8b3f-70fcaf1c426a\") " pod="openshift-route-controller-manager/route-controller-manager-68d76787c5-vfjkg" Feb 03 12:10:51 crc kubenswrapper[4820]: I0203 12:10:51.386814 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eaaf0fe9-db47-4119-8b3f-70fcaf1c426a-config\") pod \"route-controller-manager-68d76787c5-vfjkg\" (UID: \"eaaf0fe9-db47-4119-8b3f-70fcaf1c426a\") " pod="openshift-route-controller-manager/route-controller-manager-68d76787c5-vfjkg" Feb 03 12:10:51 crc kubenswrapper[4820]: I0203 12:10:51.388173 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eaaf0fe9-db47-4119-8b3f-70fcaf1c426a-config\") pod \"route-controller-manager-68d76787c5-vfjkg\" (UID: \"eaaf0fe9-db47-4119-8b3f-70fcaf1c426a\") " pod="openshift-route-controller-manager/route-controller-manager-68d76787c5-vfjkg" Feb 03 12:10:51 crc kubenswrapper[4820]: I0203 12:10:51.388215 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d8ec3df1-d558-4411-85b6-c4fa5ce13ec5-proxy-ca-bundles\") pod \"controller-manager-7c49665d77-7g6k9\" (UID: \"d8ec3df1-d558-4411-85b6-c4fa5ce13ec5\") " pod="openshift-controller-manager/controller-manager-7c49665d77-7g6k9" Feb 03 12:10:51 crc kubenswrapper[4820]: I0203 12:10:51.388233 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d8ec3df1-d558-4411-85b6-c4fa5ce13ec5-client-ca\") pod \"controller-manager-7c49665d77-7g6k9\" (UID: \"d8ec3df1-d558-4411-85b6-c4fa5ce13ec5\") " pod="openshift-controller-manager/controller-manager-7c49665d77-7g6k9" Feb 03 12:10:51 crc kubenswrapper[4820]: I0203 12:10:51.388226 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d8ec3df1-d558-4411-85b6-c4fa5ce13ec5-config\") pod \"controller-manager-7c49665d77-7g6k9\" (UID: \"d8ec3df1-d558-4411-85b6-c4fa5ce13ec5\") " pod="openshift-controller-manager/controller-manager-7c49665d77-7g6k9" Feb 03 12:10:51 crc kubenswrapper[4820]: I0203 12:10:51.389427 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/eaaf0fe9-db47-4119-8b3f-70fcaf1c426a-client-ca\") pod \"route-controller-manager-68d76787c5-vfjkg\" (UID: \"eaaf0fe9-db47-4119-8b3f-70fcaf1c426a\") " pod="openshift-route-controller-manager/route-controller-manager-68d76787c5-vfjkg" Feb 03 12:10:51 crc kubenswrapper[4820]: I0203 12:10:51.401059 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d8ec3df1-d558-4411-85b6-c4fa5ce13ec5-serving-cert\") pod \"controller-manager-7c49665d77-7g6k9\" (UID: \"d8ec3df1-d558-4411-85b6-c4fa5ce13ec5\") " pod="openshift-controller-manager/controller-manager-7c49665d77-7g6k9" Feb 03 12:10:51 crc kubenswrapper[4820]: I0203 12:10:51.401216 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eaaf0fe9-db47-4119-8b3f-70fcaf1c426a-serving-cert\") pod \"route-controller-manager-68d76787c5-vfjkg\" (UID: 
\"eaaf0fe9-db47-4119-8b3f-70fcaf1c426a\") " pod="openshift-route-controller-manager/route-controller-manager-68d76787c5-vfjkg" Feb 03 12:10:51 crc kubenswrapper[4820]: I0203 12:10:51.403713 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qw6mn\" (UniqueName: \"kubernetes.io/projected/eaaf0fe9-db47-4119-8b3f-70fcaf1c426a-kube-api-access-qw6mn\") pod \"route-controller-manager-68d76787c5-vfjkg\" (UID: \"eaaf0fe9-db47-4119-8b3f-70fcaf1c426a\") " pod="openshift-route-controller-manager/route-controller-manager-68d76787c5-vfjkg" Feb 03 12:10:51 crc kubenswrapper[4820]: I0203 12:10:51.405795 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jrljk\" (UniqueName: \"kubernetes.io/projected/d8ec3df1-d558-4411-85b6-c4fa5ce13ec5-kube-api-access-jrljk\") pod \"controller-manager-7c49665d77-7g6k9\" (UID: \"d8ec3df1-d558-4411-85b6-c4fa5ce13ec5\") " pod="openshift-controller-manager/controller-manager-7c49665d77-7g6k9" Feb 03 12:10:51 crc kubenswrapper[4820]: I0203 12:10:51.443708 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-ff9dcc5bb-dw69v"] Feb 03 12:10:51 crc kubenswrapper[4820]: I0203 12:10:51.449859 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-ff9dcc5bb-dw69v"] Feb 03 12:10:51 crc kubenswrapper[4820]: I0203 12:10:51.557175 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-68d76787c5-vfjkg" Feb 03 12:10:51 crc kubenswrapper[4820]: I0203 12:10:51.568975 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7c49665d77-7g6k9" Feb 03 12:10:51 crc kubenswrapper[4820]: I0203 12:10:51.944005 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7c49665d77-7g6k9"] Feb 03 12:10:52 crc kubenswrapper[4820]: I0203 12:10:52.022401 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-68d76787c5-vfjkg"] Feb 03 12:10:52 crc kubenswrapper[4820]: W0203 12:10:52.034419 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeaaf0fe9_db47_4119_8b3f_70fcaf1c426a.slice/crio-b3476d44f06558f7448923df3acffdca447820f0188561e2dd48b033f7073871 WatchSource:0}: Error finding container b3476d44f06558f7448923df3acffdca447820f0188561e2dd48b033f7073871: Status 404 returned error can't find the container with id b3476d44f06558f7448923df3acffdca447820f0188561e2dd48b033f7073871 Feb 03 12:10:52 crc kubenswrapper[4820]: I0203 12:10:52.412899 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7c49665d77-7g6k9" event={"ID":"d8ec3df1-d558-4411-85b6-c4fa5ce13ec5","Type":"ContainerStarted","Data":"9872473f98652b7643da70e365bb6953a6513bb2e8c205ff30e32a24623acb97"} Feb 03 12:10:52 crc kubenswrapper[4820]: I0203 12:10:52.413344 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7c49665d77-7g6k9" event={"ID":"d8ec3df1-d558-4411-85b6-c4fa5ce13ec5","Type":"ContainerStarted","Data":"0655223bf63b4174f324c7a93f0338dd49f7a6788d1755d353308aea8fffc370"} Feb 03 12:10:52 crc kubenswrapper[4820]: I0203 12:10:52.415558 4820 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-route-controller-manager/route-controller-manager-68d76787c5-vfjkg" event={"ID":"eaaf0fe9-db47-4119-8b3f-70fcaf1c426a","Type":"ContainerStarted","Data":"ac7dd837dfe095ab7689649073514e22f6c28adf1f3278b7df058de41bd110d6"} Feb 03 12:10:52 crc kubenswrapper[4820]: I0203 12:10:52.415731 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-68d76787c5-vfjkg" event={"ID":"eaaf0fe9-db47-4119-8b3f-70fcaf1c426a","Type":"ContainerStarted","Data":"b3476d44f06558f7448923df3acffdca447820f0188561e2dd48b033f7073871"} Feb 03 12:10:52 crc kubenswrapper[4820]: I0203 12:10:52.416453 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-68d76787c5-vfjkg" Feb 03 12:10:52 crc kubenswrapper[4820]: I0203 12:10:52.448078 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-7c49665d77-7g6k9" podStartSLOduration=3.448057746 podStartE2EDuration="3.448057746s" podCreationTimestamp="2026-02-03 12:10:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:10:52.442847672 +0000 UTC m=+369.965923546" watchObservedRunningTime="2026-02-03 12:10:52.448057746 +0000 UTC m=+369.971133610" Feb 03 12:10:52 crc kubenswrapper[4820]: I0203 12:10:52.467911 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-68d76787c5-vfjkg" podStartSLOduration=3.46786389 podStartE2EDuration="3.46786389s" podCreationTimestamp="2026-02-03 12:10:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:10:52.46249088 +0000 UTC m=+369.985566754" watchObservedRunningTime="2026-02-03 12:10:52.46786389 +0000 UTC m=+369.990939774" Feb 03 12:10:52 crc kubenswrapper[4820]: I0203 12:10:52.681469 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-68d76787c5-vfjkg" Feb 03 12:10:53 crc kubenswrapper[4820]: I0203 12:10:53.153845 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d3eeafbf-8728-42aa-9d35-6db4a3556524" path="/var/lib/kubelet/pods/d3eeafbf-8728-42aa-9d35-6db4a3556524/volumes" Feb 03 12:10:53 crc kubenswrapper[4820]: I0203 12:10:53.422184 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7c49665d77-7g6k9" Feb 03 12:10:53 crc kubenswrapper[4820]: I0203 12:10:53.426679 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7c49665d77-7g6k9" Feb 03 12:11:01 crc kubenswrapper[4820]: I0203 12:11:01.143046 4820 scope.go:117] "RemoveContainer" containerID="dd898d17a538730e2cbb68e350ac8b3216294c7ad01ab42718b1660a024fe87f" Feb 03 12:11:01 crc kubenswrapper[4820]: I0203 12:11:01.962779 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-lnc22" event={"ID":"876c5dc3-b775-45cc-94b6-4339735e9975","Type":"ContainerStarted","Data":"5f3e3aac7ab023889b0682efb994656a220ca42014ed693e3b0753fb38a7459f"} Feb 03 12:11:01 crc kubenswrapper[4820]: I0203 12:11:01.963393 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-console/downloads-7954f5f757-lnc22" Feb 03 12:11:01 crc kubenswrapper[4820]: I0203 12:11:01.965217 4820 patch_prober.go:28] interesting pod/downloads-7954f5f757-lnc22 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Feb 03 12:11:01 crc kubenswrapper[4820]: I0203 12:11:01.965274 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-lnc22" podUID="876c5dc3-b775-45cc-94b6-4339735e9975" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Feb 03 12:11:02 crc kubenswrapper[4820]: I0203 12:11:02.971961 4820 patch_prober.go:28] interesting pod/downloads-7954f5f757-lnc22 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Feb 03 12:11:02 crc kubenswrapper[4820]: I0203 12:11:02.972039 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-lnc22" podUID="876c5dc3-b775-45cc-94b6-4339735e9975" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Feb 03 12:11:03 crc kubenswrapper[4820]: I0203 12:11:03.554192 4820 patch_prober.go:28] interesting pod/downloads-7954f5f757-lnc22 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Feb 03 12:11:03 crc kubenswrapper[4820]: I0203 12:11:03.554252 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-lnc22" podUID="876c5dc3-b775-45cc-94b6-4339735e9975" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Feb 03 12:11:03 crc kubenswrapper[4820]: I0203 12:11:03.554283 4820 patch_prober.go:28] interesting pod/downloads-7954f5f757-lnc22 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Feb 03 12:11:03 crc kubenswrapper[4820]: I0203 12:11:03.554382 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-lnc22" podUID="876c5dc3-b775-45cc-94b6-4339735e9975" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Feb 03 12:11:09 crc kubenswrapper[4820]: I0203 12:11:09.594693 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7c49665d77-7g6k9"] Feb 03 12:11:09 crc kubenswrapper[4820]: I0203 12:11:09.595738 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-7c49665d77-7g6k9" podUID="d8ec3df1-d558-4411-85b6-c4fa5ce13ec5" containerName="controller-manager" containerID="cri-o://9872473f98652b7643da70e365bb6953a6513bb2e8c205ff30e32a24623acb97" gracePeriod=30 Feb 03 12:11:09 crc kubenswrapper[4820]: I0203 12:11:09.689509 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-route-controller-manager/route-controller-manager-68d76787c5-vfjkg"] Feb 03 12:11:09 crc kubenswrapper[4820]: I0203 12:11:09.689786 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-68d76787c5-vfjkg" podUID="eaaf0fe9-db47-4119-8b3f-70fcaf1c426a" containerName="route-controller-manager" containerID="cri-o://ac7dd837dfe095ab7689649073514e22f6c28adf1f3278b7df058de41bd110d6" gracePeriod=30 Feb 03 12:11:10 crc kubenswrapper[4820]: I0203 12:11:10.024727 4820 generic.go:334] "Generic (PLEG): container finished" podID="eaaf0fe9-db47-4119-8b3f-70fcaf1c426a" containerID="ac7dd837dfe095ab7689649073514e22f6c28adf1f3278b7df058de41bd110d6" exitCode=0 Feb 03 12:11:10 crc kubenswrapper[4820]: I0203 12:11:10.024835 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-68d76787c5-vfjkg" event={"ID":"eaaf0fe9-db47-4119-8b3f-70fcaf1c426a","Type":"ContainerDied","Data":"ac7dd837dfe095ab7689649073514e22f6c28adf1f3278b7df058de41bd110d6"} Feb 03 12:11:10 crc kubenswrapper[4820]: I0203 12:11:10.026796 4820 generic.go:334] "Generic (PLEG): container finished" podID="d8ec3df1-d558-4411-85b6-c4fa5ce13ec5" containerID="9872473f98652b7643da70e365bb6953a6513bb2e8c205ff30e32a24623acb97" exitCode=0 Feb 03 12:11:10 crc kubenswrapper[4820]: I0203 12:11:10.026830 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7c49665d77-7g6k9" event={"ID":"d8ec3df1-d558-4411-85b6-c4fa5ce13ec5","Type":"ContainerDied","Data":"9872473f98652b7643da70e365bb6953a6513bb2e8c205ff30e32a24623acb97"} Feb 03 12:11:10 crc kubenswrapper[4820]: I0203 12:11:10.227235 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-68d76787c5-vfjkg" Feb 03 12:11:10 crc kubenswrapper[4820]: I0203 12:11:10.326514 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eaaf0fe9-db47-4119-8b3f-70fcaf1c426a-config\") pod \"eaaf0fe9-db47-4119-8b3f-70fcaf1c426a\" (UID: \"eaaf0fe9-db47-4119-8b3f-70fcaf1c426a\") " Feb 03 12:11:10 crc kubenswrapper[4820]: I0203 12:11:10.326596 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eaaf0fe9-db47-4119-8b3f-70fcaf1c426a-serving-cert\") pod \"eaaf0fe9-db47-4119-8b3f-70fcaf1c426a\" (UID: \"eaaf0fe9-db47-4119-8b3f-70fcaf1c426a\") " Feb 03 12:11:10 crc kubenswrapper[4820]: I0203 12:11:10.326660 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qw6mn\" (UniqueName: \"kubernetes.io/projected/eaaf0fe9-db47-4119-8b3f-70fcaf1c426a-kube-api-access-qw6mn\") pod \"eaaf0fe9-db47-4119-8b3f-70fcaf1c426a\" (UID: \"eaaf0fe9-db47-4119-8b3f-70fcaf1c426a\") " Feb 03 12:11:10 crc kubenswrapper[4820]: I0203 12:11:10.326727 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/eaaf0fe9-db47-4119-8b3f-70fcaf1c426a-client-ca\") pod \"eaaf0fe9-db47-4119-8b3f-70fcaf1c426a\" (UID: \"eaaf0fe9-db47-4119-8b3f-70fcaf1c426a\") " Feb 03 12:11:10 crc kubenswrapper[4820]: I0203 12:11:10.327509 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eaaf0fe9-db47-4119-8b3f-70fcaf1c426a-config" (OuterVolumeSpecName: "config") pod "eaaf0fe9-db47-4119-8b3f-70fcaf1c426a" (UID: "eaaf0fe9-db47-4119-8b3f-70fcaf1c426a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:11:10 crc kubenswrapper[4820]: I0203 12:11:10.328028 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eaaf0fe9-db47-4119-8b3f-70fcaf1c426a-client-ca" (OuterVolumeSpecName: "client-ca") pod "eaaf0fe9-db47-4119-8b3f-70fcaf1c426a" (UID: "eaaf0fe9-db47-4119-8b3f-70fcaf1c426a"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:11:10 crc kubenswrapper[4820]: I0203 12:11:10.328060 4820 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eaaf0fe9-db47-4119-8b3f-70fcaf1c426a-config\") on node \"crc\" DevicePath \"\"" Feb 03 12:11:10 crc kubenswrapper[4820]: I0203 12:11:10.331697 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eaaf0fe9-db47-4119-8b3f-70fcaf1c426a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "eaaf0fe9-db47-4119-8b3f-70fcaf1c426a" (UID: "eaaf0fe9-db47-4119-8b3f-70fcaf1c426a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:11:10 crc kubenswrapper[4820]: I0203 12:11:10.331799 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eaaf0fe9-db47-4119-8b3f-70fcaf1c426a-kube-api-access-qw6mn" (OuterVolumeSpecName: "kube-api-access-qw6mn") pod "eaaf0fe9-db47-4119-8b3f-70fcaf1c426a" (UID: "eaaf0fe9-db47-4119-8b3f-70fcaf1c426a"). InnerVolumeSpecName "kube-api-access-qw6mn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:11:10 crc kubenswrapper[4820]: I0203 12:11:10.428767 4820 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eaaf0fe9-db47-4119-8b3f-70fcaf1c426a-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 03 12:11:10 crc kubenswrapper[4820]: I0203 12:11:10.428807 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qw6mn\" (UniqueName: \"kubernetes.io/projected/eaaf0fe9-db47-4119-8b3f-70fcaf1c426a-kube-api-access-qw6mn\") on node \"crc\" DevicePath \"\"" Feb 03 12:11:10 crc kubenswrapper[4820]: I0203 12:11:10.428822 4820 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/eaaf0fe9-db47-4119-8b3f-70fcaf1c426a-client-ca\") on node \"crc\" DevicePath \"\"" Feb 03 12:11:10 crc kubenswrapper[4820]: I0203 12:11:10.702834 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7c49665d77-7g6k9" Feb 03 12:11:10 crc kubenswrapper[4820]: I0203 12:11:10.731240 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jrljk\" (UniqueName: \"kubernetes.io/projected/d8ec3df1-d558-4411-85b6-c4fa5ce13ec5-kube-api-access-jrljk\") pod \"d8ec3df1-d558-4411-85b6-c4fa5ce13ec5\" (UID: \"d8ec3df1-d558-4411-85b6-c4fa5ce13ec5\") " Feb 03 12:11:10 crc kubenswrapper[4820]: I0203 12:11:10.731319 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d8ec3df1-d558-4411-85b6-c4fa5ce13ec5-config\") pod \"d8ec3df1-d558-4411-85b6-c4fa5ce13ec5\" (UID: \"d8ec3df1-d558-4411-85b6-c4fa5ce13ec5\") " Feb 03 12:11:10 crc kubenswrapper[4820]: I0203 12:11:10.731396 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d8ec3df1-d558-4411-85b6-c4fa5ce13ec5-client-ca\") pod \"d8ec3df1-d558-4411-85b6-c4fa5ce13ec5\" (UID: \"d8ec3df1-d558-4411-85b6-c4fa5ce13ec5\") " Feb 03 12:11:10 crc kubenswrapper[4820]: I0203 12:11:10.731437 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d8ec3df1-d558-4411-85b6-c4fa5ce13ec5-serving-cert\") pod \"d8ec3df1-d558-4411-85b6-c4fa5ce13ec5\" (UID: \"d8ec3df1-d558-4411-85b6-c4fa5ce13ec5\") " Feb 03 12:11:10 crc kubenswrapper[4820]: I0203 12:11:10.731472 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d8ec3df1-d558-4411-85b6-c4fa5ce13ec5-proxy-ca-bundles\") pod \"d8ec3df1-d558-4411-85b6-c4fa5ce13ec5\" (UID: \"d8ec3df1-d558-4411-85b6-c4fa5ce13ec5\") " Feb 03 12:11:10 crc kubenswrapper[4820]: I0203 12:11:10.732727 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d8ec3df1-d558-4411-85b6-c4fa5ce13ec5-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "d8ec3df1-d558-4411-85b6-c4fa5ce13ec5" (UID: "d8ec3df1-d558-4411-85b6-c4fa5ce13ec5"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:11:10 crc kubenswrapper[4820]: I0203 12:11:10.732885 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d8ec3df1-d558-4411-85b6-c4fa5ce13ec5-config" (OuterVolumeSpecName: "config") pod "d8ec3df1-d558-4411-85b6-c4fa5ce13ec5" (UID: "d8ec3df1-d558-4411-85b6-c4fa5ce13ec5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:11:10 crc kubenswrapper[4820]: I0203 12:11:10.734118 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d8ec3df1-d558-4411-85b6-c4fa5ce13ec5-client-ca" (OuterVolumeSpecName: "client-ca") pod "d8ec3df1-d558-4411-85b6-c4fa5ce13ec5" (UID: "d8ec3df1-d558-4411-85b6-c4fa5ce13ec5"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:11:10 crc kubenswrapper[4820]: I0203 12:11:10.736205 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8ec3df1-d558-4411-85b6-c4fa5ce13ec5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d8ec3df1-d558-4411-85b6-c4fa5ce13ec5" (UID: "d8ec3df1-d558-4411-85b6-c4fa5ce13ec5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:11:10 crc kubenswrapper[4820]: I0203 12:11:10.736366 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8ec3df1-d558-4411-85b6-c4fa5ce13ec5-kube-api-access-jrljk" (OuterVolumeSpecName: "kube-api-access-jrljk") pod "d8ec3df1-d558-4411-85b6-c4fa5ce13ec5" (UID: "d8ec3df1-d558-4411-85b6-c4fa5ce13ec5"). InnerVolumeSpecName "kube-api-access-jrljk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:11:10 crc kubenswrapper[4820]: I0203 12:11:10.832445 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jrljk\" (UniqueName: \"kubernetes.io/projected/d8ec3df1-d558-4411-85b6-c4fa5ce13ec5-kube-api-access-jrljk\") on node \"crc\" DevicePath \"\"" Feb 03 12:11:10 crc kubenswrapper[4820]: I0203 12:11:10.832488 4820 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d8ec3df1-d558-4411-85b6-c4fa5ce13ec5-config\") on node \"crc\" DevicePath \"\"" Feb 03 12:11:10 crc kubenswrapper[4820]: I0203 12:11:10.832500 4820 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d8ec3df1-d558-4411-85b6-c4fa5ce13ec5-client-ca\") on node \"crc\" DevicePath \"\"" Feb 03 12:11:10 crc kubenswrapper[4820]: I0203 12:11:10.832510 4820 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d8ec3df1-d558-4411-85b6-c4fa5ce13ec5-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 03 12:11:10 crc kubenswrapper[4820]: I0203 12:11:10.832518 4820 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d8ec3df1-d558-4411-85b6-c4fa5ce13ec5-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 03 12:11:11 crc kubenswrapper[4820]: I0203 12:11:11.035462 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7c49665d77-7g6k9" event={"ID":"d8ec3df1-d558-4411-85b6-c4fa5ce13ec5","Type":"ContainerDied","Data":"0655223bf63b4174f324c7a93f0338dd49f7a6788d1755d353308aea8fffc370"} Feb 03 12:11:11 crc kubenswrapper[4820]: I0203 12:11:11.035520 4820 scope.go:117] 
"RemoveContainer" containerID="9872473f98652b7643da70e365bb6953a6513bb2e8c205ff30e32a24623acb97" Feb 03 12:11:11 crc kubenswrapper[4820]: I0203 12:11:11.035580 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7c49665d77-7g6k9" Feb 03 12:11:11 crc kubenswrapper[4820]: I0203 12:11:11.037800 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-68d76787c5-vfjkg" event={"ID":"eaaf0fe9-db47-4119-8b3f-70fcaf1c426a","Type":"ContainerDied","Data":"b3476d44f06558f7448923df3acffdca447820f0188561e2dd48b033f7073871"} Feb 03 12:11:11 crc kubenswrapper[4820]: I0203 12:11:11.037862 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-68d76787c5-vfjkg" Feb 03 12:11:11 crc kubenswrapper[4820]: I0203 12:11:11.056283 4820 scope.go:117] "RemoveContainer" containerID="ac7dd837dfe095ab7689649073514e22f6c28adf1f3278b7df058de41bd110d6" Feb 03 12:11:11 crc kubenswrapper[4820]: I0203 12:11:11.071349 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-68d76787c5-vfjkg"] Feb 03 12:11:11 crc kubenswrapper[4820]: I0203 12:11:11.076138 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-68d76787c5-vfjkg"] Feb 03 12:11:11 crc kubenswrapper[4820]: I0203 12:11:11.088428 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7c49665d77-7g6k9"] Feb 03 12:11:11 crc kubenswrapper[4820]: I0203 12:11:11.093254 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-7c49665d77-7g6k9"] Feb 03 12:11:11 crc kubenswrapper[4820]: I0203 12:11:11.150375 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d8ec3df1-d558-4411-85b6-c4fa5ce13ec5" path="/var/lib/kubelet/pods/d8ec3df1-d558-4411-85b6-c4fa5ce13ec5/volumes" Feb 03 12:11:11 crc kubenswrapper[4820]: I0203 12:11:11.151046 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eaaf0fe9-db47-4119-8b3f-70fcaf1c426a" path="/var/lib/kubelet/pods/eaaf0fe9-db47-4119-8b3f-70fcaf1c426a/volumes" Feb 03 12:11:11 crc kubenswrapper[4820]: I0203 12:11:11.242271 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-84df8fddf8-8fnmd"] Feb 03 12:11:11 crc kubenswrapper[4820]: E0203 12:11:11.242537 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eaaf0fe9-db47-4119-8b3f-70fcaf1c426a" containerName="route-controller-manager" Feb 03 12:11:11 crc kubenswrapper[4820]: I0203 12:11:11.242558 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="eaaf0fe9-db47-4119-8b3f-70fcaf1c426a" containerName="route-controller-manager" Feb 03 12:11:11 crc kubenswrapper[4820]: E0203 12:11:11.242576 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8ec3df1-d558-4411-85b6-c4fa5ce13ec5" containerName="controller-manager" Feb 03 12:11:11 crc kubenswrapper[4820]: I0203 12:11:11.242584 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8ec3df1-d558-4411-85b6-c4fa5ce13ec5" containerName="controller-manager" Feb 03 12:11:11 crc kubenswrapper[4820]: I0203 12:11:11.242721 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="eaaf0fe9-db47-4119-8b3f-70fcaf1c426a" 
containerName="route-controller-manager" Feb 03 12:11:11 crc kubenswrapper[4820]: I0203 12:11:11.242743 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8ec3df1-d558-4411-85b6-c4fa5ce13ec5" containerName="controller-manager" Feb 03 12:11:11 crc kubenswrapper[4820]: I0203 12:11:11.243250 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-84df8fddf8-8fnmd" Feb 03 12:11:11 crc kubenswrapper[4820]: I0203 12:11:11.246391 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-d6bb47b48-q6nn5"] Feb 03 12:11:11 crc kubenswrapper[4820]: I0203 12:11:11.247850 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-d6bb47b48-q6nn5" Feb 03 12:11:11 crc kubenswrapper[4820]: I0203 12:11:11.251460 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 03 12:11:11 crc kubenswrapper[4820]: I0203 12:11:11.252059 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 03 12:11:11 crc kubenswrapper[4820]: I0203 12:11:11.252346 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 03 12:11:11 crc kubenswrapper[4820]: I0203 12:11:11.252734 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 03 12:11:11 crc kubenswrapper[4820]: I0203 12:11:11.252766 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 03 12:11:11 crc kubenswrapper[4820]: I0203 12:11:11.252856 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 03 12:11:11 crc kubenswrapper[4820]: I0203 12:11:11.252960 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 03 12:11:11 crc kubenswrapper[4820]: I0203 12:11:11.253045 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 03 12:11:11 crc kubenswrapper[4820]: I0203 12:11:11.253086 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 03 12:11:11 crc kubenswrapper[4820]: I0203 12:11:11.253214 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 03 12:11:11 crc kubenswrapper[4820]: I0203 12:11:11.253352 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 03 12:11:11 crc kubenswrapper[4820]: I0203 12:11:11.253617 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 03 12:11:11 crc kubenswrapper[4820]: I0203 12:11:11.260285 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 03 12:11:11 crc kubenswrapper[4820]: I0203 12:11:11.273999 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-84df8fddf8-8fnmd"] Feb 03 12:11:11 crc kubenswrapper[4820]: I0203 
Feb 03 12:11:11 crc kubenswrapper[4820]: I0203 12:11:11.439219 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v7wq6\" (UniqueName: \"kubernetes.io/projected/4cdd56f2-da8a-4f2a-93d2-f141f003528c-kube-api-access-v7wq6\") pod \"controller-manager-84df8fddf8-8fnmd\" (UID: \"4cdd56f2-da8a-4f2a-93d2-f141f003528c\") " pod="openshift-controller-manager/controller-manager-84df8fddf8-8fnmd"
Feb 03 12:11:11 crc kubenswrapper[4820]: I0203 12:11:11.439547 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8e57e1cf-921d-4ed0-b5a3-e15f970366f5-serving-cert\") pod \"route-controller-manager-d6bb47b48-q6nn5\" (UID: \"8e57e1cf-921d-4ed0-b5a3-e15f970366f5\") " pod="openshift-route-controller-manager/route-controller-manager-d6bb47b48-q6nn5"
Feb 03 12:11:11 crc kubenswrapper[4820]: I0203 12:11:11.439582 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hgmlj\" (UniqueName: \"kubernetes.io/projected/8e57e1cf-921d-4ed0-b5a3-e15f970366f5-kube-api-access-hgmlj\") pod \"route-controller-manager-d6bb47b48-q6nn5\" (UID: \"8e57e1cf-921d-4ed0-b5a3-e15f970366f5\") " pod="openshift-route-controller-manager/route-controller-manager-d6bb47b48-q6nn5"
Feb 03 12:11:11 crc kubenswrapper[4820]: I0203 12:11:11.439632 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4cdd56f2-da8a-4f2a-93d2-f141f003528c-proxy-ca-bundles\") pod \"controller-manager-84df8fddf8-8fnmd\" (UID: \"4cdd56f2-da8a-4f2a-93d2-f141f003528c\") " pod="openshift-controller-manager/controller-manager-84df8fddf8-8fnmd"
Feb 03 12:11:11 crc kubenswrapper[4820]: I0203 12:11:11.439662 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8e57e1cf-921d-4ed0-b5a3-e15f970366f5-client-ca\") pod \"route-controller-manager-d6bb47b48-q6nn5\" (UID: \"8e57e1cf-921d-4ed0-b5a3-e15f970366f5\") " pod="openshift-route-controller-manager/route-controller-manager-d6bb47b48-q6nn5"
Feb 03 12:11:11 crc kubenswrapper[4820]: I0203 12:11:11.439687 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4cdd56f2-da8a-4f2a-93d2-f141f003528c-config\") pod \"controller-manager-84df8fddf8-8fnmd\" (UID: \"4cdd56f2-da8a-4f2a-93d2-f141f003528c\") " pod="openshift-controller-manager/controller-manager-84df8fddf8-8fnmd"
Feb 03 12:11:11 crc kubenswrapper[4820]: I0203 12:11:11.439712 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e57e1cf-921d-4ed0-b5a3-e15f970366f5-config\") pod \"route-controller-manager-d6bb47b48-q6nn5\" (UID: \"8e57e1cf-921d-4ed0-b5a3-e15f970366f5\") " pod="openshift-route-controller-manager/route-controller-manager-d6bb47b48-q6nn5"
Feb 03 12:11:11 crc kubenswrapper[4820]: I0203 12:11:11.439737 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4cdd56f2-da8a-4f2a-93d2-f141f003528c-client-ca\") pod \"controller-manager-84df8fddf8-8fnmd\" (UID: \"4cdd56f2-da8a-4f2a-93d2-f141f003528c\") " pod="openshift-controller-manager/controller-manager-84df8fddf8-8fnmd"
Feb 03 12:11:11 crc kubenswrapper[4820]: I0203 12:11:11.439763 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4cdd56f2-da8a-4f2a-93d2-f141f003528c-serving-cert\") pod \"controller-manager-84df8fddf8-8fnmd\" (UID: \"4cdd56f2-da8a-4f2a-93d2-f141f003528c\") " pod="openshift-controller-manager/controller-manager-84df8fddf8-8fnmd"
Feb 03 12:11:11 crc kubenswrapper[4820]: I0203 12:11:11.540744 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hgmlj\" (UniqueName: \"kubernetes.io/projected/8e57e1cf-921d-4ed0-b5a3-e15f970366f5-kube-api-access-hgmlj\") pod \"route-controller-manager-d6bb47b48-q6nn5\" (UID: \"8e57e1cf-921d-4ed0-b5a3-e15f970366f5\") " pod="openshift-route-controller-manager/route-controller-manager-d6bb47b48-q6nn5"
Feb 03 12:11:11 crc kubenswrapper[4820]: I0203 12:11:11.540855 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4cdd56f2-da8a-4f2a-93d2-f141f003528c-proxy-ca-bundles\") pod \"controller-manager-84df8fddf8-8fnmd\" (UID: \"4cdd56f2-da8a-4f2a-93d2-f141f003528c\") " pod="openshift-controller-manager/controller-manager-84df8fddf8-8fnmd"
Feb 03 12:11:11 crc kubenswrapper[4820]: I0203 12:11:11.540956 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8e57e1cf-921d-4ed0-b5a3-e15f970366f5-client-ca\") pod \"route-controller-manager-d6bb47b48-q6nn5\" (UID: \"8e57e1cf-921d-4ed0-b5a3-e15f970366f5\") " pod="openshift-route-controller-manager/route-controller-manager-d6bb47b48-q6nn5"
Feb 03 12:11:11 crc kubenswrapper[4820]: I0203 12:11:11.540981 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4cdd56f2-da8a-4f2a-93d2-f141f003528c-config\") pod \"controller-manager-84df8fddf8-8fnmd\" (UID: \"4cdd56f2-da8a-4f2a-93d2-f141f003528c\") " pod="openshift-controller-manager/controller-manager-84df8fddf8-8fnmd"
Feb 03 12:11:11 crc kubenswrapper[4820]: I0203 12:11:11.541003 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e57e1cf-921d-4ed0-b5a3-e15f970366f5-config\") pod \"route-controller-manager-d6bb47b48-q6nn5\" (UID: \"8e57e1cf-921d-4ed0-b5a3-e15f970366f5\") " pod="openshift-route-controller-manager/route-controller-manager-d6bb47b48-q6nn5"
Feb 03 12:11:11 crc kubenswrapper[4820]: I0203 12:11:11.541026 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4cdd56f2-da8a-4f2a-93d2-f141f003528c-client-ca\") pod \"controller-manager-84df8fddf8-8fnmd\" (UID: \"4cdd56f2-da8a-4f2a-93d2-f141f003528c\") " pod="openshift-controller-manager/controller-manager-84df8fddf8-8fnmd"
Feb 03 12:11:11 crc kubenswrapper[4820]: I0203 12:11:11.541054 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4cdd56f2-da8a-4f2a-93d2-f141f003528c-serving-cert\") pod \"controller-manager-84df8fddf8-8fnmd\" (UID: \"4cdd56f2-da8a-4f2a-93d2-f141f003528c\") " pod="openshift-controller-manager/controller-manager-84df8fddf8-8fnmd"
pod="openshift-controller-manager/controller-manager-84df8fddf8-8fnmd" Feb 03 12:11:11 crc kubenswrapper[4820]: I0203 12:11:11.541075 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v7wq6\" (UniqueName: \"kubernetes.io/projected/4cdd56f2-da8a-4f2a-93d2-f141f003528c-kube-api-access-v7wq6\") pod \"controller-manager-84df8fddf8-8fnmd\" (UID: \"4cdd56f2-da8a-4f2a-93d2-f141f003528c\") " pod="openshift-controller-manager/controller-manager-84df8fddf8-8fnmd" Feb 03 12:11:11 crc kubenswrapper[4820]: I0203 12:11:11.541144 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8e57e1cf-921d-4ed0-b5a3-e15f970366f5-serving-cert\") pod \"route-controller-manager-d6bb47b48-q6nn5\" (UID: \"8e57e1cf-921d-4ed0-b5a3-e15f970366f5\") " pod="openshift-route-controller-manager/route-controller-manager-d6bb47b48-q6nn5" Feb 03 12:11:11 crc kubenswrapper[4820]: I0203 12:11:11.542639 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e57e1cf-921d-4ed0-b5a3-e15f970366f5-config\") pod \"route-controller-manager-d6bb47b48-q6nn5\" (UID: \"8e57e1cf-921d-4ed0-b5a3-e15f970366f5\") " pod="openshift-route-controller-manager/route-controller-manager-d6bb47b48-q6nn5" Feb 03 12:11:11 crc kubenswrapper[4820]: I0203 12:11:11.542834 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8e57e1cf-921d-4ed0-b5a3-e15f970366f5-client-ca\") pod \"route-controller-manager-d6bb47b48-q6nn5\" (UID: \"8e57e1cf-921d-4ed0-b5a3-e15f970366f5\") " pod="openshift-route-controller-manager/route-controller-manager-d6bb47b48-q6nn5" Feb 03 12:11:11 crc kubenswrapper[4820]: I0203 12:11:11.542888 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4cdd56f2-da8a-4f2a-93d2-f141f003528c-client-ca\") pod \"controller-manager-84df8fddf8-8fnmd\" (UID: \"4cdd56f2-da8a-4f2a-93d2-f141f003528c\") " pod="openshift-controller-manager/controller-manager-84df8fddf8-8fnmd" Feb 03 12:11:11 crc kubenswrapper[4820]: I0203 12:11:11.543572 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4cdd56f2-da8a-4f2a-93d2-f141f003528c-config\") pod \"controller-manager-84df8fddf8-8fnmd\" (UID: \"4cdd56f2-da8a-4f2a-93d2-f141f003528c\") " pod="openshift-controller-manager/controller-manager-84df8fddf8-8fnmd" Feb 03 12:11:11 crc kubenswrapper[4820]: I0203 12:11:11.543971 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4cdd56f2-da8a-4f2a-93d2-f141f003528c-proxy-ca-bundles\") pod \"controller-manager-84df8fddf8-8fnmd\" (UID: \"4cdd56f2-da8a-4f2a-93d2-f141f003528c\") " pod="openshift-controller-manager/controller-manager-84df8fddf8-8fnmd" Feb 03 12:11:11 crc kubenswrapper[4820]: I0203 12:11:11.545280 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8e57e1cf-921d-4ed0-b5a3-e15f970366f5-serving-cert\") pod \"route-controller-manager-d6bb47b48-q6nn5\" (UID: \"8e57e1cf-921d-4ed0-b5a3-e15f970366f5\") " pod="openshift-route-controller-manager/route-controller-manager-d6bb47b48-q6nn5" Feb 03 12:11:11 crc kubenswrapper[4820]: I0203 12:11:11.547668 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/4cdd56f2-da8a-4f2a-93d2-f141f003528c-serving-cert\") pod \"controller-manager-84df8fddf8-8fnmd\" (UID: \"4cdd56f2-da8a-4f2a-93d2-f141f003528c\") " pod="openshift-controller-manager/controller-manager-84df8fddf8-8fnmd" Feb 03 12:11:11 crc kubenswrapper[4820]: I0203 12:11:11.560374 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v7wq6\" (UniqueName: \"kubernetes.io/projected/4cdd56f2-da8a-4f2a-93d2-f141f003528c-kube-api-access-v7wq6\") pod \"controller-manager-84df8fddf8-8fnmd\" (UID: \"4cdd56f2-da8a-4f2a-93d2-f141f003528c\") " pod="openshift-controller-manager/controller-manager-84df8fddf8-8fnmd" Feb 03 12:11:11 crc kubenswrapper[4820]: I0203 12:11:11.560536 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hgmlj\" (UniqueName: \"kubernetes.io/projected/8e57e1cf-921d-4ed0-b5a3-e15f970366f5-kube-api-access-hgmlj\") pod \"route-controller-manager-d6bb47b48-q6nn5\" (UID: \"8e57e1cf-921d-4ed0-b5a3-e15f970366f5\") " pod="openshift-route-controller-manager/route-controller-manager-d6bb47b48-q6nn5" Feb 03 12:11:11 crc kubenswrapper[4820]: I0203 12:11:11.577912 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-84df8fddf8-8fnmd" Feb 03 12:11:11 crc kubenswrapper[4820]: I0203 12:11:11.585624 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-d6bb47b48-q6nn5" Feb 03 12:11:11 crc kubenswrapper[4820]: I0203 12:11:11.898459 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-d6bb47b48-q6nn5"] Feb 03 12:11:12 crc kubenswrapper[4820]: I0203 12:11:12.057177 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-d6bb47b48-q6nn5" event={"ID":"8e57e1cf-921d-4ed0-b5a3-e15f970366f5","Type":"ContainerStarted","Data":"395b6257d1891b29efee8a734b5eb8e9236ce76754521b1d2e56caaeda747197"} Feb 03 12:11:12 crc kubenswrapper[4820]: I0203 12:11:12.064668 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-84df8fddf8-8fnmd"] Feb 03 12:11:12 crc kubenswrapper[4820]: W0203 12:11:12.077323 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4cdd56f2_da8a_4f2a_93d2_f141f003528c.slice/crio-a963378ab0eb921070fd448079c86eefbbf00a42a67b0223162778ec99b0cd31 WatchSource:0}: Error finding container a963378ab0eb921070fd448079c86eefbbf00a42a67b0223162778ec99b0cd31: Status 404 returned error can't find the container with id a963378ab0eb921070fd448079c86eefbbf00a42a67b0223162778ec99b0cd31 Feb 03 12:11:13 crc kubenswrapper[4820]: I0203 12:11:13.072604 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-d6bb47b48-q6nn5" event={"ID":"8e57e1cf-921d-4ed0-b5a3-e15f970366f5","Type":"ContainerStarted","Data":"658d4bdfab4f7034f28b0740ea05ad429ed84df5417112eb2fad79ef9f63950e"} Feb 03 12:11:13 crc kubenswrapper[4820]: I0203 12:11:13.072950 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-d6bb47b48-q6nn5" Feb 03 12:11:13 crc kubenswrapper[4820]: I0203 12:11:13.074513 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-controller-manager/controller-manager-84df8fddf8-8fnmd" event={"ID":"4cdd56f2-da8a-4f2a-93d2-f141f003528c","Type":"ContainerStarted","Data":"f828f020dd264b0720149f9591f28eb2921d745273a27475925582cfd54f74bb"} Feb 03 12:11:13 crc kubenswrapper[4820]: I0203 12:11:13.074552 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-84df8fddf8-8fnmd" event={"ID":"4cdd56f2-da8a-4f2a-93d2-f141f003528c","Type":"ContainerStarted","Data":"a963378ab0eb921070fd448079c86eefbbf00a42a67b0223162778ec99b0cd31"} Feb 03 12:11:13 crc kubenswrapper[4820]: I0203 12:11:13.075250 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-84df8fddf8-8fnmd" Feb 03 12:11:13 crc kubenswrapper[4820]: I0203 12:11:13.077660 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-d6bb47b48-q6nn5" Feb 03 12:11:13 crc kubenswrapper[4820]: I0203 12:11:13.078751 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-84df8fddf8-8fnmd" Feb 03 12:11:13 crc kubenswrapper[4820]: I0203 12:11:13.093041 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-d6bb47b48-q6nn5" podStartSLOduration=4.093020844 podStartE2EDuration="4.093020844s" podCreationTimestamp="2026-02-03 12:11:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:11:13.090412501 +0000 UTC m=+390.613488385" watchObservedRunningTime="2026-02-03 12:11:13.093020844 +0000 UTC m=+390.616096708" Feb 03 12:11:13 crc kubenswrapper[4820]: I0203 12:11:13.122758 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-84df8fddf8-8fnmd" podStartSLOduration=4.122720692 podStartE2EDuration="4.122720692s" podCreationTimestamp="2026-02-03 12:11:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:11:13.120390268 +0000 UTC m=+390.643466132" watchObservedRunningTime="2026-02-03 12:11:13.122720692 +0000 UTC m=+390.645796556" Feb 03 12:11:13 crc kubenswrapper[4820]: I0203 12:11:13.567367 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-lnc22" Feb 03 12:11:31 crc kubenswrapper[4820]: I0203 12:11:31.365851 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qj7xr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 12:11:31 crc kubenswrapper[4820]: I0203 12:11:31.366413 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 12:11:41 crc kubenswrapper[4820]: I0203 12:11:41.554827 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-4gskq"] Feb 03 12:11:48 crc kubenswrapper[4820]: 
I0203 12:11:48.449660 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5xqrp"] Feb 03 12:11:48 crc kubenswrapper[4820]: I0203 12:11:48.450031 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-5xqrp" podUID="38d510f8-dde9-46b4-965e-9d2726b5f0d7" containerName="registry-server" containerID="cri-o://0732f8fdf7726b60c3240c92179891bbfb723153fcfd43a82cfa0903ecd438cd" gracePeriod=2 Feb 03 12:11:48 crc kubenswrapper[4820]: I0203 12:11:48.551019 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-ntpgz"] Feb 03 12:11:48 crc kubenswrapper[4820]: I0203 12:11:48.551338 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-ntpgz" podUID="f2daa931-03c0-484d-9ea2-a30607c5f034" containerName="registry-server" containerID="cri-o://f5a067401a3ecc8dfcc7db0c46f16d23ff5d49eb882b82123e9b6adbe2ebcc12" gracePeriod=2 Feb 03 12:11:48 crc kubenswrapper[4820]: I0203 12:11:48.872802 4820 generic.go:334] "Generic (PLEG): container finished" podID="38d510f8-dde9-46b4-965e-9d2726b5f0d7" containerID="0732f8fdf7726b60c3240c92179891bbfb723153fcfd43a82cfa0903ecd438cd" exitCode=0 Feb 03 12:11:48 crc kubenswrapper[4820]: I0203 12:11:48.872884 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5xqrp" event={"ID":"38d510f8-dde9-46b4-965e-9d2726b5f0d7","Type":"ContainerDied","Data":"0732f8fdf7726b60c3240c92179891bbfb723153fcfd43a82cfa0903ecd438cd"} Feb 03 12:11:48 crc kubenswrapper[4820]: I0203 12:11:48.876968 4820 generic.go:334] "Generic (PLEG): container finished" podID="f2daa931-03c0-484d-9ea2-a30607c5f034" containerID="f5a067401a3ecc8dfcc7db0c46f16d23ff5d49eb882b82123e9b6adbe2ebcc12" exitCode=0 Feb 03 12:11:48 crc kubenswrapper[4820]: I0203 12:11:48.876995 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ntpgz" event={"ID":"f2daa931-03c0-484d-9ea2-a30607c5f034","Type":"ContainerDied","Data":"f5a067401a3ecc8dfcc7db0c46f16d23ff5d49eb882b82123e9b6adbe2ebcc12"} Feb 03 12:11:49 crc kubenswrapper[4820]: I0203 12:11:49.330200 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-ntpgz" Feb 03 12:11:49 crc kubenswrapper[4820]: I0203 12:11:49.451593 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-544nm\" (UniqueName: \"kubernetes.io/projected/f2daa931-03c0-484d-9ea2-a30607c5f034-kube-api-access-544nm\") pod \"f2daa931-03c0-484d-9ea2-a30607c5f034\" (UID: \"f2daa931-03c0-484d-9ea2-a30607c5f034\") " Feb 03 12:11:49 crc kubenswrapper[4820]: I0203 12:11:49.451694 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f2daa931-03c0-484d-9ea2-a30607c5f034-catalog-content\") pod \"f2daa931-03c0-484d-9ea2-a30607c5f034\" (UID: \"f2daa931-03c0-484d-9ea2-a30607c5f034\") " Feb 03 12:11:49 crc kubenswrapper[4820]: I0203 12:11:49.451863 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f2daa931-03c0-484d-9ea2-a30607c5f034-utilities\") pod \"f2daa931-03c0-484d-9ea2-a30607c5f034\" (UID: \"f2daa931-03c0-484d-9ea2-a30607c5f034\") " Feb 03 12:11:49 crc kubenswrapper[4820]: I0203 12:11:49.453254 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f2daa931-03c0-484d-9ea2-a30607c5f034-utilities" (OuterVolumeSpecName: "utilities") pod "f2daa931-03c0-484d-9ea2-a30607c5f034" (UID: "f2daa931-03c0-484d-9ea2-a30607c5f034"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 12:11:49 crc kubenswrapper[4820]: I0203 12:11:49.470662 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f2daa931-03c0-484d-9ea2-a30607c5f034-kube-api-access-544nm" (OuterVolumeSpecName: "kube-api-access-544nm") pod "f2daa931-03c0-484d-9ea2-a30607c5f034" (UID: "f2daa931-03c0-484d-9ea2-a30607c5f034"). InnerVolumeSpecName "kube-api-access-544nm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:11:49 crc kubenswrapper[4820]: I0203 12:11:49.491770 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5xqrp" Feb 03 12:11:49 crc kubenswrapper[4820]: I0203 12:11:49.509692 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f2daa931-03c0-484d-9ea2-a30607c5f034-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f2daa931-03c0-484d-9ea2-a30607c5f034" (UID: "f2daa931-03c0-484d-9ea2-a30607c5f034"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 12:11:50 crc kubenswrapper[4820]: I0203 12:11:50.332089 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/38d510f8-dde9-46b4-965e-9d2726b5f0d7-catalog-content\") pod \"38d510f8-dde9-46b4-965e-9d2726b5f0d7\" (UID: \"38d510f8-dde9-46b4-965e-9d2726b5f0d7\") " Feb 03 12:11:50 crc kubenswrapper[4820]: I0203 12:11:50.332230 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/38d510f8-dde9-46b4-965e-9d2726b5f0d7-utilities\") pod \"38d510f8-dde9-46b4-965e-9d2726b5f0d7\" (UID: \"38d510f8-dde9-46b4-965e-9d2726b5f0d7\") " Feb 03 12:11:50 crc kubenswrapper[4820]: I0203 12:11:50.332577 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-544nm\" (UniqueName: \"kubernetes.io/projected/f2daa931-03c0-484d-9ea2-a30607c5f034-kube-api-access-544nm\") on node \"crc\" DevicePath \"\"" Feb 03 12:11:50 crc kubenswrapper[4820]: I0203 12:11:50.332593 4820 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f2daa931-03c0-484d-9ea2-a30607c5f034-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 03 12:11:50 crc kubenswrapper[4820]: I0203 12:11:50.332603 4820 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f2daa931-03c0-484d-9ea2-a30607c5f034-utilities\") on node \"crc\" DevicePath \"\"" Feb 03 12:11:50 crc kubenswrapper[4820]: I0203 12:11:50.333403 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/38d510f8-dde9-46b4-965e-9d2726b5f0d7-utilities" (OuterVolumeSpecName: "utilities") pod "38d510f8-dde9-46b4-965e-9d2726b5f0d7" (UID: "38d510f8-dde9-46b4-965e-9d2726b5f0d7"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 12:11:50 crc kubenswrapper[4820]: I0203 12:11:50.353266 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-bl6zg"] Feb 03 12:11:50 crc kubenswrapper[4820]: I0203 12:11:50.353511 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-bl6zg" podUID="ef96ca29-ba6e-42c7-b992-898fb5f7f7b5" containerName="registry-server" containerID="cri-o://9f417572a56b2fbb043f515ec88bc97aee5b8687fb92182b04068795cb737d03" gracePeriod=2 Feb 03 12:11:50 crc kubenswrapper[4820]: I0203 12:11:50.430362 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ntpgz" event={"ID":"f2daa931-03c0-484d-9ea2-a30607c5f034","Type":"ContainerDied","Data":"c49b3739b18f0b02c507c5a3fb43a838e2d5a165338a7f5ee1a9cdeaf074f967"} Feb 03 12:11:50 crc kubenswrapper[4820]: I0203 12:11:50.430497 4820 scope.go:117] "RemoveContainer" containerID="f5a067401a3ecc8dfcc7db0c46f16d23ff5d49eb882b82123e9b6adbe2ebcc12" Feb 03 12:11:50 crc kubenswrapper[4820]: I0203 12:11:50.434676 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-frk8k\" (UniqueName: \"kubernetes.io/projected/38d510f8-dde9-46b4-965e-9d2726b5f0d7-kube-api-access-frk8k\") pod \"38d510f8-dde9-46b4-965e-9d2726b5f0d7\" (UID: \"38d510f8-dde9-46b4-965e-9d2726b5f0d7\") " Feb 03 12:11:50 crc kubenswrapper[4820]: I0203 12:11:50.435402 4820 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/38d510f8-dde9-46b4-965e-9d2726b5f0d7-utilities\") on node \"crc\" DevicePath \"\"" Feb 03 12:11:50 crc kubenswrapper[4820]: I0203 12:11:50.701959 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ntpgz" Feb 03 12:11:51 crc kubenswrapper[4820]: I0203 12:11:51.023471 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-84df8fddf8-8fnmd"] Feb 03 12:11:51 crc kubenswrapper[4820]: I0203 12:11:51.023694 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-84df8fddf8-8fnmd" podUID="4cdd56f2-da8a-4f2a-93d2-f141f003528c" containerName="controller-manager" containerID="cri-o://f828f020dd264b0720149f9591f28eb2921d745273a27475925582cfd54f74bb" gracePeriod=30 Feb 03 12:11:51 crc kubenswrapper[4820]: I0203 12:11:51.030809 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38d510f8-dde9-46b4-965e-9d2726b5f0d7-kube-api-access-frk8k" (OuterVolumeSpecName: "kube-api-access-frk8k") pod "38d510f8-dde9-46b4-965e-9d2726b5f0d7" (UID: "38d510f8-dde9-46b4-965e-9d2726b5f0d7"). InnerVolumeSpecName "kube-api-access-frk8k". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:11:51 crc kubenswrapper[4820]: I0203 12:11:51.031423 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5xqrp" event={"ID":"38d510f8-dde9-46b4-965e-9d2726b5f0d7","Type":"ContainerDied","Data":"7e80acca71139daf1031234ae3056308522e872ea642047f40c8f7346edbdfe9"} Feb 03 12:11:51 crc kubenswrapper[4820]: I0203 12:11:51.031523 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5xqrp" Feb 03 12:11:51 crc kubenswrapper[4820]: I0203 12:11:51.066622 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-d6bb47b48-q6nn5"] Feb 03 12:11:51 crc kubenswrapper[4820]: I0203 12:11:51.067146 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-d6bb47b48-q6nn5" podUID="8e57e1cf-921d-4ed0-b5a3-e15f970366f5" containerName="route-controller-manager" containerID="cri-o://658d4bdfab4f7034f28b0740ea05ad429ed84df5417112eb2fad79ef9f63950e" gracePeriod=30 Feb 03 12:11:51 crc kubenswrapper[4820]: I0203 12:11:51.080152 4820 scope.go:117] "RemoveContainer" containerID="032b9bf0325e916458f372bb1f0f3f746cb0809ac855b0d988f5977599b90be7" Feb 03 12:11:51 crc kubenswrapper[4820]: E0203 12:11:51.086116 4820 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf2daa931_03c0_484d_9ea2_a30607c5f034.slice/crio-c49b3739b18f0b02c507c5a3fb43a838e2d5a165338a7f5ee1a9cdeaf074f967\": RecentStats: unable to find data in memory cache]" Feb 03 12:11:51 crc kubenswrapper[4820]: I0203 12:11:51.101805 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-frk8k\" (UniqueName: \"kubernetes.io/projected/38d510f8-dde9-46b4-965e-9d2726b5f0d7-kube-api-access-frk8k\") on node \"crc\" DevicePath \"\"" Feb 03 12:11:51 crc kubenswrapper[4820]: I0203 12:11:51.114080 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-ntpgz"] Feb 03 12:11:51 crc kubenswrapper[4820]: I0203 12:11:51.118067 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-ntpgz"] Feb 03 12:11:51 crc kubenswrapper[4820]: I0203 12:11:51.128838 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/38d510f8-dde9-46b4-965e-9d2726b5f0d7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "38d510f8-dde9-46b4-965e-9d2726b5f0d7" (UID: "38d510f8-dde9-46b4-965e-9d2726b5f0d7"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 12:11:51 crc kubenswrapper[4820]: I0203 12:11:51.135310 4820 scope.go:117] "RemoveContainer" containerID="4a9466d89567b8e8f68c8f4f2ffabd9fa972f539ff8ef33b35c7779e9df5ed30" Feb 03 12:11:51 crc kubenswrapper[4820]: I0203 12:11:51.157402 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f2daa931-03c0-484d-9ea2-a30607c5f034" path="/var/lib/kubelet/pods/f2daa931-03c0-484d-9ea2-a30607c5f034/volumes" Feb 03 12:11:51 crc kubenswrapper[4820]: I0203 12:11:51.202967 4820 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/38d510f8-dde9-46b4-965e-9d2726b5f0d7-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 03 12:11:51 crc kubenswrapper[4820]: I0203 12:11:51.362786 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5xqrp"] Feb 03 12:11:51 crc kubenswrapper[4820]: I0203 12:11:51.396170 4820 scope.go:117] "RemoveContainer" containerID="0732f8fdf7726b60c3240c92179891bbfb723153fcfd43a82cfa0903ecd438cd" Feb 03 12:11:51 crc kubenswrapper[4820]: I0203 12:11:51.404022 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-5xqrp"] Feb 03 12:11:51 crc kubenswrapper[4820]: I0203 12:11:51.415329 4820 scope.go:117] "RemoveContainer" containerID="c8c5d6571927fff3f81a8cbd8943359663af34ac678f3d57daac16e996ac8918" Feb 03 12:11:51 crc kubenswrapper[4820]: I0203 12:11:51.444203 4820 scope.go:117] "RemoveContainer" containerID="3986a59082f562ad33e23e77b2b3defb1c3848dd961c1961387c69070fce690e" Feb 03 12:11:51 crc kubenswrapper[4820]: I0203 12:11:51.529706 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-pl5wr"] Feb 03 12:11:51 crc kubenswrapper[4820]: I0203 12:11:51.530131 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-pl5wr" podUID="2341b8b4-d207-4c89-8e46-a1b6b787afc8" containerName="registry-server" containerID="cri-o://2ec41e34503e4bfda2198a5f685ad5041bf0ce52d86e197bfddb451d4eed0af9" gracePeriod=2 Feb 03 12:11:51 crc kubenswrapper[4820]: I0203 12:11:51.576947 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-d6bb47b48-q6nn5" Feb 03 12:11:51 crc kubenswrapper[4820]: I0203 12:11:51.580710 4820 patch_prober.go:28] interesting pod/controller-manager-84df8fddf8-8fnmd container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.66:8443/healthz\": dial tcp 10.217.0.66:8443: connect: connection refused" start-of-body= Feb 03 12:11:51 crc kubenswrapper[4820]: I0203 12:11:51.580815 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-84df8fddf8-8fnmd" podUID="4cdd56f2-da8a-4f2a-93d2-f141f003528c" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.66:8443/healthz\": dial tcp 10.217.0.66:8443: connect: connection refused" Feb 03 12:11:51 crc kubenswrapper[4820]: I0203 12:11:51.707554 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e57e1cf-921d-4ed0-b5a3-e15f970366f5-config\") pod \"8e57e1cf-921d-4ed0-b5a3-e15f970366f5\" (UID: \"8e57e1cf-921d-4ed0-b5a3-e15f970366f5\") " Feb 03 12:11:51 crc kubenswrapper[4820]: I0203 12:11:51.707623 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8e57e1cf-921d-4ed0-b5a3-e15f970366f5-serving-cert\") pod \"8e57e1cf-921d-4ed0-b5a3-e15f970366f5\" (UID: \"8e57e1cf-921d-4ed0-b5a3-e15f970366f5\") " Feb 03 12:11:51 crc kubenswrapper[4820]: I0203 12:11:51.707665 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8e57e1cf-921d-4ed0-b5a3-e15f970366f5-client-ca\") pod \"8e57e1cf-921d-4ed0-b5a3-e15f970366f5\" (UID: \"8e57e1cf-921d-4ed0-b5a3-e15f970366f5\") " Feb 03 12:11:51 crc kubenswrapper[4820]: I0203 12:11:51.707716 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hgmlj\" (UniqueName: \"kubernetes.io/projected/8e57e1cf-921d-4ed0-b5a3-e15f970366f5-kube-api-access-hgmlj\") pod \"8e57e1cf-921d-4ed0-b5a3-e15f970366f5\" (UID: \"8e57e1cf-921d-4ed0-b5a3-e15f970366f5\") " Feb 03 12:11:51 crc kubenswrapper[4820]: I0203 12:11:51.708475 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8e57e1cf-921d-4ed0-b5a3-e15f970366f5-client-ca" (OuterVolumeSpecName: "client-ca") pod "8e57e1cf-921d-4ed0-b5a3-e15f970366f5" (UID: "8e57e1cf-921d-4ed0-b5a3-e15f970366f5"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:11:51 crc kubenswrapper[4820]: I0203 12:11:51.708652 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8e57e1cf-921d-4ed0-b5a3-e15f970366f5-config" (OuterVolumeSpecName: "config") pod "8e57e1cf-921d-4ed0-b5a3-e15f970366f5" (UID: "8e57e1cf-921d-4ed0-b5a3-e15f970366f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:11:51 crc kubenswrapper[4820]: I0203 12:11:51.713510 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e57e1cf-921d-4ed0-b5a3-e15f970366f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8e57e1cf-921d-4ed0-b5a3-e15f970366f5" (UID: "8e57e1cf-921d-4ed0-b5a3-e15f970366f5"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:11:51 crc kubenswrapper[4820]: I0203 12:11:51.713639 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e57e1cf-921d-4ed0-b5a3-e15f970366f5-kube-api-access-hgmlj" (OuterVolumeSpecName: "kube-api-access-hgmlj") pod "8e57e1cf-921d-4ed0-b5a3-e15f970366f5" (UID: "8e57e1cf-921d-4ed0-b5a3-e15f970366f5"). InnerVolumeSpecName "kube-api-access-hgmlj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:11:51 crc kubenswrapper[4820]: I0203 12:11:51.808820 4820 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e57e1cf-921d-4ed0-b5a3-e15f970366f5-config\") on node \"crc\" DevicePath \"\"" Feb 03 12:11:51 crc kubenswrapper[4820]: I0203 12:11:51.809204 4820 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8e57e1cf-921d-4ed0-b5a3-e15f970366f5-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 03 12:11:51 crc kubenswrapper[4820]: I0203 12:11:51.809223 4820 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8e57e1cf-921d-4ed0-b5a3-e15f970366f5-client-ca\") on node \"crc\" DevicePath \"\"" Feb 03 12:11:51 crc kubenswrapper[4820]: I0203 12:11:51.809237 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hgmlj\" (UniqueName: \"kubernetes.io/projected/8e57e1cf-921d-4ed0-b5a3-e15f970366f5-kube-api-access-hgmlj\") on node \"crc\" DevicePath \"\"" Feb 03 12:11:51 crc kubenswrapper[4820]: I0203 12:11:51.897976 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bl6zg" Feb 03 12:11:51 crc kubenswrapper[4820]: I0203 12:11:51.983926 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pl5wr" Feb 03 12:11:52 crc kubenswrapper[4820]: I0203 12:11:52.011241 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-84df8fddf8-8fnmd" Feb 03 12:11:52 crc kubenswrapper[4820]: I0203 12:11:52.014488 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fl98q\" (UniqueName: \"kubernetes.io/projected/ef96ca29-ba6e-42c7-b992-898fb5f7f7b5-kube-api-access-fl98q\") pod \"ef96ca29-ba6e-42c7-b992-898fb5f7f7b5\" (UID: \"ef96ca29-ba6e-42c7-b992-898fb5f7f7b5\") " Feb 03 12:11:52 crc kubenswrapper[4820]: I0203 12:11:52.014551 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef96ca29-ba6e-42c7-b992-898fb5f7f7b5-utilities\") pod \"ef96ca29-ba6e-42c7-b992-898fb5f7f7b5\" (UID: \"ef96ca29-ba6e-42c7-b992-898fb5f7f7b5\") " Feb 03 12:11:52 crc kubenswrapper[4820]: I0203 12:11:52.014572 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef96ca29-ba6e-42c7-b992-898fb5f7f7b5-catalog-content\") pod \"ef96ca29-ba6e-42c7-b992-898fb5f7f7b5\" (UID: \"ef96ca29-ba6e-42c7-b992-898fb5f7f7b5\") " Feb 03 12:11:52 crc kubenswrapper[4820]: I0203 12:11:52.015582 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ef96ca29-ba6e-42c7-b992-898fb5f7f7b5-utilities" (OuterVolumeSpecName: "utilities") pod "ef96ca29-ba6e-42c7-b992-898fb5f7f7b5" (UID: "ef96ca29-ba6e-42c7-b992-898fb5f7f7b5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 12:11:52 crc kubenswrapper[4820]: I0203 12:11:52.018188 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef96ca29-ba6e-42c7-b992-898fb5f7f7b5-kube-api-access-fl98q" (OuterVolumeSpecName: "kube-api-access-fl98q") pod "ef96ca29-ba6e-42c7-b992-898fb5f7f7b5" (UID: "ef96ca29-ba6e-42c7-b992-898fb5f7f7b5"). InnerVolumeSpecName "kube-api-access-fl98q". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:11:52 crc kubenswrapper[4820]: I0203 12:11:52.043710 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ef96ca29-ba6e-42c7-b992-898fb5f7f7b5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ef96ca29-ba6e-42c7-b992-898fb5f7f7b5" (UID: "ef96ca29-ba6e-42c7-b992-898fb5f7f7b5"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 12:11:52 crc kubenswrapper[4820]: I0203 12:11:52.051097 4820 generic.go:334] "Generic (PLEG): container finished" podID="8e57e1cf-921d-4ed0-b5a3-e15f970366f5" containerID="658d4bdfab4f7034f28b0740ea05ad429ed84df5417112eb2fad79ef9f63950e" exitCode=0 Feb 03 12:11:52 crc kubenswrapper[4820]: I0203 12:11:52.051172 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-d6bb47b48-q6nn5" event={"ID":"8e57e1cf-921d-4ed0-b5a3-e15f970366f5","Type":"ContainerDied","Data":"658d4bdfab4f7034f28b0740ea05ad429ed84df5417112eb2fad79ef9f63950e"} Feb 03 12:11:52 crc kubenswrapper[4820]: I0203 12:11:52.051234 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-d6bb47b48-q6nn5" event={"ID":"8e57e1cf-921d-4ed0-b5a3-e15f970366f5","Type":"ContainerDied","Data":"395b6257d1891b29efee8a734b5eb8e9236ce76754521b1d2e56caaeda747197"} Feb 03 12:11:52 crc kubenswrapper[4820]: I0203 12:11:52.051242 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-d6bb47b48-q6nn5" Feb 03 12:11:52 crc kubenswrapper[4820]: I0203 12:11:52.051272 4820 scope.go:117] "RemoveContainer" containerID="658d4bdfab4f7034f28b0740ea05ad429ed84df5417112eb2fad79ef9f63950e" Feb 03 12:11:52 crc kubenswrapper[4820]: I0203 12:11:52.063466 4820 generic.go:334] "Generic (PLEG): container finished" podID="ef96ca29-ba6e-42c7-b992-898fb5f7f7b5" containerID="9f417572a56b2fbb043f515ec88bc97aee5b8687fb92182b04068795cb737d03" exitCode=0 Feb 03 12:11:52 crc kubenswrapper[4820]: I0203 12:11:52.063554 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bl6zg" event={"ID":"ef96ca29-ba6e-42c7-b992-898fb5f7f7b5","Type":"ContainerDied","Data":"9f417572a56b2fbb043f515ec88bc97aee5b8687fb92182b04068795cb737d03"} Feb 03 12:11:52 crc kubenswrapper[4820]: I0203 12:11:52.063595 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bl6zg" event={"ID":"ef96ca29-ba6e-42c7-b992-898fb5f7f7b5","Type":"ContainerDied","Data":"54c5703031482d6ab3ce8b6e59eb10dfc6364bf867895619c7939d7dd4a8c250"} Feb 03 12:11:52 crc kubenswrapper[4820]: I0203 12:11:52.063835 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bl6zg" Feb 03 12:11:52 crc kubenswrapper[4820]: I0203 12:11:52.070615 4820 scope.go:117] "RemoveContainer" containerID="658d4bdfab4f7034f28b0740ea05ad429ed84df5417112eb2fad79ef9f63950e" Feb 03 12:11:52 crc kubenswrapper[4820]: I0203 12:11:52.071641 4820 generic.go:334] "Generic (PLEG): container finished" podID="4cdd56f2-da8a-4f2a-93d2-f141f003528c" containerID="f828f020dd264b0720149f9591f28eb2921d745273a27475925582cfd54f74bb" exitCode=0 Feb 03 12:11:52 crc kubenswrapper[4820]: I0203 12:11:52.071783 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-84df8fddf8-8fnmd" event={"ID":"4cdd56f2-da8a-4f2a-93d2-f141f003528c","Type":"ContainerDied","Data":"f828f020dd264b0720149f9591f28eb2921d745273a27475925582cfd54f74bb"} Feb 03 12:11:52 crc kubenswrapper[4820]: I0203 12:11:52.071822 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-84df8fddf8-8fnmd" event={"ID":"4cdd56f2-da8a-4f2a-93d2-f141f003528c","Type":"ContainerDied","Data":"a963378ab0eb921070fd448079c86eefbbf00a42a67b0223162778ec99b0cd31"} Feb 03 12:11:52 crc kubenswrapper[4820]: E0203 12:11:52.071936 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"658d4bdfab4f7034f28b0740ea05ad429ed84df5417112eb2fad79ef9f63950e\": container with ID starting with 658d4bdfab4f7034f28b0740ea05ad429ed84df5417112eb2fad79ef9f63950e not found: ID does not exist" containerID="658d4bdfab4f7034f28b0740ea05ad429ed84df5417112eb2fad79ef9f63950e" Feb 03 12:11:52 crc kubenswrapper[4820]: I0203 12:11:52.071979 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"658d4bdfab4f7034f28b0740ea05ad429ed84df5417112eb2fad79ef9f63950e"} err="failed to get container status \"658d4bdfab4f7034f28b0740ea05ad429ed84df5417112eb2fad79ef9f63950e\": rpc error: code = NotFound desc = could not find container \"658d4bdfab4f7034f28b0740ea05ad429ed84df5417112eb2fad79ef9f63950e\": container with ID starting with 658d4bdfab4f7034f28b0740ea05ad429ed84df5417112eb2fad79ef9f63950e not found: ID does not exist" Feb 03 12:11:52 crc kubenswrapper[4820]: I0203 12:11:52.072015 4820 scope.go:117] "RemoveContainer" containerID="9f417572a56b2fbb043f515ec88bc97aee5b8687fb92182b04068795cb737d03" Feb 03 12:11:52 crc kubenswrapper[4820]: I0203 12:11:52.072012 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-84df8fddf8-8fnmd" Feb 03 12:11:52 crc kubenswrapper[4820]: I0203 12:11:52.083190 4820 generic.go:334] "Generic (PLEG): container finished" podID="2341b8b4-d207-4c89-8e46-a1b6b787afc8" containerID="2ec41e34503e4bfda2198a5f685ad5041bf0ce52d86e197bfddb451d4eed0af9" exitCode=0 Feb 03 12:11:52 crc kubenswrapper[4820]: I0203 12:11:52.083252 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pl5wr" event={"ID":"2341b8b4-d207-4c89-8e46-a1b6b787afc8","Type":"ContainerDied","Data":"2ec41e34503e4bfda2198a5f685ad5041bf0ce52d86e197bfddb451d4eed0af9"} Feb 03 12:11:52 crc kubenswrapper[4820]: I0203 12:11:52.083300 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pl5wr" event={"ID":"2341b8b4-d207-4c89-8e46-a1b6b787afc8","Type":"ContainerDied","Data":"69a9975695d4fde9342cf018a36d3e58ec9cccf10e5dd957ea23268c964cd6ab"} Feb 03 12:11:52 crc kubenswrapper[4820]: I0203 12:11:52.083547 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pl5wr" Feb 03 12:11:52 crc kubenswrapper[4820]: I0203 12:11:52.092440 4820 scope.go:117] "RemoveContainer" containerID="a0766260437e3d831ddb87f05105ecbe2c5b49fdd749003e315d19288e803a38" Feb 03 12:11:52 crc kubenswrapper[4820]: I0203 12:11:52.110122 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-bl6zg"] Feb 03 12:11:52 crc kubenswrapper[4820]: I0203 12:11:52.114519 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-bl6zg"] Feb 03 12:11:52 crc kubenswrapper[4820]: I0203 12:11:52.115927 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v7wq6\" (UniqueName: \"kubernetes.io/projected/4cdd56f2-da8a-4f2a-93d2-f141f003528c-kube-api-access-v7wq6\") pod \"4cdd56f2-da8a-4f2a-93d2-f141f003528c\" (UID: \"4cdd56f2-da8a-4f2a-93d2-f141f003528c\") " Feb 03 12:11:52 crc kubenswrapper[4820]: I0203 12:11:52.116168 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4cdd56f2-da8a-4f2a-93d2-f141f003528c-serving-cert\") pod \"4cdd56f2-da8a-4f2a-93d2-f141f003528c\" (UID: \"4cdd56f2-da8a-4f2a-93d2-f141f003528c\") " Feb 03 12:11:52 crc kubenswrapper[4820]: I0203 12:11:52.116200 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4cdd56f2-da8a-4f2a-93d2-f141f003528c-proxy-ca-bundles\") pod \"4cdd56f2-da8a-4f2a-93d2-f141f003528c\" (UID: \"4cdd56f2-da8a-4f2a-93d2-f141f003528c\") " Feb 03 12:11:52 crc kubenswrapper[4820]: I0203 12:11:52.116262 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4cdd56f2-da8a-4f2a-93d2-f141f003528c-config\") pod \"4cdd56f2-da8a-4f2a-93d2-f141f003528c\" (UID: \"4cdd56f2-da8a-4f2a-93d2-f141f003528c\") " Feb 03 12:11:52 crc kubenswrapper[4820]: I0203 12:11:52.116317 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2341b8b4-d207-4c89-8e46-a1b6b787afc8-catalog-content\") pod \"2341b8b4-d207-4c89-8e46-a1b6b787afc8\" (UID: \"2341b8b4-d207-4c89-8e46-a1b6b787afc8\") " Feb 03 12:11:52 crc kubenswrapper[4820]: I0203 12:11:52.116358 4820 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2341b8b4-d207-4c89-8e46-a1b6b787afc8-utilities\") pod \"2341b8b4-d207-4c89-8e46-a1b6b787afc8\" (UID: \"2341b8b4-d207-4c89-8e46-a1b6b787afc8\") " Feb 03 12:11:52 crc kubenswrapper[4820]: I0203 12:11:52.116396 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4cdd56f2-da8a-4f2a-93d2-f141f003528c-client-ca\") pod \"4cdd56f2-da8a-4f2a-93d2-f141f003528c\" (UID: \"4cdd56f2-da8a-4f2a-93d2-f141f003528c\") " Feb 03 12:11:52 crc kubenswrapper[4820]: I0203 12:11:52.116424 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6mwf5\" (UniqueName: \"kubernetes.io/projected/2341b8b4-d207-4c89-8e46-a1b6b787afc8-kube-api-access-6mwf5\") pod \"2341b8b4-d207-4c89-8e46-a1b6b787afc8\" (UID: \"2341b8b4-d207-4c89-8e46-a1b6b787afc8\") " Feb 03 12:11:52 crc kubenswrapper[4820]: I0203 12:11:52.116732 4820 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef96ca29-ba6e-42c7-b992-898fb5f7f7b5-utilities\") on node \"crc\" DevicePath \"\"" Feb 03 12:11:52 crc kubenswrapper[4820]: I0203 12:11:52.116758 4820 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef96ca29-ba6e-42c7-b992-898fb5f7f7b5-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 03 12:11:52 crc kubenswrapper[4820]: I0203 12:11:52.116797 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fl98q\" (UniqueName: \"kubernetes.io/projected/ef96ca29-ba6e-42c7-b992-898fb5f7f7b5-kube-api-access-fl98q\") on node \"crc\" DevicePath \"\"" Feb 03 12:11:52 crc kubenswrapper[4820]: I0203 12:11:52.121792 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2341b8b4-d207-4c89-8e46-a1b6b787afc8-kube-api-access-6mwf5" (OuterVolumeSpecName: "kube-api-access-6mwf5") pod "2341b8b4-d207-4c89-8e46-a1b6b787afc8" (UID: "2341b8b4-d207-4c89-8e46-a1b6b787afc8"). InnerVolumeSpecName "kube-api-access-6mwf5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:11:52 crc kubenswrapper[4820]: I0203 12:11:52.121563 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4cdd56f2-da8a-4f2a-93d2-f141f003528c-config" (OuterVolumeSpecName: "config") pod "4cdd56f2-da8a-4f2a-93d2-f141f003528c" (UID: "4cdd56f2-da8a-4f2a-93d2-f141f003528c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:11:52 crc kubenswrapper[4820]: I0203 12:11:52.123616 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4cdd56f2-da8a-4f2a-93d2-f141f003528c-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "4cdd56f2-da8a-4f2a-93d2-f141f003528c" (UID: "4cdd56f2-da8a-4f2a-93d2-f141f003528c"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:11:52 crc kubenswrapper[4820]: I0203 12:11:52.124512 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4cdd56f2-da8a-4f2a-93d2-f141f003528c-client-ca" (OuterVolumeSpecName: "client-ca") pod "4cdd56f2-da8a-4f2a-93d2-f141f003528c" (UID: "4cdd56f2-da8a-4f2a-93d2-f141f003528c"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:11:52 crc kubenswrapper[4820]: I0203 12:11:52.127397 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2341b8b4-d207-4c89-8e46-a1b6b787afc8-utilities" (OuterVolumeSpecName: "utilities") pod "2341b8b4-d207-4c89-8e46-a1b6b787afc8" (UID: "2341b8b4-d207-4c89-8e46-a1b6b787afc8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 12:11:52 crc kubenswrapper[4820]: I0203 12:11:52.130027 4820 scope.go:117] "RemoveContainer" containerID="a94a9983f026ff38b4043d5ab9d2c5ce7684277805e9b23d468e852d606b7983" Feb 03 12:11:52 crc kubenswrapper[4820]: I0203 12:11:52.130009 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4cdd56f2-da8a-4f2a-93d2-f141f003528c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "4cdd56f2-da8a-4f2a-93d2-f141f003528c" (UID: "4cdd56f2-da8a-4f2a-93d2-f141f003528c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:11:52 crc kubenswrapper[4820]: I0203 12:11:52.131527 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4cdd56f2-da8a-4f2a-93d2-f141f003528c-kube-api-access-v7wq6" (OuterVolumeSpecName: "kube-api-access-v7wq6") pod "4cdd56f2-da8a-4f2a-93d2-f141f003528c" (UID: "4cdd56f2-da8a-4f2a-93d2-f141f003528c"). InnerVolumeSpecName "kube-api-access-v7wq6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:11:52 crc kubenswrapper[4820]: I0203 12:11:52.132102 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-d6bb47b48-q6nn5"] Feb 03 12:11:52 crc kubenswrapper[4820]: I0203 12:11:52.135805 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-d6bb47b48-q6nn5"] Feb 03 12:11:52 crc kubenswrapper[4820]: I0203 12:11:52.176082 4820 scope.go:117] "RemoveContainer" containerID="9f417572a56b2fbb043f515ec88bc97aee5b8687fb92182b04068795cb737d03" Feb 03 12:11:52 crc kubenswrapper[4820]: E0203 12:11:52.176643 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9f417572a56b2fbb043f515ec88bc97aee5b8687fb92182b04068795cb737d03\": container with ID starting with 9f417572a56b2fbb043f515ec88bc97aee5b8687fb92182b04068795cb737d03 not found: ID does not exist" containerID="9f417572a56b2fbb043f515ec88bc97aee5b8687fb92182b04068795cb737d03" Feb 03 12:11:52 crc kubenswrapper[4820]: I0203 12:11:52.176694 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9f417572a56b2fbb043f515ec88bc97aee5b8687fb92182b04068795cb737d03"} err="failed to get container status \"9f417572a56b2fbb043f515ec88bc97aee5b8687fb92182b04068795cb737d03\": rpc error: code = NotFound desc = could not find container \"9f417572a56b2fbb043f515ec88bc97aee5b8687fb92182b04068795cb737d03\": container with ID starting with 9f417572a56b2fbb043f515ec88bc97aee5b8687fb92182b04068795cb737d03 not found: ID does not exist" Feb 03 12:11:52 crc kubenswrapper[4820]: I0203 12:11:52.176722 4820 scope.go:117] "RemoveContainer" containerID="a0766260437e3d831ddb87f05105ecbe2c5b49fdd749003e315d19288e803a38" Feb 03 12:11:52 crc kubenswrapper[4820]: E0203 12:11:52.177234 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find 
container \"a0766260437e3d831ddb87f05105ecbe2c5b49fdd749003e315d19288e803a38\": container with ID starting with a0766260437e3d831ddb87f05105ecbe2c5b49fdd749003e315d19288e803a38 not found: ID does not exist" containerID="a0766260437e3d831ddb87f05105ecbe2c5b49fdd749003e315d19288e803a38" Feb 03 12:11:52 crc kubenswrapper[4820]: I0203 12:11:52.177271 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a0766260437e3d831ddb87f05105ecbe2c5b49fdd749003e315d19288e803a38"} err="failed to get container status \"a0766260437e3d831ddb87f05105ecbe2c5b49fdd749003e315d19288e803a38\": rpc error: code = NotFound desc = could not find container \"a0766260437e3d831ddb87f05105ecbe2c5b49fdd749003e315d19288e803a38\": container with ID starting with a0766260437e3d831ddb87f05105ecbe2c5b49fdd749003e315d19288e803a38 not found: ID does not exist" Feb 03 12:11:52 crc kubenswrapper[4820]: I0203 12:11:52.177295 4820 scope.go:117] "RemoveContainer" containerID="a94a9983f026ff38b4043d5ab9d2c5ce7684277805e9b23d468e852d606b7983" Feb 03 12:11:52 crc kubenswrapper[4820]: E0203 12:11:52.177577 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a94a9983f026ff38b4043d5ab9d2c5ce7684277805e9b23d468e852d606b7983\": container with ID starting with a94a9983f026ff38b4043d5ab9d2c5ce7684277805e9b23d468e852d606b7983 not found: ID does not exist" containerID="a94a9983f026ff38b4043d5ab9d2c5ce7684277805e9b23d468e852d606b7983" Feb 03 12:11:52 crc kubenswrapper[4820]: I0203 12:11:52.177613 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a94a9983f026ff38b4043d5ab9d2c5ce7684277805e9b23d468e852d606b7983"} err="failed to get container status \"a94a9983f026ff38b4043d5ab9d2c5ce7684277805e9b23d468e852d606b7983\": rpc error: code = NotFound desc = could not find container \"a94a9983f026ff38b4043d5ab9d2c5ce7684277805e9b23d468e852d606b7983\": container with ID starting with a94a9983f026ff38b4043d5ab9d2c5ce7684277805e9b23d468e852d606b7983 not found: ID does not exist" Feb 03 12:11:52 crc kubenswrapper[4820]: I0203 12:11:52.177635 4820 scope.go:117] "RemoveContainer" containerID="f828f020dd264b0720149f9591f28eb2921d745273a27475925582cfd54f74bb" Feb 03 12:11:52 crc kubenswrapper[4820]: I0203 12:11:52.197265 4820 scope.go:117] "RemoveContainer" containerID="f828f020dd264b0720149f9591f28eb2921d745273a27475925582cfd54f74bb" Feb 03 12:11:52 crc kubenswrapper[4820]: E0203 12:11:52.197659 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f828f020dd264b0720149f9591f28eb2921d745273a27475925582cfd54f74bb\": container with ID starting with f828f020dd264b0720149f9591f28eb2921d745273a27475925582cfd54f74bb not found: ID does not exist" containerID="f828f020dd264b0720149f9591f28eb2921d745273a27475925582cfd54f74bb" Feb 03 12:11:52 crc kubenswrapper[4820]: I0203 12:11:52.197699 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f828f020dd264b0720149f9591f28eb2921d745273a27475925582cfd54f74bb"} err="failed to get container status \"f828f020dd264b0720149f9591f28eb2921d745273a27475925582cfd54f74bb\": rpc error: code = NotFound desc = could not find container \"f828f020dd264b0720149f9591f28eb2921d745273a27475925582cfd54f74bb\": container with ID starting with f828f020dd264b0720149f9591f28eb2921d745273a27475925582cfd54f74bb not found: ID does not exist" Feb 03 12:11:52 crc 
kubenswrapper[4820]: I0203 12:11:52.197731 4820 scope.go:117] "RemoveContainer" containerID="2ec41e34503e4bfda2198a5f685ad5041bf0ce52d86e197bfddb451d4eed0af9" Feb 03 12:11:52 crc kubenswrapper[4820]: I0203 12:11:52.211483 4820 scope.go:117] "RemoveContainer" containerID="69618bb83b381d85f3796db35424c0e893b61f0ed5bf0899ab50a285b23b25a1" Feb 03 12:11:52 crc kubenswrapper[4820]: I0203 12:11:52.218170 4820 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4cdd56f2-da8a-4f2a-93d2-f141f003528c-config\") on node \"crc\" DevicePath \"\"" Feb 03 12:11:52 crc kubenswrapper[4820]: I0203 12:11:52.218197 4820 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4cdd56f2-da8a-4f2a-93d2-f141f003528c-client-ca\") on node \"crc\" DevicePath \"\"" Feb 03 12:11:52 crc kubenswrapper[4820]: I0203 12:11:52.218211 4820 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2341b8b4-d207-4c89-8e46-a1b6b787afc8-utilities\") on node \"crc\" DevicePath \"\"" Feb 03 12:11:52 crc kubenswrapper[4820]: I0203 12:11:52.218220 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6mwf5\" (UniqueName: \"kubernetes.io/projected/2341b8b4-d207-4c89-8e46-a1b6b787afc8-kube-api-access-6mwf5\") on node \"crc\" DevicePath \"\"" Feb 03 12:11:52 crc kubenswrapper[4820]: I0203 12:11:52.218231 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v7wq6\" (UniqueName: \"kubernetes.io/projected/4cdd56f2-da8a-4f2a-93d2-f141f003528c-kube-api-access-v7wq6\") on node \"crc\" DevicePath \"\"" Feb 03 12:11:52 crc kubenswrapper[4820]: I0203 12:11:52.218239 4820 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4cdd56f2-da8a-4f2a-93d2-f141f003528c-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 03 12:11:52 crc kubenswrapper[4820]: I0203 12:11:52.218247 4820 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4cdd56f2-da8a-4f2a-93d2-f141f003528c-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 03 12:11:52 crc kubenswrapper[4820]: I0203 12:11:52.234168 4820 scope.go:117] "RemoveContainer" containerID="1b8f24cedec5a3c4e2bbb8dcffaf4a7058c60e8ea54679ae1d72085d6ceb9181" Feb 03 12:11:52 crc kubenswrapper[4820]: I0203 12:11:52.253221 4820 scope.go:117] "RemoveContainer" containerID="2ec41e34503e4bfda2198a5f685ad5041bf0ce52d86e197bfddb451d4eed0af9" Feb 03 12:11:52 crc kubenswrapper[4820]: E0203 12:11:52.253787 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2ec41e34503e4bfda2198a5f685ad5041bf0ce52d86e197bfddb451d4eed0af9\": container with ID starting with 2ec41e34503e4bfda2198a5f685ad5041bf0ce52d86e197bfddb451d4eed0af9 not found: ID does not exist" containerID="2ec41e34503e4bfda2198a5f685ad5041bf0ce52d86e197bfddb451d4eed0af9" Feb 03 12:11:52 crc kubenswrapper[4820]: I0203 12:11:52.253822 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ec41e34503e4bfda2198a5f685ad5041bf0ce52d86e197bfddb451d4eed0af9"} err="failed to get container status \"2ec41e34503e4bfda2198a5f685ad5041bf0ce52d86e197bfddb451d4eed0af9\": rpc error: code = NotFound desc = could not find container \"2ec41e34503e4bfda2198a5f685ad5041bf0ce52d86e197bfddb451d4eed0af9\": container with ID starting with 
2ec41e34503e4bfda2198a5f685ad5041bf0ce52d86e197bfddb451d4eed0af9 not found: ID does not exist" Feb 03 12:11:52 crc kubenswrapper[4820]: I0203 12:11:52.253845 4820 scope.go:117] "RemoveContainer" containerID="69618bb83b381d85f3796db35424c0e893b61f0ed5bf0899ab50a285b23b25a1" Feb 03 12:11:52 crc kubenswrapper[4820]: E0203 12:11:52.254521 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"69618bb83b381d85f3796db35424c0e893b61f0ed5bf0899ab50a285b23b25a1\": container with ID starting with 69618bb83b381d85f3796db35424c0e893b61f0ed5bf0899ab50a285b23b25a1 not found: ID does not exist" containerID="69618bb83b381d85f3796db35424c0e893b61f0ed5bf0899ab50a285b23b25a1" Feb 03 12:11:52 crc kubenswrapper[4820]: I0203 12:11:52.254624 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"69618bb83b381d85f3796db35424c0e893b61f0ed5bf0899ab50a285b23b25a1"} err="failed to get container status \"69618bb83b381d85f3796db35424c0e893b61f0ed5bf0899ab50a285b23b25a1\": rpc error: code = NotFound desc = could not find container \"69618bb83b381d85f3796db35424c0e893b61f0ed5bf0899ab50a285b23b25a1\": container with ID starting with 69618bb83b381d85f3796db35424c0e893b61f0ed5bf0899ab50a285b23b25a1 not found: ID does not exist" Feb 03 12:11:52 crc kubenswrapper[4820]: I0203 12:11:52.254725 4820 scope.go:117] "RemoveContainer" containerID="1b8f24cedec5a3c4e2bbb8dcffaf4a7058c60e8ea54679ae1d72085d6ceb9181" Feb 03 12:11:52 crc kubenswrapper[4820]: E0203 12:11:52.255677 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1b8f24cedec5a3c4e2bbb8dcffaf4a7058c60e8ea54679ae1d72085d6ceb9181\": container with ID starting with 1b8f24cedec5a3c4e2bbb8dcffaf4a7058c60e8ea54679ae1d72085d6ceb9181 not found: ID does not exist" containerID="1b8f24cedec5a3c4e2bbb8dcffaf4a7058c60e8ea54679ae1d72085d6ceb9181" Feb 03 12:11:52 crc kubenswrapper[4820]: I0203 12:11:52.255711 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b8f24cedec5a3c4e2bbb8dcffaf4a7058c60e8ea54679ae1d72085d6ceb9181"} err="failed to get container status \"1b8f24cedec5a3c4e2bbb8dcffaf4a7058c60e8ea54679ae1d72085d6ceb9181\": rpc error: code = NotFound desc = could not find container \"1b8f24cedec5a3c4e2bbb8dcffaf4a7058c60e8ea54679ae1d72085d6ceb9181\": container with ID starting with 1b8f24cedec5a3c4e2bbb8dcffaf4a7058c60e8ea54679ae1d72085d6ceb9181 not found: ID does not exist" Feb 03 12:11:52 crc kubenswrapper[4820]: I0203 12:11:52.257300 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2341b8b4-d207-4c89-8e46-a1b6b787afc8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2341b8b4-d207-4c89-8e46-a1b6b787afc8" (UID: "2341b8b4-d207-4c89-8e46-a1b6b787afc8"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 12:11:52 crc kubenswrapper[4820]: I0203 12:11:52.320504 4820 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2341b8b4-d207-4c89-8e46-a1b6b787afc8-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 03 12:11:52 crc kubenswrapper[4820]: I0203 12:11:52.410516 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-84df8fddf8-8fnmd"] Feb 03 12:11:52 crc kubenswrapper[4820]: I0203 12:11:52.414114 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-84df8fddf8-8fnmd"] Feb 03 12:11:52 crc kubenswrapper[4820]: I0203 12:11:52.420503 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-pl5wr"] Feb 03 12:11:52 crc kubenswrapper[4820]: I0203 12:11:52.425693 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-pl5wr"] Feb 03 12:11:53 crc kubenswrapper[4820]: I0203 12:11:53.005774 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-bdf699b59-9kjxj"] Feb 03 12:11:53 crc kubenswrapper[4820]: E0203 12:11:53.006083 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2daa931-03c0-484d-9ea2-a30607c5f034" containerName="registry-server" Feb 03 12:11:53 crc kubenswrapper[4820]: I0203 12:11:53.006104 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2daa931-03c0-484d-9ea2-a30607c5f034" containerName="registry-server" Feb 03 12:11:53 crc kubenswrapper[4820]: E0203 12:11:53.006118 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2341b8b4-d207-4c89-8e46-a1b6b787afc8" containerName="extract-content" Feb 03 12:11:53 crc kubenswrapper[4820]: I0203 12:11:53.006126 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="2341b8b4-d207-4c89-8e46-a1b6b787afc8" containerName="extract-content" Feb 03 12:11:53 crc kubenswrapper[4820]: E0203 12:11:53.006139 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef96ca29-ba6e-42c7-b992-898fb5f7f7b5" containerName="extract-content" Feb 03 12:11:53 crc kubenswrapper[4820]: I0203 12:11:53.006149 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef96ca29-ba6e-42c7-b992-898fb5f7f7b5" containerName="extract-content" Feb 03 12:11:53 crc kubenswrapper[4820]: E0203 12:11:53.006158 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38d510f8-dde9-46b4-965e-9d2726b5f0d7" containerName="extract-content" Feb 03 12:11:53 crc kubenswrapper[4820]: I0203 12:11:53.006166 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="38d510f8-dde9-46b4-965e-9d2726b5f0d7" containerName="extract-content" Feb 03 12:11:53 crc kubenswrapper[4820]: E0203 12:11:53.006175 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4cdd56f2-da8a-4f2a-93d2-f141f003528c" containerName="controller-manager" Feb 03 12:11:53 crc kubenswrapper[4820]: I0203 12:11:53.006182 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="4cdd56f2-da8a-4f2a-93d2-f141f003528c" containerName="controller-manager" Feb 03 12:11:53 crc kubenswrapper[4820]: E0203 12:11:53.006191 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2daa931-03c0-484d-9ea2-a30607c5f034" containerName="extract-utilities" Feb 03 12:11:53 crc kubenswrapper[4820]: I0203 12:11:53.006199 4820 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="f2daa931-03c0-484d-9ea2-a30607c5f034" containerName="extract-utilities" Feb 03 12:11:53 crc kubenswrapper[4820]: E0203 12:11:53.006253 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2341b8b4-d207-4c89-8e46-a1b6b787afc8" containerName="extract-utilities" Feb 03 12:11:53 crc kubenswrapper[4820]: I0203 12:11:53.006261 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="2341b8b4-d207-4c89-8e46-a1b6b787afc8" containerName="extract-utilities" Feb 03 12:11:53 crc kubenswrapper[4820]: E0203 12:11:53.006272 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef96ca29-ba6e-42c7-b992-898fb5f7f7b5" containerName="registry-server" Feb 03 12:11:53 crc kubenswrapper[4820]: I0203 12:11:53.006306 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef96ca29-ba6e-42c7-b992-898fb5f7f7b5" containerName="registry-server" Feb 03 12:11:53 crc kubenswrapper[4820]: E0203 12:11:53.006326 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2daa931-03c0-484d-9ea2-a30607c5f034" containerName="extract-content" Feb 03 12:11:53 crc kubenswrapper[4820]: I0203 12:11:53.006333 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2daa931-03c0-484d-9ea2-a30607c5f034" containerName="extract-content" Feb 03 12:11:53 crc kubenswrapper[4820]: E0203 12:11:53.006350 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38d510f8-dde9-46b4-965e-9d2726b5f0d7" containerName="registry-server" Feb 03 12:11:53 crc kubenswrapper[4820]: I0203 12:11:53.006358 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="38d510f8-dde9-46b4-965e-9d2726b5f0d7" containerName="registry-server" Feb 03 12:11:53 crc kubenswrapper[4820]: E0203 12:11:53.006372 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e57e1cf-921d-4ed0-b5a3-e15f970366f5" containerName="route-controller-manager" Feb 03 12:11:53 crc kubenswrapper[4820]: I0203 12:11:53.006380 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e57e1cf-921d-4ed0-b5a3-e15f970366f5" containerName="route-controller-manager" Feb 03 12:11:53 crc kubenswrapper[4820]: E0203 12:11:53.006393 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38d510f8-dde9-46b4-965e-9d2726b5f0d7" containerName="extract-utilities" Feb 03 12:11:53 crc kubenswrapper[4820]: I0203 12:11:53.006403 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="38d510f8-dde9-46b4-965e-9d2726b5f0d7" containerName="extract-utilities" Feb 03 12:11:53 crc kubenswrapper[4820]: E0203 12:11:53.006415 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef96ca29-ba6e-42c7-b992-898fb5f7f7b5" containerName="extract-utilities" Feb 03 12:11:53 crc kubenswrapper[4820]: I0203 12:11:53.006423 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef96ca29-ba6e-42c7-b992-898fb5f7f7b5" containerName="extract-utilities" Feb 03 12:11:53 crc kubenswrapper[4820]: E0203 12:11:53.006431 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2341b8b4-d207-4c89-8e46-a1b6b787afc8" containerName="registry-server" Feb 03 12:11:53 crc kubenswrapper[4820]: I0203 12:11:53.006437 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="2341b8b4-d207-4c89-8e46-a1b6b787afc8" containerName="registry-server" Feb 03 12:11:53 crc kubenswrapper[4820]: I0203 12:11:53.006573 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="2341b8b4-d207-4c89-8e46-a1b6b787afc8" containerName="registry-server" Feb 03 12:11:53 crc kubenswrapper[4820]: I0203 12:11:53.006588 
4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="4cdd56f2-da8a-4f2a-93d2-f141f003528c" containerName="controller-manager" Feb 03 12:11:53 crc kubenswrapper[4820]: I0203 12:11:53.006604 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e57e1cf-921d-4ed0-b5a3-e15f970366f5" containerName="route-controller-manager" Feb 03 12:11:53 crc kubenswrapper[4820]: I0203 12:11:53.006614 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef96ca29-ba6e-42c7-b992-898fb5f7f7b5" containerName="registry-server" Feb 03 12:11:53 crc kubenswrapper[4820]: I0203 12:11:53.006624 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2daa931-03c0-484d-9ea2-a30607c5f034" containerName="registry-server" Feb 03 12:11:53 crc kubenswrapper[4820]: I0203 12:11:53.006637 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="38d510f8-dde9-46b4-965e-9d2726b5f0d7" containerName="registry-server" Feb 03 12:11:53 crc kubenswrapper[4820]: I0203 12:11:53.007196 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-bdf699b59-9kjxj" Feb 03 12:11:53 crc kubenswrapper[4820]: I0203 12:11:53.009147 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-54c4dc7ccb-g8pg6"] Feb 03 12:11:53 crc kubenswrapper[4820]: I0203 12:11:53.009798 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-54c4dc7ccb-g8pg6" Feb 03 12:11:53 crc kubenswrapper[4820]: I0203 12:11:53.019068 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 03 12:11:53 crc kubenswrapper[4820]: I0203 12:11:53.019233 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 03 12:11:53 crc kubenswrapper[4820]: I0203 12:11:53.019257 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 03 12:11:53 crc kubenswrapper[4820]: I0203 12:11:53.019275 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 03 12:11:53 crc kubenswrapper[4820]: I0203 12:11:53.020309 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 03 12:11:53 crc kubenswrapper[4820]: I0203 12:11:53.020361 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 03 12:11:53 crc kubenswrapper[4820]: I0203 12:11:53.020489 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 03 12:11:53 crc kubenswrapper[4820]: I0203 12:11:53.020603 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 03 12:11:53 crc kubenswrapper[4820]: I0203 12:11:53.020881 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 03 12:11:53 crc kubenswrapper[4820]: I0203 12:11:53.020943 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 03 12:11:53 crc kubenswrapper[4820]: I0203 12:11:53.021163 4820 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 03 12:11:53 crc kubenswrapper[4820]: I0203 12:11:53.023047 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 03 12:11:53 crc kubenswrapper[4820]: I0203 12:11:53.024206 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-bdf699b59-9kjxj"] Feb 03 12:11:53 crc kubenswrapper[4820]: I0203 12:11:53.024363 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 03 12:11:53 crc kubenswrapper[4820]: I0203 12:11:53.032397 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-54c4dc7ccb-g8pg6"] Feb 03 12:11:53 crc kubenswrapper[4820]: I0203 12:11:53.132656 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4420b566-5ea2-49ba-86f5-8f19b6ac98ab-config\") pod \"controller-manager-bdf699b59-9kjxj\" (UID: \"4420b566-5ea2-49ba-86f5-8f19b6ac98ab\") " pod="openshift-controller-manager/controller-manager-bdf699b59-9kjxj" Feb 03 12:11:53 crc kubenswrapper[4820]: I0203 12:11:53.133050 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ed4886-b4df-49ee-916a-a1599baee4b1-config\") pod \"route-controller-manager-54c4dc7ccb-g8pg6\" (UID: \"01ed4886-b4df-49ee-916a-a1599baee4b1\") " pod="openshift-route-controller-manager/route-controller-manager-54c4dc7ccb-g8pg6" Feb 03 12:11:53 crc kubenswrapper[4820]: I0203 12:11:53.133221 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4420b566-5ea2-49ba-86f5-8f19b6ac98ab-serving-cert\") pod \"controller-manager-bdf699b59-9kjxj\" (UID: \"4420b566-5ea2-49ba-86f5-8f19b6ac98ab\") " pod="openshift-controller-manager/controller-manager-bdf699b59-9kjxj" Feb 03 12:11:53 crc kubenswrapper[4820]: I0203 12:11:53.133344 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ed4886-b4df-49ee-916a-a1599baee4b1-serving-cert\") pod \"route-controller-manager-54c4dc7ccb-g8pg6\" (UID: \"01ed4886-b4df-49ee-916a-a1599baee4b1\") " pod="openshift-route-controller-manager/route-controller-manager-54c4dc7ccb-g8pg6" Feb 03 12:11:53 crc kubenswrapper[4820]: I0203 12:11:53.133475 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4420b566-5ea2-49ba-86f5-8f19b6ac98ab-client-ca\") pod \"controller-manager-bdf699b59-9kjxj\" (UID: \"4420b566-5ea2-49ba-86f5-8f19b6ac98ab\") " pod="openshift-controller-manager/controller-manager-bdf699b59-9kjxj" Feb 03 12:11:53 crc kubenswrapper[4820]: I0203 12:11:53.133590 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njs8s\" (UniqueName: \"kubernetes.io/projected/01ed4886-b4df-49ee-916a-a1599baee4b1-kube-api-access-njs8s\") pod \"route-controller-manager-54c4dc7ccb-g8pg6\" (UID: \"01ed4886-b4df-49ee-916a-a1599baee4b1\") " pod="openshift-route-controller-manager/route-controller-manager-54c4dc7ccb-g8pg6" Feb 03 12:11:53 crc 
kubenswrapper[4820]: I0203 12:11:53.133719 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dqxvb\" (UniqueName: \"kubernetes.io/projected/4420b566-5ea2-49ba-86f5-8f19b6ac98ab-kube-api-access-dqxvb\") pod \"controller-manager-bdf699b59-9kjxj\" (UID: \"4420b566-5ea2-49ba-86f5-8f19b6ac98ab\") " pod="openshift-controller-manager/controller-manager-bdf699b59-9kjxj" Feb 03 12:11:53 crc kubenswrapper[4820]: I0203 12:11:53.133865 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/01ed4886-b4df-49ee-916a-a1599baee4b1-client-ca\") pod \"route-controller-manager-54c4dc7ccb-g8pg6\" (UID: \"01ed4886-b4df-49ee-916a-a1599baee4b1\") " pod="openshift-route-controller-manager/route-controller-manager-54c4dc7ccb-g8pg6" Feb 03 12:11:53 crc kubenswrapper[4820]: I0203 12:11:53.134008 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4420b566-5ea2-49ba-86f5-8f19b6ac98ab-proxy-ca-bundles\") pod \"controller-manager-bdf699b59-9kjxj\" (UID: \"4420b566-5ea2-49ba-86f5-8f19b6ac98ab\") " pod="openshift-controller-manager/controller-manager-bdf699b59-9kjxj" Feb 03 12:11:53 crc kubenswrapper[4820]: I0203 12:11:53.151229 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2341b8b4-d207-4c89-8e46-a1b6b787afc8" path="/var/lib/kubelet/pods/2341b8b4-d207-4c89-8e46-a1b6b787afc8/volumes" Feb 03 12:11:53 crc kubenswrapper[4820]: I0203 12:11:53.152455 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="38d510f8-dde9-46b4-965e-9d2726b5f0d7" path="/var/lib/kubelet/pods/38d510f8-dde9-46b4-965e-9d2726b5f0d7/volumes" Feb 03 12:11:53 crc kubenswrapper[4820]: I0203 12:11:53.153719 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4cdd56f2-da8a-4f2a-93d2-f141f003528c" path="/var/lib/kubelet/pods/4cdd56f2-da8a-4f2a-93d2-f141f003528c/volumes" Feb 03 12:11:53 crc kubenswrapper[4820]: I0203 12:11:53.156765 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e57e1cf-921d-4ed0-b5a3-e15f970366f5" path="/var/lib/kubelet/pods/8e57e1cf-921d-4ed0-b5a3-e15f970366f5/volumes" Feb 03 12:11:53 crc kubenswrapper[4820]: I0203 12:11:53.157618 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ef96ca29-ba6e-42c7-b992-898fb5f7f7b5" path="/var/lib/kubelet/pods/ef96ca29-ba6e-42c7-b992-898fb5f7f7b5/volumes" Feb 03 12:11:53 crc kubenswrapper[4820]: I0203 12:11:53.236018 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4420b566-5ea2-49ba-86f5-8f19b6ac98ab-config\") pod \"controller-manager-bdf699b59-9kjxj\" (UID: \"4420b566-5ea2-49ba-86f5-8f19b6ac98ab\") " pod="openshift-controller-manager/controller-manager-bdf699b59-9kjxj" Feb 03 12:11:53 crc kubenswrapper[4820]: I0203 12:11:53.236136 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ed4886-b4df-49ee-916a-a1599baee4b1-config\") pod \"route-controller-manager-54c4dc7ccb-g8pg6\" (UID: \"01ed4886-b4df-49ee-916a-a1599baee4b1\") " pod="openshift-route-controller-manager/route-controller-manager-54c4dc7ccb-g8pg6" Feb 03 12:11:53 crc kubenswrapper[4820]: I0203 12:11:53.236177 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/4420b566-5ea2-49ba-86f5-8f19b6ac98ab-serving-cert\") pod \"controller-manager-bdf699b59-9kjxj\" (UID: \"4420b566-5ea2-49ba-86f5-8f19b6ac98ab\") " pod="openshift-controller-manager/controller-manager-bdf699b59-9kjxj" Feb 03 12:11:53 crc kubenswrapper[4820]: I0203 12:11:53.236198 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ed4886-b4df-49ee-916a-a1599baee4b1-serving-cert\") pod \"route-controller-manager-54c4dc7ccb-g8pg6\" (UID: \"01ed4886-b4df-49ee-916a-a1599baee4b1\") " pod="openshift-route-controller-manager/route-controller-manager-54c4dc7ccb-g8pg6" Feb 03 12:11:53 crc kubenswrapper[4820]: I0203 12:11:53.236273 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4420b566-5ea2-49ba-86f5-8f19b6ac98ab-client-ca\") pod \"controller-manager-bdf699b59-9kjxj\" (UID: \"4420b566-5ea2-49ba-86f5-8f19b6ac98ab\") " pod="openshift-controller-manager/controller-manager-bdf699b59-9kjxj" Feb 03 12:11:53 crc kubenswrapper[4820]: I0203 12:11:53.236305 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-njs8s\" (UniqueName: \"kubernetes.io/projected/01ed4886-b4df-49ee-916a-a1599baee4b1-kube-api-access-njs8s\") pod \"route-controller-manager-54c4dc7ccb-g8pg6\" (UID: \"01ed4886-b4df-49ee-916a-a1599baee4b1\") " pod="openshift-route-controller-manager/route-controller-manager-54c4dc7ccb-g8pg6" Feb 03 12:11:53 crc kubenswrapper[4820]: I0203 12:11:53.236329 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dqxvb\" (UniqueName: \"kubernetes.io/projected/4420b566-5ea2-49ba-86f5-8f19b6ac98ab-kube-api-access-dqxvb\") pod \"controller-manager-bdf699b59-9kjxj\" (UID: \"4420b566-5ea2-49ba-86f5-8f19b6ac98ab\") " pod="openshift-controller-manager/controller-manager-bdf699b59-9kjxj" Feb 03 12:11:53 crc kubenswrapper[4820]: I0203 12:11:53.236368 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/01ed4886-b4df-49ee-916a-a1599baee4b1-client-ca\") pod \"route-controller-manager-54c4dc7ccb-g8pg6\" (UID: \"01ed4886-b4df-49ee-916a-a1599baee4b1\") " pod="openshift-route-controller-manager/route-controller-manager-54c4dc7ccb-g8pg6" Feb 03 12:11:53 crc kubenswrapper[4820]: I0203 12:11:53.236394 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4420b566-5ea2-49ba-86f5-8f19b6ac98ab-proxy-ca-bundles\") pod \"controller-manager-bdf699b59-9kjxj\" (UID: \"4420b566-5ea2-49ba-86f5-8f19b6ac98ab\") " pod="openshift-controller-manager/controller-manager-bdf699b59-9kjxj" Feb 03 12:11:53 crc kubenswrapper[4820]: I0203 12:11:53.238188 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4420b566-5ea2-49ba-86f5-8f19b6ac98ab-client-ca\") pod \"controller-manager-bdf699b59-9kjxj\" (UID: \"4420b566-5ea2-49ba-86f5-8f19b6ac98ab\") " pod="openshift-controller-manager/controller-manager-bdf699b59-9kjxj" Feb 03 12:11:53 crc kubenswrapper[4820]: I0203 12:11:53.238217 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/01ed4886-b4df-49ee-916a-a1599baee4b1-client-ca\") pod \"route-controller-manager-54c4dc7ccb-g8pg6\" (UID: 
\"01ed4886-b4df-49ee-916a-a1599baee4b1\") " pod="openshift-route-controller-manager/route-controller-manager-54c4dc7ccb-g8pg6" Feb 03 12:11:53 crc kubenswrapper[4820]: I0203 12:11:53.238333 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ed4886-b4df-49ee-916a-a1599baee4b1-config\") pod \"route-controller-manager-54c4dc7ccb-g8pg6\" (UID: \"01ed4886-b4df-49ee-916a-a1599baee4b1\") " pod="openshift-route-controller-manager/route-controller-manager-54c4dc7ccb-g8pg6" Feb 03 12:11:53 crc kubenswrapper[4820]: I0203 12:11:53.239596 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4420b566-5ea2-49ba-86f5-8f19b6ac98ab-config\") pod \"controller-manager-bdf699b59-9kjxj\" (UID: \"4420b566-5ea2-49ba-86f5-8f19b6ac98ab\") " pod="openshift-controller-manager/controller-manager-bdf699b59-9kjxj" Feb 03 12:11:53 crc kubenswrapper[4820]: I0203 12:11:53.239816 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4420b566-5ea2-49ba-86f5-8f19b6ac98ab-proxy-ca-bundles\") pod \"controller-manager-bdf699b59-9kjxj\" (UID: \"4420b566-5ea2-49ba-86f5-8f19b6ac98ab\") " pod="openshift-controller-manager/controller-manager-bdf699b59-9kjxj" Feb 03 12:11:53 crc kubenswrapper[4820]: I0203 12:11:53.244337 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ed4886-b4df-49ee-916a-a1599baee4b1-serving-cert\") pod \"route-controller-manager-54c4dc7ccb-g8pg6\" (UID: \"01ed4886-b4df-49ee-916a-a1599baee4b1\") " pod="openshift-route-controller-manager/route-controller-manager-54c4dc7ccb-g8pg6" Feb 03 12:11:53 crc kubenswrapper[4820]: I0203 12:11:53.246164 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4420b566-5ea2-49ba-86f5-8f19b6ac98ab-serving-cert\") pod \"controller-manager-bdf699b59-9kjxj\" (UID: \"4420b566-5ea2-49ba-86f5-8f19b6ac98ab\") " pod="openshift-controller-manager/controller-manager-bdf699b59-9kjxj" Feb 03 12:11:53 crc kubenswrapper[4820]: I0203 12:11:53.259568 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dqxvb\" (UniqueName: \"kubernetes.io/projected/4420b566-5ea2-49ba-86f5-8f19b6ac98ab-kube-api-access-dqxvb\") pod \"controller-manager-bdf699b59-9kjxj\" (UID: \"4420b566-5ea2-49ba-86f5-8f19b6ac98ab\") " pod="openshift-controller-manager/controller-manager-bdf699b59-9kjxj" Feb 03 12:11:53 crc kubenswrapper[4820]: I0203 12:11:53.260768 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-njs8s\" (UniqueName: \"kubernetes.io/projected/01ed4886-b4df-49ee-916a-a1599baee4b1-kube-api-access-njs8s\") pod \"route-controller-manager-54c4dc7ccb-g8pg6\" (UID: \"01ed4886-b4df-49ee-916a-a1599baee4b1\") " pod="openshift-route-controller-manager/route-controller-manager-54c4dc7ccb-g8pg6" Feb 03 12:11:53 crc kubenswrapper[4820]: I0203 12:11:53.352920 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-bdf699b59-9kjxj" Feb 03 12:11:53 crc kubenswrapper[4820]: I0203 12:11:53.365604 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-54c4dc7ccb-g8pg6" Feb 03 12:11:54 crc kubenswrapper[4820]: I0203 12:11:54.338998 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-bdf699b59-9kjxj"] Feb 03 12:11:54 crc kubenswrapper[4820]: I0203 12:11:54.593388 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-54c4dc7ccb-g8pg6"] Feb 03 12:11:54 crc kubenswrapper[4820]: W0203 12:11:54.597872 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod01ed4886_b4df_49ee_916a_a1599baee4b1.slice/crio-9cbb0f3f31a3f70f143753fdcc258a2fd721e6144ec887bf6587b42b71ce96f6 WatchSource:0}: Error finding container 9cbb0f3f31a3f70f143753fdcc258a2fd721e6144ec887bf6587b42b71ce96f6: Status 404 returned error can't find the container with id 9cbb0f3f31a3f70f143753fdcc258a2fd721e6144ec887bf6587b42b71ce96f6 Feb 03 12:11:55 crc kubenswrapper[4820]: I0203 12:11:55.205459 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-54c4dc7ccb-g8pg6" event={"ID":"01ed4886-b4df-49ee-916a-a1599baee4b1","Type":"ContainerStarted","Data":"6ba3b8affb410fcd3cc52e6aa678c738f0dcbd78c85c31f09e95a27891508cd1"} Feb 03 12:11:55 crc kubenswrapper[4820]: I0203 12:11:55.205523 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-54c4dc7ccb-g8pg6" event={"ID":"01ed4886-b4df-49ee-916a-a1599baee4b1","Type":"ContainerStarted","Data":"9cbb0f3f31a3f70f143753fdcc258a2fd721e6144ec887bf6587b42b71ce96f6"} Feb 03 12:11:55 crc kubenswrapper[4820]: I0203 12:11:55.205819 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-54c4dc7ccb-g8pg6" Feb 03 12:11:55 crc kubenswrapper[4820]: I0203 12:11:55.206636 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-bdf699b59-9kjxj" event={"ID":"4420b566-5ea2-49ba-86f5-8f19b6ac98ab","Type":"ContainerStarted","Data":"9ba043e4a71e2cfa574ca0aedd7587a9a04891b7ced9af88ed2eb08278bfd906"} Feb 03 12:11:55 crc kubenswrapper[4820]: I0203 12:11:55.206661 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-bdf699b59-9kjxj" event={"ID":"4420b566-5ea2-49ba-86f5-8f19b6ac98ab","Type":"ContainerStarted","Data":"22c75645bb5a67c179fdf4d4cb1d266c010525a0f1af06b090698c7df74e57a7"} Feb 03 12:11:55 crc kubenswrapper[4820]: I0203 12:11:55.206875 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-bdf699b59-9kjxj" Feb 03 12:11:55 crc kubenswrapper[4820]: I0203 12:11:55.213190 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-bdf699b59-9kjxj" Feb 03 12:11:55 crc kubenswrapper[4820]: I0203 12:11:55.246006 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-54c4dc7ccb-g8pg6" podStartSLOduration=4.245980184 podStartE2EDuration="4.245980184s" podCreationTimestamp="2026-02-03 12:11:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:11:55.244111766 +0000 UTC 
m=+432.767187640" watchObservedRunningTime="2026-02-03 12:11:55.245980184 +0000 UTC m=+432.769056048" Feb 03 12:11:55 crc kubenswrapper[4820]: I0203 12:11:55.276076 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-bdf699b59-9kjxj" podStartSLOduration=4.276058567 podStartE2EDuration="4.276058567s" podCreationTimestamp="2026-02-03 12:11:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:11:55.268863121 +0000 UTC m=+432.791938995" watchObservedRunningTime="2026-02-03 12:11:55.276058567 +0000 UTC m=+432.799134441" Feb 03 12:11:56 crc kubenswrapper[4820]: I0203 12:11:56.239239 4820 patch_prober.go:28] interesting pod/route-controller-manager-54c4dc7ccb-g8pg6 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.68:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 03 12:11:56 crc kubenswrapper[4820]: I0203 12:11:56.239535 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-54c4dc7ccb-g8pg6" podUID="01ed4886-b4df-49ee-916a-a1599baee4b1" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.68:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 03 12:11:56 crc kubenswrapper[4820]: I0203 12:11:56.652256 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-54c4dc7ccb-g8pg6" Feb 03 12:12:01 crc kubenswrapper[4820]: I0203 12:12:01.551452 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qj7xr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 12:12:01 crc kubenswrapper[4820]: I0203 12:12:01.552559 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 12:12:06 crc kubenswrapper[4820]: I0203 12:12:06.595630 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-4gskq" podUID="0e90d586-2aaf-4f58-acc2-eba29dc0cd2e" containerName="oauth-openshift" containerID="cri-o://5a7388e8edbab65f12d970d3a037227056ce428065902d4729915f4a4898299b" gracePeriod=15 Feb 03 12:12:07 crc kubenswrapper[4820]: I0203 12:12:07.594670 4820 generic.go:334] "Generic (PLEG): container finished" podID="0e90d586-2aaf-4f58-acc2-eba29dc0cd2e" containerID="5a7388e8edbab65f12d970d3a037227056ce428065902d4729915f4a4898299b" exitCode=0 Feb 03 12:12:07 crc kubenswrapper[4820]: I0203 12:12:07.594718 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-4gskq" event={"ID":"0e90d586-2aaf-4f58-acc2-eba29dc0cd2e","Type":"ContainerDied","Data":"5a7388e8edbab65f12d970d3a037227056ce428065902d4729915f4a4898299b"} Feb 03 12:12:07 crc 
kubenswrapper[4820]: I0203 12:12:07.763504 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-4gskq" Feb 03 12:12:07 crc kubenswrapper[4820]: I0203 12:12:07.811229 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-66b89c787d-85mk9"] Feb 03 12:12:07 crc kubenswrapper[4820]: E0203 12:12:07.811550 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e90d586-2aaf-4f58-acc2-eba29dc0cd2e" containerName="oauth-openshift" Feb 03 12:12:07 crc kubenswrapper[4820]: I0203 12:12:07.811573 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e90d586-2aaf-4f58-acc2-eba29dc0cd2e" containerName="oauth-openshift" Feb 03 12:12:07 crc kubenswrapper[4820]: I0203 12:12:07.811770 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e90d586-2aaf-4f58-acc2-eba29dc0cd2e" containerName="oauth-openshift" Feb 03 12:12:07 crc kubenswrapper[4820]: I0203 12:12:07.812269 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-66b89c787d-85mk9" Feb 03 12:12:07 crc kubenswrapper[4820]: I0203 12:12:07.822394 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-66b89c787d-85mk9"] Feb 03 12:12:07 crc kubenswrapper[4820]: I0203 12:12:07.927373 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/0e90d586-2aaf-4f58-acc2-eba29dc0cd2e-v4-0-config-user-template-error\") pod \"0e90d586-2aaf-4f58-acc2-eba29dc0cd2e\" (UID: \"0e90d586-2aaf-4f58-acc2-eba29dc0cd2e\") " Feb 03 12:12:07 crc kubenswrapper[4820]: I0203 12:12:07.927973 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/0e90d586-2aaf-4f58-acc2-eba29dc0cd2e-v4-0-config-system-session\") pod \"0e90d586-2aaf-4f58-acc2-eba29dc0cd2e\" (UID: \"0e90d586-2aaf-4f58-acc2-eba29dc0cd2e\") " Feb 03 12:12:07 crc kubenswrapper[4820]: I0203 12:12:07.928110 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/0e90d586-2aaf-4f58-acc2-eba29dc0cd2e-v4-0-config-user-idp-0-file-data\") pod \"0e90d586-2aaf-4f58-acc2-eba29dc0cd2e\" (UID: \"0e90d586-2aaf-4f58-acc2-eba29dc0cd2e\") " Feb 03 12:12:07 crc kubenswrapper[4820]: I0203 12:12:07.928160 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/0e90d586-2aaf-4f58-acc2-eba29dc0cd2e-v4-0-config-system-serving-cert\") pod \"0e90d586-2aaf-4f58-acc2-eba29dc0cd2e\" (UID: \"0e90d586-2aaf-4f58-acc2-eba29dc0cd2e\") " Feb 03 12:12:07 crc kubenswrapper[4820]: I0203 12:12:07.928231 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/0e90d586-2aaf-4f58-acc2-eba29dc0cd2e-v4-0-config-system-cliconfig\") pod \"0e90d586-2aaf-4f58-acc2-eba29dc0cd2e\" (UID: \"0e90d586-2aaf-4f58-acc2-eba29dc0cd2e\") " Feb 03 12:12:07 crc kubenswrapper[4820]: I0203 12:12:07.928374 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/0e90d586-2aaf-4f58-acc2-eba29dc0cd2e-v4-0-config-system-trusted-ca-bundle\") pod \"0e90d586-2aaf-4f58-acc2-eba29dc0cd2e\" (UID: \"0e90d586-2aaf-4f58-acc2-eba29dc0cd2e\") " Feb 03 12:12:07 crc kubenswrapper[4820]: I0203 12:12:07.928414 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0e90d586-2aaf-4f58-acc2-eba29dc0cd2e-audit-dir\") pod \"0e90d586-2aaf-4f58-acc2-eba29dc0cd2e\" (UID: \"0e90d586-2aaf-4f58-acc2-eba29dc0cd2e\") " Feb 03 12:12:07 crc kubenswrapper[4820]: I0203 12:12:07.928444 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/0e90d586-2aaf-4f58-acc2-eba29dc0cd2e-v4-0-config-system-service-ca\") pod \"0e90d586-2aaf-4f58-acc2-eba29dc0cd2e\" (UID: \"0e90d586-2aaf-4f58-acc2-eba29dc0cd2e\") " Feb 03 12:12:07 crc kubenswrapper[4820]: I0203 12:12:07.928483 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/0e90d586-2aaf-4f58-acc2-eba29dc0cd2e-v4-0-config-system-router-certs\") pod \"0e90d586-2aaf-4f58-acc2-eba29dc0cd2e\" (UID: \"0e90d586-2aaf-4f58-acc2-eba29dc0cd2e\") " Feb 03 12:12:07 crc kubenswrapper[4820]: I0203 12:12:07.928512 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0e90d586-2aaf-4f58-acc2-eba29dc0cd2e-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "0e90d586-2aaf-4f58-acc2-eba29dc0cd2e" (UID: "0e90d586-2aaf-4f58-acc2-eba29dc0cd2e"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 03 12:12:07 crc kubenswrapper[4820]: I0203 12:12:07.928547 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hc757\" (UniqueName: \"kubernetes.io/projected/0e90d586-2aaf-4f58-acc2-eba29dc0cd2e-kube-api-access-hc757\") pod \"0e90d586-2aaf-4f58-acc2-eba29dc0cd2e\" (UID: \"0e90d586-2aaf-4f58-acc2-eba29dc0cd2e\") " Feb 03 12:12:07 crc kubenswrapper[4820]: I0203 12:12:07.928650 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/0e90d586-2aaf-4f58-acc2-eba29dc0cd2e-v4-0-config-user-template-login\") pod \"0e90d586-2aaf-4f58-acc2-eba29dc0cd2e\" (UID: \"0e90d586-2aaf-4f58-acc2-eba29dc0cd2e\") " Feb 03 12:12:07 crc kubenswrapper[4820]: I0203 12:12:07.928697 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/0e90d586-2aaf-4f58-acc2-eba29dc0cd2e-v4-0-config-user-template-provider-selection\") pod \"0e90d586-2aaf-4f58-acc2-eba29dc0cd2e\" (UID: \"0e90d586-2aaf-4f58-acc2-eba29dc0cd2e\") " Feb 03 12:12:07 crc kubenswrapper[4820]: I0203 12:12:07.928752 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/0e90d586-2aaf-4f58-acc2-eba29dc0cd2e-v4-0-config-system-ocp-branding-template\") pod \"0e90d586-2aaf-4f58-acc2-eba29dc0cd2e\" (UID: \"0e90d586-2aaf-4f58-acc2-eba29dc0cd2e\") " Feb 03 12:12:07 crc kubenswrapper[4820]: I0203 12:12:07.928808 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/0e90d586-2aaf-4f58-acc2-eba29dc0cd2e-audit-policies\") pod \"0e90d586-2aaf-4f58-acc2-eba29dc0cd2e\" (UID: \"0e90d586-2aaf-4f58-acc2-eba29dc0cd2e\") " Feb 03 12:12:07 crc kubenswrapper[4820]: I0203 12:12:07.929093 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0e90d586-2aaf-4f58-acc2-eba29dc0cd2e-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "0e90d586-2aaf-4f58-acc2-eba29dc0cd2e" (UID: "0e90d586-2aaf-4f58-acc2-eba29dc0cd2e"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:12:07 crc kubenswrapper[4820]: I0203 12:12:07.929120 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0e90d586-2aaf-4f58-acc2-eba29dc0cd2e-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "0e90d586-2aaf-4f58-acc2-eba29dc0cd2e" (UID: "0e90d586-2aaf-4f58-acc2-eba29dc0cd2e"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:12:07 crc kubenswrapper[4820]: I0203 12:12:07.929172 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0e90d586-2aaf-4f58-acc2-eba29dc0cd2e-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "0e90d586-2aaf-4f58-acc2-eba29dc0cd2e" (UID: "0e90d586-2aaf-4f58-acc2-eba29dc0cd2e"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:12:07 crc kubenswrapper[4820]: I0203 12:12:07.929249 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/7d558b47-1809-4483-bb1b-8b82036ebda8-v4-0-config-system-session\") pod \"oauth-openshift-66b89c787d-85mk9\" (UID: \"7d558b47-1809-4483-bb1b-8b82036ebda8\") " pod="openshift-authentication/oauth-openshift-66b89c787d-85mk9" Feb 03 12:12:07 crc kubenswrapper[4820]: I0203 12:12:07.929314 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/7d558b47-1809-4483-bb1b-8b82036ebda8-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-66b89c787d-85mk9\" (UID: \"7d558b47-1809-4483-bb1b-8b82036ebda8\") " pod="openshift-authentication/oauth-openshift-66b89c787d-85mk9" Feb 03 12:12:07 crc kubenswrapper[4820]: I0203 12:12:07.929365 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/7d558b47-1809-4483-bb1b-8b82036ebda8-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-66b89c787d-85mk9\" (UID: \"7d558b47-1809-4483-bb1b-8b82036ebda8\") " pod="openshift-authentication/oauth-openshift-66b89c787d-85mk9" Feb 03 12:12:07 crc kubenswrapper[4820]: I0203 12:12:07.929413 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7d558b47-1809-4483-bb1b-8b82036ebda8-audit-dir\") pod \"oauth-openshift-66b89c787d-85mk9\" (UID: \"7d558b47-1809-4483-bb1b-8b82036ebda8\") " pod="openshift-authentication/oauth-openshift-66b89c787d-85mk9" Feb 03 12:12:07 crc kubenswrapper[4820]: I0203 12:12:07.929453 4820 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/7d558b47-1809-4483-bb1b-8b82036ebda8-v4-0-config-system-service-ca\") pod \"oauth-openshift-66b89c787d-85mk9\" (UID: \"7d558b47-1809-4483-bb1b-8b82036ebda8\") " pod="openshift-authentication/oauth-openshift-66b89c787d-85mk9" Feb 03 12:12:07 crc kubenswrapper[4820]: I0203 12:12:07.929501 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/7d558b47-1809-4483-bb1b-8b82036ebda8-v4-0-config-user-template-login\") pod \"oauth-openshift-66b89c787d-85mk9\" (UID: \"7d558b47-1809-4483-bb1b-8b82036ebda8\") " pod="openshift-authentication/oauth-openshift-66b89c787d-85mk9" Feb 03 12:12:07 crc kubenswrapper[4820]: I0203 12:12:07.929602 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7d558b47-1809-4483-bb1b-8b82036ebda8-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-66b89c787d-85mk9\" (UID: \"7d558b47-1809-4483-bb1b-8b82036ebda8\") " pod="openshift-authentication/oauth-openshift-66b89c787d-85mk9" Feb 03 12:12:07 crc kubenswrapper[4820]: I0203 12:12:07.929624 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/7d558b47-1809-4483-bb1b-8b82036ebda8-v4-0-config-user-template-error\") pod \"oauth-openshift-66b89c787d-85mk9\" (UID: \"7d558b47-1809-4483-bb1b-8b82036ebda8\") " pod="openshift-authentication/oauth-openshift-66b89c787d-85mk9" Feb 03 12:12:07 crc kubenswrapper[4820]: I0203 12:12:07.929641 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/7d558b47-1809-4483-bb1b-8b82036ebda8-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-66b89c787d-85mk9\" (UID: \"7d558b47-1809-4483-bb1b-8b82036ebda8\") " pod="openshift-authentication/oauth-openshift-66b89c787d-85mk9" Feb 03 12:12:07 crc kubenswrapper[4820]: I0203 12:12:07.929733 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/7d558b47-1809-4483-bb1b-8b82036ebda8-audit-policies\") pod \"oauth-openshift-66b89c787d-85mk9\" (UID: \"7d558b47-1809-4483-bb1b-8b82036ebda8\") " pod="openshift-authentication/oauth-openshift-66b89c787d-85mk9" Feb 03 12:12:07 crc kubenswrapper[4820]: I0203 12:12:07.929760 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/7d558b47-1809-4483-bb1b-8b82036ebda8-v4-0-config-system-router-certs\") pod \"oauth-openshift-66b89c787d-85mk9\" (UID: \"7d558b47-1809-4483-bb1b-8b82036ebda8\") " pod="openshift-authentication/oauth-openshift-66b89c787d-85mk9" Feb 03 12:12:07 crc kubenswrapper[4820]: I0203 12:12:07.929787 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/7d558b47-1809-4483-bb1b-8b82036ebda8-v4-0-config-system-cliconfig\") pod \"oauth-openshift-66b89c787d-85mk9\" (UID: 
\"7d558b47-1809-4483-bb1b-8b82036ebda8\") " pod="openshift-authentication/oauth-openshift-66b89c787d-85mk9" Feb 03 12:12:07 crc kubenswrapper[4820]: I0203 12:12:07.929832 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ntmw5\" (UniqueName: \"kubernetes.io/projected/7d558b47-1809-4483-bb1b-8b82036ebda8-kube-api-access-ntmw5\") pod \"oauth-openshift-66b89c787d-85mk9\" (UID: \"7d558b47-1809-4483-bb1b-8b82036ebda8\") " pod="openshift-authentication/oauth-openshift-66b89c787d-85mk9" Feb 03 12:12:07 crc kubenswrapper[4820]: I0203 12:12:07.929848 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/7d558b47-1809-4483-bb1b-8b82036ebda8-v4-0-config-system-serving-cert\") pod \"oauth-openshift-66b89c787d-85mk9\" (UID: \"7d558b47-1809-4483-bb1b-8b82036ebda8\") " pod="openshift-authentication/oauth-openshift-66b89c787d-85mk9" Feb 03 12:12:07 crc kubenswrapper[4820]: I0203 12:12:07.929936 4820 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/0e90d586-2aaf-4f58-acc2-eba29dc0cd2e-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 03 12:12:07 crc kubenswrapper[4820]: I0203 12:12:07.929951 4820 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0e90d586-2aaf-4f58-acc2-eba29dc0cd2e-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 12:12:07 crc kubenswrapper[4820]: I0203 12:12:07.929964 4820 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0e90d586-2aaf-4f58-acc2-eba29dc0cd2e-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 03 12:12:07 crc kubenswrapper[4820]: I0203 12:12:07.929979 4820 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/0e90d586-2aaf-4f58-acc2-eba29dc0cd2e-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 03 12:12:07 crc kubenswrapper[4820]: I0203 12:12:07.930510 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0e90d586-2aaf-4f58-acc2-eba29dc0cd2e-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "0e90d586-2aaf-4f58-acc2-eba29dc0cd2e" (UID: "0e90d586-2aaf-4f58-acc2-eba29dc0cd2e"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:12:07 crc kubenswrapper[4820]: I0203 12:12:07.935875 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e90d586-2aaf-4f58-acc2-eba29dc0cd2e-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "0e90d586-2aaf-4f58-acc2-eba29dc0cd2e" (UID: "0e90d586-2aaf-4f58-acc2-eba29dc0cd2e"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:12:07 crc kubenswrapper[4820]: I0203 12:12:07.936186 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e90d586-2aaf-4f58-acc2-eba29dc0cd2e-kube-api-access-hc757" (OuterVolumeSpecName: "kube-api-access-hc757") pod "0e90d586-2aaf-4f58-acc2-eba29dc0cd2e" (UID: "0e90d586-2aaf-4f58-acc2-eba29dc0cd2e"). InnerVolumeSpecName "kube-api-access-hc757". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:12:07 crc kubenswrapper[4820]: I0203 12:12:07.936493 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e90d586-2aaf-4f58-acc2-eba29dc0cd2e-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "0e90d586-2aaf-4f58-acc2-eba29dc0cd2e" (UID: "0e90d586-2aaf-4f58-acc2-eba29dc0cd2e"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:12:07 crc kubenswrapper[4820]: I0203 12:12:07.936632 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e90d586-2aaf-4f58-acc2-eba29dc0cd2e-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "0e90d586-2aaf-4f58-acc2-eba29dc0cd2e" (UID: "0e90d586-2aaf-4f58-acc2-eba29dc0cd2e"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:12:07 crc kubenswrapper[4820]: I0203 12:12:07.937673 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e90d586-2aaf-4f58-acc2-eba29dc0cd2e-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "0e90d586-2aaf-4f58-acc2-eba29dc0cd2e" (UID: "0e90d586-2aaf-4f58-acc2-eba29dc0cd2e"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:12:07 crc kubenswrapper[4820]: I0203 12:12:07.938259 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e90d586-2aaf-4f58-acc2-eba29dc0cd2e-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "0e90d586-2aaf-4f58-acc2-eba29dc0cd2e" (UID: "0e90d586-2aaf-4f58-acc2-eba29dc0cd2e"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:12:07 crc kubenswrapper[4820]: I0203 12:12:07.938958 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e90d586-2aaf-4f58-acc2-eba29dc0cd2e-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "0e90d586-2aaf-4f58-acc2-eba29dc0cd2e" (UID: "0e90d586-2aaf-4f58-acc2-eba29dc0cd2e"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:12:07 crc kubenswrapper[4820]: I0203 12:12:07.944425 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e90d586-2aaf-4f58-acc2-eba29dc0cd2e-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "0e90d586-2aaf-4f58-acc2-eba29dc0cd2e" (UID: "0e90d586-2aaf-4f58-acc2-eba29dc0cd2e"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:12:07 crc kubenswrapper[4820]: I0203 12:12:07.950542 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e90d586-2aaf-4f58-acc2-eba29dc0cd2e-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "0e90d586-2aaf-4f58-acc2-eba29dc0cd2e" (UID: "0e90d586-2aaf-4f58-acc2-eba29dc0cd2e"). InnerVolumeSpecName "v4-0-config-system-router-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:12:08 crc kubenswrapper[4820]: I0203 12:12:08.030335 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/7d558b47-1809-4483-bb1b-8b82036ebda8-v4-0-config-user-template-error\") pod \"oauth-openshift-66b89c787d-85mk9\" (UID: \"7d558b47-1809-4483-bb1b-8b82036ebda8\") " pod="openshift-authentication/oauth-openshift-66b89c787d-85mk9" Feb 03 12:12:08 crc kubenswrapper[4820]: I0203 12:12:08.030397 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/7d558b47-1809-4483-bb1b-8b82036ebda8-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-66b89c787d-85mk9\" (UID: \"7d558b47-1809-4483-bb1b-8b82036ebda8\") " pod="openshift-authentication/oauth-openshift-66b89c787d-85mk9" Feb 03 12:12:08 crc kubenswrapper[4820]: I0203 12:12:08.030423 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/7d558b47-1809-4483-bb1b-8b82036ebda8-audit-policies\") pod \"oauth-openshift-66b89c787d-85mk9\" (UID: \"7d558b47-1809-4483-bb1b-8b82036ebda8\") " pod="openshift-authentication/oauth-openshift-66b89c787d-85mk9" Feb 03 12:12:08 crc kubenswrapper[4820]: I0203 12:12:08.030441 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/7d558b47-1809-4483-bb1b-8b82036ebda8-v4-0-config-system-router-certs\") pod \"oauth-openshift-66b89c787d-85mk9\" (UID: \"7d558b47-1809-4483-bb1b-8b82036ebda8\") " pod="openshift-authentication/oauth-openshift-66b89c787d-85mk9" Feb 03 12:12:08 crc kubenswrapper[4820]: I0203 12:12:08.030458 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/7d558b47-1809-4483-bb1b-8b82036ebda8-v4-0-config-system-cliconfig\") pod \"oauth-openshift-66b89c787d-85mk9\" (UID: \"7d558b47-1809-4483-bb1b-8b82036ebda8\") " pod="openshift-authentication/oauth-openshift-66b89c787d-85mk9" Feb 03 12:12:08 crc kubenswrapper[4820]: I0203 12:12:08.030484 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ntmw5\" (UniqueName: \"kubernetes.io/projected/7d558b47-1809-4483-bb1b-8b82036ebda8-kube-api-access-ntmw5\") pod \"oauth-openshift-66b89c787d-85mk9\" (UID: \"7d558b47-1809-4483-bb1b-8b82036ebda8\") " pod="openshift-authentication/oauth-openshift-66b89c787d-85mk9" Feb 03 12:12:08 crc kubenswrapper[4820]: I0203 12:12:08.030500 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/7d558b47-1809-4483-bb1b-8b82036ebda8-v4-0-config-system-serving-cert\") pod \"oauth-openshift-66b89c787d-85mk9\" (UID: \"7d558b47-1809-4483-bb1b-8b82036ebda8\") " pod="openshift-authentication/oauth-openshift-66b89c787d-85mk9" Feb 03 12:12:08 crc kubenswrapper[4820]: I0203 12:12:08.030533 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/7d558b47-1809-4483-bb1b-8b82036ebda8-v4-0-config-system-session\") pod \"oauth-openshift-66b89c787d-85mk9\" (UID: \"7d558b47-1809-4483-bb1b-8b82036ebda8\") " 
pod="openshift-authentication/oauth-openshift-66b89c787d-85mk9" Feb 03 12:12:08 crc kubenswrapper[4820]: I0203 12:12:08.030558 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/7d558b47-1809-4483-bb1b-8b82036ebda8-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-66b89c787d-85mk9\" (UID: \"7d558b47-1809-4483-bb1b-8b82036ebda8\") " pod="openshift-authentication/oauth-openshift-66b89c787d-85mk9" Feb 03 12:12:08 crc kubenswrapper[4820]: I0203 12:12:08.030587 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/7d558b47-1809-4483-bb1b-8b82036ebda8-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-66b89c787d-85mk9\" (UID: \"7d558b47-1809-4483-bb1b-8b82036ebda8\") " pod="openshift-authentication/oauth-openshift-66b89c787d-85mk9" Feb 03 12:12:08 crc kubenswrapper[4820]: I0203 12:12:08.030615 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7d558b47-1809-4483-bb1b-8b82036ebda8-audit-dir\") pod \"oauth-openshift-66b89c787d-85mk9\" (UID: \"7d558b47-1809-4483-bb1b-8b82036ebda8\") " pod="openshift-authentication/oauth-openshift-66b89c787d-85mk9" Feb 03 12:12:08 crc kubenswrapper[4820]: I0203 12:12:08.030633 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/7d558b47-1809-4483-bb1b-8b82036ebda8-v4-0-config-system-service-ca\") pod \"oauth-openshift-66b89c787d-85mk9\" (UID: \"7d558b47-1809-4483-bb1b-8b82036ebda8\") " pod="openshift-authentication/oauth-openshift-66b89c787d-85mk9" Feb 03 12:12:08 crc kubenswrapper[4820]: I0203 12:12:08.030650 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/7d558b47-1809-4483-bb1b-8b82036ebda8-v4-0-config-user-template-login\") pod \"oauth-openshift-66b89c787d-85mk9\" (UID: \"7d558b47-1809-4483-bb1b-8b82036ebda8\") " pod="openshift-authentication/oauth-openshift-66b89c787d-85mk9" Feb 03 12:12:08 crc kubenswrapper[4820]: I0203 12:12:08.030740 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7d558b47-1809-4483-bb1b-8b82036ebda8-audit-dir\") pod \"oauth-openshift-66b89c787d-85mk9\" (UID: \"7d558b47-1809-4483-bb1b-8b82036ebda8\") " pod="openshift-authentication/oauth-openshift-66b89c787d-85mk9" Feb 03 12:12:08 crc kubenswrapper[4820]: I0203 12:12:08.030931 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7d558b47-1809-4483-bb1b-8b82036ebda8-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-66b89c787d-85mk9\" (UID: \"7d558b47-1809-4483-bb1b-8b82036ebda8\") " pod="openshift-authentication/oauth-openshift-66b89c787d-85mk9" Feb 03 12:12:08 crc kubenswrapper[4820]: I0203 12:12:08.030987 4820 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/0e90d586-2aaf-4f58-acc2-eba29dc0cd2e-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 03 12:12:08 crc kubenswrapper[4820]: I0203 12:12:08.030999 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hc757\" 
(UniqueName: \"kubernetes.io/projected/0e90d586-2aaf-4f58-acc2-eba29dc0cd2e-kube-api-access-hc757\") on node \"crc\" DevicePath \"\"" Feb 03 12:12:08 crc kubenswrapper[4820]: I0203 12:12:08.031008 4820 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/0e90d586-2aaf-4f58-acc2-eba29dc0cd2e-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 03 12:12:08 crc kubenswrapper[4820]: I0203 12:12:08.031019 4820 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/0e90d586-2aaf-4f58-acc2-eba29dc0cd2e-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 03 12:12:08 crc kubenswrapper[4820]: I0203 12:12:08.031028 4820 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/0e90d586-2aaf-4f58-acc2-eba29dc0cd2e-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 03 12:12:08 crc kubenswrapper[4820]: I0203 12:12:08.031037 4820 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0e90d586-2aaf-4f58-acc2-eba29dc0cd2e-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 03 12:12:08 crc kubenswrapper[4820]: I0203 12:12:08.031046 4820 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/0e90d586-2aaf-4f58-acc2-eba29dc0cd2e-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 03 12:12:08 crc kubenswrapper[4820]: I0203 12:12:08.031055 4820 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/0e90d586-2aaf-4f58-acc2-eba29dc0cd2e-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 03 12:12:08 crc kubenswrapper[4820]: I0203 12:12:08.031064 4820 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/0e90d586-2aaf-4f58-acc2-eba29dc0cd2e-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 03 12:12:08 crc kubenswrapper[4820]: I0203 12:12:08.031072 4820 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/0e90d586-2aaf-4f58-acc2-eba29dc0cd2e-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 03 12:12:08 crc kubenswrapper[4820]: I0203 12:12:08.035243 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/7d558b47-1809-4483-bb1b-8b82036ebda8-v4-0-config-system-service-ca\") pod \"oauth-openshift-66b89c787d-85mk9\" (UID: \"7d558b47-1809-4483-bb1b-8b82036ebda8\") " pod="openshift-authentication/oauth-openshift-66b89c787d-85mk9" Feb 03 12:12:08 crc kubenswrapper[4820]: I0203 12:12:08.036324 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/7d558b47-1809-4483-bb1b-8b82036ebda8-v4-0-config-system-cliconfig\") pod \"oauth-openshift-66b89c787d-85mk9\" (UID: \"7d558b47-1809-4483-bb1b-8b82036ebda8\") " pod="openshift-authentication/oauth-openshift-66b89c787d-85mk9" Feb 03 12:12:08 crc kubenswrapper[4820]: I0203 12:12:08.037191 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7d558b47-1809-4483-bb1b-8b82036ebda8-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-66b89c787d-85mk9\" (UID: \"7d558b47-1809-4483-bb1b-8b82036ebda8\") " pod="openshift-authentication/oauth-openshift-66b89c787d-85mk9" Feb 03 12:12:08 crc kubenswrapper[4820]: I0203 12:12:08.037336 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/7d558b47-1809-4483-bb1b-8b82036ebda8-audit-policies\") pod \"oauth-openshift-66b89c787d-85mk9\" (UID: \"7d558b47-1809-4483-bb1b-8b82036ebda8\") " pod="openshift-authentication/oauth-openshift-66b89c787d-85mk9" Feb 03 12:12:08 crc kubenswrapper[4820]: I0203 12:12:08.037792 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/7d558b47-1809-4483-bb1b-8b82036ebda8-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-66b89c787d-85mk9\" (UID: \"7d558b47-1809-4483-bb1b-8b82036ebda8\") " pod="openshift-authentication/oauth-openshift-66b89c787d-85mk9" Feb 03 12:12:08 crc kubenswrapper[4820]: I0203 12:12:08.038084 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/7d558b47-1809-4483-bb1b-8b82036ebda8-v4-0-config-system-serving-cert\") pod \"oauth-openshift-66b89c787d-85mk9\" (UID: \"7d558b47-1809-4483-bb1b-8b82036ebda8\") " pod="openshift-authentication/oauth-openshift-66b89c787d-85mk9" Feb 03 12:12:08 crc kubenswrapper[4820]: I0203 12:12:08.038366 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/7d558b47-1809-4483-bb1b-8b82036ebda8-v4-0-config-system-router-certs\") pod \"oauth-openshift-66b89c787d-85mk9\" (UID: \"7d558b47-1809-4483-bb1b-8b82036ebda8\") " pod="openshift-authentication/oauth-openshift-66b89c787d-85mk9" Feb 03 12:12:08 crc kubenswrapper[4820]: I0203 12:12:08.038382 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/7d558b47-1809-4483-bb1b-8b82036ebda8-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-66b89c787d-85mk9\" (UID: \"7d558b47-1809-4483-bb1b-8b82036ebda8\") " pod="openshift-authentication/oauth-openshift-66b89c787d-85mk9" Feb 03 12:12:08 crc kubenswrapper[4820]: I0203 12:12:08.038593 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/7d558b47-1809-4483-bb1b-8b82036ebda8-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-66b89c787d-85mk9\" (UID: \"7d558b47-1809-4483-bb1b-8b82036ebda8\") " pod="openshift-authentication/oauth-openshift-66b89c787d-85mk9" Feb 03 12:12:08 crc kubenswrapper[4820]: I0203 12:12:08.039642 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/7d558b47-1809-4483-bb1b-8b82036ebda8-v4-0-config-system-session\") pod \"oauth-openshift-66b89c787d-85mk9\" (UID: \"7d558b47-1809-4483-bb1b-8b82036ebda8\") " pod="openshift-authentication/oauth-openshift-66b89c787d-85mk9" Feb 03 12:12:08 crc kubenswrapper[4820]: I0203 12:12:08.040813 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: 
\"kubernetes.io/secret/7d558b47-1809-4483-bb1b-8b82036ebda8-v4-0-config-user-template-login\") pod \"oauth-openshift-66b89c787d-85mk9\" (UID: \"7d558b47-1809-4483-bb1b-8b82036ebda8\") " pod="openshift-authentication/oauth-openshift-66b89c787d-85mk9" Feb 03 12:12:08 crc kubenswrapper[4820]: I0203 12:12:08.040813 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/7d558b47-1809-4483-bb1b-8b82036ebda8-v4-0-config-user-template-error\") pod \"oauth-openshift-66b89c787d-85mk9\" (UID: \"7d558b47-1809-4483-bb1b-8b82036ebda8\") " pod="openshift-authentication/oauth-openshift-66b89c787d-85mk9" Feb 03 12:12:08 crc kubenswrapper[4820]: I0203 12:12:08.052044 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ntmw5\" (UniqueName: \"kubernetes.io/projected/7d558b47-1809-4483-bb1b-8b82036ebda8-kube-api-access-ntmw5\") pod \"oauth-openshift-66b89c787d-85mk9\" (UID: \"7d558b47-1809-4483-bb1b-8b82036ebda8\") " pod="openshift-authentication/oauth-openshift-66b89c787d-85mk9" Feb 03 12:12:08 crc kubenswrapper[4820]: I0203 12:12:08.129682 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-66b89c787d-85mk9" Feb 03 12:12:08 crc kubenswrapper[4820]: I0203 12:12:08.573651 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-66b89c787d-85mk9"] Feb 03 12:12:08 crc kubenswrapper[4820]: W0203 12:12:08.583355 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7d558b47_1809_4483_bb1b_8b82036ebda8.slice/crio-3174bb5a84c8af98d93a3dcf743c42c2cc8bf6b105e7c6bce7149a0bc5e9d386 WatchSource:0}: Error finding container 3174bb5a84c8af98d93a3dcf743c42c2cc8bf6b105e7c6bce7149a0bc5e9d386: Status 404 returned error can't find the container with id 3174bb5a84c8af98d93a3dcf743c42c2cc8bf6b105e7c6bce7149a0bc5e9d386 Feb 03 12:12:08 crc kubenswrapper[4820]: I0203 12:12:08.611708 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-4gskq" Feb 03 12:12:08 crc kubenswrapper[4820]: I0203 12:12:08.611700 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-4gskq" event={"ID":"0e90d586-2aaf-4f58-acc2-eba29dc0cd2e","Type":"ContainerDied","Data":"642123c19ffc9b7762d9d3d9fd39dc5b99e9f95a5a7d8f31ad0b9949f91e66f7"} Feb 03 12:12:08 crc kubenswrapper[4820]: I0203 12:12:08.611842 4820 scope.go:117] "RemoveContainer" containerID="5a7388e8edbab65f12d970d3a037227056ce428065902d4729915f4a4898299b" Feb 03 12:12:08 crc kubenswrapper[4820]: I0203 12:12:08.614374 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66b89c787d-85mk9" event={"ID":"7d558b47-1809-4483-bb1b-8b82036ebda8","Type":"ContainerStarted","Data":"3174bb5a84c8af98d93a3dcf743c42c2cc8bf6b105e7c6bce7149a0bc5e9d386"} Feb 03 12:12:08 crc kubenswrapper[4820]: I0203 12:12:08.646109 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-4gskq"] Feb 03 12:12:08 crc kubenswrapper[4820]: I0203 12:12:08.656454 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-4gskq"] Feb 03 12:12:09 crc kubenswrapper[4820]: I0203 12:12:09.156649 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0e90d586-2aaf-4f58-acc2-eba29dc0cd2e" path="/var/lib/kubelet/pods/0e90d586-2aaf-4f58-acc2-eba29dc0cd2e/volumes" Feb 03 12:12:09 crc kubenswrapper[4820]: I0203 12:12:09.656400 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66b89c787d-85mk9" event={"ID":"7d558b47-1809-4483-bb1b-8b82036ebda8","Type":"ContainerStarted","Data":"b2bd26e3ec8eded8dff67ec215319a0f8a5321488769e0f4cf402b09b235243b"} Feb 03 12:12:09 crc kubenswrapper[4820]: I0203 12:12:09.657748 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-66b89c787d-85mk9" Feb 03 12:12:09 crc kubenswrapper[4820]: I0203 12:12:09.666406 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-66b89c787d-85mk9" Feb 03 12:12:09 crc kubenswrapper[4820]: I0203 12:12:09.682560 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-66b89c787d-85mk9" podStartSLOduration=28.682546797 podStartE2EDuration="28.682546797s" podCreationTimestamp="2026-02-03 12:11:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:12:09.680873813 +0000 UTC m=+447.203949687" watchObservedRunningTime="2026-02-03 12:12:09.682546797 +0000 UTC m=+447.205622651" Feb 03 12:12:10 crc kubenswrapper[4820]: I0203 12:12:10.152404 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-bdf699b59-9kjxj"] Feb 03 12:12:10 crc kubenswrapper[4820]: I0203 12:12:10.152643 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-bdf699b59-9kjxj" podUID="4420b566-5ea2-49ba-86f5-8f19b6ac98ab" containerName="controller-manager" containerID="cri-o://9ba043e4a71e2cfa574ca0aedd7587a9a04891b7ced9af88ed2eb08278bfd906" gracePeriod=30 Feb 03 12:12:11 crc kubenswrapper[4820]: I0203 12:12:11.006734 4820 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openshift-route-controller-manager/route-controller-manager-54c4dc7ccb-g8pg6"] Feb 03 12:12:11 crc kubenswrapper[4820]: I0203 12:12:11.006971 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-54c4dc7ccb-g8pg6" podUID="01ed4886-b4df-49ee-916a-a1599baee4b1" containerName="route-controller-manager" containerID="cri-o://6ba3b8affb410fcd3cc52e6aa678c738f0dcbd78c85c31f09e95a27891508cd1" gracePeriod=30 Feb 03 12:12:11 crc kubenswrapper[4820]: I0203 12:12:11.578436 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-bdf699b59-9kjxj" Feb 03 12:12:11 crc kubenswrapper[4820]: I0203 12:12:11.636637 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-54c4dc7ccb-g8pg6" Feb 03 12:12:11 crc kubenswrapper[4820]: I0203 12:12:11.720854 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4420b566-5ea2-49ba-86f5-8f19b6ac98ab-serving-cert\") pod \"4420b566-5ea2-49ba-86f5-8f19b6ac98ab\" (UID: \"4420b566-5ea2-49ba-86f5-8f19b6ac98ab\") " Feb 03 12:12:11 crc kubenswrapper[4820]: I0203 12:12:11.721052 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4420b566-5ea2-49ba-86f5-8f19b6ac98ab-client-ca\") pod \"4420b566-5ea2-49ba-86f5-8f19b6ac98ab\" (UID: \"4420b566-5ea2-49ba-86f5-8f19b6ac98ab\") " Feb 03 12:12:11 crc kubenswrapper[4820]: I0203 12:12:11.721095 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dqxvb\" (UniqueName: \"kubernetes.io/projected/4420b566-5ea2-49ba-86f5-8f19b6ac98ab-kube-api-access-dqxvb\") pod \"4420b566-5ea2-49ba-86f5-8f19b6ac98ab\" (UID: \"4420b566-5ea2-49ba-86f5-8f19b6ac98ab\") " Feb 03 12:12:11 crc kubenswrapper[4820]: I0203 12:12:11.721124 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4420b566-5ea2-49ba-86f5-8f19b6ac98ab-config\") pod \"4420b566-5ea2-49ba-86f5-8f19b6ac98ab\" (UID: \"4420b566-5ea2-49ba-86f5-8f19b6ac98ab\") " Feb 03 12:12:11 crc kubenswrapper[4820]: I0203 12:12:11.721152 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4420b566-5ea2-49ba-86f5-8f19b6ac98ab-proxy-ca-bundles\") pod \"4420b566-5ea2-49ba-86f5-8f19b6ac98ab\" (UID: \"4420b566-5ea2-49ba-86f5-8f19b6ac98ab\") " Feb 03 12:12:11 crc kubenswrapper[4820]: I0203 12:12:11.722001 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4420b566-5ea2-49ba-86f5-8f19b6ac98ab-client-ca" (OuterVolumeSpecName: "client-ca") pod "4420b566-5ea2-49ba-86f5-8f19b6ac98ab" (UID: "4420b566-5ea2-49ba-86f5-8f19b6ac98ab"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:12:11 crc kubenswrapper[4820]: I0203 12:12:11.722015 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4420b566-5ea2-49ba-86f5-8f19b6ac98ab-config" (OuterVolumeSpecName: "config") pod "4420b566-5ea2-49ba-86f5-8f19b6ac98ab" (UID: "4420b566-5ea2-49ba-86f5-8f19b6ac98ab"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:12:11 crc kubenswrapper[4820]: I0203 12:12:11.722271 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4420b566-5ea2-49ba-86f5-8f19b6ac98ab-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "4420b566-5ea2-49ba-86f5-8f19b6ac98ab" (UID: "4420b566-5ea2-49ba-86f5-8f19b6ac98ab"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:12:11 crc kubenswrapper[4820]: I0203 12:12:11.727102 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4420b566-5ea2-49ba-86f5-8f19b6ac98ab-kube-api-access-dqxvb" (OuterVolumeSpecName: "kube-api-access-dqxvb") pod "4420b566-5ea2-49ba-86f5-8f19b6ac98ab" (UID: "4420b566-5ea2-49ba-86f5-8f19b6ac98ab"). InnerVolumeSpecName "kube-api-access-dqxvb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:12:11 crc kubenswrapper[4820]: I0203 12:12:11.727656 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4420b566-5ea2-49ba-86f5-8f19b6ac98ab-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "4420b566-5ea2-49ba-86f5-8f19b6ac98ab" (UID: "4420b566-5ea2-49ba-86f5-8f19b6ac98ab"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:12:11 crc kubenswrapper[4820]: I0203 12:12:11.822200 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ed4886-b4df-49ee-916a-a1599baee4b1-config\") pod \"01ed4886-b4df-49ee-916a-a1599baee4b1\" (UID: \"01ed4886-b4df-49ee-916a-a1599baee4b1\") " Feb 03 12:12:11 crc kubenswrapper[4820]: I0203 12:12:11.822250 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/01ed4886-b4df-49ee-916a-a1599baee4b1-client-ca\") pod \"01ed4886-b4df-49ee-916a-a1599baee4b1\" (UID: \"01ed4886-b4df-49ee-916a-a1599baee4b1\") " Feb 03 12:12:11 crc kubenswrapper[4820]: I0203 12:12:11.822362 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ed4886-b4df-49ee-916a-a1599baee4b1-serving-cert\") pod \"01ed4886-b4df-49ee-916a-a1599baee4b1\" (UID: \"01ed4886-b4df-49ee-916a-a1599baee4b1\") " Feb 03 12:12:11 crc kubenswrapper[4820]: I0203 12:12:11.822395 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-njs8s\" (UniqueName: \"kubernetes.io/projected/01ed4886-b4df-49ee-916a-a1599baee4b1-kube-api-access-njs8s\") pod \"01ed4886-b4df-49ee-916a-a1599baee4b1\" (UID: \"01ed4886-b4df-49ee-916a-a1599baee4b1\") " Feb 03 12:12:11 crc kubenswrapper[4820]: I0203 12:12:11.822666 4820 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4420b566-5ea2-49ba-86f5-8f19b6ac98ab-client-ca\") on node \"crc\" DevicePath \"\"" Feb 03 12:12:11 crc kubenswrapper[4820]: I0203 12:12:11.822684 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dqxvb\" (UniqueName: \"kubernetes.io/projected/4420b566-5ea2-49ba-86f5-8f19b6ac98ab-kube-api-access-dqxvb\") on node \"crc\" DevicePath \"\"" Feb 03 12:12:11 crc kubenswrapper[4820]: I0203 12:12:11.822697 4820 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4420b566-5ea2-49ba-86f5-8f19b6ac98ab-config\") 
on node \"crc\" DevicePath \"\"" Feb 03 12:12:11 crc kubenswrapper[4820]: I0203 12:12:11.822709 4820 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4420b566-5ea2-49ba-86f5-8f19b6ac98ab-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 03 12:12:11 crc kubenswrapper[4820]: I0203 12:12:11.822720 4820 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4420b566-5ea2-49ba-86f5-8f19b6ac98ab-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 03 12:12:11 crc kubenswrapper[4820]: I0203 12:12:11.823197 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ed4886-b4df-49ee-916a-a1599baee4b1-client-ca" (OuterVolumeSpecName: "client-ca") pod "01ed4886-b4df-49ee-916a-a1599baee4b1" (UID: "01ed4886-b4df-49ee-916a-a1599baee4b1"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:12:11 crc kubenswrapper[4820]: I0203 12:12:11.823252 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ed4886-b4df-49ee-916a-a1599baee4b1-config" (OuterVolumeSpecName: "config") pod "01ed4886-b4df-49ee-916a-a1599baee4b1" (UID: "01ed4886-b4df-49ee-916a-a1599baee4b1"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:12:11 crc kubenswrapper[4820]: I0203 12:12:11.825523 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ed4886-b4df-49ee-916a-a1599baee4b1-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ed4886-b4df-49ee-916a-a1599baee4b1" (UID: "01ed4886-b4df-49ee-916a-a1599baee4b1"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:12:11 crc kubenswrapper[4820]: I0203 12:12:11.825947 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ed4886-b4df-49ee-916a-a1599baee4b1-kube-api-access-njs8s" (OuterVolumeSpecName: "kube-api-access-njs8s") pod "01ed4886-b4df-49ee-916a-a1599baee4b1" (UID: "01ed4886-b4df-49ee-916a-a1599baee4b1"). InnerVolumeSpecName "kube-api-access-njs8s". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:12:11 crc kubenswrapper[4820]: I0203 12:12:11.924164 4820 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/01ed4886-b4df-49ee-916a-a1599baee4b1-client-ca\") on node \"crc\" DevicePath \"\"" Feb 03 12:12:11 crc kubenswrapper[4820]: I0203 12:12:11.924227 4820 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ed4886-b4df-49ee-916a-a1599baee4b1-config\") on node \"crc\" DevicePath \"\"" Feb 03 12:12:11 crc kubenswrapper[4820]: I0203 12:12:11.924240 4820 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ed4886-b4df-49ee-916a-a1599baee4b1-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 03 12:12:11 crc kubenswrapper[4820]: I0203 12:12:11.924253 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-njs8s\" (UniqueName: \"kubernetes.io/projected/01ed4886-b4df-49ee-916a-a1599baee4b1-kube-api-access-njs8s\") on node \"crc\" DevicePath \"\"" Feb 03 12:12:11 crc kubenswrapper[4820]: I0203 12:12:11.979834 4820 generic.go:334] "Generic (PLEG): container finished" podID="4420b566-5ea2-49ba-86f5-8f19b6ac98ab" containerID="9ba043e4a71e2cfa574ca0aedd7587a9a04891b7ced9af88ed2eb08278bfd906" exitCode=0 Feb 03 12:12:11 crc kubenswrapper[4820]: I0203 12:12:11.979874 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-bdf699b59-9kjxj" Feb 03 12:12:11 crc kubenswrapper[4820]: I0203 12:12:11.979927 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-bdf699b59-9kjxj" event={"ID":"4420b566-5ea2-49ba-86f5-8f19b6ac98ab","Type":"ContainerDied","Data":"9ba043e4a71e2cfa574ca0aedd7587a9a04891b7ced9af88ed2eb08278bfd906"} Feb 03 12:12:11 crc kubenswrapper[4820]: I0203 12:12:11.980019 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-bdf699b59-9kjxj" event={"ID":"4420b566-5ea2-49ba-86f5-8f19b6ac98ab","Type":"ContainerDied","Data":"22c75645bb5a67c179fdf4d4cb1d266c010525a0f1af06b090698c7df74e57a7"} Feb 03 12:12:11 crc kubenswrapper[4820]: I0203 12:12:11.980037 4820 scope.go:117] "RemoveContainer" containerID="9ba043e4a71e2cfa574ca0aedd7587a9a04891b7ced9af88ed2eb08278bfd906" Feb 03 12:12:11 crc kubenswrapper[4820]: I0203 12:12:11.989060 4820 generic.go:334] "Generic (PLEG): container finished" podID="01ed4886-b4df-49ee-916a-a1599baee4b1" containerID="6ba3b8affb410fcd3cc52e6aa678c738f0dcbd78c85c31f09e95a27891508cd1" exitCode=0 Feb 03 12:12:11 crc kubenswrapper[4820]: I0203 12:12:11.989298 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-54c4dc7ccb-g8pg6" event={"ID":"01ed4886-b4df-49ee-916a-a1599baee4b1","Type":"ContainerDied","Data":"6ba3b8affb410fcd3cc52e6aa678c738f0dcbd78c85c31f09e95a27891508cd1"} Feb 03 12:12:11 crc kubenswrapper[4820]: I0203 12:12:11.989382 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-54c4dc7ccb-g8pg6" event={"ID":"01ed4886-b4df-49ee-916a-a1599baee4b1","Type":"ContainerDied","Data":"9cbb0f3f31a3f70f143753fdcc258a2fd721e6144ec887bf6587b42b71ce96f6"} Feb 03 12:12:12 crc kubenswrapper[4820]: I0203 12:12:12.000381 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-54c4dc7ccb-g8pg6" Feb 03 12:12:12 crc kubenswrapper[4820]: I0203 12:12:12.123396 4820 scope.go:117] "RemoveContainer" containerID="9ba043e4a71e2cfa574ca0aedd7587a9a04891b7ced9af88ed2eb08278bfd906" Feb 03 12:12:12 crc kubenswrapper[4820]: E0203 12:12:12.124263 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9ba043e4a71e2cfa574ca0aedd7587a9a04891b7ced9af88ed2eb08278bfd906\": container with ID starting with 9ba043e4a71e2cfa574ca0aedd7587a9a04891b7ced9af88ed2eb08278bfd906 not found: ID does not exist" containerID="9ba043e4a71e2cfa574ca0aedd7587a9a04891b7ced9af88ed2eb08278bfd906" Feb 03 12:12:12 crc kubenswrapper[4820]: I0203 12:12:12.124318 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9ba043e4a71e2cfa574ca0aedd7587a9a04891b7ced9af88ed2eb08278bfd906"} err="failed to get container status \"9ba043e4a71e2cfa574ca0aedd7587a9a04891b7ced9af88ed2eb08278bfd906\": rpc error: code = NotFound desc = could not find container \"9ba043e4a71e2cfa574ca0aedd7587a9a04891b7ced9af88ed2eb08278bfd906\": container with ID starting with 9ba043e4a71e2cfa574ca0aedd7587a9a04891b7ced9af88ed2eb08278bfd906 not found: ID does not exist" Feb 03 12:12:12 crc kubenswrapper[4820]: I0203 12:12:12.124349 4820 scope.go:117] "RemoveContainer" containerID="6ba3b8affb410fcd3cc52e6aa678c738f0dcbd78c85c31f09e95a27891508cd1" Feb 03 12:12:12 crc kubenswrapper[4820]: I0203 12:12:12.155258 4820 scope.go:117] "RemoveContainer" containerID="6ba3b8affb410fcd3cc52e6aa678c738f0dcbd78c85c31f09e95a27891508cd1" Feb 03 12:12:12 crc kubenswrapper[4820]: E0203 12:12:12.155770 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6ba3b8affb410fcd3cc52e6aa678c738f0dcbd78c85c31f09e95a27891508cd1\": container with ID starting with 6ba3b8affb410fcd3cc52e6aa678c738f0dcbd78c85c31f09e95a27891508cd1 not found: ID does not exist" containerID="6ba3b8affb410fcd3cc52e6aa678c738f0dcbd78c85c31f09e95a27891508cd1" Feb 03 12:12:12 crc kubenswrapper[4820]: I0203 12:12:12.155810 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ba3b8affb410fcd3cc52e6aa678c738f0dcbd78c85c31f09e95a27891508cd1"} err="failed to get container status \"6ba3b8affb410fcd3cc52e6aa678c738f0dcbd78c85c31f09e95a27891508cd1\": rpc error: code = NotFound desc = could not find container \"6ba3b8affb410fcd3cc52e6aa678c738f0dcbd78c85c31f09e95a27891508cd1\": container with ID starting with 6ba3b8affb410fcd3cc52e6aa678c738f0dcbd78c85c31f09e95a27891508cd1 not found: ID does not exist" Feb 03 12:12:12 crc kubenswrapper[4820]: I0203 12:12:12.175073 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-54c4dc7ccb-g8pg6"] Feb 03 12:12:12 crc kubenswrapper[4820]: I0203 12:12:12.179427 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-54c4dc7ccb-g8pg6"] Feb 03 12:12:12 crc kubenswrapper[4820]: I0203 12:12:12.183135 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-bdf699b59-9kjxj"] Feb 03 12:12:12 crc kubenswrapper[4820]: I0203 12:12:12.187698 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-bdf699b59-9kjxj"] Feb 03 
12:12:12 crc kubenswrapper[4820]: I0203 12:12:12.951135 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-9b8956944-vw228"] Feb 03 12:12:12 crc kubenswrapper[4820]: E0203 12:12:12.951645 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4420b566-5ea2-49ba-86f5-8f19b6ac98ab" containerName="controller-manager" Feb 03 12:12:12 crc kubenswrapper[4820]: I0203 12:12:12.951663 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="4420b566-5ea2-49ba-86f5-8f19b6ac98ab" containerName="controller-manager" Feb 03 12:12:12 crc kubenswrapper[4820]: E0203 12:12:12.951682 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01ed4886-b4df-49ee-916a-a1599baee4b1" containerName="route-controller-manager" Feb 03 12:12:12 crc kubenswrapper[4820]: I0203 12:12:12.951691 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="01ed4886-b4df-49ee-916a-a1599baee4b1" containerName="route-controller-manager" Feb 03 12:12:12 crc kubenswrapper[4820]: I0203 12:12:12.951807 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="4420b566-5ea2-49ba-86f5-8f19b6ac98ab" containerName="controller-manager" Feb 03 12:12:12 crc kubenswrapper[4820]: I0203 12:12:12.951822 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="01ed4886-b4df-49ee-916a-a1599baee4b1" containerName="route-controller-manager" Feb 03 12:12:12 crc kubenswrapper[4820]: I0203 12:12:12.952368 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-9b8956944-vw228" Feb 03 12:12:12 crc kubenswrapper[4820]: W0203 12:12:12.956176 4820 reflector.go:561] object-"openshift-route-controller-manager"/"config": failed to list *v1.ConfigMap: configmaps "config" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-route-controller-manager": no relationship found between node 'crc' and this object Feb 03 12:12:12 crc kubenswrapper[4820]: E0203 12:12:12.956361 4820 reflector.go:158] "Unhandled Error" err="object-\"openshift-route-controller-manager\"/\"config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"config\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-route-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 03 12:12:12 crc kubenswrapper[4820]: W0203 12:12:12.956618 4820 reflector.go:561] object-"openshift-route-controller-manager"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-route-controller-manager": no relationship found between node 'crc' and this object Feb 03 12:12:12 crc kubenswrapper[4820]: E0203 12:12:12.956664 4820 reflector.go:158] "Unhandled Error" err="object-\"openshift-route-controller-manager\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-route-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 03 12:12:12 crc kubenswrapper[4820]: W0203 12:12:12.957407 4820 reflector.go:561] 
object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2": failed to list *v1.Secret: secrets "route-controller-manager-sa-dockercfg-h2zr2" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-route-controller-manager": no relationship found between node 'crc' and this object Feb 03 12:12:12 crc kubenswrapper[4820]: E0203 12:12:12.957452 4820 reflector.go:158] "Unhandled Error" err="object-\"openshift-route-controller-manager\"/\"route-controller-manager-sa-dockercfg-h2zr2\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"route-controller-manager-sa-dockercfg-h2zr2\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-route-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 03 12:12:12 crc kubenswrapper[4820]: W0203 12:12:12.957581 4820 reflector.go:561] object-"openshift-route-controller-manager"/"client-ca": failed to list *v1.ConfigMap: configmaps "client-ca" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-route-controller-manager": no relationship found between node 'crc' and this object Feb 03 12:12:12 crc kubenswrapper[4820]: E0203 12:12:12.957671 4820 reflector.go:158] "Unhandled Error" err="object-\"openshift-route-controller-manager\"/\"client-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"client-ca\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-route-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 03 12:12:12 crc kubenswrapper[4820]: I0203 12:12:12.959997 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 03 12:12:12 crc kubenswrapper[4820]: I0203 12:12:12.962779 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5d64698f5d-qbpx5"] Feb 03 12:12:12 crc kubenswrapper[4820]: I0203 12:12:12.963764 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5d64698f5d-qbpx5" Feb 03 12:12:12 crc kubenswrapper[4820]: I0203 12:12:12.964004 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 03 12:12:12 crc kubenswrapper[4820]: I0203 12:12:12.969744 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 03 12:12:12 crc kubenswrapper[4820]: I0203 12:12:12.969789 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 03 12:12:12 crc kubenswrapper[4820]: I0203 12:12:12.969815 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 03 12:12:12 crc kubenswrapper[4820]: I0203 12:12:12.970016 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 03 12:12:12 crc kubenswrapper[4820]: I0203 12:12:12.970296 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 03 12:12:12 crc kubenswrapper[4820]: I0203 12:12:12.980736 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5d64698f5d-qbpx5"] Feb 03 12:12:12 crc kubenswrapper[4820]: I0203 12:12:12.981671 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 03 12:12:12 crc kubenswrapper[4820]: I0203 12:12:12.984352 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 03 12:12:12 crc kubenswrapper[4820]: I0203 12:12:12.989436 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-9b8956944-vw228"] Feb 03 12:12:13 crc kubenswrapper[4820]: I0203 12:12:13.019422 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8e13d076-a8f0-4a70-aa8e-671dd027e6da-serving-cert\") pod \"controller-manager-5d64698f5d-qbpx5\" (UID: \"8e13d076-a8f0-4a70-aa8e-671dd027e6da\") " pod="openshift-controller-manager/controller-manager-5d64698f5d-qbpx5" Feb 03 12:12:13 crc kubenswrapper[4820]: I0203 12:12:13.019482 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/35cf07e8-baa5-46c0-9226-22bdbcb2f569-serving-cert\") pod \"route-controller-manager-9b8956944-vw228\" (UID: \"35cf07e8-baa5-46c0-9226-22bdbcb2f569\") " pod="openshift-route-controller-manager/route-controller-manager-9b8956944-vw228" Feb 03 12:12:13 crc kubenswrapper[4820]: I0203 12:12:13.019522 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/35cf07e8-baa5-46c0-9226-22bdbcb2f569-config\") pod \"route-controller-manager-9b8956944-vw228\" (UID: \"35cf07e8-baa5-46c0-9226-22bdbcb2f569\") " pod="openshift-route-controller-manager/route-controller-manager-9b8956944-vw228" Feb 03 12:12:13 crc kubenswrapper[4820]: I0203 12:12:13.019545 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/35cf07e8-baa5-46c0-9226-22bdbcb2f569-client-ca\") pod \"route-controller-manager-9b8956944-vw228\" (UID: \"35cf07e8-baa5-46c0-9226-22bdbcb2f569\") " pod="openshift-route-controller-manager/route-controller-manager-9b8956944-vw228" Feb 03 12:12:13 crc kubenswrapper[4820]: I0203 12:12:13.019573 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8e13d076-a8f0-4a70-aa8e-671dd027e6da-proxy-ca-bundles\") pod \"controller-manager-5d64698f5d-qbpx5\" (UID: \"8e13d076-a8f0-4a70-aa8e-671dd027e6da\") " pod="openshift-controller-manager/controller-manager-5d64698f5d-qbpx5" Feb 03 12:12:13 crc kubenswrapper[4820]: I0203 12:12:13.019594 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mdqdp\" (UniqueName: \"kubernetes.io/projected/35cf07e8-baa5-46c0-9226-22bdbcb2f569-kube-api-access-mdqdp\") pod \"route-controller-manager-9b8956944-vw228\" (UID: \"35cf07e8-baa5-46c0-9226-22bdbcb2f569\") " pod="openshift-route-controller-manager/route-controller-manager-9b8956944-vw228" Feb 03 12:12:13 crc kubenswrapper[4820]: I0203 12:12:13.019635 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8e13d076-a8f0-4a70-aa8e-671dd027e6da-client-ca\") pod \"controller-manager-5d64698f5d-qbpx5\" (UID: \"8e13d076-a8f0-4a70-aa8e-671dd027e6da\") " pod="openshift-controller-manager/controller-manager-5d64698f5d-qbpx5" Feb 03 12:12:13 crc kubenswrapper[4820]: I0203 12:12:13.019663 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s2dpg\" (UniqueName: \"kubernetes.io/projected/8e13d076-a8f0-4a70-aa8e-671dd027e6da-kube-api-access-s2dpg\") pod \"controller-manager-5d64698f5d-qbpx5\" (UID: \"8e13d076-a8f0-4a70-aa8e-671dd027e6da\") " pod="openshift-controller-manager/controller-manager-5d64698f5d-qbpx5" Feb 03 12:12:13 crc kubenswrapper[4820]: I0203 12:12:13.019704 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e13d076-a8f0-4a70-aa8e-671dd027e6da-config\") pod \"controller-manager-5d64698f5d-qbpx5\" (UID: \"8e13d076-a8f0-4a70-aa8e-671dd027e6da\") " pod="openshift-controller-manager/controller-manager-5d64698f5d-qbpx5" Feb 03 12:12:13 crc kubenswrapper[4820]: I0203 12:12:13.121099 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dpg\" (UniqueName: \"kubernetes.io/projected/8e13d076-a8f0-4a70-aa8e-671dd027e6da-kube-api-access-s2dpg\") pod \"controller-manager-5d64698f5d-qbpx5\" (UID: \"8e13d076-a8f0-4a70-aa8e-671dd027e6da\") " pod="openshift-controller-manager/controller-manager-5d64698f5d-qbpx5" Feb 03 12:12:13 crc kubenswrapper[4820]: I0203 12:12:13.121534 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e13d076-a8f0-4a70-aa8e-671dd027e6da-config\") pod \"controller-manager-5d64698f5d-qbpx5\" (UID: \"8e13d076-a8f0-4a70-aa8e-671dd027e6da\") " pod="openshift-controller-manager/controller-manager-5d64698f5d-qbpx5" Feb 03 12:12:13 crc kubenswrapper[4820]: I0203 12:12:13.122897 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/8e13d076-a8f0-4a70-aa8e-671dd027e6da-config\") pod \"controller-manager-5d64698f5d-qbpx5\" (UID: \"8e13d076-a8f0-4a70-aa8e-671dd027e6da\") " pod="openshift-controller-manager/controller-manager-5d64698f5d-qbpx5" Feb 03 12:12:13 crc kubenswrapper[4820]: I0203 12:12:13.122975 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8e13d076-a8f0-4a70-aa8e-671dd027e6da-serving-cert\") pod \"controller-manager-5d64698f5d-qbpx5\" (UID: \"8e13d076-a8f0-4a70-aa8e-671dd027e6da\") " pod="openshift-controller-manager/controller-manager-5d64698f5d-qbpx5" Feb 03 12:12:13 crc kubenswrapper[4820]: I0203 12:12:13.123005 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/35cf07e8-baa5-46c0-9226-22bdbcb2f569-serving-cert\") pod \"route-controller-manager-9b8956944-vw228\" (UID: \"35cf07e8-baa5-46c0-9226-22bdbcb2f569\") " pod="openshift-route-controller-manager/route-controller-manager-9b8956944-vw228" Feb 03 12:12:13 crc kubenswrapper[4820]: I0203 12:12:13.123531 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/35cf07e8-baa5-46c0-9226-22bdbcb2f569-config\") pod \"route-controller-manager-9b8956944-vw228\" (UID: \"35cf07e8-baa5-46c0-9226-22bdbcb2f569\") " pod="openshift-route-controller-manager/route-controller-manager-9b8956944-vw228" Feb 03 12:12:13 crc kubenswrapper[4820]: I0203 12:12:13.123559 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/35cf07e8-baa5-46c0-9226-22bdbcb2f569-client-ca\") pod \"route-controller-manager-9b8956944-vw228\" (UID: \"35cf07e8-baa5-46c0-9226-22bdbcb2f569\") " pod="openshift-route-controller-manager/route-controller-manager-9b8956944-vw228" Feb 03 12:12:13 crc kubenswrapper[4820]: I0203 12:12:13.123578 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8e13d076-a8f0-4a70-aa8e-671dd027e6da-proxy-ca-bundles\") pod \"controller-manager-5d64698f5d-qbpx5\" (UID: \"8e13d076-a8f0-4a70-aa8e-671dd027e6da\") " pod="openshift-controller-manager/controller-manager-5d64698f5d-qbpx5" Feb 03 12:12:13 crc kubenswrapper[4820]: I0203 12:12:13.123597 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mdqdp\" (UniqueName: \"kubernetes.io/projected/35cf07e8-baa5-46c0-9226-22bdbcb2f569-kube-api-access-mdqdp\") pod \"route-controller-manager-9b8956944-vw228\" (UID: \"35cf07e8-baa5-46c0-9226-22bdbcb2f569\") " pod="openshift-route-controller-manager/route-controller-manager-9b8956944-vw228" Feb 03 12:12:13 crc kubenswrapper[4820]: I0203 12:12:13.123628 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8e13d076-a8f0-4a70-aa8e-671dd027e6da-client-ca\") pod \"controller-manager-5d64698f5d-qbpx5\" (UID: \"8e13d076-a8f0-4a70-aa8e-671dd027e6da\") " pod="openshift-controller-manager/controller-manager-5d64698f5d-qbpx5" Feb 03 12:12:13 crc kubenswrapper[4820]: I0203 12:12:13.124285 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8e13d076-a8f0-4a70-aa8e-671dd027e6da-client-ca\") pod \"controller-manager-5d64698f5d-qbpx5\" (UID: \"8e13d076-a8f0-4a70-aa8e-671dd027e6da\") 
" pod="openshift-controller-manager/controller-manager-5d64698f5d-qbpx5" Feb 03 12:12:13 crc kubenswrapper[4820]: I0203 12:12:13.124949 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8e13d076-a8f0-4a70-aa8e-671dd027e6da-proxy-ca-bundles\") pod \"controller-manager-5d64698f5d-qbpx5\" (UID: \"8e13d076-a8f0-4a70-aa8e-671dd027e6da\") " pod="openshift-controller-manager/controller-manager-5d64698f5d-qbpx5" Feb 03 12:12:13 crc kubenswrapper[4820]: I0203 12:12:13.127314 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8e13d076-a8f0-4a70-aa8e-671dd027e6da-serving-cert\") pod \"controller-manager-5d64698f5d-qbpx5\" (UID: \"8e13d076-a8f0-4a70-aa8e-671dd027e6da\") " pod="openshift-controller-manager/controller-manager-5d64698f5d-qbpx5" Feb 03 12:12:13 crc kubenswrapper[4820]: I0203 12:12:13.127399 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/35cf07e8-baa5-46c0-9226-22bdbcb2f569-serving-cert\") pod \"route-controller-manager-9b8956944-vw228\" (UID: \"35cf07e8-baa5-46c0-9226-22bdbcb2f569\") " pod="openshift-route-controller-manager/route-controller-manager-9b8956944-vw228" Feb 03 12:12:13 crc kubenswrapper[4820]: I0203 12:12:13.145307 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dpg\" (UniqueName: \"kubernetes.io/projected/8e13d076-a8f0-4a70-aa8e-671dd027e6da-kube-api-access-s2dpg\") pod \"controller-manager-5d64698f5d-qbpx5\" (UID: \"8e13d076-a8f0-4a70-aa8e-671dd027e6da\") " pod="openshift-controller-manager/controller-manager-5d64698f5d-qbpx5" Feb 03 12:12:13 crc kubenswrapper[4820]: I0203 12:12:13.154057 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ed4886-b4df-49ee-916a-a1599baee4b1" path="/var/lib/kubelet/pods/01ed4886-b4df-49ee-916a-a1599baee4b1/volumes" Feb 03 12:12:13 crc kubenswrapper[4820]: I0203 12:12:13.154779 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4420b566-5ea2-49ba-86f5-8f19b6ac98ab" path="/var/lib/kubelet/pods/4420b566-5ea2-49ba-86f5-8f19b6ac98ab/volumes" Feb 03 12:12:13 crc kubenswrapper[4820]: I0203 12:12:13.285998 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5d64698f5d-qbpx5" Feb 03 12:12:13 crc kubenswrapper[4820]: I0203 12:12:13.714150 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5d64698f5d-qbpx5"] Feb 03 12:12:13 crc kubenswrapper[4820]: W0203 12:12:13.725403 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8e13d076_a8f0_4a70_aa8e_671dd027e6da.slice/crio-69fe23e6ee4c8d3369a35a7b7cf17f2a9569f9ff20a7f4ebdb1bdeee724b781e WatchSource:0}: Error finding container 69fe23e6ee4c8d3369a35a7b7cf17f2a9569f9ff20a7f4ebdb1bdeee724b781e: Status 404 returned error can't find the container with id 69fe23e6ee4c8d3369a35a7b7cf17f2a9569f9ff20a7f4ebdb1bdeee724b781e Feb 03 12:12:14 crc kubenswrapper[4820]: I0203 12:12:14.010822 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5d64698f5d-qbpx5" event={"ID":"8e13d076-a8f0-4a70-aa8e-671dd027e6da","Type":"ContainerStarted","Data":"6abad4d8bce500b7df2c16ae6aacb4d88fb45ffca2bac5565a18ae62c73ce335"} Feb 03 12:12:14 crc kubenswrapper[4820]: I0203 12:12:14.011122 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5d64698f5d-qbpx5" event={"ID":"8e13d076-a8f0-4a70-aa8e-671dd027e6da","Type":"ContainerStarted","Data":"69fe23e6ee4c8d3369a35a7b7cf17f2a9569f9ff20a7f4ebdb1bdeee724b781e"} Feb 03 12:12:14 crc kubenswrapper[4820]: I0203 12:12:14.011261 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-5d64698f5d-qbpx5" Feb 03 12:12:14 crc kubenswrapper[4820]: I0203 12:12:14.012733 4820 patch_prober.go:28] interesting pod/controller-manager-5d64698f5d-qbpx5 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.71:8443/healthz\": dial tcp 10.217.0.71:8443: connect: connection refused" start-of-body= Feb 03 12:12:14 crc kubenswrapper[4820]: I0203 12:12:14.012765 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-5d64698f5d-qbpx5" podUID="8e13d076-a8f0-4a70-aa8e-671dd027e6da" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.71:8443/healthz\": dial tcp 10.217.0.71:8443: connect: connection refused" Feb 03 12:12:14 crc kubenswrapper[4820]: I0203 12:12:14.030279 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5d64698f5d-qbpx5" podStartSLOduration=4.030262855 podStartE2EDuration="4.030262855s" podCreationTimestamp="2026-02-03 12:12:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:12:14.028992339 +0000 UTC m=+451.552068223" watchObservedRunningTime="2026-02-03 12:12:14.030262855 +0000 UTC m=+451.553338719" Feb 03 12:12:14 crc kubenswrapper[4820]: I0203 12:12:14.089731 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 03 12:12:14 crc kubenswrapper[4820]: I0203 12:12:14.095202 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/35cf07e8-baa5-46c0-9226-22bdbcb2f569-config\") pod \"route-controller-manager-9b8956944-vw228\" (UID: 
\"35cf07e8-baa5-46c0-9226-22bdbcb2f569\") " pod="openshift-route-controller-manager/route-controller-manager-9b8956944-vw228" Feb 03 12:12:14 crc kubenswrapper[4820]: E0203 12:12:14.124754 4820 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: failed to sync configmap cache: timed out waiting for the condition Feb 03 12:12:14 crc kubenswrapper[4820]: E0203 12:12:14.124850 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/35cf07e8-baa5-46c0-9226-22bdbcb2f569-client-ca podName:35cf07e8-baa5-46c0-9226-22bdbcb2f569 nodeName:}" failed. No retries permitted until 2026-02-03 12:12:14.624829143 +0000 UTC m=+452.147905007 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/35cf07e8-baa5-46c0-9226-22bdbcb2f569-client-ca") pod "route-controller-manager-9b8956944-vw228" (UID: "35cf07e8-baa5-46c0-9226-22bdbcb2f569") : failed to sync configmap cache: timed out waiting for the condition Feb 03 12:12:14 crc kubenswrapper[4820]: E0203 12:12:14.141787 4820 projected.go:288] Couldn't get configMap openshift-route-controller-manager/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 03 12:12:14 crc kubenswrapper[4820]: E0203 12:12:14.141826 4820 projected.go:194] Error preparing data for projected volume kube-api-access-mdqdp for pod openshift-route-controller-manager/route-controller-manager-9b8956944-vw228: failed to sync configmap cache: timed out waiting for the condition Feb 03 12:12:14 crc kubenswrapper[4820]: E0203 12:12:14.141920 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/35cf07e8-baa5-46c0-9226-22bdbcb2f569-kube-api-access-mdqdp podName:35cf07e8-baa5-46c0-9226-22bdbcb2f569 nodeName:}" failed. No retries permitted until 2026-02-03 12:12:14.641882721 +0000 UTC m=+452.164958585 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-mdqdp" (UniqueName: "kubernetes.io/projected/35cf07e8-baa5-46c0-9226-22bdbcb2f569-kube-api-access-mdqdp") pod "route-controller-manager-9b8956944-vw228" (UID: "35cf07e8-baa5-46c0-9226-22bdbcb2f569") : failed to sync configmap cache: timed out waiting for the condition Feb 03 12:12:14 crc kubenswrapper[4820]: I0203 12:12:14.215001 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 03 12:12:14 crc kubenswrapper[4820]: I0203 12:12:14.276005 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 03 12:12:14 crc kubenswrapper[4820]: I0203 12:12:14.349003 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 03 12:12:14 crc kubenswrapper[4820]: I0203 12:12:14.641433 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/35cf07e8-baa5-46c0-9226-22bdbcb2f569-client-ca\") pod \"route-controller-manager-9b8956944-vw228\" (UID: \"35cf07e8-baa5-46c0-9226-22bdbcb2f569\") " pod="openshift-route-controller-manager/route-controller-manager-9b8956944-vw228" Feb 03 12:12:14 crc kubenswrapper[4820]: I0203 12:12:14.642266 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/35cf07e8-baa5-46c0-9226-22bdbcb2f569-client-ca\") pod \"route-controller-manager-9b8956944-vw228\" (UID: \"35cf07e8-baa5-46c0-9226-22bdbcb2f569\") " pod="openshift-route-controller-manager/route-controller-manager-9b8956944-vw228" Feb 03 12:12:14 crc kubenswrapper[4820]: I0203 12:12:14.742309 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mdqdp\" (UniqueName: \"kubernetes.io/projected/35cf07e8-baa5-46c0-9226-22bdbcb2f569-kube-api-access-mdqdp\") pod \"route-controller-manager-9b8956944-vw228\" (UID: \"35cf07e8-baa5-46c0-9226-22bdbcb2f569\") " pod="openshift-route-controller-manager/route-controller-manager-9b8956944-vw228" Feb 03 12:12:14 crc kubenswrapper[4820]: I0203 12:12:14.748982 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mdqdp\" (UniqueName: \"kubernetes.io/projected/35cf07e8-baa5-46c0-9226-22bdbcb2f569-kube-api-access-mdqdp\") pod \"route-controller-manager-9b8956944-vw228\" (UID: \"35cf07e8-baa5-46c0-9226-22bdbcb2f569\") " pod="openshift-route-controller-manager/route-controller-manager-9b8956944-vw228" Feb 03 12:12:14 crc kubenswrapper[4820]: I0203 12:12:14.771077 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-9b8956944-vw228" Feb 03 12:12:15 crc kubenswrapper[4820]: I0203 12:12:15.020255 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5d64698f5d-qbpx5" Feb 03 12:12:15 crc kubenswrapper[4820]: I0203 12:12:15.258135 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-9b8956944-vw228"] Feb 03 12:12:16 crc kubenswrapper[4820]: I0203 12:12:16.024195 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-9b8956944-vw228" event={"ID":"35cf07e8-baa5-46c0-9226-22bdbcb2f569","Type":"ContainerStarted","Data":"23128f866cad40b5bee6f8e2cdfe1b9b53449c0551356fc9128762b69cd6d698"} Feb 03 12:12:16 crc kubenswrapper[4820]: I0203 12:12:16.024762 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-9b8956944-vw228" event={"ID":"35cf07e8-baa5-46c0-9226-22bdbcb2f569","Type":"ContainerStarted","Data":"9f41ec6b6a5a997f24e23ceec164b3798378b694f33ea1befc6a67fce2bc2f86"} Feb 03 12:12:16 crc kubenswrapper[4820]: I0203 12:12:16.052324 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-9b8956944-vw228" podStartSLOduration=5.052291131 podStartE2EDuration="5.052291131s" podCreationTimestamp="2026-02-03 12:12:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:12:16.042042311 +0000 UTC m=+453.565118175" watchObservedRunningTime="2026-02-03 12:12:16.052291131 +0000 UTC m=+453.575366995" Feb 03 12:12:17 crc kubenswrapper[4820]: I0203 12:12:17.029292 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-9b8956944-vw228" Feb 03 12:12:17 crc kubenswrapper[4820]: I0203 12:12:17.034380 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-9b8956944-vw228" Feb 03 12:12:31 crc kubenswrapper[4820]: I0203 12:12:31.365126 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qj7xr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 12:12:31 crc kubenswrapper[4820]: I0203 12:12:31.366004 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 12:12:31 crc kubenswrapper[4820]: I0203 12:12:31.366715 4820 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" Feb 03 12:12:31 crc kubenswrapper[4820]: I0203 12:12:31.367549 4820 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4e6a324869d2f58d634802d3f06668e5da2b1da1808292287787329971cfd4aa"} pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" 
containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 03 12:12:31 crc kubenswrapper[4820]: I0203 12:12:31.367629 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" containerName="machine-config-daemon" containerID="cri-o://4e6a324869d2f58d634802d3f06668e5da2b1da1808292287787329971cfd4aa" gracePeriod=600 Feb 03 12:12:32 crc kubenswrapper[4820]: I0203 12:12:32.184721 4820 generic.go:334] "Generic (PLEG): container finished" podID="2c02def6-29f2-448e-80ec-0c8ee290f053" containerID="4e6a324869d2f58d634802d3f06668e5da2b1da1808292287787329971cfd4aa" exitCode=0 Feb 03 12:12:32 crc kubenswrapper[4820]: I0203 12:12:32.184834 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" event={"ID":"2c02def6-29f2-448e-80ec-0c8ee290f053","Type":"ContainerDied","Data":"4e6a324869d2f58d634802d3f06668e5da2b1da1808292287787329971cfd4aa"} Feb 03 12:12:32 crc kubenswrapper[4820]: I0203 12:12:32.185395 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" event={"ID":"2c02def6-29f2-448e-80ec-0c8ee290f053","Type":"ContainerStarted","Data":"856f08893c1ddb14ce7ea228b3d8908439ab8ab4b376483bac3f27a346ac49e0"} Feb 03 12:12:32 crc kubenswrapper[4820]: I0203 12:12:32.185427 4820 scope.go:117] "RemoveContainer" containerID="7e3612cc50c75efc4721ade62e3caf8a6cbcb41c49183b50da3639d7fce97b9d" Feb 03 12:12:37 crc kubenswrapper[4820]: I0203 12:12:37.453226 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-dt8ch"] Feb 03 12:12:37 crc kubenswrapper[4820]: I0203 12:12:37.457120 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-dt8ch" podUID="682f83dc-ba7f-474f-89d2-6effbcf2806b" containerName="registry-server" containerID="cri-o://bcdc8760bc1bc68380a3300773bc16d7c49530853d2ccff845beffc0b30b51ff" gracePeriod=30 Feb 03 12:12:37 crc kubenswrapper[4820]: I0203 12:12:37.461410 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5vfzj"] Feb 03 12:12:37 crc kubenswrapper[4820]: I0203 12:12:37.461718 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-5vfzj" podUID="829fef9f-938d-4d61-9584-bf061063c952" containerName="registry-server" containerID="cri-o://6082b020c5d798741abb1c8e79f0e32b6622898883ade6085aa745b9601b3b45" gracePeriod=30 Feb 03 12:12:37 crc kubenswrapper[4820]: I0203 12:12:37.486772 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-9w662"] Feb 03 12:12:37 crc kubenswrapper[4820]: I0203 12:12:37.487081 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-9w662" podUID="92dde085-8a2b-4c9f-947f-441ea67b8622" containerName="marketplace-operator" containerID="cri-o://280f023759f9ef7ae8dddf1f214830aff16da4836086e4cee77b773efd3b347b" gracePeriod=30 Feb 03 12:12:37 crc kubenswrapper[4820]: I0203 12:12:37.491518 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-dvpt2"] Feb 03 12:12:37 crc kubenswrapper[4820]: I0203 12:12:37.491877 4820 kuberuntime_container.go:808] "Killing container with a 
grace period" pod="openshift-marketplace/redhat-marketplace-dvpt2" podUID="6fdd485f-526a-4367-ba6d-b68246ed45a0" containerName="registry-server" containerID="cri-o://f5f00dc439199ae0966e48a82dd93983c914990d5c9d5fc70ddd207e282b1aa9" gracePeriod=30 Feb 03 12:12:37 crc kubenswrapper[4820]: I0203 12:12:37.508814 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zrlrv"] Feb 03 12:12:37 crc kubenswrapper[4820]: I0203 12:12:37.509148 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-zrlrv" podUID="030d5842-d0b7-4e4f-ad63-58848630a1ca" containerName="registry-server" containerID="cri-o://46bffd8733841c34dde692c1bb14efc701beac022c68cd732fbfcf87846086e0" gracePeriod=30 Feb 03 12:12:37 crc kubenswrapper[4820]: I0203 12:12:37.517397 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-qr29p"] Feb 03 12:12:37 crc kubenswrapper[4820]: I0203 12:12:37.518353 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-qr29p" Feb 03 12:12:37 crc kubenswrapper[4820]: I0203 12:12:37.527875 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-qr29p"] Feb 03 12:12:37 crc kubenswrapper[4820]: I0203 12:12:37.642682 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/5bd7dcaf-6cbd-4e0a-a9f4-9e3cc9a2a738-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-qr29p\" (UID: \"5bd7dcaf-6cbd-4e0a-a9f4-9e3cc9a2a738\") " pod="openshift-marketplace/marketplace-operator-79b997595-qr29p" Feb 03 12:12:37 crc kubenswrapper[4820]: I0203 12:12:37.643122 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cmjvd\" (UniqueName: \"kubernetes.io/projected/5bd7dcaf-6cbd-4e0a-a9f4-9e3cc9a2a738-kube-api-access-cmjvd\") pod \"marketplace-operator-79b997595-qr29p\" (UID: \"5bd7dcaf-6cbd-4e0a-a9f4-9e3cc9a2a738\") " pod="openshift-marketplace/marketplace-operator-79b997595-qr29p" Feb 03 12:12:37 crc kubenswrapper[4820]: I0203 12:12:37.643158 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5bd7dcaf-6cbd-4e0a-a9f4-9e3cc9a2a738-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-qr29p\" (UID: \"5bd7dcaf-6cbd-4e0a-a9f4-9e3cc9a2a738\") " pod="openshift-marketplace/marketplace-operator-79b997595-qr29p" Feb 03 12:12:37 crc kubenswrapper[4820]: I0203 12:12:37.744684 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cmjvd\" (UniqueName: \"kubernetes.io/projected/5bd7dcaf-6cbd-4e0a-a9f4-9e3cc9a2a738-kube-api-access-cmjvd\") pod \"marketplace-operator-79b997595-qr29p\" (UID: \"5bd7dcaf-6cbd-4e0a-a9f4-9e3cc9a2a738\") " pod="openshift-marketplace/marketplace-operator-79b997595-qr29p" Feb 03 12:12:37 crc kubenswrapper[4820]: I0203 12:12:37.744768 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5bd7dcaf-6cbd-4e0a-a9f4-9e3cc9a2a738-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-qr29p\" (UID: \"5bd7dcaf-6cbd-4e0a-a9f4-9e3cc9a2a738\") " 
pod="openshift-marketplace/marketplace-operator-79b997595-qr29p" Feb 03 12:12:37 crc kubenswrapper[4820]: I0203 12:12:37.744811 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/5bd7dcaf-6cbd-4e0a-a9f4-9e3cc9a2a738-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-qr29p\" (UID: \"5bd7dcaf-6cbd-4e0a-a9f4-9e3cc9a2a738\") " pod="openshift-marketplace/marketplace-operator-79b997595-qr29p" Feb 03 12:12:37 crc kubenswrapper[4820]: I0203 12:12:37.747655 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5bd7dcaf-6cbd-4e0a-a9f4-9e3cc9a2a738-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-qr29p\" (UID: \"5bd7dcaf-6cbd-4e0a-a9f4-9e3cc9a2a738\") " pod="openshift-marketplace/marketplace-operator-79b997595-qr29p" Feb 03 12:12:37 crc kubenswrapper[4820]: I0203 12:12:37.757911 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/5bd7dcaf-6cbd-4e0a-a9f4-9e3cc9a2a738-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-qr29p\" (UID: \"5bd7dcaf-6cbd-4e0a-a9f4-9e3cc9a2a738\") " pod="openshift-marketplace/marketplace-operator-79b997595-qr29p" Feb 03 12:12:37 crc kubenswrapper[4820]: I0203 12:12:37.764444 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cmjvd\" (UniqueName: \"kubernetes.io/projected/5bd7dcaf-6cbd-4e0a-a9f4-9e3cc9a2a738-kube-api-access-cmjvd\") pod \"marketplace-operator-79b997595-qr29p\" (UID: \"5bd7dcaf-6cbd-4e0a-a9f4-9e3cc9a2a738\") " pod="openshift-marketplace/marketplace-operator-79b997595-qr29p" Feb 03 12:12:37 crc kubenswrapper[4820]: I0203 12:12:37.841468 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-qr29p" Feb 03 12:12:38 crc kubenswrapper[4820]: I0203 12:12:38.056402 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dt8ch" Feb 03 12:12:38 crc kubenswrapper[4820]: I0203 12:12:38.233259 4820 generic.go:334] "Generic (PLEG): container finished" podID="682f83dc-ba7f-474f-89d2-6effbcf2806b" containerID="bcdc8760bc1bc68380a3300773bc16d7c49530853d2ccff845beffc0b30b51ff" exitCode=0 Feb 03 12:12:38 crc kubenswrapper[4820]: I0203 12:12:38.233364 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dt8ch" event={"ID":"682f83dc-ba7f-474f-89d2-6effbcf2806b","Type":"ContainerDied","Data":"bcdc8760bc1bc68380a3300773bc16d7c49530853d2ccff845beffc0b30b51ff"} Feb 03 12:12:38 crc kubenswrapper[4820]: I0203 12:12:38.233398 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dt8ch" event={"ID":"682f83dc-ba7f-474f-89d2-6effbcf2806b","Type":"ContainerDied","Data":"bd95936fafbe8cf887bbb9830a4eda6a1883bf43050fffa7f5caf5768449204b"} Feb 03 12:12:38 crc kubenswrapper[4820]: I0203 12:12:38.233420 4820 scope.go:117] "RemoveContainer" containerID="bcdc8760bc1bc68380a3300773bc16d7c49530853d2ccff845beffc0b30b51ff" Feb 03 12:12:38 crc kubenswrapper[4820]: I0203 12:12:38.233568 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-dt8ch" Feb 03 12:12:38 crc kubenswrapper[4820]: I0203 12:12:38.239407 4820 generic.go:334] "Generic (PLEG): container finished" podID="829fef9f-938d-4d61-9584-bf061063c952" containerID="6082b020c5d798741abb1c8e79f0e32b6622898883ade6085aa745b9601b3b45" exitCode=0 Feb 03 12:12:38 crc kubenswrapper[4820]: I0203 12:12:38.239513 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5vfzj" event={"ID":"829fef9f-938d-4d61-9584-bf061063c952","Type":"ContainerDied","Data":"6082b020c5d798741abb1c8e79f0e32b6622898883ade6085aa745b9601b3b45"} Feb 03 12:12:38 crc kubenswrapper[4820]: I0203 12:12:38.251291 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/682f83dc-ba7f-474f-89d2-6effbcf2806b-utilities\") pod \"682f83dc-ba7f-474f-89d2-6effbcf2806b\" (UID: \"682f83dc-ba7f-474f-89d2-6effbcf2806b\") " Feb 03 12:12:38 crc kubenswrapper[4820]: I0203 12:12:38.251458 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/682f83dc-ba7f-474f-89d2-6effbcf2806b-catalog-content\") pod \"682f83dc-ba7f-474f-89d2-6effbcf2806b\" (UID: \"682f83dc-ba7f-474f-89d2-6effbcf2806b\") " Feb 03 12:12:38 crc kubenswrapper[4820]: I0203 12:12:38.251492 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k6rdz\" (UniqueName: \"kubernetes.io/projected/682f83dc-ba7f-474f-89d2-6effbcf2806b-kube-api-access-k6rdz\") pod \"682f83dc-ba7f-474f-89d2-6effbcf2806b\" (UID: \"682f83dc-ba7f-474f-89d2-6effbcf2806b\") " Feb 03 12:12:38 crc kubenswrapper[4820]: I0203 12:12:38.252675 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/682f83dc-ba7f-474f-89d2-6effbcf2806b-utilities" (OuterVolumeSpecName: "utilities") pod "682f83dc-ba7f-474f-89d2-6effbcf2806b" (UID: "682f83dc-ba7f-474f-89d2-6effbcf2806b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 12:12:38 crc kubenswrapper[4820]: I0203 12:12:38.252994 4820 generic.go:334] "Generic (PLEG): container finished" podID="6fdd485f-526a-4367-ba6d-b68246ed45a0" containerID="f5f00dc439199ae0966e48a82dd93983c914990d5c9d5fc70ddd207e282b1aa9" exitCode=0 Feb 03 12:12:38 crc kubenswrapper[4820]: I0203 12:12:38.253080 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dvpt2" event={"ID":"6fdd485f-526a-4367-ba6d-b68246ed45a0","Type":"ContainerDied","Data":"f5f00dc439199ae0966e48a82dd93983c914990d5c9d5fc70ddd207e282b1aa9"} Feb 03 12:12:38 crc kubenswrapper[4820]: I0203 12:12:38.257755 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/682f83dc-ba7f-474f-89d2-6effbcf2806b-kube-api-access-k6rdz" (OuterVolumeSpecName: "kube-api-access-k6rdz") pod "682f83dc-ba7f-474f-89d2-6effbcf2806b" (UID: "682f83dc-ba7f-474f-89d2-6effbcf2806b"). InnerVolumeSpecName "kube-api-access-k6rdz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:12:38 crc kubenswrapper[4820]: I0203 12:12:38.258872 4820 generic.go:334] "Generic (PLEG): container finished" podID="92dde085-8a2b-4c9f-947f-441ea67b8622" containerID="280f023759f9ef7ae8dddf1f214830aff16da4836086e4cee77b773efd3b347b" exitCode=0 Feb 03 12:12:38 crc kubenswrapper[4820]: I0203 12:12:38.259086 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-9w662" event={"ID":"92dde085-8a2b-4c9f-947f-441ea67b8622","Type":"ContainerDied","Data":"280f023759f9ef7ae8dddf1f214830aff16da4836086e4cee77b773efd3b347b"} Feb 03 12:12:38 crc kubenswrapper[4820]: I0203 12:12:38.262788 4820 generic.go:334] "Generic (PLEG): container finished" podID="030d5842-d0b7-4e4f-ad63-58848630a1ca" containerID="46bffd8733841c34dde692c1bb14efc701beac022c68cd732fbfcf87846086e0" exitCode=0 Feb 03 12:12:38 crc kubenswrapper[4820]: I0203 12:12:38.262824 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zrlrv" event={"ID":"030d5842-d0b7-4e4f-ad63-58848630a1ca","Type":"ContainerDied","Data":"46bffd8733841c34dde692c1bb14efc701beac022c68cd732fbfcf87846086e0"} Feb 03 12:12:38 crc kubenswrapper[4820]: I0203 12:12:38.265411 4820 scope.go:117] "RemoveContainer" containerID="e8fbfe00d2ccdfdefdb859087592fbc79365fb5c290acc49e933b2513079ee01" Feb 03 12:12:38 crc kubenswrapper[4820]: I0203 12:12:38.285914 4820 scope.go:117] "RemoveContainer" containerID="5a2d00a8406b2f79c8b348d38148cdc25692c7c9d425d6f81bbcddd0551302fc" Feb 03 12:12:38 crc kubenswrapper[4820]: I0203 12:12:38.302185 4820 scope.go:117] "RemoveContainer" containerID="bcdc8760bc1bc68380a3300773bc16d7c49530853d2ccff845beffc0b30b51ff" Feb 03 12:12:38 crc kubenswrapper[4820]: E0203 12:12:38.302787 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bcdc8760bc1bc68380a3300773bc16d7c49530853d2ccff845beffc0b30b51ff\": container with ID starting with bcdc8760bc1bc68380a3300773bc16d7c49530853d2ccff845beffc0b30b51ff not found: ID does not exist" containerID="bcdc8760bc1bc68380a3300773bc16d7c49530853d2ccff845beffc0b30b51ff" Feb 03 12:12:38 crc kubenswrapper[4820]: I0203 12:12:38.302913 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bcdc8760bc1bc68380a3300773bc16d7c49530853d2ccff845beffc0b30b51ff"} err="failed to get container status \"bcdc8760bc1bc68380a3300773bc16d7c49530853d2ccff845beffc0b30b51ff\": rpc error: code = NotFound desc = could not find container \"bcdc8760bc1bc68380a3300773bc16d7c49530853d2ccff845beffc0b30b51ff\": container with ID starting with bcdc8760bc1bc68380a3300773bc16d7c49530853d2ccff845beffc0b30b51ff not found: ID does not exist" Feb 03 12:12:38 crc kubenswrapper[4820]: I0203 12:12:38.303023 4820 scope.go:117] "RemoveContainer" containerID="e8fbfe00d2ccdfdefdb859087592fbc79365fb5c290acc49e933b2513079ee01" Feb 03 12:12:38 crc kubenswrapper[4820]: E0203 12:12:38.303838 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e8fbfe00d2ccdfdefdb859087592fbc79365fb5c290acc49e933b2513079ee01\": container with ID starting with e8fbfe00d2ccdfdefdb859087592fbc79365fb5c290acc49e933b2513079ee01 not found: ID does not exist" containerID="e8fbfe00d2ccdfdefdb859087592fbc79365fb5c290acc49e933b2513079ee01" Feb 03 12:12:38 crc kubenswrapper[4820]: I0203 12:12:38.303932 4820 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e8fbfe00d2ccdfdefdb859087592fbc79365fb5c290acc49e933b2513079ee01"} err="failed to get container status \"e8fbfe00d2ccdfdefdb859087592fbc79365fb5c290acc49e933b2513079ee01\": rpc error: code = NotFound desc = could not find container \"e8fbfe00d2ccdfdefdb859087592fbc79365fb5c290acc49e933b2513079ee01\": container with ID starting with e8fbfe00d2ccdfdefdb859087592fbc79365fb5c290acc49e933b2513079ee01 not found: ID does not exist" Feb 03 12:12:38 crc kubenswrapper[4820]: I0203 12:12:38.303965 4820 scope.go:117] "RemoveContainer" containerID="5a2d00a8406b2f79c8b348d38148cdc25692c7c9d425d6f81bbcddd0551302fc" Feb 03 12:12:38 crc kubenswrapper[4820]: E0203 12:12:38.304380 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5a2d00a8406b2f79c8b348d38148cdc25692c7c9d425d6f81bbcddd0551302fc\": container with ID starting with 5a2d00a8406b2f79c8b348d38148cdc25692c7c9d425d6f81bbcddd0551302fc not found: ID does not exist" containerID="5a2d00a8406b2f79c8b348d38148cdc25692c7c9d425d6f81bbcddd0551302fc" Feb 03 12:12:38 crc kubenswrapper[4820]: I0203 12:12:38.304478 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5a2d00a8406b2f79c8b348d38148cdc25692c7c9d425d6f81bbcddd0551302fc"} err="failed to get container status \"5a2d00a8406b2f79c8b348d38148cdc25692c7c9d425d6f81bbcddd0551302fc\": rpc error: code = NotFound desc = could not find container \"5a2d00a8406b2f79c8b348d38148cdc25692c7c9d425d6f81bbcddd0551302fc\": container with ID starting with 5a2d00a8406b2f79c8b348d38148cdc25692c7c9d425d6f81bbcddd0551302fc not found: ID does not exist" Feb 03 12:12:38 crc kubenswrapper[4820]: I0203 12:12:38.304571 4820 scope.go:117] "RemoveContainer" containerID="67b885316ec9f5e784fc1adc076ae1f874aad7366377cb7270df56b6acafe0e1" Feb 03 12:12:38 crc kubenswrapper[4820]: I0203 12:12:38.323128 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/682f83dc-ba7f-474f-89d2-6effbcf2806b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "682f83dc-ba7f-474f-89d2-6effbcf2806b" (UID: "682f83dc-ba7f-474f-89d2-6effbcf2806b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 12:12:38 crc kubenswrapper[4820]: I0203 12:12:38.357022 4820 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/682f83dc-ba7f-474f-89d2-6effbcf2806b-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 03 12:12:38 crc kubenswrapper[4820]: I0203 12:12:38.357066 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k6rdz\" (UniqueName: \"kubernetes.io/projected/682f83dc-ba7f-474f-89d2-6effbcf2806b-kube-api-access-k6rdz\") on node \"crc\" DevicePath \"\"" Feb 03 12:12:38 crc kubenswrapper[4820]: I0203 12:12:38.357079 4820 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/682f83dc-ba7f-474f-89d2-6effbcf2806b-utilities\") on node \"crc\" DevicePath \"\"" Feb 03 12:12:38 crc kubenswrapper[4820]: I0203 12:12:38.368090 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zrlrv" Feb 03 12:12:38 crc kubenswrapper[4820]: I0203 12:12:38.373794 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5vfzj" Feb 03 12:12:38 crc kubenswrapper[4820]: I0203 12:12:38.394213 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dvpt2" Feb 03 12:12:38 crc kubenswrapper[4820]: I0203 12:12:38.434268 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-9w662" Feb 03 12:12:38 crc kubenswrapper[4820]: I0203 12:12:38.525703 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-qr29p"] Feb 03 12:12:38 crc kubenswrapper[4820]: I0203 12:12:38.560017 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6fdd485f-526a-4367-ba6d-b68246ed45a0-catalog-content\") pod \"6fdd485f-526a-4367-ba6d-b68246ed45a0\" (UID: \"6fdd485f-526a-4367-ba6d-b68246ed45a0\") " Feb 03 12:12:38 crc kubenswrapper[4820]: I0203 12:12:38.560113 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mcfzd\" (UniqueName: \"kubernetes.io/projected/829fef9f-938d-4d61-9584-bf061063c952-kube-api-access-mcfzd\") pod \"829fef9f-938d-4d61-9584-bf061063c952\" (UID: \"829fef9f-938d-4d61-9584-bf061063c952\") " Feb 03 12:12:38 crc kubenswrapper[4820]: I0203 12:12:38.560137 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/030d5842-d0b7-4e4f-ad63-58848630a1ca-catalog-content\") pod \"030d5842-d0b7-4e4f-ad63-58848630a1ca\" (UID: \"030d5842-d0b7-4e4f-ad63-58848630a1ca\") " Feb 03 12:12:38 crc kubenswrapper[4820]: I0203 12:12:38.560492 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/92dde085-8a2b-4c9f-947f-441ea67b8622-marketplace-trusted-ca\") pod \"92dde085-8a2b-4c9f-947f-441ea67b8622\" (UID: \"92dde085-8a2b-4c9f-947f-441ea67b8622\") " Feb 03 12:12:38 crc kubenswrapper[4820]: I0203 12:12:38.562607 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q5jlx\" (UniqueName: \"kubernetes.io/projected/92dde085-8a2b-4c9f-947f-441ea67b8622-kube-api-access-q5jlx\") pod \"92dde085-8a2b-4c9f-947f-441ea67b8622\" (UID: \"92dde085-8a2b-4c9f-947f-441ea67b8622\") " Feb 03 12:12:38 crc kubenswrapper[4820]: I0203 12:12:38.562671 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kb8lb\" (UniqueName: \"kubernetes.io/projected/6fdd485f-526a-4367-ba6d-b68246ed45a0-kube-api-access-kb8lb\") pod \"6fdd485f-526a-4367-ba6d-b68246ed45a0\" (UID: \"6fdd485f-526a-4367-ba6d-b68246ed45a0\") " Feb 03 12:12:38 crc kubenswrapper[4820]: I0203 12:12:38.562692 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/92dde085-8a2b-4c9f-947f-441ea67b8622-marketplace-operator-metrics\") pod \"92dde085-8a2b-4c9f-947f-441ea67b8622\" (UID: \"92dde085-8a2b-4c9f-947f-441ea67b8622\") " Feb 03 12:12:38 crc kubenswrapper[4820]: I0203 12:12:38.562748 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/030d5842-d0b7-4e4f-ad63-58848630a1ca-utilities\") pod \"030d5842-d0b7-4e4f-ad63-58848630a1ca\" (UID: 
\"030d5842-d0b7-4e4f-ad63-58848630a1ca\") " Feb 03 12:12:38 crc kubenswrapper[4820]: I0203 12:12:38.564161 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/829fef9f-938d-4d61-9584-bf061063c952-catalog-content\") pod \"829fef9f-938d-4d61-9584-bf061063c952\" (UID: \"829fef9f-938d-4d61-9584-bf061063c952\") " Feb 03 12:12:38 crc kubenswrapper[4820]: I0203 12:12:38.564199 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6fdd485f-526a-4367-ba6d-b68246ed45a0-utilities\") pod \"6fdd485f-526a-4367-ba6d-b68246ed45a0\" (UID: \"6fdd485f-526a-4367-ba6d-b68246ed45a0\") " Feb 03 12:12:38 crc kubenswrapper[4820]: I0203 12:12:38.564272 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q2psp\" (UniqueName: \"kubernetes.io/projected/030d5842-d0b7-4e4f-ad63-58848630a1ca-kube-api-access-q2psp\") pod \"030d5842-d0b7-4e4f-ad63-58848630a1ca\" (UID: \"030d5842-d0b7-4e4f-ad63-58848630a1ca\") " Feb 03 12:12:38 crc kubenswrapper[4820]: I0203 12:12:38.564344 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/829fef9f-938d-4d61-9584-bf061063c952-utilities\") pod \"829fef9f-938d-4d61-9584-bf061063c952\" (UID: \"829fef9f-938d-4d61-9584-bf061063c952\") " Feb 03 12:12:38 crc kubenswrapper[4820]: I0203 12:12:38.564459 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/829fef9f-938d-4d61-9584-bf061063c952-kube-api-access-mcfzd" (OuterVolumeSpecName: "kube-api-access-mcfzd") pod "829fef9f-938d-4d61-9584-bf061063c952" (UID: "829fef9f-938d-4d61-9584-bf061063c952"). InnerVolumeSpecName "kube-api-access-mcfzd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:12:38 crc kubenswrapper[4820]: I0203 12:12:38.564832 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mcfzd\" (UniqueName: \"kubernetes.io/projected/829fef9f-938d-4d61-9584-bf061063c952-kube-api-access-mcfzd\") on node \"crc\" DevicePath \"\"" Feb 03 12:12:38 crc kubenswrapper[4820]: I0203 12:12:38.565786 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6fdd485f-526a-4367-ba6d-b68246ed45a0-utilities" (OuterVolumeSpecName: "utilities") pod "6fdd485f-526a-4367-ba6d-b68246ed45a0" (UID: "6fdd485f-526a-4367-ba6d-b68246ed45a0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 12:12:38 crc kubenswrapper[4820]: I0203 12:12:38.567089 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92dde085-8a2b-4c9f-947f-441ea67b8622-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "92dde085-8a2b-4c9f-947f-441ea67b8622" (UID: "92dde085-8a2b-4c9f-947f-441ea67b8622"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:12:38 crc kubenswrapper[4820]: I0203 12:12:38.569541 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/030d5842-d0b7-4e4f-ad63-58848630a1ca-kube-api-access-q2psp" (OuterVolumeSpecName: "kube-api-access-q2psp") pod "030d5842-d0b7-4e4f-ad63-58848630a1ca" (UID: "030d5842-d0b7-4e4f-ad63-58848630a1ca"). InnerVolumeSpecName "kube-api-access-q2psp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:12:38 crc kubenswrapper[4820]: I0203 12:12:38.569661 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/829fef9f-938d-4d61-9584-bf061063c952-utilities" (OuterVolumeSpecName: "utilities") pod "829fef9f-938d-4d61-9584-bf061063c952" (UID: "829fef9f-938d-4d61-9584-bf061063c952"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 12:12:38 crc kubenswrapper[4820]: I0203 12:12:38.572287 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-dt8ch"] Feb 03 12:12:38 crc kubenswrapper[4820]: I0203 12:12:38.572303 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/030d5842-d0b7-4e4f-ad63-58848630a1ca-utilities" (OuterVolumeSpecName: "utilities") pod "030d5842-d0b7-4e4f-ad63-58848630a1ca" (UID: "030d5842-d0b7-4e4f-ad63-58848630a1ca"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 12:12:38 crc kubenswrapper[4820]: I0203 12:12:38.573599 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92dde085-8a2b-4c9f-947f-441ea67b8622-kube-api-access-q5jlx" (OuterVolumeSpecName: "kube-api-access-q5jlx") pod "92dde085-8a2b-4c9f-947f-441ea67b8622" (UID: "92dde085-8a2b-4c9f-947f-441ea67b8622"). InnerVolumeSpecName "kube-api-access-q5jlx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:12:38 crc kubenswrapper[4820]: I0203 12:12:38.575537 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92dde085-8a2b-4c9f-947f-441ea67b8622-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "92dde085-8a2b-4c9f-947f-441ea67b8622" (UID: "92dde085-8a2b-4c9f-947f-441ea67b8622"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:12:38 crc kubenswrapper[4820]: I0203 12:12:38.576611 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6fdd485f-526a-4367-ba6d-b68246ed45a0-kube-api-access-kb8lb" (OuterVolumeSpecName: "kube-api-access-kb8lb") pod "6fdd485f-526a-4367-ba6d-b68246ed45a0" (UID: "6fdd485f-526a-4367-ba6d-b68246ed45a0"). InnerVolumeSpecName "kube-api-access-kb8lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:12:38 crc kubenswrapper[4820]: I0203 12:12:38.576808 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-dt8ch"] Feb 03 12:12:38 crc kubenswrapper[4820]: I0203 12:12:38.588426 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6fdd485f-526a-4367-ba6d-b68246ed45a0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6fdd485f-526a-4367-ba6d-b68246ed45a0" (UID: "6fdd485f-526a-4367-ba6d-b68246ed45a0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 12:12:38 crc kubenswrapper[4820]: I0203 12:12:38.631543 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/829fef9f-938d-4d61-9584-bf061063c952-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "829fef9f-938d-4d61-9584-bf061063c952" (UID: "829fef9f-938d-4d61-9584-bf061063c952"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 12:12:38 crc kubenswrapper[4820]: I0203 12:12:38.666325 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kb8lb\" (UniqueName: \"kubernetes.io/projected/6fdd485f-526a-4367-ba6d-b68246ed45a0-kube-api-access-kb8lb\") on node \"crc\" DevicePath \"\"" Feb 03 12:12:38 crc kubenswrapper[4820]: I0203 12:12:38.666354 4820 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/92dde085-8a2b-4c9f-947f-441ea67b8622-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Feb 03 12:12:38 crc kubenswrapper[4820]: I0203 12:12:38.666366 4820 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/030d5842-d0b7-4e4f-ad63-58848630a1ca-utilities\") on node \"crc\" DevicePath \"\"" Feb 03 12:12:38 crc kubenswrapper[4820]: I0203 12:12:38.666375 4820 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/829fef9f-938d-4d61-9584-bf061063c952-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 03 12:12:38 crc kubenswrapper[4820]: I0203 12:12:38.666383 4820 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6fdd485f-526a-4367-ba6d-b68246ed45a0-utilities\") on node \"crc\" DevicePath \"\"" Feb 03 12:12:38 crc kubenswrapper[4820]: I0203 12:12:38.666391 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q2psp\" (UniqueName: \"kubernetes.io/projected/030d5842-d0b7-4e4f-ad63-58848630a1ca-kube-api-access-q2psp\") on node \"crc\" DevicePath \"\"" Feb 03 12:12:38 crc kubenswrapper[4820]: I0203 12:12:38.666399 4820 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/829fef9f-938d-4d61-9584-bf061063c952-utilities\") on node \"crc\" DevicePath \"\"" Feb 03 12:12:38 crc kubenswrapper[4820]: I0203 12:12:38.666407 4820 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6fdd485f-526a-4367-ba6d-b68246ed45a0-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 03 12:12:38 crc kubenswrapper[4820]: I0203 12:12:38.666415 4820 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/92dde085-8a2b-4c9f-947f-441ea67b8622-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 03 12:12:38 crc kubenswrapper[4820]: I0203 12:12:38.666425 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q5jlx\" (UniqueName: \"kubernetes.io/projected/92dde085-8a2b-4c9f-947f-441ea67b8622-kube-api-access-q5jlx\") on node \"crc\" DevicePath \"\"" Feb 03 12:12:38 crc kubenswrapper[4820]: I0203 12:12:38.690407 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/030d5842-d0b7-4e4f-ad63-58848630a1ca-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "030d5842-d0b7-4e4f-ad63-58848630a1ca" (UID: "030d5842-d0b7-4e4f-ad63-58848630a1ca"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 12:12:38 crc kubenswrapper[4820]: I0203 12:12:38.767518 4820 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/030d5842-d0b7-4e4f-ad63-58848630a1ca-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 03 12:12:39 crc kubenswrapper[4820]: I0203 12:12:39.156195 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="682f83dc-ba7f-474f-89d2-6effbcf2806b" path="/var/lib/kubelet/pods/682f83dc-ba7f-474f-89d2-6effbcf2806b/volumes" Feb 03 12:12:39 crc kubenswrapper[4820]: I0203 12:12:39.270413 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5vfzj" event={"ID":"829fef9f-938d-4d61-9584-bf061063c952","Type":"ContainerDied","Data":"38dccf4ebc2636cc29ae4e2a18f71dd56137163ecf92d1e6d034a31a54c75c28"} Feb 03 12:12:39 crc kubenswrapper[4820]: I0203 12:12:39.270450 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5vfzj" Feb 03 12:12:39 crc kubenswrapper[4820]: I0203 12:12:39.270477 4820 scope.go:117] "RemoveContainer" containerID="6082b020c5d798741abb1c8e79f0e32b6622898883ade6085aa745b9601b3b45" Feb 03 12:12:39 crc kubenswrapper[4820]: I0203 12:12:39.272701 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-qr29p" event={"ID":"5bd7dcaf-6cbd-4e0a-a9f4-9e3cc9a2a738","Type":"ContainerStarted","Data":"5299de6ffeda22645b4f16532597739009b3aa90e1da1b12d0aa50d0baba88dd"} Feb 03 12:12:39 crc kubenswrapper[4820]: I0203 12:12:39.272771 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-qr29p" event={"ID":"5bd7dcaf-6cbd-4e0a-a9f4-9e3cc9a2a738","Type":"ContainerStarted","Data":"44e710b50d99aece250c17d3a6bfe5d6946acb9464367b7fcb2f99f9c2ed2e6c"} Feb 03 12:12:39 crc kubenswrapper[4820]: I0203 12:12:39.273959 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-qr29p" Feb 03 12:12:39 crc kubenswrapper[4820]: I0203 12:12:39.279318 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dvpt2" Feb 03 12:12:39 crc kubenswrapper[4820]: I0203 12:12:39.279343 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dvpt2" event={"ID":"6fdd485f-526a-4367-ba6d-b68246ed45a0","Type":"ContainerDied","Data":"bf16f19f08ceb1ca1518d245994ff951a0e49f37e97fc6fd0a5bb7bda1d4e464"} Feb 03 12:12:39 crc kubenswrapper[4820]: I0203 12:12:39.280710 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-qr29p" Feb 03 12:12:39 crc kubenswrapper[4820]: I0203 12:12:39.281904 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-9w662" event={"ID":"92dde085-8a2b-4c9f-947f-441ea67b8622","Type":"ContainerDied","Data":"d2b318319736f3cb7e1c73b7e726851ac180e32612c04089e75bdbb329118fc5"} Feb 03 12:12:39 crc kubenswrapper[4820]: I0203 12:12:39.282006 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-9w662" Feb 03 12:12:39 crc kubenswrapper[4820]: I0203 12:12:39.284134 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zrlrv" event={"ID":"030d5842-d0b7-4e4f-ad63-58848630a1ca","Type":"ContainerDied","Data":"dd359c458be9926dd64a069d1121401a3026fe5f28e3a5c126f9e5685dd8a4b6"} Feb 03 12:12:39 crc kubenswrapper[4820]: I0203 12:12:39.284144 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zrlrv" Feb 03 12:12:39 crc kubenswrapper[4820]: I0203 12:12:39.288192 4820 scope.go:117] "RemoveContainer" containerID="4ae1535472b9e25c48a491b772148ec9aa3f2ffbfa3bf03f701721a0bdb7d923" Feb 03 12:12:39 crc kubenswrapper[4820]: I0203 12:12:39.298460 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-qr29p" podStartSLOduration=2.298440709 podStartE2EDuration="2.298440709s" podCreationTimestamp="2026-02-03 12:12:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:12:39.297671656 +0000 UTC m=+476.820747530" watchObservedRunningTime="2026-02-03 12:12:39.298440709 +0000 UTC m=+476.821516573" Feb 03 12:12:39 crc kubenswrapper[4820]: I0203 12:12:39.318491 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5vfzj"] Feb 03 12:12:39 crc kubenswrapper[4820]: I0203 12:12:39.318762 4820 scope.go:117] "RemoveContainer" containerID="3b100c1b0f5145a074505dbf8afd2a2cea65699c06772d0ff0c8b909c797f3f7" Feb 03 12:12:39 crc kubenswrapper[4820]: I0203 12:12:39.324512 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-5vfzj"] Feb 03 12:12:39 crc kubenswrapper[4820]: I0203 12:12:39.335746 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-9w662"] Feb 03 12:12:39 crc kubenswrapper[4820]: I0203 12:12:39.335813 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-9w662"] Feb 03 12:12:39 crc kubenswrapper[4820]: I0203 12:12:39.339635 4820 scope.go:117] "RemoveContainer" containerID="f5f00dc439199ae0966e48a82dd93983c914990d5c9d5fc70ddd207e282b1aa9" Feb 03 12:12:39 crc kubenswrapper[4820]: I0203 12:12:39.350461 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zrlrv"] Feb 03 12:12:39 crc kubenswrapper[4820]: I0203 12:12:39.354827 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-zrlrv"] Feb 03 12:12:39 crc kubenswrapper[4820]: I0203 12:12:39.359880 4820 scope.go:117] "RemoveContainer" containerID="08e06c7932f94ab3ab1e5b0ff1ab752e934934e65f4e865fc4e0f662dc6117b1" Feb 03 12:12:39 crc kubenswrapper[4820]: I0203 12:12:39.382577 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-dvpt2"] Feb 03 12:12:39 crc kubenswrapper[4820]: I0203 12:12:39.386198 4820 scope.go:117] "RemoveContainer" containerID="2986f7a877dac401806618562c0b2b90cdd4bf46c6974ea4cf74892f8a8f2989" Feb 03 12:12:39 crc kubenswrapper[4820]: I0203 12:12:39.389022 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-dvpt2"] Feb 03 12:12:39 crc kubenswrapper[4820]: I0203 12:12:39.405721 4820 
scope.go:117] "RemoveContainer" containerID="280f023759f9ef7ae8dddf1f214830aff16da4836086e4cee77b773efd3b347b" Feb 03 12:12:39 crc kubenswrapper[4820]: I0203 12:12:39.421559 4820 scope.go:117] "RemoveContainer" containerID="46bffd8733841c34dde692c1bb14efc701beac022c68cd732fbfcf87846086e0" Feb 03 12:12:39 crc kubenswrapper[4820]: I0203 12:12:39.435986 4820 scope.go:117] "RemoveContainer" containerID="effafa3bb8851cb0fcb76799b62176931f6658f87961f4c27d50530cb7486ee7" Feb 03 12:12:39 crc kubenswrapper[4820]: I0203 12:12:39.478076 4820 scope.go:117] "RemoveContainer" containerID="f1f07f57affb00faa6b0fdf3f3962aad642e5b75c0bf957cc1da28152c8fbf2b" Feb 03 12:12:39 crc kubenswrapper[4820]: I0203 12:12:39.874715 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-wngqc"] Feb 03 12:12:39 crc kubenswrapper[4820]: E0203 12:12:39.876670 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6fdd485f-526a-4367-ba6d-b68246ed45a0" containerName="extract-content" Feb 03 12:12:39 crc kubenswrapper[4820]: I0203 12:12:39.876782 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="6fdd485f-526a-4367-ba6d-b68246ed45a0" containerName="extract-content" Feb 03 12:12:39 crc kubenswrapper[4820]: E0203 12:12:39.876933 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92dde085-8a2b-4c9f-947f-441ea67b8622" containerName="marketplace-operator" Feb 03 12:12:39 crc kubenswrapper[4820]: I0203 12:12:39.877033 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="92dde085-8a2b-4c9f-947f-441ea67b8622" containerName="marketplace-operator" Feb 03 12:12:39 crc kubenswrapper[4820]: E0203 12:12:39.877127 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="829fef9f-938d-4d61-9584-bf061063c952" containerName="registry-server" Feb 03 12:12:39 crc kubenswrapper[4820]: I0203 12:12:39.877223 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="829fef9f-938d-4d61-9584-bf061063c952" containerName="registry-server" Feb 03 12:12:39 crc kubenswrapper[4820]: E0203 12:12:39.878021 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6fdd485f-526a-4367-ba6d-b68246ed45a0" containerName="extract-utilities" Feb 03 12:12:39 crc kubenswrapper[4820]: I0203 12:12:39.879945 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="6fdd485f-526a-4367-ba6d-b68246ed45a0" containerName="extract-utilities" Feb 03 12:12:39 crc kubenswrapper[4820]: E0203 12:12:39.880079 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92dde085-8a2b-4c9f-947f-441ea67b8622" containerName="marketplace-operator" Feb 03 12:12:39 crc kubenswrapper[4820]: I0203 12:12:39.880170 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="92dde085-8a2b-4c9f-947f-441ea67b8622" containerName="marketplace-operator" Feb 03 12:12:39 crc kubenswrapper[4820]: E0203 12:12:39.880252 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="829fef9f-938d-4d61-9584-bf061063c952" containerName="extract-content" Feb 03 12:12:39 crc kubenswrapper[4820]: I0203 12:12:39.880319 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="829fef9f-938d-4d61-9584-bf061063c952" containerName="extract-content" Feb 03 12:12:39 crc kubenswrapper[4820]: E0203 12:12:39.880384 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="682f83dc-ba7f-474f-89d2-6effbcf2806b" containerName="registry-server" Feb 03 12:12:39 crc kubenswrapper[4820]: I0203 12:12:39.880440 4820 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="682f83dc-ba7f-474f-89d2-6effbcf2806b" containerName="registry-server" Feb 03 12:12:39 crc kubenswrapper[4820]: E0203 12:12:39.880495 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="030d5842-d0b7-4e4f-ad63-58848630a1ca" containerName="registry-server" Feb 03 12:12:39 crc kubenswrapper[4820]: I0203 12:12:39.880564 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="030d5842-d0b7-4e4f-ad63-58848630a1ca" containerName="registry-server" Feb 03 12:12:39 crc kubenswrapper[4820]: E0203 12:12:39.880630 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="682f83dc-ba7f-474f-89d2-6effbcf2806b" containerName="extract-content" Feb 03 12:12:39 crc kubenswrapper[4820]: I0203 12:12:39.880685 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="682f83dc-ba7f-474f-89d2-6effbcf2806b" containerName="extract-content" Feb 03 12:12:39 crc kubenswrapper[4820]: E0203 12:12:39.880760 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="030d5842-d0b7-4e4f-ad63-58848630a1ca" containerName="extract-content" Feb 03 12:12:39 crc kubenswrapper[4820]: I0203 12:12:39.880822 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="030d5842-d0b7-4e4f-ad63-58848630a1ca" containerName="extract-content" Feb 03 12:12:39 crc kubenswrapper[4820]: E0203 12:12:39.880927 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="829fef9f-938d-4d61-9584-bf061063c952" containerName="extract-utilities" Feb 03 12:12:39 crc kubenswrapper[4820]: I0203 12:12:39.881003 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="829fef9f-938d-4d61-9584-bf061063c952" containerName="extract-utilities" Feb 03 12:12:39 crc kubenswrapper[4820]: E0203 12:12:39.881073 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="682f83dc-ba7f-474f-89d2-6effbcf2806b" containerName="extract-utilities" Feb 03 12:12:39 crc kubenswrapper[4820]: I0203 12:12:39.881221 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="682f83dc-ba7f-474f-89d2-6effbcf2806b" containerName="extract-utilities" Feb 03 12:12:39 crc kubenswrapper[4820]: E0203 12:12:39.881291 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="030d5842-d0b7-4e4f-ad63-58848630a1ca" containerName="extract-utilities" Feb 03 12:12:39 crc kubenswrapper[4820]: I0203 12:12:39.881353 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="030d5842-d0b7-4e4f-ad63-58848630a1ca" containerName="extract-utilities" Feb 03 12:12:39 crc kubenswrapper[4820]: E0203 12:12:39.881532 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6fdd485f-526a-4367-ba6d-b68246ed45a0" containerName="registry-server" Feb 03 12:12:39 crc kubenswrapper[4820]: I0203 12:12:39.881638 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="6fdd485f-526a-4367-ba6d-b68246ed45a0" containerName="registry-server" Feb 03 12:12:39 crc kubenswrapper[4820]: I0203 12:12:39.881851 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="92dde085-8a2b-4c9f-947f-441ea67b8622" containerName="marketplace-operator" Feb 03 12:12:39 crc kubenswrapper[4820]: I0203 12:12:39.881927 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="92dde085-8a2b-4c9f-947f-441ea67b8622" containerName="marketplace-operator" Feb 03 12:12:39 crc kubenswrapper[4820]: I0203 12:12:39.881989 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="030d5842-d0b7-4e4f-ad63-58848630a1ca" containerName="registry-server" Feb 03 12:12:39 crc kubenswrapper[4820]: I0203 12:12:39.882071 
4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="829fef9f-938d-4d61-9584-bf061063c952" containerName="registry-server" Feb 03 12:12:39 crc kubenswrapper[4820]: I0203 12:12:39.882152 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="6fdd485f-526a-4367-ba6d-b68246ed45a0" containerName="registry-server" Feb 03 12:12:39 crc kubenswrapper[4820]: I0203 12:12:39.882213 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="682f83dc-ba7f-474f-89d2-6effbcf2806b" containerName="registry-server" Feb 03 12:12:39 crc kubenswrapper[4820]: I0203 12:12:39.883069 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wngqc" Feb 03 12:12:39 crc kubenswrapper[4820]: I0203 12:12:39.886109 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wngqc"] Feb 03 12:12:39 crc kubenswrapper[4820]: I0203 12:12:39.890714 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 03 12:12:39 crc kubenswrapper[4820]: I0203 12:12:39.986048 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hcvlc\" (UniqueName: \"kubernetes.io/projected/4bd3b782-6780-4d50-9e3c-391f1930b50a-kube-api-access-hcvlc\") pod \"certified-operators-wngqc\" (UID: \"4bd3b782-6780-4d50-9e3c-391f1930b50a\") " pod="openshift-marketplace/certified-operators-wngqc" Feb 03 12:12:39 crc kubenswrapper[4820]: I0203 12:12:39.986408 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4bd3b782-6780-4d50-9e3c-391f1930b50a-catalog-content\") pod \"certified-operators-wngqc\" (UID: \"4bd3b782-6780-4d50-9e3c-391f1930b50a\") " pod="openshift-marketplace/certified-operators-wngqc" Feb 03 12:12:39 crc kubenswrapper[4820]: I0203 12:12:39.986452 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4bd3b782-6780-4d50-9e3c-391f1930b50a-utilities\") pod \"certified-operators-wngqc\" (UID: \"4bd3b782-6780-4d50-9e3c-391f1930b50a\") " pod="openshift-marketplace/certified-operators-wngqc" Feb 03 12:12:40 crc kubenswrapper[4820]: I0203 12:12:40.087652 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4bd3b782-6780-4d50-9e3c-391f1930b50a-utilities\") pod \"certified-operators-wngqc\" (UID: \"4bd3b782-6780-4d50-9e3c-391f1930b50a\") " pod="openshift-marketplace/certified-operators-wngqc" Feb 03 12:12:40 crc kubenswrapper[4820]: I0203 12:12:40.087884 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hcvlc\" (UniqueName: \"kubernetes.io/projected/4bd3b782-6780-4d50-9e3c-391f1930b50a-kube-api-access-hcvlc\") pod \"certified-operators-wngqc\" (UID: \"4bd3b782-6780-4d50-9e3c-391f1930b50a\") " pod="openshift-marketplace/certified-operators-wngqc" Feb 03 12:12:40 crc kubenswrapper[4820]: I0203 12:12:40.088011 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4bd3b782-6780-4d50-9e3c-391f1930b50a-catalog-content\") pod \"certified-operators-wngqc\" (UID: \"4bd3b782-6780-4d50-9e3c-391f1930b50a\") " pod="openshift-marketplace/certified-operators-wngqc" Feb 03 
Feb 03 12:12:39 crc kubenswrapper[4820]: I0203 12:12:39.883069 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wngqc"
Feb 03 12:12:39 crc kubenswrapper[4820]: I0203 12:12:39.886109 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wngqc"]
Feb 03 12:12:39 crc kubenswrapper[4820]: I0203 12:12:39.890714 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Feb 03 12:12:39 crc kubenswrapper[4820]: I0203 12:12:39.986048 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hcvlc\" (UniqueName: \"kubernetes.io/projected/4bd3b782-6780-4d50-9e3c-391f1930b50a-kube-api-access-hcvlc\") pod \"certified-operators-wngqc\" (UID: \"4bd3b782-6780-4d50-9e3c-391f1930b50a\") " pod="openshift-marketplace/certified-operators-wngqc"
Feb 03 12:12:39 crc kubenswrapper[4820]: I0203 12:12:39.986408 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4bd3b782-6780-4d50-9e3c-391f1930b50a-catalog-content\") pod \"certified-operators-wngqc\" (UID: \"4bd3b782-6780-4d50-9e3c-391f1930b50a\") " pod="openshift-marketplace/certified-operators-wngqc"
Feb 03 12:12:39 crc kubenswrapper[4820]: I0203 12:12:39.986452 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4bd3b782-6780-4d50-9e3c-391f1930b50a-utilities\") pod \"certified-operators-wngqc\" (UID: \"4bd3b782-6780-4d50-9e3c-391f1930b50a\") " pod="openshift-marketplace/certified-operators-wngqc"
Feb 03 12:12:40 crc kubenswrapper[4820]: I0203 12:12:40.087652 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4bd3b782-6780-4d50-9e3c-391f1930b50a-utilities\") pod \"certified-operators-wngqc\" (UID: \"4bd3b782-6780-4d50-9e3c-391f1930b50a\") " pod="openshift-marketplace/certified-operators-wngqc"
Feb 03 12:12:40 crc kubenswrapper[4820]: I0203 12:12:40.087884 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hcvlc\" (UniqueName: \"kubernetes.io/projected/4bd3b782-6780-4d50-9e3c-391f1930b50a-kube-api-access-hcvlc\") pod \"certified-operators-wngqc\" (UID: \"4bd3b782-6780-4d50-9e3c-391f1930b50a\") " pod="openshift-marketplace/certified-operators-wngqc"
Feb 03 12:12:40 crc kubenswrapper[4820]: I0203 12:12:40.088011 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4bd3b782-6780-4d50-9e3c-391f1930b50a-catalog-content\") pod \"certified-operators-wngqc\" (UID: \"4bd3b782-6780-4d50-9e3c-391f1930b50a\") " pod="openshift-marketplace/certified-operators-wngqc"
Feb 03 12:12:40 crc kubenswrapper[4820]: I0203 12:12:40.088274 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4bd3b782-6780-4d50-9e3c-391f1930b50a-utilities\") pod \"certified-operators-wngqc\" (UID: \"4bd3b782-6780-4d50-9e3c-391f1930b50a\") " pod="openshift-marketplace/certified-operators-wngqc"
Feb 03 12:12:40 crc kubenswrapper[4820]: I0203 12:12:40.088565 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4bd3b782-6780-4d50-9e3c-391f1930b50a-catalog-content\") pod \"certified-operators-wngqc\" (UID: \"4bd3b782-6780-4d50-9e3c-391f1930b50a\") " pod="openshift-marketplace/certified-operators-wngqc"
Feb 03 12:12:40 crc kubenswrapper[4820]: I0203 12:12:40.124449 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hcvlc\" (UniqueName: \"kubernetes.io/projected/4bd3b782-6780-4d50-9e3c-391f1930b50a-kube-api-access-hcvlc\") pod \"certified-operators-wngqc\" (UID: \"4bd3b782-6780-4d50-9e3c-391f1930b50a\") " pod="openshift-marketplace/certified-operators-wngqc"
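The VerifyControllerAttachedVolume -> MountVolume started -> MountVolume.SetUp sequence above is the volume reconciler walking the catalog pod's three volumes: two emptyDirs plus the projected service-account token. A sketch of the volume stanza implied by those entries, written with the k8s.io/api types; the actual catalog-pod manifest is not part of this log, so treat it as illustrative.

// Sketch of the volumes the reconciler is mounting above: "utilities" and
// "catalog-content" are emptyDirs; "kube-api-access-<suffix>" is the projected
// service-account token volume the API server injects automatically.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	volumes := []corev1.Volume{
		{Name: "utilities", VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}}},
		{Name: "catalog-content", VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}}},
		// kube-api-access-hcvlc is not declared here: it is injected as a
		// projected volume and shows up in the log as kubernetes.io/projected.
	}
	for _, v := range volumes {
		fmt.Println(v.Name)
	}
}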
pods=["openshift-marketplace/redhat-operators-864d5"] Feb 03 12:12:41 crc kubenswrapper[4820]: I0203 12:12:41.673064 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-864d5" Feb 03 12:12:41 crc kubenswrapper[4820]: I0203 12:12:41.675964 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 03 12:12:41 crc kubenswrapper[4820]: I0203 12:12:41.694101 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-864d5"] Feb 03 12:12:41 crc kubenswrapper[4820]: I0203 12:12:41.707407 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-95swq\" (UniqueName: \"kubernetes.io/projected/bfe48ea3-13a9-476b-9906-9c98aae0604c-kube-api-access-95swq\") pod \"redhat-operators-864d5\" (UID: \"bfe48ea3-13a9-476b-9906-9c98aae0604c\") " pod="openshift-marketplace/redhat-operators-864d5" Feb 03 12:12:41 crc kubenswrapper[4820]: I0203 12:12:41.707454 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bfe48ea3-13a9-476b-9906-9c98aae0604c-catalog-content\") pod \"redhat-operators-864d5\" (UID: \"bfe48ea3-13a9-476b-9906-9c98aae0604c\") " pod="openshift-marketplace/redhat-operators-864d5" Feb 03 12:12:41 crc kubenswrapper[4820]: I0203 12:12:41.707499 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bfe48ea3-13a9-476b-9906-9c98aae0604c-utilities\") pod \"redhat-operators-864d5\" (UID: \"bfe48ea3-13a9-476b-9906-9c98aae0604c\") " pod="openshift-marketplace/redhat-operators-864d5" Feb 03 12:12:41 crc kubenswrapper[4820]: I0203 12:12:41.808305 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bfe48ea3-13a9-476b-9906-9c98aae0604c-utilities\") pod \"redhat-operators-864d5\" (UID: \"bfe48ea3-13a9-476b-9906-9c98aae0604c\") " pod="openshift-marketplace/redhat-operators-864d5" Feb 03 12:12:41 crc kubenswrapper[4820]: I0203 12:12:41.808381 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-95swq\" (UniqueName: \"kubernetes.io/projected/bfe48ea3-13a9-476b-9906-9c98aae0604c-kube-api-access-95swq\") pod \"redhat-operators-864d5\" (UID: \"bfe48ea3-13a9-476b-9906-9c98aae0604c\") " pod="openshift-marketplace/redhat-operators-864d5" Feb 03 12:12:41 crc kubenswrapper[4820]: I0203 12:12:41.808406 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bfe48ea3-13a9-476b-9906-9c98aae0604c-catalog-content\") pod \"redhat-operators-864d5\" (UID: \"bfe48ea3-13a9-476b-9906-9c98aae0604c\") " pod="openshift-marketplace/redhat-operators-864d5" Feb 03 12:12:41 crc kubenswrapper[4820]: I0203 12:12:41.808786 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bfe48ea3-13a9-476b-9906-9c98aae0604c-utilities\") pod \"redhat-operators-864d5\" (UID: \"bfe48ea3-13a9-476b-9906-9c98aae0604c\") " pod="openshift-marketplace/redhat-operators-864d5" Feb 03 12:12:41 crc kubenswrapper[4820]: I0203 12:12:41.808869 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/bfe48ea3-13a9-476b-9906-9c98aae0604c-catalog-content\") pod \"redhat-operators-864d5\" (UID: \"bfe48ea3-13a9-476b-9906-9c98aae0604c\") " pod="openshift-marketplace/redhat-operators-864d5" Feb 03 12:12:41 crc kubenswrapper[4820]: I0203 12:12:41.827592 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-95swq\" (UniqueName: \"kubernetes.io/projected/bfe48ea3-13a9-476b-9906-9c98aae0604c-kube-api-access-95swq\") pod \"redhat-operators-864d5\" (UID: \"bfe48ea3-13a9-476b-9906-9c98aae0604c\") " pod="openshift-marketplace/redhat-operators-864d5" Feb 03 12:12:41 crc kubenswrapper[4820]: I0203 12:12:41.996828 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-864d5" Feb 03 12:12:42 crc kubenswrapper[4820]: I0203 12:12:42.276975 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-qt59j"] Feb 03 12:12:42 crc kubenswrapper[4820]: I0203 12:12:42.278614 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qt59j" Feb 03 12:12:42 crc kubenswrapper[4820]: I0203 12:12:42.295362 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 03 12:12:42 crc kubenswrapper[4820]: I0203 12:12:42.302684 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qt59j"] Feb 03 12:12:42 crc kubenswrapper[4820]: I0203 12:12:42.323021 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wngqc" event={"ID":"4bd3b782-6780-4d50-9e3c-391f1930b50a","Type":"ContainerStarted","Data":"dd447382e8c0c63ce8965a03fd7bc052c5abc96dc6fd85685860a22dd1617515"} Feb 03 12:12:42 crc kubenswrapper[4820]: I0203 12:12:42.415425 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5jvlz\" (UniqueName: \"kubernetes.io/projected/aa46ef09-da2f-4b32-8091-4d745eff0174-kube-api-access-5jvlz\") pod \"community-operators-qt59j\" (UID: \"aa46ef09-da2f-4b32-8091-4d745eff0174\") " pod="openshift-marketplace/community-operators-qt59j" Feb 03 12:12:42 crc kubenswrapper[4820]: I0203 12:12:42.415506 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa46ef09-da2f-4b32-8091-4d745eff0174-catalog-content\") pod \"community-operators-qt59j\" (UID: \"aa46ef09-da2f-4b32-8091-4d745eff0174\") " pod="openshift-marketplace/community-operators-qt59j" Feb 03 12:12:42 crc kubenswrapper[4820]: I0203 12:12:42.415542 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa46ef09-da2f-4b32-8091-4d745eff0174-utilities\") pod \"community-operators-qt59j\" (UID: \"aa46ef09-da2f-4b32-8091-4d745eff0174\") " pod="openshift-marketplace/community-operators-qt59j" Feb 03 12:12:42 crc kubenswrapper[4820]: I0203 12:12:42.440106 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-864d5"] Feb 03 12:12:42 crc kubenswrapper[4820]: I0203 12:12:42.516589 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5jvlz\" (UniqueName: \"kubernetes.io/projected/aa46ef09-da2f-4b32-8091-4d745eff0174-kube-api-access-5jvlz\") pod 
\"community-operators-qt59j\" (UID: \"aa46ef09-da2f-4b32-8091-4d745eff0174\") " pod="openshift-marketplace/community-operators-qt59j" Feb 03 12:12:42 crc kubenswrapper[4820]: I0203 12:12:42.516659 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa46ef09-da2f-4b32-8091-4d745eff0174-catalog-content\") pod \"community-operators-qt59j\" (UID: \"aa46ef09-da2f-4b32-8091-4d745eff0174\") " pod="openshift-marketplace/community-operators-qt59j" Feb 03 12:12:42 crc kubenswrapper[4820]: I0203 12:12:42.516692 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa46ef09-da2f-4b32-8091-4d745eff0174-utilities\") pod \"community-operators-qt59j\" (UID: \"aa46ef09-da2f-4b32-8091-4d745eff0174\") " pod="openshift-marketplace/community-operators-qt59j" Feb 03 12:12:42 crc kubenswrapper[4820]: I0203 12:12:42.517178 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa46ef09-da2f-4b32-8091-4d745eff0174-catalog-content\") pod \"community-operators-qt59j\" (UID: \"aa46ef09-da2f-4b32-8091-4d745eff0174\") " pod="openshift-marketplace/community-operators-qt59j" Feb 03 12:12:42 crc kubenswrapper[4820]: I0203 12:12:42.517254 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa46ef09-da2f-4b32-8091-4d745eff0174-utilities\") pod \"community-operators-qt59j\" (UID: \"aa46ef09-da2f-4b32-8091-4d745eff0174\") " pod="openshift-marketplace/community-operators-qt59j" Feb 03 12:12:42 crc kubenswrapper[4820]: I0203 12:12:42.542268 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5jvlz\" (UniqueName: \"kubernetes.io/projected/aa46ef09-da2f-4b32-8091-4d745eff0174-kube-api-access-5jvlz\") pod \"community-operators-qt59j\" (UID: \"aa46ef09-da2f-4b32-8091-4d745eff0174\") " pod="openshift-marketplace/community-operators-qt59j" Feb 03 12:12:42 crc kubenswrapper[4820]: I0203 12:12:42.614589 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-qt59j" Feb 03 12:12:43 crc kubenswrapper[4820]: I0203 12:12:43.009502 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qt59j"] Feb 03 12:12:43 crc kubenswrapper[4820]: I0203 12:12:43.330676 4820 generic.go:334] "Generic (PLEG): container finished" podID="4bd3b782-6780-4d50-9e3c-391f1930b50a" containerID="dd447382e8c0c63ce8965a03fd7bc052c5abc96dc6fd85685860a22dd1617515" exitCode=0 Feb 03 12:12:43 crc kubenswrapper[4820]: I0203 12:12:43.330876 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wngqc" event={"ID":"4bd3b782-6780-4d50-9e3c-391f1930b50a","Type":"ContainerDied","Data":"dd447382e8c0c63ce8965a03fd7bc052c5abc96dc6fd85685860a22dd1617515"} Feb 03 12:12:43 crc kubenswrapper[4820]: I0203 12:12:43.332791 4820 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 03 12:12:43 crc kubenswrapper[4820]: I0203 12:12:43.334359 4820 generic.go:334] "Generic (PLEG): container finished" podID="aa46ef09-da2f-4b32-8091-4d745eff0174" containerID="adb3bce442f0dd602766254c6599b928677956a56bcae6757e64d8ce994d4538" exitCode=0 Feb 03 12:12:43 crc kubenswrapper[4820]: I0203 12:12:43.334416 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qt59j" event={"ID":"aa46ef09-da2f-4b32-8091-4d745eff0174","Type":"ContainerDied","Data":"adb3bce442f0dd602766254c6599b928677956a56bcae6757e64d8ce994d4538"} Feb 03 12:12:43 crc kubenswrapper[4820]: I0203 12:12:43.335305 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qt59j" event={"ID":"aa46ef09-da2f-4b32-8091-4d745eff0174","Type":"ContainerStarted","Data":"19dabf5f15437939828e009a8ce0646d47f8887e1e2524b44bfe1b32402ee4b3"} Feb 03 12:12:43 crc kubenswrapper[4820]: I0203 12:12:43.339424 4820 generic.go:334] "Generic (PLEG): container finished" podID="bfe48ea3-13a9-476b-9906-9c98aae0604c" containerID="e1197927532153fe42fd679d1d1c8608b15aba2c492fe84f1ed1842cbd6f1836" exitCode=0 Feb 03 12:12:43 crc kubenswrapper[4820]: I0203 12:12:43.339495 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-864d5" event={"ID":"bfe48ea3-13a9-476b-9906-9c98aae0604c","Type":"ContainerDied","Data":"e1197927532153fe42fd679d1d1c8608b15aba2c492fe84f1ed1842cbd6f1836"} Feb 03 12:12:43 crc kubenswrapper[4820]: I0203 12:12:43.339526 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-864d5" event={"ID":"bfe48ea3-13a9-476b-9906-9c98aae0604c","Type":"ContainerStarted","Data":"02f1f1879aab04372157ac7f70e5e567ed008e582c2b57166e6bae9074f8b56f"} Feb 03 12:12:44 crc kubenswrapper[4820]: I0203 12:12:44.068614 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-6kzpp"] Feb 03 12:12:44 crc kubenswrapper[4820]: I0203 12:12:44.069955 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6kzpp" Feb 03 12:12:44 crc kubenswrapper[4820]: I0203 12:12:44.072506 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 03 12:12:44 crc kubenswrapper[4820]: I0203 12:12:44.120763 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6kzpp"] Feb 03 12:12:44 crc kubenswrapper[4820]: I0203 12:12:44.137543 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d2036cb3-d406-4eea-8eac-3fda178af56a-catalog-content\") pod \"redhat-marketplace-6kzpp\" (UID: \"d2036cb3-d406-4eea-8eac-3fda178af56a\") " pod="openshift-marketplace/redhat-marketplace-6kzpp" Feb 03 12:12:44 crc kubenswrapper[4820]: I0203 12:12:44.137613 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-drxd9\" (UniqueName: \"kubernetes.io/projected/d2036cb3-d406-4eea-8eac-3fda178af56a-kube-api-access-drxd9\") pod \"redhat-marketplace-6kzpp\" (UID: \"d2036cb3-d406-4eea-8eac-3fda178af56a\") " pod="openshift-marketplace/redhat-marketplace-6kzpp" Feb 03 12:12:44 crc kubenswrapper[4820]: I0203 12:12:44.137747 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d2036cb3-d406-4eea-8eac-3fda178af56a-utilities\") pod \"redhat-marketplace-6kzpp\" (UID: \"d2036cb3-d406-4eea-8eac-3fda178af56a\") " pod="openshift-marketplace/redhat-marketplace-6kzpp" Feb 03 12:12:44 crc kubenswrapper[4820]: I0203 12:12:44.239514 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-drxd9\" (UniqueName: \"kubernetes.io/projected/d2036cb3-d406-4eea-8eac-3fda178af56a-kube-api-access-drxd9\") pod \"redhat-marketplace-6kzpp\" (UID: \"d2036cb3-d406-4eea-8eac-3fda178af56a\") " pod="openshift-marketplace/redhat-marketplace-6kzpp" Feb 03 12:12:44 crc kubenswrapper[4820]: I0203 12:12:44.239585 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d2036cb3-d406-4eea-8eac-3fda178af56a-utilities\") pod \"redhat-marketplace-6kzpp\" (UID: \"d2036cb3-d406-4eea-8eac-3fda178af56a\") " pod="openshift-marketplace/redhat-marketplace-6kzpp" Feb 03 12:12:44 crc kubenswrapper[4820]: I0203 12:12:44.239632 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d2036cb3-d406-4eea-8eac-3fda178af56a-catalog-content\") pod \"redhat-marketplace-6kzpp\" (UID: \"d2036cb3-d406-4eea-8eac-3fda178af56a\") " pod="openshift-marketplace/redhat-marketplace-6kzpp" Feb 03 12:12:44 crc kubenswrapper[4820]: I0203 12:12:44.240198 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d2036cb3-d406-4eea-8eac-3fda178af56a-catalog-content\") pod \"redhat-marketplace-6kzpp\" (UID: \"d2036cb3-d406-4eea-8eac-3fda178af56a\") " pod="openshift-marketplace/redhat-marketplace-6kzpp" Feb 03 12:12:44 crc kubenswrapper[4820]: I0203 12:12:44.240365 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d2036cb3-d406-4eea-8eac-3fda178af56a-utilities\") pod \"redhat-marketplace-6kzpp\" (UID: 
\"d2036cb3-d406-4eea-8eac-3fda178af56a\") " pod="openshift-marketplace/redhat-marketplace-6kzpp" Feb 03 12:12:44 crc kubenswrapper[4820]: I0203 12:12:44.274838 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-drxd9\" (UniqueName: \"kubernetes.io/projected/d2036cb3-d406-4eea-8eac-3fda178af56a-kube-api-access-drxd9\") pod \"redhat-marketplace-6kzpp\" (UID: \"d2036cb3-d406-4eea-8eac-3fda178af56a\") " pod="openshift-marketplace/redhat-marketplace-6kzpp" Feb 03 12:12:44 crc kubenswrapper[4820]: I0203 12:12:44.346873 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wngqc" event={"ID":"4bd3b782-6780-4d50-9e3c-391f1930b50a","Type":"ContainerStarted","Data":"cfc072c4eb27cb0b1636e163f754aa7169d2f2528637f192d88882e04a3b77a9"} Feb 03 12:12:44 crc kubenswrapper[4820]: I0203 12:12:44.348617 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qt59j" event={"ID":"aa46ef09-da2f-4b32-8091-4d745eff0174","Type":"ContainerStarted","Data":"c2ea9af95b7df7c1cea1f3404b49b2ac46919fb750a63ccfd2df69526515e5e6"} Feb 03 12:12:44 crc kubenswrapper[4820]: I0203 12:12:44.371109 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-wngqc" podStartSLOduration=2.953152346 podStartE2EDuration="5.371090077s" podCreationTimestamp="2026-02-03 12:12:39 +0000 UTC" firstStartedPulling="2026-02-03 12:12:41.316520111 +0000 UTC m=+478.839595975" lastFinishedPulling="2026-02-03 12:12:43.734457842 +0000 UTC m=+481.257533706" observedRunningTime="2026-02-03 12:12:44.369498918 +0000 UTC m=+481.892574792" watchObservedRunningTime="2026-02-03 12:12:44.371090077 +0000 UTC m=+481.894165951" Feb 03 12:12:44 crc kubenswrapper[4820]: I0203 12:12:44.400322 4820 util.go:30] "No sandbox for pod can be found. 
Feb 03 12:12:44 crc kubenswrapper[4820]: I0203 12:12:44.400322 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6kzpp"
Feb 03 12:12:44 crc kubenswrapper[4820]: I0203 12:12:44.836982 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6kzpp"]
Feb 03 12:12:44 crc kubenswrapper[4820]: W0203 12:12:44.844386 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd2036cb3_d406_4eea_8eac_3fda178af56a.slice/crio-849147054b8e8a3d3bba2f8f9fc3554f3724405a63c2ae6508d1825979d2c031 WatchSource:0}: Error finding container 849147054b8e8a3d3bba2f8f9fc3554f3724405a63c2ae6508d1825979d2c031: Status 404 returned error can't find the container with id 849147054b8e8a3d3bba2f8f9fc3554f3724405a63c2ae6508d1825979d2c031
Feb 03 12:12:45 crc kubenswrapper[4820]: I0203 12:12:45.361225 4820 generic.go:334] "Generic (PLEG): container finished" podID="aa46ef09-da2f-4b32-8091-4d745eff0174" containerID="c2ea9af95b7df7c1cea1f3404b49b2ac46919fb750a63ccfd2df69526515e5e6" exitCode=0
Feb 03 12:12:45 crc kubenswrapper[4820]: I0203 12:12:45.361549 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qt59j" event={"ID":"aa46ef09-da2f-4b32-8091-4d745eff0174","Type":"ContainerDied","Data":"c2ea9af95b7df7c1cea1f3404b49b2ac46919fb750a63ccfd2df69526515e5e6"}
Feb 03 12:12:45 crc kubenswrapper[4820]: I0203 12:12:45.363444 4820 generic.go:334] "Generic (PLEG): container finished" podID="d2036cb3-d406-4eea-8eac-3fda178af56a" containerID="b3f09e09309680f09364fd8f5eb93617595bd5f7c56cc9153b4fece9b5837913" exitCode=0
Feb 03 12:12:45 crc kubenswrapper[4820]: I0203 12:12:45.363509 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6kzpp" event={"ID":"d2036cb3-d406-4eea-8eac-3fda178af56a","Type":"ContainerDied","Data":"b3f09e09309680f09364fd8f5eb93617595bd5f7c56cc9153b4fece9b5837913"}
Feb 03 12:12:45 crc kubenswrapper[4820]: I0203 12:12:45.363534 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6kzpp" event={"ID":"d2036cb3-d406-4eea-8eac-3fda178af56a","Type":"ContainerStarted","Data":"849147054b8e8a3d3bba2f8f9fc3554f3724405a63c2ae6508d1825979d2c031"}
Feb 03 12:12:45 crc kubenswrapper[4820]: I0203 12:12:45.368221 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-864d5" event={"ID":"bfe48ea3-13a9-476b-9906-9c98aae0604c","Type":"ContainerStarted","Data":"788e9291dbccb903ad11f7618b2679a978d05e7158c76a7623331888d4d8d632"}
Feb 03 12:12:46 crc kubenswrapper[4820]: I0203 12:12:46.377506 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qt59j" event={"ID":"aa46ef09-da2f-4b32-8091-4d745eff0174","Type":"ContainerStarted","Data":"8eb7cd1bcc392bac9a99ea9cb1e7430bf8e33c0acdd3151c90390edca4e04a10"}
Feb 03 12:12:46 crc kubenswrapper[4820]: I0203 12:12:46.379553 4820 generic.go:334] "Generic (PLEG): container finished" podID="bfe48ea3-13a9-476b-9906-9c98aae0604c" containerID="788e9291dbccb903ad11f7618b2679a978d05e7158c76a7623331888d4d8d632" exitCode=0
Feb 03 12:12:46 crc kubenswrapper[4820]: I0203 12:12:46.379590 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-864d5" event={"ID":"bfe48ea3-13a9-476b-9906-9c98aae0604c","Type":"ContainerDied","Data":"788e9291dbccb903ad11f7618b2679a978d05e7158c76a7623331888d4d8d632"}
Feb 03 12:12:46 crc kubenswrapper[4820]: I0203 12:12:46.401239 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-qt59j" podStartSLOduration=1.726354901 podStartE2EDuration="4.401215244s" podCreationTimestamp="2026-02-03 12:12:42 +0000 UTC" firstStartedPulling="2026-02-03 12:12:43.336805392 +0000 UTC m=+480.859881256" lastFinishedPulling="2026-02-03 12:12:46.011665735 +0000 UTC m=+483.534741599" observedRunningTime="2026-02-03 12:12:46.397663074 +0000 UTC m=+483.920738938" watchObservedRunningTime="2026-02-03 12:12:46.401215244 +0000 UTC m=+483.924291158"
Feb 03 12:12:47 crc kubenswrapper[4820]: I0203 12:12:47.387212 4820 generic.go:334] "Generic (PLEG): container finished" podID="d2036cb3-d406-4eea-8eac-3fda178af56a" containerID="4116658d2fc5be9694685d9df84bb58364683f995e6e78ee0821c5020dfcee69" exitCode=0
Feb 03 12:12:47 crc kubenswrapper[4820]: I0203 12:12:47.387273 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6kzpp" event={"ID":"d2036cb3-d406-4eea-8eac-3fda178af56a","Type":"ContainerDied","Data":"4116658d2fc5be9694685d9df84bb58364683f995e6e78ee0821c5020dfcee69"}
Feb 03 12:12:47 crc kubenswrapper[4820]: I0203 12:12:47.390807 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-864d5" event={"ID":"bfe48ea3-13a9-476b-9906-9c98aae0604c","Type":"ContainerStarted","Data":"d8a2d29baa9249ff13ad1753d5867f064975cd54a5bcb9b331dc252d4cf7cbad"}
Feb 03 12:12:47 crc kubenswrapper[4820]: I0203 12:12:47.432828 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-864d5" podStartSLOduration=2.927143036 podStartE2EDuration="6.432809926s" podCreationTimestamp="2026-02-03 12:12:41 +0000 UTC" firstStartedPulling="2026-02-03 12:12:43.341860608 +0000 UTC m=+480.864936472" lastFinishedPulling="2026-02-03 12:12:46.847527498 +0000 UTC m=+484.370603362" observedRunningTime="2026-02-03 12:12:47.430021579 +0000 UTC m=+484.953097473" watchObservedRunningTime="2026-02-03 12:12:47.432809926 +0000 UTC m=+484.955885790"
Feb 03 12:12:50 crc kubenswrapper[4820]: I0203 12:12:50.200464 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-wngqc"
Feb 03 12:12:50 crc kubenswrapper[4820]: I0203 12:12:50.201086 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-wngqc"
Feb 03 12:12:50 crc kubenswrapper[4820]: I0203 12:12:50.253153 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-wngqc"
Feb 03 12:12:50 crc kubenswrapper[4820]: I0203 12:12:50.443656 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-wngqc"
Feb 03 12:12:51 crc kubenswrapper[4820]: I0203 12:12:51.997640 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-864d5"
Feb 03 12:12:51 crc kubenswrapper[4820]: I0203 12:12:51.998081 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-864d5"
Feb 03 12:12:52 crc kubenswrapper[4820]: I0203 12:12:52.614879 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-qt59j"
Feb 03 12:12:52 crc kubenswrapper[4820]: I0203 12:12:52.615238 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-qt59j"
Feb 03 12:12:52 crc kubenswrapper[4820]: I0203 12:12:52.669838 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-qt59j"
Feb 03 12:12:53 crc kubenswrapper[4820]: I0203 12:12:53.040538 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-864d5" podUID="bfe48ea3-13a9-476b-9906-9c98aae0604c" containerName="registry-server" probeResult="failure" output=<
Feb 03 12:12:53 crc kubenswrapper[4820]: 	timeout: failed to connect service ":50051" within 1s
Feb 03 12:12:53 crc kubenswrapper[4820]: >
Feb 03 12:12:53 crc kubenswrapper[4820]: I0203 12:12:53.442004 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6kzpp" event={"ID":"d2036cb3-d406-4eea-8eac-3fda178af56a","Type":"ContainerStarted","Data":"0de3d5fe2a70c2d574f7bb882437829bd7c654518925a709107c08e04e8d477a"}
Feb 03 12:12:53 crc kubenswrapper[4820]: I0203 12:12:53.488277 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-qt59j"
Feb 03 12:12:53 crc kubenswrapper[4820]: I0203 12:12:53.511295 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-6kzpp" podStartSLOduration=2.368992571 podStartE2EDuration="9.511262384s" podCreationTimestamp="2026-02-03 12:12:44 +0000 UTC" firstStartedPulling="2026-02-03 12:12:45.365551325 +0000 UTC m=+482.888627189" lastFinishedPulling="2026-02-03 12:12:52.507821128 +0000 UTC m=+490.030897002" observedRunningTime="2026-02-03 12:12:53.476684959 +0000 UTC m=+490.999760853" watchObservedRunningTime="2026-02-03 12:12:53.511262384 +0000 UTC m=+491.034338268"
Feb 03 12:12:54 crc kubenswrapper[4820]: I0203 12:12:54.400934 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-6kzpp"
Feb 03 12:12:54 crc kubenswrapper[4820]: I0203 12:12:54.400994 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-6kzpp"
Feb 03 12:12:55 crc kubenswrapper[4820]: I0203 12:12:55.455877 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-6kzpp" podUID="d2036cb3-d406-4eea-8eac-3fda178af56a" containerName="registry-server" probeResult="failure" output=<
Feb 03 12:12:55 crc kubenswrapper[4820]: 	timeout: failed to connect service ":50051" within 1s
Feb 03 12:12:55 crc kubenswrapper[4820]: >
Feb 03 12:13:02 crc kubenswrapper[4820]: I0203 12:13:02.047215 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-864d5"
Feb 03 12:13:02 crc kubenswrapper[4820]: I0203 12:13:02.096146 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-864d5"
Feb 03 12:13:04 crc kubenswrapper[4820]: I0203 12:13:04.443288 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-6kzpp"
Feb 03 12:13:04 crc kubenswrapper[4820]: I0203 12:13:04.489365 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-6kzpp"
Feb 03 12:14:13 crc kubenswrapper[4820]: I0203 12:14:13.555916 4820 scope.go:117] "RemoveContainer" containerID="84a8cfea8877c064fe516848d18a880005f2324d29b6ce26da7f90ed55b78bdd"
Feb 03 12:14:31 crc kubenswrapper[4820]: I0203 12:14:31.366191 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qj7xr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 03 12:14:31 crc kubenswrapper[4820]: I0203 12:14:31.367885 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 03 12:15:00 crc kubenswrapper[4820]: I0203 12:15:00.179292 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29502015-6rqhv"]
Feb 03 12:15:00 crc kubenswrapper[4820]: I0203 12:15:00.180741 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29502015-6rqhv"
Feb 03 12:15:00 crc kubenswrapper[4820]: I0203 12:15:00.182605 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Feb 03 12:15:00 crc kubenswrapper[4820]: I0203 12:15:00.184471 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Feb 03 12:15:00 crc kubenswrapper[4820]: I0203 12:15:00.197984 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29502015-6rqhv"]
Feb 03 12:15:00 crc kubenswrapper[4820]: I0203 12:15:00.311448 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/eca46b09-00ea-4c46-b9d1-3a297633f397-config-volume\") pod \"collect-profiles-29502015-6rqhv\" (UID: \"eca46b09-00ea-4c46-b9d1-3a297633f397\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29502015-6rqhv"
Feb 03 12:15:00 crc kubenswrapper[4820]: I0203 12:15:00.311557 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rzbss\" (UniqueName: \"kubernetes.io/projected/eca46b09-00ea-4c46-b9d1-3a297633f397-kube-api-access-rzbss\") pod \"collect-profiles-29502015-6rqhv\" (UID: \"eca46b09-00ea-4c46-b9d1-3a297633f397\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29502015-6rqhv"
Feb 03 12:15:00 crc kubenswrapper[4820]: I0203 12:15:00.311621 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/eca46b09-00ea-4c46-b9d1-3a297633f397-secret-volume\") pod \"collect-profiles-29502015-6rqhv\" (UID: \"eca46b09-00ea-4c46-b9d1-3a297633f397\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29502015-6rqhv"
Feb 03 12:15:00 crc kubenswrapper[4820]: I0203 12:15:00.412774 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/eca46b09-00ea-4c46-b9d1-3a297633f397-config-volume\") pod \"collect-profiles-29502015-6rqhv\" (UID: \"eca46b09-00ea-4c46-b9d1-3a297633f397\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29502015-6rqhv"
Feb 03 12:15:00 crc kubenswrapper[4820]: I0203 12:15:00.412858 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rzbss\" (UniqueName: \"kubernetes.io/projected/eca46b09-00ea-4c46-b9d1-3a297633f397-kube-api-access-rzbss\") pod \"collect-profiles-29502015-6rqhv\" (UID: \"eca46b09-00ea-4c46-b9d1-3a297633f397\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29502015-6rqhv"
Feb 03 12:15:00 crc kubenswrapper[4820]: I0203 12:15:00.412968 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/eca46b09-00ea-4c46-b9d1-3a297633f397-secret-volume\") pod \"collect-profiles-29502015-6rqhv\" (UID: \"eca46b09-00ea-4c46-b9d1-3a297633f397\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29502015-6rqhv"
Feb 03 12:15:00 crc kubenswrapper[4820]: I0203 12:15:00.414039 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/eca46b09-00ea-4c46-b9d1-3a297633f397-config-volume\") pod \"collect-profiles-29502015-6rqhv\" (UID: \"eca46b09-00ea-4c46-b9d1-3a297633f397\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29502015-6rqhv"
Feb 03 12:15:00 crc kubenswrapper[4820]: I0203 12:15:00.426534 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/eca46b09-00ea-4c46-b9d1-3a297633f397-secret-volume\") pod \"collect-profiles-29502015-6rqhv\" (UID: \"eca46b09-00ea-4c46-b9d1-3a297633f397\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29502015-6rqhv"
Feb 03 12:15:00 crc kubenswrapper[4820]: I0203 12:15:00.430057 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rzbss\" (UniqueName: \"kubernetes.io/projected/eca46b09-00ea-4c46-b9d1-3a297633f397-kube-api-access-rzbss\") pod \"collect-profiles-29502015-6rqhv\" (UID: \"eca46b09-00ea-4c46-b9d1-3a297633f397\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29502015-6rqhv"
Feb 03 12:15:00 crc kubenswrapper[4820]: I0203 12:15:00.542671 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29502015-6rqhv"
Feb 03 12:15:00 crc kubenswrapper[4820]: I0203 12:15:00.958476 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29502015-6rqhv"]
Feb 03 12:15:01 crc kubenswrapper[4820]: I0203 12:15:01.366258 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qj7xr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 03 12:15:01 crc kubenswrapper[4820]: I0203 12:15:01.366656 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 03 12:15:01 crc kubenswrapper[4820]: I0203 12:15:01.407122 4820 generic.go:334] "Generic (PLEG): container finished" podID="eca46b09-00ea-4c46-b9d1-3a297633f397" containerID="3864b7c76f9063a1034a8c3b5ddcd22fa6251d83edf96069fdde52222f0ee0d2" exitCode=0
Feb 03 12:15:01 crc kubenswrapper[4820]: I0203 12:15:01.407186 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29502015-6rqhv" event={"ID":"eca46b09-00ea-4c46-b9d1-3a297633f397","Type":"ContainerDied","Data":"3864b7c76f9063a1034a8c3b5ddcd22fa6251d83edf96069fdde52222f0ee0d2"}
Feb 03 12:15:01 crc kubenswrapper[4820]: I0203 12:15:01.407237 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29502015-6rqhv" event={"ID":"eca46b09-00ea-4c46-b9d1-3a297633f397","Type":"ContainerStarted","Data":"96a3f2de5ed2cd64c6fe30b39673b6333e104f7c95f4bb08f9318253fb045df5"}
Feb 03 12:15:03 crc kubenswrapper[4820]: I0203 12:15:02.615436 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29502015-6rqhv"
Feb 03 12:15:03 crc kubenswrapper[4820]: I0203 12:15:02.769166 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/eca46b09-00ea-4c46-b9d1-3a297633f397-secret-volume\") pod \"eca46b09-00ea-4c46-b9d1-3a297633f397\" (UID: \"eca46b09-00ea-4c46-b9d1-3a297633f397\") "
Feb 03 12:15:03 crc kubenswrapper[4820]: I0203 12:15:02.769254 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rzbss\" (UniqueName: \"kubernetes.io/projected/eca46b09-00ea-4c46-b9d1-3a297633f397-kube-api-access-rzbss\") pod \"eca46b09-00ea-4c46-b9d1-3a297633f397\" (UID: \"eca46b09-00ea-4c46-b9d1-3a297633f397\") "
Feb 03 12:15:03 crc kubenswrapper[4820]: I0203 12:15:02.769337 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/eca46b09-00ea-4c46-b9d1-3a297633f397-config-volume\") pod \"eca46b09-00ea-4c46-b9d1-3a297633f397\" (UID: \"eca46b09-00ea-4c46-b9d1-3a297633f397\") "
Feb 03 12:15:03 crc kubenswrapper[4820]: I0203 12:15:02.770444 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eca46b09-00ea-4c46-b9d1-3a297633f397-config-volume" (OuterVolumeSpecName: "config-volume") pod "eca46b09-00ea-4c46-b9d1-3a297633f397" (UID: "eca46b09-00ea-4c46-b9d1-3a297633f397"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 03 12:15:03 crc kubenswrapper[4820]: I0203 12:15:02.775248 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eca46b09-00ea-4c46-b9d1-3a297633f397-kube-api-access-rzbss" (OuterVolumeSpecName: "kube-api-access-rzbss") pod "eca46b09-00ea-4c46-b9d1-3a297633f397" (UID: "eca46b09-00ea-4c46-b9d1-3a297633f397"). InnerVolumeSpecName "kube-api-access-rzbss". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 12:15:03 crc kubenswrapper[4820]: I0203 12:15:02.785041 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eca46b09-00ea-4c46-b9d1-3a297633f397-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "eca46b09-00ea-4c46-b9d1-3a297633f397" (UID: "eca46b09-00ea-4c46-b9d1-3a297633f397"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 12:15:03 crc kubenswrapper[4820]: I0203 12:15:02.871632 4820 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/eca46b09-00ea-4c46-b9d1-3a297633f397-config-volume\") on node \"crc\" DevicePath \"\""
Feb 03 12:15:03 crc kubenswrapper[4820]: I0203 12:15:02.871672 4820 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/eca46b09-00ea-4c46-b9d1-3a297633f397-secret-volume\") on node \"crc\" DevicePath \"\""
Feb 03 12:15:03 crc kubenswrapper[4820]: I0203 12:15:02.871686 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rzbss\" (UniqueName: \"kubernetes.io/projected/eca46b09-00ea-4c46-b9d1-3a297633f397-kube-api-access-rzbss\") on node \"crc\" DevicePath \"\""
Feb 03 12:15:03 crc kubenswrapper[4820]: I0203 12:15:03.419543 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29502015-6rqhv" event={"ID":"eca46b09-00ea-4c46-b9d1-3a297633f397","Type":"ContainerDied","Data":"96a3f2de5ed2cd64c6fe30b39673b6333e104f7c95f4bb08f9318253fb045df5"}
Feb 03 12:15:03 crc kubenswrapper[4820]: I0203 12:15:03.419582 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="96a3f2de5ed2cd64c6fe30b39673b6333e104f7c95f4bb08f9318253fb045df5"
Feb 03 12:15:03 crc kubenswrapper[4820]: I0203 12:15:03.419627 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29502015-6rqhv"
Feb 03 12:15:31 crc kubenswrapper[4820]: I0203 12:15:31.365417 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qj7xr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 03 12:15:31 crc kubenswrapper[4820]: I0203 12:15:31.366032 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 03 12:15:31 crc kubenswrapper[4820]: I0203 12:15:31.366093 4820 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr"
Feb 03 12:15:31 crc kubenswrapper[4820]: I0203 12:15:31.366703 4820 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"856f08893c1ddb14ce7ea228b3d8908439ab8ab4b376483bac3f27a346ac49e0"} pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 03 12:15:31 crc kubenswrapper[4820]: I0203 12:15:31.366826 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" containerName="machine-config-daemon" containerID="cri-o://856f08893c1ddb14ce7ea228b3d8908439ab8ab4b376483bac3f27a346ac49e0" gracePeriod=600
Feb 03 12:15:31 crc kubenswrapper[4820]: I0203 12:15:31.595318 4820 generic.go:334] "Generic (PLEG): container finished" podID="2c02def6-29f2-448e-80ec-0c8ee290f053" containerID="856f08893c1ddb14ce7ea228b3d8908439ab8ab4b376483bac3f27a346ac49e0" exitCode=0
Feb 03 12:15:31 crc kubenswrapper[4820]: I0203 12:15:31.595396 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" event={"ID":"2c02def6-29f2-448e-80ec-0c8ee290f053","Type":"ContainerDied","Data":"856f08893c1ddb14ce7ea228b3d8908439ab8ab4b376483bac3f27a346ac49e0"}
Feb 03 12:15:31 crc kubenswrapper[4820]: I0203 12:15:31.595454 4820 scope.go:117] "RemoveContainer" containerID="4e6a324869d2f58d634802d3f06668e5da2b1da1808292287787329971cfd4aa"
Feb 03 12:15:32 crc kubenswrapper[4820]: I0203 12:15:32.602329 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" event={"ID":"2c02def6-29f2-448e-80ec-0c8ee290f053","Type":"ContainerStarted","Data":"6776805bdea74d9ea3fdad5be16a8319bda906899f9d28fa7cc0a1b3ab400cbf"}
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-bctnf" Feb 03 12:16:12 crc kubenswrapper[4820]: I0203 12:16:12.518492 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-bctnf"] Feb 03 12:16:12 crc kubenswrapper[4820]: I0203 12:16:12.633973 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/231aa28b-bb49-4602-ae23-a6b4070db669-bound-sa-token\") pod \"image-registry-66df7c8f76-bctnf\" (UID: \"231aa28b-bb49-4602-ae23-a6b4070db669\") " pod="openshift-image-registry/image-registry-66df7c8f76-bctnf" Feb 03 12:16:12 crc kubenswrapper[4820]: I0203 12:16:12.634242 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/231aa28b-bb49-4602-ae23-a6b4070db669-installation-pull-secrets\") pod \"image-registry-66df7c8f76-bctnf\" (UID: \"231aa28b-bb49-4602-ae23-a6b4070db669\") " pod="openshift-image-registry/image-registry-66df7c8f76-bctnf" Feb 03 12:16:12 crc kubenswrapper[4820]: I0203 12:16:12.634371 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/231aa28b-bb49-4602-ae23-a6b4070db669-trusted-ca\") pod \"image-registry-66df7c8f76-bctnf\" (UID: \"231aa28b-bb49-4602-ae23-a6b4070db669\") " pod="openshift-image-registry/image-registry-66df7c8f76-bctnf" Feb 03 12:16:12 crc kubenswrapper[4820]: I0203 12:16:12.634481 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2knsk\" (UniqueName: \"kubernetes.io/projected/231aa28b-bb49-4602-ae23-a6b4070db669-kube-api-access-2knsk\") pod \"image-registry-66df7c8f76-bctnf\" (UID: \"231aa28b-bb49-4602-ae23-a6b4070db669\") " pod="openshift-image-registry/image-registry-66df7c8f76-bctnf" Feb 03 12:16:12 crc kubenswrapper[4820]: I0203 12:16:12.634665 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/231aa28b-bb49-4602-ae23-a6b4070db669-registry-certificates\") pod \"image-registry-66df7c8f76-bctnf\" (UID: \"231aa28b-bb49-4602-ae23-a6b4070db669\") " pod="openshift-image-registry/image-registry-66df7c8f76-bctnf" Feb 03 12:16:12 crc kubenswrapper[4820]: I0203 12:16:12.634730 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/231aa28b-bb49-4602-ae23-a6b4070db669-ca-trust-extracted\") pod \"image-registry-66df7c8f76-bctnf\" (UID: \"231aa28b-bb49-4602-ae23-a6b4070db669\") " pod="openshift-image-registry/image-registry-66df7c8f76-bctnf" Feb 03 12:16:12 crc kubenswrapper[4820]: I0203 12:16:12.634926 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-bctnf\" (UID: \"231aa28b-bb49-4602-ae23-a6b4070db669\") " pod="openshift-image-registry/image-registry-66df7c8f76-bctnf" Feb 03 12:16:12 crc kubenswrapper[4820]: I0203 12:16:12.634989 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: 
\"kubernetes.io/projected/231aa28b-bb49-4602-ae23-a6b4070db669-registry-tls\") pod \"image-registry-66df7c8f76-bctnf\" (UID: \"231aa28b-bb49-4602-ae23-a6b4070db669\") " pod="openshift-image-registry/image-registry-66df7c8f76-bctnf" Feb 03 12:16:12 crc kubenswrapper[4820]: I0203 12:16:12.661515 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-bctnf\" (UID: \"231aa28b-bb49-4602-ae23-a6b4070db669\") " pod="openshift-image-registry/image-registry-66df7c8f76-bctnf" Feb 03 12:16:12 crc kubenswrapper[4820]: I0203 12:16:12.736530 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2knsk\" (UniqueName: \"kubernetes.io/projected/231aa28b-bb49-4602-ae23-a6b4070db669-kube-api-access-2knsk\") pod \"image-registry-66df7c8f76-bctnf\" (UID: \"231aa28b-bb49-4602-ae23-a6b4070db669\") " pod="openshift-image-registry/image-registry-66df7c8f76-bctnf" Feb 03 12:16:12 crc kubenswrapper[4820]: I0203 12:16:12.736578 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/231aa28b-bb49-4602-ae23-a6b4070db669-registry-certificates\") pod \"image-registry-66df7c8f76-bctnf\" (UID: \"231aa28b-bb49-4602-ae23-a6b4070db669\") " pod="openshift-image-registry/image-registry-66df7c8f76-bctnf" Feb 03 12:16:12 crc kubenswrapper[4820]: I0203 12:16:12.736597 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/231aa28b-bb49-4602-ae23-a6b4070db669-ca-trust-extracted\") pod \"image-registry-66df7c8f76-bctnf\" (UID: \"231aa28b-bb49-4602-ae23-a6b4070db669\") " pod="openshift-image-registry/image-registry-66df7c8f76-bctnf" Feb 03 12:16:12 crc kubenswrapper[4820]: I0203 12:16:12.736652 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/231aa28b-bb49-4602-ae23-a6b4070db669-registry-tls\") pod \"image-registry-66df7c8f76-bctnf\" (UID: \"231aa28b-bb49-4602-ae23-a6b4070db669\") " pod="openshift-image-registry/image-registry-66df7c8f76-bctnf" Feb 03 12:16:12 crc kubenswrapper[4820]: I0203 12:16:12.736686 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/231aa28b-bb49-4602-ae23-a6b4070db669-bound-sa-token\") pod \"image-registry-66df7c8f76-bctnf\" (UID: \"231aa28b-bb49-4602-ae23-a6b4070db669\") " pod="openshift-image-registry/image-registry-66df7c8f76-bctnf" Feb 03 12:16:12 crc kubenswrapper[4820]: I0203 12:16:12.736716 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/231aa28b-bb49-4602-ae23-a6b4070db669-installation-pull-secrets\") pod \"image-registry-66df7c8f76-bctnf\" (UID: \"231aa28b-bb49-4602-ae23-a6b4070db669\") " pod="openshift-image-registry/image-registry-66df7c8f76-bctnf" Feb 03 12:16:12 crc kubenswrapper[4820]: I0203 12:16:12.736745 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/231aa28b-bb49-4602-ae23-a6b4070db669-trusted-ca\") pod \"image-registry-66df7c8f76-bctnf\" (UID: \"231aa28b-bb49-4602-ae23-a6b4070db669\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-bctnf" Feb 03 12:16:12 crc kubenswrapper[4820]: I0203 12:16:12.738217 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/231aa28b-bb49-4602-ae23-a6b4070db669-ca-trust-extracted\") pod \"image-registry-66df7c8f76-bctnf\" (UID: \"231aa28b-bb49-4602-ae23-a6b4070db669\") " pod="openshift-image-registry/image-registry-66df7c8f76-bctnf" Feb 03 12:16:12 crc kubenswrapper[4820]: I0203 12:16:12.738404 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/231aa28b-bb49-4602-ae23-a6b4070db669-trusted-ca\") pod \"image-registry-66df7c8f76-bctnf\" (UID: \"231aa28b-bb49-4602-ae23-a6b4070db669\") " pod="openshift-image-registry/image-registry-66df7c8f76-bctnf" Feb 03 12:16:12 crc kubenswrapper[4820]: I0203 12:16:12.739504 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/231aa28b-bb49-4602-ae23-a6b4070db669-registry-certificates\") pod \"image-registry-66df7c8f76-bctnf\" (UID: \"231aa28b-bb49-4602-ae23-a6b4070db669\") " pod="openshift-image-registry/image-registry-66df7c8f76-bctnf" Feb 03 12:16:12 crc kubenswrapper[4820]: I0203 12:16:12.747238 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/231aa28b-bb49-4602-ae23-a6b4070db669-installation-pull-secrets\") pod \"image-registry-66df7c8f76-bctnf\" (UID: \"231aa28b-bb49-4602-ae23-a6b4070db669\") " pod="openshift-image-registry/image-registry-66df7c8f76-bctnf" Feb 03 12:16:12 crc kubenswrapper[4820]: I0203 12:16:12.756820 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2knsk\" (UniqueName: \"kubernetes.io/projected/231aa28b-bb49-4602-ae23-a6b4070db669-kube-api-access-2knsk\") pod \"image-registry-66df7c8f76-bctnf\" (UID: \"231aa28b-bb49-4602-ae23-a6b4070db669\") " pod="openshift-image-registry/image-registry-66df7c8f76-bctnf" Feb 03 12:16:12 crc kubenswrapper[4820]: I0203 12:16:12.759854 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/231aa28b-bb49-4602-ae23-a6b4070db669-bound-sa-token\") pod \"image-registry-66df7c8f76-bctnf\" (UID: \"231aa28b-bb49-4602-ae23-a6b4070db669\") " pod="openshift-image-registry/image-registry-66df7c8f76-bctnf" Feb 03 12:16:12 crc kubenswrapper[4820]: I0203 12:16:12.761055 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/231aa28b-bb49-4602-ae23-a6b4070db669-registry-tls\") pod \"image-registry-66df7c8f76-bctnf\" (UID: \"231aa28b-bb49-4602-ae23-a6b4070db669\") " pod="openshift-image-registry/image-registry-66df7c8f76-bctnf" Feb 03 12:16:12 crc kubenswrapper[4820]: I0203 12:16:12.829729 4820 util.go:30] "No sandbox for pod can be found. 
Feb 03 12:16:13 crc kubenswrapper[4820]: I0203 12:16:13.111250 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-bctnf"]
Feb 03 12:16:13 crc kubenswrapper[4820]: I0203 12:16:13.992548 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-bctnf" event={"ID":"231aa28b-bb49-4602-ae23-a6b4070db669","Type":"ContainerStarted","Data":"7a129b765972262e939c8b68a0cdfe9c4160105945f22706d717faf2c301a5ec"}
Feb 03 12:16:13 crc kubenswrapper[4820]: I0203 12:16:13.993069 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-bctnf" event={"ID":"231aa28b-bb49-4602-ae23-a6b4070db669","Type":"ContainerStarted","Data":"cadff4a6550f9292ae3cd2d9909f00e3b655831d2c8f758056c15c74befdf7c2"}
Feb 03 12:16:13 crc kubenswrapper[4820]: I0203 12:16:13.993118 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-bctnf"
Feb 03 12:16:14 crc kubenswrapper[4820]: I0203 12:16:14.021417 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-bctnf" podStartSLOduration=2.021398709 podStartE2EDuration="2.021398709s" podCreationTimestamp="2026-02-03 12:16:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:16:14.018985855 +0000 UTC m=+691.542061739" watchObservedRunningTime="2026-02-03 12:16:14.021398709 +0000 UTC m=+691.544474573"
Feb 03 12:16:32 crc kubenswrapper[4820]: I0203 12:16:32.834112 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-bctnf"
Feb 03 12:16:32 crc kubenswrapper[4820]: I0203 12:16:32.887522 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-qpxpv"]
Feb 03 12:16:57 crc kubenswrapper[4820]: I0203 12:16:57.926957 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-qpxpv" podUID="10fa9c2b-e370-400e-9e71-a4617592b411" containerName="registry" containerID="cri-o://9746ebaefe29531eb0bdf937c38bb73eaac7c8db11c2bb0ea0458959f35747ca" gracePeriod=30
Feb 03 12:16:58 crc kubenswrapper[4820]: I0203 12:16:58.281782 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-qpxpv"
Feb 03 12:16:58 crc kubenswrapper[4820]: I0203 12:16:58.390504 4820 generic.go:334] "Generic (PLEG): container finished" podID="10fa9c2b-e370-400e-9e71-a4617592b411" containerID="9746ebaefe29531eb0bdf937c38bb73eaac7c8db11c2bb0ea0458959f35747ca" exitCode=0
Feb 03 12:16:58 crc kubenswrapper[4820]: I0203 12:16:58.390770 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-qpxpv" event={"ID":"10fa9c2b-e370-400e-9e71-a4617592b411","Type":"ContainerDied","Data":"9746ebaefe29531eb0bdf937c38bb73eaac7c8db11c2bb0ea0458959f35747ca"}
Feb 03 12:16:58 crc kubenswrapper[4820]: I0203 12:16:58.390848 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-qpxpv" event={"ID":"10fa9c2b-e370-400e-9e71-a4617592b411","Type":"ContainerDied","Data":"af10c8cddd8a400daea58e6865ad1efa261abface89491db9bc4d9877f70eb27"}
Feb 03 12:16:58 crc kubenswrapper[4820]: I0203 12:16:58.390868 4820 scope.go:117] "RemoveContainer" containerID="9746ebaefe29531eb0bdf937c38bb73eaac7c8db11c2bb0ea0458959f35747ca"
Feb 03 12:16:58 crc kubenswrapper[4820]: I0203 12:16:58.391092 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-qpxpv"
Feb 03 12:16:58 crc kubenswrapper[4820]: I0203 12:16:58.395572 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/10fa9c2b-e370-400e-9e71-a4617592b411-trusted-ca\") pod \"10fa9c2b-e370-400e-9e71-a4617592b411\" (UID: \"10fa9c2b-e370-400e-9e71-a4617592b411\") "
Feb 03 12:16:58 crc kubenswrapper[4820]: I0203 12:16:58.395668 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/10fa9c2b-e370-400e-9e71-a4617592b411-installation-pull-secrets\") pod \"10fa9c2b-e370-400e-9e71-a4617592b411\" (UID: \"10fa9c2b-e370-400e-9e71-a4617592b411\") "
Feb 03 12:16:58 crc kubenswrapper[4820]: I0203 12:16:58.395862 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"10fa9c2b-e370-400e-9e71-a4617592b411\" (UID: \"10fa9c2b-e370-400e-9e71-a4617592b411\") "
Feb 03 12:16:58 crc kubenswrapper[4820]: I0203 12:16:58.395918 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/10fa9c2b-e370-400e-9e71-a4617592b411-bound-sa-token\") pod \"10fa9c2b-e370-400e-9e71-a4617592b411\" (UID: \"10fa9c2b-e370-400e-9e71-a4617592b411\") "
Feb 03 12:16:58 crc kubenswrapper[4820]: I0203 12:16:58.395961 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/10fa9c2b-e370-400e-9e71-a4617592b411-registry-tls\") pod \"10fa9c2b-e370-400e-9e71-a4617592b411\" (UID: \"10fa9c2b-e370-400e-9e71-a4617592b411\") "
Feb 03 12:16:58 crc kubenswrapper[4820]: I0203 12:16:58.395990 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/10fa9c2b-e370-400e-9e71-a4617592b411-registry-certificates\") pod \"10fa9c2b-e370-400e-9e71-a4617592b411\" (UID: \"10fa9c2b-e370-400e-9e71-a4617592b411\") "
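The DELETE, "Killing container with a grace period", ContainerDied, UnmountVolume sequence above is the normal teardown of the old registry replica once its replacement went ready. A hedged client-go sketch of the API call that starts it follows; the pod name, namespace, and grace period are taken from the entries above, but the program itself is illustrative, not part of this cluster's tooling:

// delpod.go - minimal client-go sketch (an assumption, not from this log's
// tooling) of the API-side action behind "SyncLoop DELETE" followed by
// "Killing container with a grace period ... gracePeriod=30".
package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	grace := int64(30) // matches the gracePeriod=30 logged above
	if err := cs.CoreV1().Pods("openshift-image-registry").Delete(
		context.Background(),
		"image-registry-697d97f7c8-qpxpv",
		metav1.DeleteOptions{GracePeriodSeconds: &grace},
	); err != nil {
		panic(err)
	}
}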
Feb 03 12:16:58 crc kubenswrapper[4820]: I0203 12:16:58.396022 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6xnkl\" (UniqueName: \"kubernetes.io/projected/10fa9c2b-e370-400e-9e71-a4617592b411-kube-api-access-6xnkl\") pod \"10fa9c2b-e370-400e-9e71-a4617592b411\" (UID: \"10fa9c2b-e370-400e-9e71-a4617592b411\") "
Feb 03 12:16:58 crc kubenswrapper[4820]: I0203 12:16:58.396077 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/10fa9c2b-e370-400e-9e71-a4617592b411-ca-trust-extracted\") pod \"10fa9c2b-e370-400e-9e71-a4617592b411\" (UID: \"10fa9c2b-e370-400e-9e71-a4617592b411\") "
Feb 03 12:16:58 crc kubenswrapper[4820]: I0203 12:16:58.396801 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/10fa9c2b-e370-400e-9e71-a4617592b411-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "10fa9c2b-e370-400e-9e71-a4617592b411" (UID: "10fa9c2b-e370-400e-9e71-a4617592b411"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 03 12:16:58 crc kubenswrapper[4820]: I0203 12:16:58.396862 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/10fa9c2b-e370-400e-9e71-a4617592b411-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "10fa9c2b-e370-400e-9e71-a4617592b411" (UID: "10fa9c2b-e370-400e-9e71-a4617592b411"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 03 12:16:58 crc kubenswrapper[4820]: I0203 12:16:58.407768 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "10fa9c2b-e370-400e-9e71-a4617592b411" (UID: "10fa9c2b-e370-400e-9e71-a4617592b411"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue ""
Feb 03 12:16:58 crc kubenswrapper[4820]: I0203 12:16:58.407906 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/10fa9c2b-e370-400e-9e71-a4617592b411-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "10fa9c2b-e370-400e-9e71-a4617592b411" (UID: "10fa9c2b-e370-400e-9e71-a4617592b411"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 12:16:58 crc kubenswrapper[4820]: I0203 12:16:58.408080 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/10fa9c2b-e370-400e-9e71-a4617592b411-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "10fa9c2b-e370-400e-9e71-a4617592b411" (UID: "10fa9c2b-e370-400e-9e71-a4617592b411"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 12:16:58 crc kubenswrapper[4820]: I0203 12:16:58.409047 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/10fa9c2b-e370-400e-9e71-a4617592b411-kube-api-access-6xnkl" (OuterVolumeSpecName: "kube-api-access-6xnkl") pod "10fa9c2b-e370-400e-9e71-a4617592b411" (UID: "10fa9c2b-e370-400e-9e71-a4617592b411"). InnerVolumeSpecName "kube-api-access-6xnkl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 12:16:58 crc kubenswrapper[4820]: I0203 12:16:58.409378 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/10fa9c2b-e370-400e-9e71-a4617592b411-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "10fa9c2b-e370-400e-9e71-a4617592b411" (UID: "10fa9c2b-e370-400e-9e71-a4617592b411"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 12:16:58 crc kubenswrapper[4820]: I0203 12:16:58.410402 4820 scope.go:117] "RemoveContainer" containerID="9746ebaefe29531eb0bdf937c38bb73eaac7c8db11c2bb0ea0458959f35747ca"
Feb 03 12:16:58 crc kubenswrapper[4820]: E0203 12:16:58.411370 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9746ebaefe29531eb0bdf937c38bb73eaac7c8db11c2bb0ea0458959f35747ca\": container with ID starting with 9746ebaefe29531eb0bdf937c38bb73eaac7c8db11c2bb0ea0458959f35747ca not found: ID does not exist" containerID="9746ebaefe29531eb0bdf937c38bb73eaac7c8db11c2bb0ea0458959f35747ca"
Feb 03 12:16:58 crc kubenswrapper[4820]: I0203 12:16:58.411532 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9746ebaefe29531eb0bdf937c38bb73eaac7c8db11c2bb0ea0458959f35747ca"} err="failed to get container status \"9746ebaefe29531eb0bdf937c38bb73eaac7c8db11c2bb0ea0458959f35747ca\": rpc error: code = NotFound desc = could not find container \"9746ebaefe29531eb0bdf937c38bb73eaac7c8db11c2bb0ea0458959f35747ca\": container with ID starting with 9746ebaefe29531eb0bdf937c38bb73eaac7c8db11c2bb0ea0458959f35747ca not found: ID does not exist"
Feb 03 12:16:58 crc kubenswrapper[4820]: I0203 12:16:58.415057 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/10fa9c2b-e370-400e-9e71-a4617592b411-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "10fa9c2b-e370-400e-9e71-a4617592b411" (UID: "10fa9c2b-e370-400e-9e71-a4617592b411"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 12:16:58 crc kubenswrapper[4820]: I0203 12:16:58.497326 4820 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/10fa9c2b-e370-400e-9e71-a4617592b411-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 03 12:16:58 crc kubenswrapper[4820]: I0203 12:16:58.497397 4820 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/10fa9c2b-e370-400e-9e71-a4617592b411-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 03 12:16:58 crc kubenswrapper[4820]: I0203 12:16:58.497413 4820 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/10fa9c2b-e370-400e-9e71-a4617592b411-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 03 12:16:58 crc kubenswrapper[4820]: I0203 12:16:58.497426 4820 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/10fa9c2b-e370-400e-9e71-a4617592b411-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 03 12:16:58 crc kubenswrapper[4820]: I0203 12:16:58.497438 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6xnkl\" (UniqueName: \"kubernetes.io/projected/10fa9c2b-e370-400e-9e71-a4617592b411-kube-api-access-6xnkl\") on node \"crc\" DevicePath \"\"" Feb 03 12:16:58 crc kubenswrapper[4820]: I0203 12:16:58.497451 4820 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/10fa9c2b-e370-400e-9e71-a4617592b411-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 03 12:16:58 crc kubenswrapper[4820]: I0203 12:16:58.497462 4820 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/10fa9c2b-e370-400e-9e71-a4617592b411-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 03 12:16:58 crc kubenswrapper[4820]: I0203 12:16:58.728109 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-qpxpv"] Feb 03 12:16:58 crc kubenswrapper[4820]: I0203 12:16:58.733946 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-qpxpv"] Feb 03 12:16:59 crc kubenswrapper[4820]: I0203 12:16:59.157349 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="10fa9c2b-e370-400e-9e71-a4617592b411" path="/var/lib/kubelet/pods/10fa9c2b-e370-400e-9e71-a4617592b411/volumes" Feb 03 12:17:31 crc kubenswrapper[4820]: I0203 12:17:31.365418 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qj7xr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 12:17:31 crc kubenswrapper[4820]: I0203 12:17:31.366059 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 12:17:44 crc kubenswrapper[4820]: I0203 12:17:44.269494 4820 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 03 12:18:01 crc 
kubenswrapper[4820]: I0203 12:18:01.365527 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qj7xr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 12:18:01 crc kubenswrapper[4820]: I0203 12:18:01.366129 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 12:18:31 crc kubenswrapper[4820]: I0203 12:18:31.365760 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qj7xr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 12:18:31 crc kubenswrapper[4820]: I0203 12:18:31.366317 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 12:18:31 crc kubenswrapper[4820]: I0203 12:18:31.366369 4820 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" Feb 03 12:18:31 crc kubenswrapper[4820]: I0203 12:18:31.367155 4820 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6776805bdea74d9ea3fdad5be16a8319bda906899f9d28fa7cc0a1b3ab400cbf"} pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 03 12:18:31 crc kubenswrapper[4820]: I0203 12:18:31.367209 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" containerName="machine-config-daemon" containerID="cri-o://6776805bdea74d9ea3fdad5be16a8319bda906899f9d28fa7cc0a1b3ab400cbf" gracePeriod=600 Feb 03 12:18:31 crc kubenswrapper[4820]: I0203 12:18:31.994626 4820 generic.go:334] "Generic (PLEG): container finished" podID="2c02def6-29f2-448e-80ec-0c8ee290f053" containerID="6776805bdea74d9ea3fdad5be16a8319bda906899f9d28fa7cc0a1b3ab400cbf" exitCode=0 Feb 03 12:18:31 crc kubenswrapper[4820]: I0203 12:18:31.994788 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" event={"ID":"2c02def6-29f2-448e-80ec-0c8ee290f053","Type":"ContainerDied","Data":"6776805bdea74d9ea3fdad5be16a8319bda906899f9d28fa7cc0a1b3ab400cbf"} Feb 03 12:18:31 crc kubenswrapper[4820]: I0203 12:18:31.995005 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" event={"ID":"2c02def6-29f2-448e-80ec-0c8ee290f053","Type":"ContainerStarted","Data":"f961bb48cccbb18f37545a37a50be08f55d027f113f203a762d6ed87bcedcb42"} Feb 03 12:18:31 crc kubenswrapper[4820]: I0203 12:18:31.995039 4820 scope.go:117] "RemoveContainer" 
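The liveness entries above show failures logged at 12:17:31, 12:18:01, and 12:18:31, with the restart decision (and the gracePeriod=600 kill) only after the third. That cadence is consistent with, though not proven by, a probe of periodSeconds=30 and failureThreshold=3 against http://127.0.0.1:8798/health; the machine-config-daemon manifest is not in this log, so the Go construction below is an inference, not the recorded spec:

// probe.go - a hedged reconstruction (assumed values, inferred from the
// 30-second failure cadence above) of a liveness probe that would produce
// this pattern; only the URL is taken directly from the log.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	probe := corev1.Probe{
		ProbeHandler: corev1.ProbeHandler{
			HTTPGet: &corev1.HTTPGetAction{
				Host: "127.0.0.1", // probe target seen in the log output
				Path: "/health",
				Port: intstr.FromInt(8798),
			},
		},
		PeriodSeconds:    30, // failures appear exactly 30s apart above
		FailureThreshold: 3,  // the kill followed the third failure
	}
	fmt.Printf("%+v\n", probe)
}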
containerID="856f08893c1ddb14ce7ea228b3d8908439ab8ab4b376483bac3f27a346ac49e0" Feb 03 12:20:00 crc kubenswrapper[4820]: I0203 12:20:00.336710 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-vdqjl"] Feb 03 12:20:00 crc kubenswrapper[4820]: E0203 12:20:00.337437 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10fa9c2b-e370-400e-9e71-a4617592b411" containerName="registry" Feb 03 12:20:00 crc kubenswrapper[4820]: I0203 12:20:00.337448 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="10fa9c2b-e370-400e-9e71-a4617592b411" containerName="registry" Feb 03 12:20:00 crc kubenswrapper[4820]: I0203 12:20:00.337552 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="10fa9c2b-e370-400e-9e71-a4617592b411" containerName="registry" Feb 03 12:20:00 crc kubenswrapper[4820]: I0203 12:20:00.337976 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-vdqjl" Feb 03 12:20:00 crc kubenswrapper[4820]: I0203 12:20:00.342871 4820 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-26hwv" Feb 03 12:20:00 crc kubenswrapper[4820]: I0203 12:20:00.343492 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Feb 03 12:20:00 crc kubenswrapper[4820]: I0203 12:20:00.343753 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Feb 03 12:20:00 crc kubenswrapper[4820]: I0203 12:20:00.352735 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-29s2s"] Feb 03 12:20:00 crc kubenswrapper[4820]: I0203 12:20:00.359056 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-29s2s" Feb 03 12:20:00 crc kubenswrapper[4820]: I0203 12:20:00.361213 4820 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-7f5pl" Feb 03 12:20:00 crc kubenswrapper[4820]: I0203 12:20:00.363420 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-vdqjl"] Feb 03 12:20:00 crc kubenswrapper[4820]: I0203 12:20:00.373274 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-lb2pj"] Feb 03 12:20:00 crc kubenswrapper[4820]: I0203 12:20:00.374271 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-lb2pj" Feb 03 12:20:00 crc kubenswrapper[4820]: I0203 12:20:00.377090 4820 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-ff7sd" Feb 03 12:20:00 crc kubenswrapper[4820]: I0203 12:20:00.381247 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-29s2s"] Feb 03 12:20:00 crc kubenswrapper[4820]: I0203 12:20:00.387866 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-lb2pj"] Feb 03 12:20:00 crc kubenswrapper[4820]: I0203 12:20:00.455918 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6m8v\" (UniqueName: \"kubernetes.io/projected/3853758e-3847-4715-8b8a-85022e708c75-kube-api-access-z6m8v\") pod \"cert-manager-cainjector-cf98fcc89-vdqjl\" (UID: \"3853758e-3847-4715-8b8a-85022e708c75\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-vdqjl" Feb 03 12:20:00 crc kubenswrapper[4820]: I0203 12:20:00.556860 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gvrk4\" (UniqueName: \"kubernetes.io/projected/f84b05bb-fe6d-4dcb-9501-375683557250-kube-api-access-gvrk4\") pod \"cert-manager-webhook-687f57d79b-lb2pj\" (UID: \"f84b05bb-fe6d-4dcb-9501-375683557250\") " pod="cert-manager/cert-manager-webhook-687f57d79b-lb2pj" Feb 03 12:20:00 crc kubenswrapper[4820]: I0203 12:20:00.556925 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jr4lx\" (UniqueName: \"kubernetes.io/projected/e98f8274-774b-446d-ae13-e7e7d4697463-kube-api-access-jr4lx\") pod \"cert-manager-858654f9db-29s2s\" (UID: \"e98f8274-774b-446d-ae13-e7e7d4697463\") " pod="cert-manager/cert-manager-858654f9db-29s2s" Feb 03 12:20:00 crc kubenswrapper[4820]: I0203 12:20:00.556994 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z6m8v\" (UniqueName: \"kubernetes.io/projected/3853758e-3847-4715-8b8a-85022e708c75-kube-api-access-z6m8v\") pod \"cert-manager-cainjector-cf98fcc89-vdqjl\" (UID: \"3853758e-3847-4715-8b8a-85022e708c75\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-vdqjl" Feb 03 12:20:00 crc kubenswrapper[4820]: I0203 12:20:00.597545 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z6m8v\" (UniqueName: \"kubernetes.io/projected/3853758e-3847-4715-8b8a-85022e708c75-kube-api-access-z6m8v\") pod \"cert-manager-cainjector-cf98fcc89-vdqjl\" (UID: \"3853758e-3847-4715-8b8a-85022e708c75\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-vdqjl" Feb 03 12:20:00 crc kubenswrapper[4820]: I0203 12:20:00.658412 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gvrk4\" (UniqueName: \"kubernetes.io/projected/f84b05bb-fe6d-4dcb-9501-375683557250-kube-api-access-gvrk4\") pod \"cert-manager-webhook-687f57d79b-lb2pj\" (UID: \"f84b05bb-fe6d-4dcb-9501-375683557250\") " pod="cert-manager/cert-manager-webhook-687f57d79b-lb2pj" Feb 03 12:20:00 crc kubenswrapper[4820]: I0203 12:20:00.658789 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jr4lx\" (UniqueName: \"kubernetes.io/projected/e98f8274-774b-446d-ae13-e7e7d4697463-kube-api-access-jr4lx\") pod \"cert-manager-858654f9db-29s2s\" (UID: 
\"e98f8274-774b-446d-ae13-e7e7d4697463\") " pod="cert-manager/cert-manager-858654f9db-29s2s" Feb 03 12:20:00 crc kubenswrapper[4820]: I0203 12:20:00.662233 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-vdqjl" Feb 03 12:20:00 crc kubenswrapper[4820]: I0203 12:20:00.676970 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gvrk4\" (UniqueName: \"kubernetes.io/projected/f84b05bb-fe6d-4dcb-9501-375683557250-kube-api-access-gvrk4\") pod \"cert-manager-webhook-687f57d79b-lb2pj\" (UID: \"f84b05bb-fe6d-4dcb-9501-375683557250\") " pod="cert-manager/cert-manager-webhook-687f57d79b-lb2pj" Feb 03 12:20:00 crc kubenswrapper[4820]: I0203 12:20:00.680642 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jr4lx\" (UniqueName: \"kubernetes.io/projected/e98f8274-774b-446d-ae13-e7e7d4697463-kube-api-access-jr4lx\") pod \"cert-manager-858654f9db-29s2s\" (UID: \"e98f8274-774b-446d-ae13-e7e7d4697463\") " pod="cert-manager/cert-manager-858654f9db-29s2s" Feb 03 12:20:00 crc kubenswrapper[4820]: I0203 12:20:00.691638 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-lb2pj" Feb 03 12:20:00 crc kubenswrapper[4820]: I0203 12:20:00.911703 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-vdqjl"] Feb 03 12:20:00 crc kubenswrapper[4820]: I0203 12:20:00.922288 4820 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 03 12:20:00 crc kubenswrapper[4820]: I0203 12:20:00.978186 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-29s2s" Feb 03 12:20:00 crc kubenswrapper[4820]: I0203 12:20:00.982385 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-lb2pj"] Feb 03 12:20:01 crc kubenswrapper[4820]: I0203 12:20:01.154906 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-29s2s"] Feb 03 12:20:01 crc kubenswrapper[4820]: W0203 12:20:01.159225 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode98f8274_774b_446d_ae13_e7e7d4697463.slice/crio-e71e89f423ce00c54b62cca76eee93c2bec6e83bc24a0898a5ed53436dce7694 WatchSource:0}: Error finding container e71e89f423ce00c54b62cca76eee93c2bec6e83bc24a0898a5ed53436dce7694: Status 404 returned error can't find the container with id e71e89f423ce00c54b62cca76eee93c2bec6e83bc24a0898a5ed53436dce7694 Feb 03 12:20:01 crc kubenswrapper[4820]: I0203 12:20:01.162233 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-lb2pj" event={"ID":"f84b05bb-fe6d-4dcb-9501-375683557250","Type":"ContainerStarted","Data":"4663b21bf71d41a27b316f9bc9283a07a401b42294ac1097e3a0b46bd9744cb0"} Feb 03 12:20:01 crc kubenswrapper[4820]: I0203 12:20:01.163304 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-vdqjl" event={"ID":"3853758e-3847-4715-8b8a-85022e708c75","Type":"ContainerStarted","Data":"72580b2be8a03882d00c8de576703fb7a09a571d94cebd9d9490d736b51e83b7"} Feb 03 12:20:02 crc kubenswrapper[4820]: I0203 12:20:02.170340 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-29s2s" 
event={"ID":"e98f8274-774b-446d-ae13-e7e7d4697463","Type":"ContainerStarted","Data":"e71e89f423ce00c54b62cca76eee93c2bec6e83bc24a0898a5ed53436dce7694"} Feb 03 12:20:04 crc kubenswrapper[4820]: I0203 12:20:04.183322 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-vdqjl" event={"ID":"3853758e-3847-4715-8b8a-85022e708c75","Type":"ContainerStarted","Data":"ef787c49e39464e0ab4d93092769cd2e4388c2b045e39497947010567c1506d3"} Feb 03 12:20:04 crc kubenswrapper[4820]: I0203 12:20:04.202181 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-vdqjl" podStartSLOduration=1.9019476640000001 podStartE2EDuration="4.202165787s" podCreationTimestamp="2026-02-03 12:20:00 +0000 UTC" firstStartedPulling="2026-02-03 12:20:00.922108149 +0000 UTC m=+918.445184013" lastFinishedPulling="2026-02-03 12:20:03.222326272 +0000 UTC m=+920.745402136" observedRunningTime="2026-02-03 12:20:04.199155186 +0000 UTC m=+921.722231050" watchObservedRunningTime="2026-02-03 12:20:04.202165787 +0000 UTC m=+921.725241651" Feb 03 12:20:06 crc kubenswrapper[4820]: I0203 12:20:06.197136 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-lb2pj" event={"ID":"f84b05bb-fe6d-4dcb-9501-375683557250","Type":"ContainerStarted","Data":"aa7862b83f21a399ec277aee58993f2f88b06803b8dcffbc97b41d51d75f7e20"} Feb 03 12:20:06 crc kubenswrapper[4820]: I0203 12:20:06.197475 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-lb2pj" Feb 03 12:20:06 crc kubenswrapper[4820]: I0203 12:20:06.199007 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-29s2s" event={"ID":"e98f8274-774b-446d-ae13-e7e7d4697463","Type":"ContainerStarted","Data":"b1fa560db22cd1d17c8bebe916fbcab741b74b8d0a13582d52bd84d1315c941e"} Feb 03 12:20:06 crc kubenswrapper[4820]: I0203 12:20:06.219567 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-lb2pj" podStartSLOduration=1.8212962529999999 podStartE2EDuration="6.219544423s" podCreationTimestamp="2026-02-03 12:20:00 +0000 UTC" firstStartedPulling="2026-02-03 12:20:00.981242494 +0000 UTC m=+918.504318358" lastFinishedPulling="2026-02-03 12:20:05.379490664 +0000 UTC m=+922.902566528" observedRunningTime="2026-02-03 12:20:06.216864382 +0000 UTC m=+923.739940256" watchObservedRunningTime="2026-02-03 12:20:06.219544423 +0000 UTC m=+923.742620287" Feb 03 12:20:06 crc kubenswrapper[4820]: I0203 12:20:06.238988 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-29s2s" podStartSLOduration=2.015809588 podStartE2EDuration="6.238962071s" podCreationTimestamp="2026-02-03 12:20:00 +0000 UTC" firstStartedPulling="2026-02-03 12:20:01.161109228 +0000 UTC m=+918.684185092" lastFinishedPulling="2026-02-03 12:20:05.384261711 +0000 UTC m=+922.907337575" observedRunningTime="2026-02-03 12:20:06.234543902 +0000 UTC m=+923.757619756" watchObservedRunningTime="2026-02-03 12:20:06.238962071 +0000 UTC m=+923.762037935" Feb 03 12:20:10 crc kubenswrapper[4820]: I0203 12:20:10.695551 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-lb2pj" Feb 03 12:20:24 crc kubenswrapper[4820]: I0203 12:20:24.069737 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-ovn-kubernetes/ovnkube-node-75mwm"] Feb 03 12:20:24 crc kubenswrapper[4820]: I0203 12:20:24.070398 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-75mwm" podUID="cf99e305-aa5b-4171-94f6-1e64f20414dd" containerName="ovn-controller" containerID="cri-o://b8b54903e9478f26cfa1770a0e06298325e9b9f198a3aeaa8ed6dcc192c0cdb7" gracePeriod=30 Feb 03 12:20:24 crc kubenswrapper[4820]: I0203 12:20:24.070449 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-75mwm" podUID="cf99e305-aa5b-4171-94f6-1e64f20414dd" containerName="nbdb" containerID="cri-o://4e8e47b3a2a50b4eccff391ccb45bf979a5fe46983b014cb03f465db928172ec" gracePeriod=30 Feb 03 12:20:24 crc kubenswrapper[4820]: I0203 12:20:24.070532 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-75mwm" podUID="cf99e305-aa5b-4171-94f6-1e64f20414dd" containerName="sbdb" containerID="cri-o://4c9f050db727793c24f2f8bf3b387813e313f3b764be003c6849947f98a61d4f" gracePeriod=30 Feb 03 12:20:24 crc kubenswrapper[4820]: I0203 12:20:24.070552 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-75mwm" podUID="cf99e305-aa5b-4171-94f6-1e64f20414dd" containerName="ovn-acl-logging" containerID="cri-o://7a67f31bdc8d6710cfbca063e2a811d4fbace42a9d3c3a53fba0f47c30ee00e4" gracePeriod=30 Feb 03 12:20:24 crc kubenswrapper[4820]: I0203 12:20:24.070552 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-75mwm" podUID="cf99e305-aa5b-4171-94f6-1e64f20414dd" containerName="kube-rbac-proxy-node" containerID="cri-o://c998566abb62f49bb7527ea2c4b023c87a9e3db89ff5fa25b68896424924ea76" gracePeriod=30 Feb 03 12:20:24 crc kubenswrapper[4820]: I0203 12:20:24.070608 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-75mwm" podUID="cf99e305-aa5b-4171-94f6-1e64f20414dd" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://f5b0072b2eccb422316702a70fb54f21fcd020e1627a625d1a7c63d46b71e48c" gracePeriod=30 Feb 03 12:20:24 crc kubenswrapper[4820]: I0203 12:20:24.070647 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-75mwm" podUID="cf99e305-aa5b-4171-94f6-1e64f20414dd" containerName="northd" containerID="cri-o://0defe98bc51b1f567900f2d90138ceec07b4cae414fd3b448df503dba9a9460f" gracePeriod=30 Feb 03 12:20:24 crc kubenswrapper[4820]: I0203 12:20:24.120781 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-75mwm" podUID="cf99e305-aa5b-4171-94f6-1e64f20414dd" containerName="ovnkube-controller" containerID="cri-o://b06b939e5a22068c297bee2452a40d7c0443725bc4b8fdf70cba52f1273121fa" gracePeriod=30 Feb 03 12:20:24 crc kubenswrapper[4820]: I0203 12:20:24.325873 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-75mwm_cf99e305-aa5b-4171-94f6-1e64f20414dd/ovnkube-controller/3.log" Feb 03 12:20:24 crc kubenswrapper[4820]: I0203 12:20:24.332237 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-75mwm_cf99e305-aa5b-4171-94f6-1e64f20414dd/ovn-acl-logging/0.log" Feb 03 12:20:24 crc kubenswrapper[4820]: I0203 12:20:24.333191 4820 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-75mwm_cf99e305-aa5b-4171-94f6-1e64f20414dd/ovn-controller/0.log" Feb 03 12:20:24 crc kubenswrapper[4820]: I0203 12:20:24.333730 4820 generic.go:334] "Generic (PLEG): container finished" podID="cf99e305-aa5b-4171-94f6-1e64f20414dd" containerID="b06b939e5a22068c297bee2452a40d7c0443725bc4b8fdf70cba52f1273121fa" exitCode=0 Feb 03 12:20:24 crc kubenswrapper[4820]: I0203 12:20:24.333766 4820 generic.go:334] "Generic (PLEG): container finished" podID="cf99e305-aa5b-4171-94f6-1e64f20414dd" containerID="4e8e47b3a2a50b4eccff391ccb45bf979a5fe46983b014cb03f465db928172ec" exitCode=0 Feb 03 12:20:24 crc kubenswrapper[4820]: I0203 12:20:24.333775 4820 generic.go:334] "Generic (PLEG): container finished" podID="cf99e305-aa5b-4171-94f6-1e64f20414dd" containerID="0defe98bc51b1f567900f2d90138ceec07b4cae414fd3b448df503dba9a9460f" exitCode=0 Feb 03 12:20:24 crc kubenswrapper[4820]: I0203 12:20:24.333782 4820 generic.go:334] "Generic (PLEG): container finished" podID="cf99e305-aa5b-4171-94f6-1e64f20414dd" containerID="f5b0072b2eccb422316702a70fb54f21fcd020e1627a625d1a7c63d46b71e48c" exitCode=0 Feb 03 12:20:24 crc kubenswrapper[4820]: I0203 12:20:24.333789 4820 generic.go:334] "Generic (PLEG): container finished" podID="cf99e305-aa5b-4171-94f6-1e64f20414dd" containerID="c998566abb62f49bb7527ea2c4b023c87a9e3db89ff5fa25b68896424924ea76" exitCode=0 Feb 03 12:20:24 crc kubenswrapper[4820]: I0203 12:20:24.333797 4820 generic.go:334] "Generic (PLEG): container finished" podID="cf99e305-aa5b-4171-94f6-1e64f20414dd" containerID="7a67f31bdc8d6710cfbca063e2a811d4fbace42a9d3c3a53fba0f47c30ee00e4" exitCode=143 Feb 03 12:20:24 crc kubenswrapper[4820]: I0203 12:20:24.333805 4820 generic.go:334] "Generic (PLEG): container finished" podID="cf99e305-aa5b-4171-94f6-1e64f20414dd" containerID="b8b54903e9478f26cfa1770a0e06298325e9b9f198a3aeaa8ed6dcc192c0cdb7" exitCode=143 Feb 03 12:20:24 crc kubenswrapper[4820]: I0203 12:20:24.333863 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-75mwm" event={"ID":"cf99e305-aa5b-4171-94f6-1e64f20414dd","Type":"ContainerDied","Data":"b06b939e5a22068c297bee2452a40d7c0443725bc4b8fdf70cba52f1273121fa"} Feb 03 12:20:24 crc kubenswrapper[4820]: I0203 12:20:24.333908 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-75mwm" event={"ID":"cf99e305-aa5b-4171-94f6-1e64f20414dd","Type":"ContainerDied","Data":"4e8e47b3a2a50b4eccff391ccb45bf979a5fe46983b014cb03f465db928172ec"} Feb 03 12:20:24 crc kubenswrapper[4820]: I0203 12:20:24.333921 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-75mwm" event={"ID":"cf99e305-aa5b-4171-94f6-1e64f20414dd","Type":"ContainerDied","Data":"0defe98bc51b1f567900f2d90138ceec07b4cae414fd3b448df503dba9a9460f"} Feb 03 12:20:24 crc kubenswrapper[4820]: I0203 12:20:24.333931 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-75mwm" event={"ID":"cf99e305-aa5b-4171-94f6-1e64f20414dd","Type":"ContainerDied","Data":"f5b0072b2eccb422316702a70fb54f21fcd020e1627a625d1a7c63d46b71e48c"} Feb 03 12:20:24 crc kubenswrapper[4820]: I0203 12:20:24.333940 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-75mwm" event={"ID":"cf99e305-aa5b-4171-94f6-1e64f20414dd","Type":"ContainerDied","Data":"c998566abb62f49bb7527ea2c4b023c87a9e3db89ff5fa25b68896424924ea76"} Feb 03 12:20:24 crc kubenswrapper[4820]: 
I0203 12:20:24.333951 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-75mwm" event={"ID":"cf99e305-aa5b-4171-94f6-1e64f20414dd","Type":"ContainerDied","Data":"7a67f31bdc8d6710cfbca063e2a811d4fbace42a9d3c3a53fba0f47c30ee00e4"} Feb 03 12:20:24 crc kubenswrapper[4820]: I0203 12:20:24.333962 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-75mwm" event={"ID":"cf99e305-aa5b-4171-94f6-1e64f20414dd","Type":"ContainerDied","Data":"b8b54903e9478f26cfa1770a0e06298325e9b9f198a3aeaa8ed6dcc192c0cdb7"} Feb 03 12:20:24 crc kubenswrapper[4820]: I0203 12:20:24.333978 4820 scope.go:117] "RemoveContainer" containerID="9298001c19eb268d0ade02c0ee6d5f802cef36d79656754fcf76427fde0706fe" Feb 03 12:20:24 crc kubenswrapper[4820]: I0203 12:20:24.338152 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-dkfwm_c6da6dd5-2847-482b-adc1-d82ead0af3e9/kube-multus/2.log" Feb 03 12:20:24 crc kubenswrapper[4820]: I0203 12:20:24.338974 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-dkfwm_c6da6dd5-2847-482b-adc1-d82ead0af3e9/kube-multus/1.log" Feb 03 12:20:24 crc kubenswrapper[4820]: I0203 12:20:24.339034 4820 generic.go:334] "Generic (PLEG): container finished" podID="c6da6dd5-2847-482b-adc1-d82ead0af3e9" containerID="db3333ec20d0d6dca8a643ef39757315542b773403d9de56fef33e73a57332a4" exitCode=2 Feb 03 12:20:24 crc kubenswrapper[4820]: I0203 12:20:24.339088 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-dkfwm" event={"ID":"c6da6dd5-2847-482b-adc1-d82ead0af3e9","Type":"ContainerDied","Data":"db3333ec20d0d6dca8a643ef39757315542b773403d9de56fef33e73a57332a4"} Feb 03 12:20:24 crc kubenswrapper[4820]: I0203 12:20:24.339766 4820 scope.go:117] "RemoveContainer" containerID="db3333ec20d0d6dca8a643ef39757315542b773403d9de56fef33e73a57332a4" Feb 03 12:20:24 crc kubenswrapper[4820]: I0203 12:20:24.404009 4820 scope.go:117] "RemoveContainer" containerID="7340b2bec2b0d79865501f1917315f355972bcfc92d098978899c57df3f93454" Feb 03 12:20:24 crc kubenswrapper[4820]: I0203 12:20:24.418459 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-75mwm_cf99e305-aa5b-4171-94f6-1e64f20414dd/ovn-acl-logging/0.log" Feb 03 12:20:24 crc kubenswrapper[4820]: I0203 12:20:24.418913 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-75mwm_cf99e305-aa5b-4171-94f6-1e64f20414dd/ovn-controller/0.log" Feb 03 12:20:24 crc kubenswrapper[4820]: I0203 12:20:24.419250 4820 util.go:48] "No ready sandbox for pod can be found. 
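A reading of the exit codes above, using the general POSIX convention rather than anything this log states: 143 is 128 + 15, meaning the process was still running when the runtime delivered SIGTERM (ovn-controller and ovn-acl-logging went down that way), while containers that shut down on their own within the 30s grace period report exitCode=0 (ovnkube-controller, nbdb, northd, and both kube-rbac-proxy containers). The kube-multus exitCode=2 is an ordinary error exit rather than a signal death, which is why the kubelet immediately schedules container cleanup and a restart for that pod.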
Feb 03 12:20:24 crc kubenswrapper[4820]: I0203 12:20:24.494308 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-9nz9t"]
Feb 03 12:20:24 crc kubenswrapper[4820]: E0203 12:20:24.494841 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf99e305-aa5b-4171-94f6-1e64f20414dd" containerName="ovnkube-controller"
Feb 03 12:20:24 crc kubenswrapper[4820]: I0203 12:20:24.494854 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf99e305-aa5b-4171-94f6-1e64f20414dd" containerName="ovnkube-controller"
Feb 03 12:20:24 crc kubenswrapper[4820]: E0203 12:20:24.494863 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf99e305-aa5b-4171-94f6-1e64f20414dd" containerName="ovnkube-controller"
Feb 03 12:20:24 crc kubenswrapper[4820]: I0203 12:20:24.494869 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf99e305-aa5b-4171-94f6-1e64f20414dd" containerName="ovnkube-controller"
Feb 03 12:20:24 crc kubenswrapper[4820]: E0203 12:20:24.494876 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf99e305-aa5b-4171-94f6-1e64f20414dd" containerName="ovn-controller"
Feb 03 12:20:24 crc kubenswrapper[4820]: I0203 12:20:24.494898 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf99e305-aa5b-4171-94f6-1e64f20414dd" containerName="ovn-controller"
Feb 03 12:20:24 crc kubenswrapper[4820]: E0203 12:20:24.494908 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf99e305-aa5b-4171-94f6-1e64f20414dd" containerName="kube-rbac-proxy-node"
Feb 03 12:20:24 crc kubenswrapper[4820]: I0203 12:20:24.494916 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf99e305-aa5b-4171-94f6-1e64f20414dd" containerName="kube-rbac-proxy-node"
Feb 03 12:20:24 crc kubenswrapper[4820]: E0203 12:20:24.494927 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf99e305-aa5b-4171-94f6-1e64f20414dd" containerName="sbdb"
Feb 03 12:20:24 crc kubenswrapper[4820]: I0203 12:20:24.494932 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf99e305-aa5b-4171-94f6-1e64f20414dd" containerName="sbdb"
Feb 03 12:20:24 crc kubenswrapper[4820]: E0203 12:20:24.494943 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf99e305-aa5b-4171-94f6-1e64f20414dd" containerName="nbdb"
Feb 03 12:20:24 crc kubenswrapper[4820]: I0203 12:20:24.494953 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf99e305-aa5b-4171-94f6-1e64f20414dd" containerName="nbdb"
Feb 03 12:20:24 crc kubenswrapper[4820]: E0203 12:20:24.494970 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf99e305-aa5b-4171-94f6-1e64f20414dd" containerName="ovn-acl-logging"
Feb 03 12:20:24 crc kubenswrapper[4820]: I0203 12:20:24.494978 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf99e305-aa5b-4171-94f6-1e64f20414dd" containerName="ovn-acl-logging"
Feb 03 12:20:24 crc kubenswrapper[4820]: E0203 12:20:24.494991 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf99e305-aa5b-4171-94f6-1e64f20414dd" containerName="kubecfg-setup"
Feb 03 12:20:24 crc kubenswrapper[4820]: I0203 12:20:24.495001 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf99e305-aa5b-4171-94f6-1e64f20414dd" containerName="kubecfg-setup"
Feb 03 12:20:24 crc kubenswrapper[4820]: E0203 12:20:24.495013 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf99e305-aa5b-4171-94f6-1e64f20414dd" containerName="ovnkube-controller"
Feb 03 12:20:24 crc kubenswrapper[4820]: I0203 12:20:24.495019 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf99e305-aa5b-4171-94f6-1e64f20414dd" containerName="ovnkube-controller"
Feb 03 12:20:24 crc kubenswrapper[4820]: E0203 12:20:24.495026 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf99e305-aa5b-4171-94f6-1e64f20414dd" containerName="ovnkube-controller"
Feb 03 12:20:24 crc kubenswrapper[4820]: I0203 12:20:24.495033 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf99e305-aa5b-4171-94f6-1e64f20414dd" containerName="ovnkube-controller"
Feb 03 12:20:24 crc kubenswrapper[4820]: E0203 12:20:24.495046 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf99e305-aa5b-4171-94f6-1e64f20414dd" containerName="kube-rbac-proxy-ovn-metrics"
Feb 03 12:20:24 crc kubenswrapper[4820]: I0203 12:20:24.495052 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf99e305-aa5b-4171-94f6-1e64f20414dd" containerName="kube-rbac-proxy-ovn-metrics"
Feb 03 12:20:24 crc kubenswrapper[4820]: E0203 12:20:24.495059 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf99e305-aa5b-4171-94f6-1e64f20414dd" containerName="northd"
Feb 03 12:20:24 crc kubenswrapper[4820]: I0203 12:20:24.495064 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf99e305-aa5b-4171-94f6-1e64f20414dd" containerName="northd"
Feb 03 12:20:24 crc kubenswrapper[4820]: I0203 12:20:24.495155 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf99e305-aa5b-4171-94f6-1e64f20414dd" containerName="sbdb"
Feb 03 12:20:24 crc kubenswrapper[4820]: I0203 12:20:24.495162 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf99e305-aa5b-4171-94f6-1e64f20414dd" containerName="kube-rbac-proxy-ovn-metrics"
Feb 03 12:20:24 crc kubenswrapper[4820]: I0203 12:20:24.495172 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf99e305-aa5b-4171-94f6-1e64f20414dd" containerName="ovnkube-controller"
Feb 03 12:20:24 crc kubenswrapper[4820]: I0203 12:20:24.495179 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf99e305-aa5b-4171-94f6-1e64f20414dd" containerName="ovnkube-controller"
Feb 03 12:20:24 crc kubenswrapper[4820]: I0203 12:20:24.495187 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf99e305-aa5b-4171-94f6-1e64f20414dd" containerName="ovnkube-controller"
Feb 03 12:20:24 crc kubenswrapper[4820]: I0203 12:20:24.495194 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf99e305-aa5b-4171-94f6-1e64f20414dd" containerName="northd"
Feb 03 12:20:24 crc kubenswrapper[4820]: I0203 12:20:24.495201 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf99e305-aa5b-4171-94f6-1e64f20414dd" containerName="ovn-controller"
Feb 03 12:20:24 crc kubenswrapper[4820]: I0203 12:20:24.495208 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf99e305-aa5b-4171-94f6-1e64f20414dd" containerName="ovn-acl-logging"
Feb 03 12:20:24 crc kubenswrapper[4820]: I0203 12:20:24.495214 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf99e305-aa5b-4171-94f6-1e64f20414dd" containerName="ovnkube-controller"
Feb 03 12:20:24 crc kubenswrapper[4820]: I0203 12:20:24.495238 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf99e305-aa5b-4171-94f6-1e64f20414dd" containerName="kube-rbac-proxy-node"
Feb 03 12:20:24 crc kubenswrapper[4820]: I0203 12:20:24.495245 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf99e305-aa5b-4171-94f6-1e64f20414dd" containerName="nbdb"
Feb 03 12:20:24 crc kubenswrapper[4820]: E0203 12:20:24.495398 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf99e305-aa5b-4171-94f6-1e64f20414dd" containerName="ovnkube-controller"
Feb 03 12:20:24 crc kubenswrapper[4820]: I0203 12:20:24.495414 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf99e305-aa5b-4171-94f6-1e64f20414dd" containerName="ovnkube-controller"
Feb 03 12:20:24 crc kubenswrapper[4820]: I0203 12:20:24.495515 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf99e305-aa5b-4171-94f6-1e64f20414dd" containerName="ovnkube-controller"
Feb 03 12:20:24 crc kubenswrapper[4820]: I0203 12:20:24.497492 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-9nz9t"
Feb 03 12:20:24 crc kubenswrapper[4820]: I0203 12:20:24.571062 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/cf99e305-aa5b-4171-94f6-1e64f20414dd-host-run-netns\") pod \"cf99e305-aa5b-4171-94f6-1e64f20414dd\" (UID: \"cf99e305-aa5b-4171-94f6-1e64f20414dd\") "
Feb 03 12:20:24 crc kubenswrapper[4820]: I0203 12:20:24.571123 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/cf99e305-aa5b-4171-94f6-1e64f20414dd-ovnkube-config\") pod \"cf99e305-aa5b-4171-94f6-1e64f20414dd\" (UID: \"cf99e305-aa5b-4171-94f6-1e64f20414dd\") "
Feb 03 12:20:24 crc kubenswrapper[4820]: I0203 12:20:24.571149 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cf99e305-aa5b-4171-94f6-1e64f20414dd-host-run-ovn-kubernetes\") pod \"cf99e305-aa5b-4171-94f6-1e64f20414dd\" (UID: \"cf99e305-aa5b-4171-94f6-1e64f20414dd\") "
Feb 03 12:20:24 crc kubenswrapper[4820]: I0203 12:20:24.571164 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cf99e305-aa5b-4171-94f6-1e64f20414dd-host-cni-netd\") pod \"cf99e305-aa5b-4171-94f6-1e64f20414dd\" (UID: \"cf99e305-aa5b-4171-94f6-1e64f20414dd\") "
Feb 03 12:20:24 crc kubenswrapper[4820]: I0203 12:20:24.571224 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cf99e305-aa5b-4171-94f6-1e64f20414dd-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "cf99e305-aa5b-4171-94f6-1e64f20414dd" (UID: "cf99e305-aa5b-4171-94f6-1e64f20414dd"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 03 12:20:24 crc kubenswrapper[4820]: I0203 12:20:24.571311 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/cf99e305-aa5b-4171-94f6-1e64f20414dd-run-systemd\") pod \"cf99e305-aa5b-4171-94f6-1e64f20414dd\" (UID: \"cf99e305-aa5b-4171-94f6-1e64f20414dd\") "
Feb 03 12:20:24 crc kubenswrapper[4820]: I0203 12:20:24.571411 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cf99e305-aa5b-4171-94f6-1e64f20414dd-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "cf99e305-aa5b-4171-94f6-1e64f20414dd" (UID: "cf99e305-aa5b-4171-94f6-1e64f20414dd"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue ""
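The interleaved cpu_manager, state_mem, and memory_manager entries above fire when the replacement pod (ovnkube-node-9nz9t) is admitted: the resource managers sweep out per-container accounting still keyed to the deleted pod's UID, one pair of lines per stale container, and despite the E-level severity this is routine cleanup. An illustrative, deliberately simplified Go sketch of such a sweep follows; the data structures are assumptions for the example, not the kubelet's:

// stalesweep.go - simplified illustration (an assumption, not kubelet code)
// of dropping resource assignments whose owning pod is no longer active.
package main

import "fmt"

type key struct{ podUID, container string }

// removeStaleState deletes every assignment belonging to a pod that is not
// in the active set, logging one line per removed container, much like the
// cpu_manager/memory_manager entries above.
func removeStaleState(assignments map[key]string, active map[string]bool) {
	for k := range assignments { // deleting during range is safe in Go
		if !active[k.podUID] {
			fmt.Printf("RemoveStaleState: removing container podUID=%q containerName=%q\n", k.podUID, k.container)
			delete(assignments, k)
		}
	}
}

func main() {
	a := map[key]string{
		{"cf99e305-aa5b-4171-94f6-1e64f20414dd", "northd"}: "cpuset=0-3", // stale: pod was deleted
	}
	removeStaleState(a, map[string]bool{}) // no active pods carry that UID anymore
}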
InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 03 12:20:24 crc kubenswrapper[4820]: I0203 12:20:24.571471 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cf99e305-aa5b-4171-94f6-1e64f20414dd-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "cf99e305-aa5b-4171-94f6-1e64f20414dd" (UID: "cf99e305-aa5b-4171-94f6-1e64f20414dd"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 03 12:20:24 crc kubenswrapper[4820]: I0203 12:20:24.571645 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf99e305-aa5b-4171-94f6-1e64f20414dd-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "cf99e305-aa5b-4171-94f6-1e64f20414dd" (UID: "cf99e305-aa5b-4171-94f6-1e64f20414dd"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:20:24 crc kubenswrapper[4820]: I0203 12:20:24.572063 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cf99e305-aa5b-4171-94f6-1e64f20414dd-var-lib-openvswitch\") pod \"cf99e305-aa5b-4171-94f6-1e64f20414dd\" (UID: \"cf99e305-aa5b-4171-94f6-1e64f20414dd\") " Feb 03 12:20:24 crc kubenswrapper[4820]: I0203 12:20:24.572641 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/cf99e305-aa5b-4171-94f6-1e64f20414dd-host-kubelet\") pod \"cf99e305-aa5b-4171-94f6-1e64f20414dd\" (UID: \"cf99e305-aa5b-4171-94f6-1e64f20414dd\") " Feb 03 12:20:24 crc kubenswrapper[4820]: I0203 12:20:24.572685 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/cf99e305-aa5b-4171-94f6-1e64f20414dd-ovn-node-metrics-cert\") pod \"cf99e305-aa5b-4171-94f6-1e64f20414dd\" (UID: \"cf99e305-aa5b-4171-94f6-1e64f20414dd\") " Feb 03 12:20:24 crc kubenswrapper[4820]: I0203 12:20:24.572712 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/cf99e305-aa5b-4171-94f6-1e64f20414dd-node-log\") pod \"cf99e305-aa5b-4171-94f6-1e64f20414dd\" (UID: \"cf99e305-aa5b-4171-94f6-1e64f20414dd\") " Feb 03 12:20:24 crc kubenswrapper[4820]: I0203 12:20:24.572746 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cf99e305-aa5b-4171-94f6-1e64f20414dd-run-openvswitch\") pod \"cf99e305-aa5b-4171-94f6-1e64f20414dd\" (UID: \"cf99e305-aa5b-4171-94f6-1e64f20414dd\") " Feb 03 12:20:24 crc kubenswrapper[4820]: I0203 12:20:24.572764 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/cf99e305-aa5b-4171-94f6-1e64f20414dd-host-cni-bin\") pod \"cf99e305-aa5b-4171-94f6-1e64f20414dd\" (UID: \"cf99e305-aa5b-4171-94f6-1e64f20414dd\") " Feb 03 12:20:24 crc kubenswrapper[4820]: I0203 12:20:24.572776 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cf99e305-aa5b-4171-94f6-1e64f20414dd-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "cf99e305-aa5b-4171-94f6-1e64f20414dd" (UID: "cf99e305-aa5b-4171-94f6-1e64f20414dd"). InnerVolumeSpecName "host-kubelet". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 03 12:20:24 crc kubenswrapper[4820]: I0203 12:20:24.572831 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cf99e305-aa5b-4171-94f6-1e64f20414dd-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "cf99e305-aa5b-4171-94f6-1e64f20414dd" (UID: "cf99e305-aa5b-4171-94f6-1e64f20414dd"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 03 12:20:24 crc kubenswrapper[4820]: I0203 12:20:24.572833 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cf99e305-aa5b-4171-94f6-1e64f20414dd-node-log" (OuterVolumeSpecName: "node-log") pod "cf99e305-aa5b-4171-94f6-1e64f20414dd" (UID: "cf99e305-aa5b-4171-94f6-1e64f20414dd"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 03 12:20:24 crc kubenswrapper[4820]: I0203 12:20:24.572788 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cf99e305-aa5b-4171-94f6-1e64f20414dd-etc-openvswitch\") pod \"cf99e305-aa5b-4171-94f6-1e64f20414dd\" (UID: \"cf99e305-aa5b-4171-94f6-1e64f20414dd\") " Feb 03 12:20:24 crc kubenswrapper[4820]: I0203 12:20:24.572860 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cf99e305-aa5b-4171-94f6-1e64f20414dd-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "cf99e305-aa5b-4171-94f6-1e64f20414dd" (UID: "cf99e305-aa5b-4171-94f6-1e64f20414dd"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 03 12:20:24 crc kubenswrapper[4820]: I0203 12:20:24.572877 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/cf99e305-aa5b-4171-94f6-1e64f20414dd-log-socket\") pod \"cf99e305-aa5b-4171-94f6-1e64f20414dd\" (UID: \"cf99e305-aa5b-4171-94f6-1e64f20414dd\") " Feb 03 12:20:24 crc kubenswrapper[4820]: I0203 12:20:24.572913 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cf99e305-aa5b-4171-94f6-1e64f20414dd-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "cf99e305-aa5b-4171-94f6-1e64f20414dd" (UID: "cf99e305-aa5b-4171-94f6-1e64f20414dd"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 03 12:20:24 crc kubenswrapper[4820]: I0203 12:20:24.572923 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/cf99e305-aa5b-4171-94f6-1e64f20414dd-env-overrides\") pod \"cf99e305-aa5b-4171-94f6-1e64f20414dd\" (UID: \"cf99e305-aa5b-4171-94f6-1e64f20414dd\") " Feb 03 12:20:24 crc kubenswrapper[4820]: I0203 12:20:24.572084 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cf99e305-aa5b-4171-94f6-1e64f20414dd-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "cf99e305-aa5b-4171-94f6-1e64f20414dd" (UID: "cf99e305-aa5b-4171-94f6-1e64f20414dd"). InnerVolumeSpecName "var-lib-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 03 12:20:24 crc kubenswrapper[4820]: I0203 12:20:24.572999 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nk788\" (UniqueName: \"kubernetes.io/projected/cf99e305-aa5b-4171-94f6-1e64f20414dd-kube-api-access-nk788\") pod \"cf99e305-aa5b-4171-94f6-1e64f20414dd\" (UID: \"cf99e305-aa5b-4171-94f6-1e64f20414dd\") " Feb 03 12:20:24 crc kubenswrapper[4820]: I0203 12:20:24.573018 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cf99e305-aa5b-4171-94f6-1e64f20414dd-log-socket" (OuterVolumeSpecName: "log-socket") pod "cf99e305-aa5b-4171-94f6-1e64f20414dd" (UID: "cf99e305-aa5b-4171-94f6-1e64f20414dd"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 03 12:20:24 crc kubenswrapper[4820]: I0203 12:20:24.573045 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/cf99e305-aa5b-4171-94f6-1e64f20414dd-host-slash\") pod \"cf99e305-aa5b-4171-94f6-1e64f20414dd\" (UID: \"cf99e305-aa5b-4171-94f6-1e64f20414dd\") " Feb 03 12:20:24 crc kubenswrapper[4820]: I0203 12:20:24.573074 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cf99e305-aa5b-4171-94f6-1e64f20414dd-host-slash" (OuterVolumeSpecName: "host-slash") pod "cf99e305-aa5b-4171-94f6-1e64f20414dd" (UID: "cf99e305-aa5b-4171-94f6-1e64f20414dd"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 03 12:20:24 crc kubenswrapper[4820]: I0203 12:20:24.573156 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf99e305-aa5b-4171-94f6-1e64f20414dd-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "cf99e305-aa5b-4171-94f6-1e64f20414dd" (UID: "cf99e305-aa5b-4171-94f6-1e64f20414dd"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:20:24 crc kubenswrapper[4820]: I0203 12:20:24.573201 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cf99e305-aa5b-4171-94f6-1e64f20414dd-host-var-lib-cni-networks-ovn-kubernetes\") pod \"cf99e305-aa5b-4171-94f6-1e64f20414dd\" (UID: \"cf99e305-aa5b-4171-94f6-1e64f20414dd\") " Feb 03 12:20:24 crc kubenswrapper[4820]: I0203 12:20:24.573245 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/cf99e305-aa5b-4171-94f6-1e64f20414dd-ovnkube-script-lib\") pod \"cf99e305-aa5b-4171-94f6-1e64f20414dd\" (UID: \"cf99e305-aa5b-4171-94f6-1e64f20414dd\") " Feb 03 12:20:24 crc kubenswrapper[4820]: I0203 12:20:24.573271 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/cf99e305-aa5b-4171-94f6-1e64f20414dd-run-ovn\") pod \"cf99e305-aa5b-4171-94f6-1e64f20414dd\" (UID: \"cf99e305-aa5b-4171-94f6-1e64f20414dd\") " Feb 03 12:20:24 crc kubenswrapper[4820]: I0203 12:20:24.573303 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cf99e305-aa5b-4171-94f6-1e64f20414dd-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "cf99e305-aa5b-4171-94f6-1e64f20414dd" (UID: "cf99e305-aa5b-4171-94f6-1e64f20414dd"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 03 12:20:24 crc kubenswrapper[4820]: I0203 12:20:24.573316 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/cf99e305-aa5b-4171-94f6-1e64f20414dd-systemd-units\") pod \"cf99e305-aa5b-4171-94f6-1e64f20414dd\" (UID: \"cf99e305-aa5b-4171-94f6-1e64f20414dd\") " Feb 03 12:20:24 crc kubenswrapper[4820]: I0203 12:20:24.573340 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cf99e305-aa5b-4171-94f6-1e64f20414dd-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "cf99e305-aa5b-4171-94f6-1e64f20414dd" (UID: "cf99e305-aa5b-4171-94f6-1e64f20414dd"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 03 12:20:24 crc kubenswrapper[4820]: I0203 12:20:24.573493 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cf99e305-aa5b-4171-94f6-1e64f20414dd-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "cf99e305-aa5b-4171-94f6-1e64f20414dd" (UID: "cf99e305-aa5b-4171-94f6-1e64f20414dd"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 03 12:20:24 crc kubenswrapper[4820]: I0203 12:20:24.573775 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf99e305-aa5b-4171-94f6-1e64f20414dd-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "cf99e305-aa5b-4171-94f6-1e64f20414dd" (UID: "cf99e305-aa5b-4171-94f6-1e64f20414dd"). InnerVolumeSpecName "ovnkube-script-lib". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:20:24 crc kubenswrapper[4820]: I0203 12:20:24.573678 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ec715e76-5b28-4915-b5ff-5f0c6f69179d-run-openvswitch\") pod \"ovnkube-node-9nz9t\" (UID: \"ec715e76-5b28-4915-b5ff-5f0c6f69179d\") " pod="openshift-ovn-kubernetes/ovnkube-node-9nz9t" Feb 03 12:20:24 crc kubenswrapper[4820]: I0203 12:20:24.573957 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ec715e76-5b28-4915-b5ff-5f0c6f69179d-ovnkube-config\") pod \"ovnkube-node-9nz9t\" (UID: \"ec715e76-5b28-4915-b5ff-5f0c6f69179d\") " pod="openshift-ovn-kubernetes/ovnkube-node-9nz9t" Feb 03 12:20:24 crc kubenswrapper[4820]: I0203 12:20:24.574077 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ec715e76-5b28-4915-b5ff-5f0c6f69179d-var-lib-openvswitch\") pod \"ovnkube-node-9nz9t\" (UID: \"ec715e76-5b28-4915-b5ff-5f0c6f69179d\") " pod="openshift-ovn-kubernetes/ovnkube-node-9nz9t" Feb 03 12:20:24 crc kubenswrapper[4820]: I0203 12:20:24.574138 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/ec715e76-5b28-4915-b5ff-5f0c6f69179d-host-kubelet\") pod \"ovnkube-node-9nz9t\" (UID: \"ec715e76-5b28-4915-b5ff-5f0c6f69179d\") " pod="openshift-ovn-kubernetes/ovnkube-node-9nz9t" Feb 03 12:20:24 crc kubenswrapper[4820]: I0203 12:20:24.574157 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/ec715e76-5b28-4915-b5ff-5f0c6f69179d-host-slash\") pod \"ovnkube-node-9nz9t\" (UID: \"ec715e76-5b28-4915-b5ff-5f0c6f69179d\") " pod="openshift-ovn-kubernetes/ovnkube-node-9nz9t" Feb 03 12:20:24 crc kubenswrapper[4820]: I0203 12:20:24.574238 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/ec715e76-5b28-4915-b5ff-5f0c6f69179d-run-ovn\") pod \"ovnkube-node-9nz9t\" (UID: \"ec715e76-5b28-4915-b5ff-5f0c6f69179d\") " pod="openshift-ovn-kubernetes/ovnkube-node-9nz9t" Feb 03 12:20:24 crc kubenswrapper[4820]: I0203 12:20:24.574327 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ec715e76-5b28-4915-b5ff-5f0c6f69179d-env-overrides\") pod \"ovnkube-node-9nz9t\" (UID: \"ec715e76-5b28-4915-b5ff-5f0c6f69179d\") " pod="openshift-ovn-kubernetes/ovnkube-node-9nz9t" Feb 03 12:20:24 crc kubenswrapper[4820]: I0203 12:20:24.574473 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ec715e76-5b28-4915-b5ff-5f0c6f69179d-ovn-node-metrics-cert\") pod \"ovnkube-node-9nz9t\" (UID: \"ec715e76-5b28-4915-b5ff-5f0c6f69179d\") " pod="openshift-ovn-kubernetes/ovnkube-node-9nz9t" Feb 03 12:20:24 crc kubenswrapper[4820]: I0203 12:20:24.574584 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/ec715e76-5b28-4915-b5ff-5f0c6f69179d-run-systemd\") pod 
\"ovnkube-node-9nz9t\" (UID: \"ec715e76-5b28-4915-b5ff-5f0c6f69179d\") " pod="openshift-ovn-kubernetes/ovnkube-node-9nz9t" Feb 03 12:20:24 crc kubenswrapper[4820]: I0203 12:20:24.574606 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ec715e76-5b28-4915-b5ff-5f0c6f69179d-host-cni-netd\") pod \"ovnkube-node-9nz9t\" (UID: \"ec715e76-5b28-4915-b5ff-5f0c6f69179d\") " pod="openshift-ovn-kubernetes/ovnkube-node-9nz9t" Feb 03 12:20:24 crc kubenswrapper[4820]: I0203 12:20:24.574681 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/ec715e76-5b28-4915-b5ff-5f0c6f69179d-host-cni-bin\") pod \"ovnkube-node-9nz9t\" (UID: \"ec715e76-5b28-4915-b5ff-5f0c6f69179d\") " pod="openshift-ovn-kubernetes/ovnkube-node-9nz9t" Feb 03 12:20:24 crc kubenswrapper[4820]: I0203 12:20:24.574756 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cs7zb\" (UniqueName: \"kubernetes.io/projected/ec715e76-5b28-4915-b5ff-5f0c6f69179d-kube-api-access-cs7zb\") pod \"ovnkube-node-9nz9t\" (UID: \"ec715e76-5b28-4915-b5ff-5f0c6f69179d\") " pod="openshift-ovn-kubernetes/ovnkube-node-9nz9t" Feb 03 12:20:24 crc kubenswrapper[4820]: I0203 12:20:24.574809 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/ec715e76-5b28-4915-b5ff-5f0c6f69179d-ovnkube-script-lib\") pod \"ovnkube-node-9nz9t\" (UID: \"ec715e76-5b28-4915-b5ff-5f0c6f69179d\") " pod="openshift-ovn-kubernetes/ovnkube-node-9nz9t" Feb 03 12:20:24 crc kubenswrapper[4820]: I0203 12:20:24.574841 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ec715e76-5b28-4915-b5ff-5f0c6f69179d-etc-openvswitch\") pod \"ovnkube-node-9nz9t\" (UID: \"ec715e76-5b28-4915-b5ff-5f0c6f69179d\") " pod="openshift-ovn-kubernetes/ovnkube-node-9nz9t" Feb 03 12:20:24 crc kubenswrapper[4820]: I0203 12:20:24.574911 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ec715e76-5b28-4915-b5ff-5f0c6f69179d-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-9nz9t\" (UID: \"ec715e76-5b28-4915-b5ff-5f0c6f69179d\") " pod="openshift-ovn-kubernetes/ovnkube-node-9nz9t" Feb 03 12:20:24 crc kubenswrapper[4820]: I0203 12:20:24.574963 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ec715e76-5b28-4915-b5ff-5f0c6f69179d-host-run-netns\") pod \"ovnkube-node-9nz9t\" (UID: \"ec715e76-5b28-4915-b5ff-5f0c6f69179d\") " pod="openshift-ovn-kubernetes/ovnkube-node-9nz9t" Feb 03 12:20:24 crc kubenswrapper[4820]: I0203 12:20:24.574990 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ec715e76-5b28-4915-b5ff-5f0c6f69179d-host-run-ovn-kubernetes\") pod \"ovnkube-node-9nz9t\" (UID: \"ec715e76-5b28-4915-b5ff-5f0c6f69179d\") " pod="openshift-ovn-kubernetes/ovnkube-node-9nz9t" Feb 03 12:20:24 crc kubenswrapper[4820]: I0203 12:20:24.575012 4820 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/ec715e76-5b28-4915-b5ff-5f0c6f69179d-systemd-units\") pod \"ovnkube-node-9nz9t\" (UID: \"ec715e76-5b28-4915-b5ff-5f0c6f69179d\") " pod="openshift-ovn-kubernetes/ovnkube-node-9nz9t" Feb 03 12:20:24 crc kubenswrapper[4820]: I0203 12:20:24.575036 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/ec715e76-5b28-4915-b5ff-5f0c6f69179d-node-log\") pod \"ovnkube-node-9nz9t\" (UID: \"ec715e76-5b28-4915-b5ff-5f0c6f69179d\") " pod="openshift-ovn-kubernetes/ovnkube-node-9nz9t" Feb 03 12:20:24 crc kubenswrapper[4820]: I0203 12:20:24.575064 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/ec715e76-5b28-4915-b5ff-5f0c6f69179d-log-socket\") pod \"ovnkube-node-9nz9t\" (UID: \"ec715e76-5b28-4915-b5ff-5f0c6f69179d\") " pod="openshift-ovn-kubernetes/ovnkube-node-9nz9t" Feb 03 12:20:24 crc kubenswrapper[4820]: I0203 12:20:24.575133 4820 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cf99e305-aa5b-4171-94f6-1e64f20414dd-run-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 03 12:20:24 crc kubenswrapper[4820]: I0203 12:20:24.575145 4820 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/cf99e305-aa5b-4171-94f6-1e64f20414dd-host-cni-bin\") on node \"crc\" DevicePath \"\"" Feb 03 12:20:24 crc kubenswrapper[4820]: I0203 12:20:24.575155 4820 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cf99e305-aa5b-4171-94f6-1e64f20414dd-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 03 12:20:24 crc kubenswrapper[4820]: I0203 12:20:24.575165 4820 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/cf99e305-aa5b-4171-94f6-1e64f20414dd-log-socket\") on node \"crc\" DevicePath \"\"" Feb 03 12:20:24 crc kubenswrapper[4820]: I0203 12:20:24.575174 4820 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/cf99e305-aa5b-4171-94f6-1e64f20414dd-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 03 12:20:24 crc kubenswrapper[4820]: I0203 12:20:24.575183 4820 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/cf99e305-aa5b-4171-94f6-1e64f20414dd-host-slash\") on node \"crc\" DevicePath \"\"" Feb 03 12:20:24 crc kubenswrapper[4820]: I0203 12:20:24.575193 4820 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cf99e305-aa5b-4171-94f6-1e64f20414dd-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Feb 03 12:20:24 crc kubenswrapper[4820]: I0203 12:20:24.575216 4820 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/cf99e305-aa5b-4171-94f6-1e64f20414dd-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Feb 03 12:20:24 crc kubenswrapper[4820]: I0203 12:20:24.575227 4820 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/cf99e305-aa5b-4171-94f6-1e64f20414dd-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 03 12:20:24 crc 
kubenswrapper[4820]: I0203 12:20:24.575236 4820 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/cf99e305-aa5b-4171-94f6-1e64f20414dd-systemd-units\") on node \"crc\" DevicePath \"\"" Feb 03 12:20:24 crc kubenswrapper[4820]: I0203 12:20:24.575255 4820 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/cf99e305-aa5b-4171-94f6-1e64f20414dd-host-run-netns\") on node \"crc\" DevicePath \"\"" Feb 03 12:20:24 crc kubenswrapper[4820]: I0203 12:20:24.575264 4820 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/cf99e305-aa5b-4171-94f6-1e64f20414dd-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 03 12:20:24 crc kubenswrapper[4820]: I0203 12:20:24.575273 4820 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cf99e305-aa5b-4171-94f6-1e64f20414dd-host-cni-netd\") on node \"crc\" DevicePath \"\"" Feb 03 12:20:24 crc kubenswrapper[4820]: I0203 12:20:24.575283 4820 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cf99e305-aa5b-4171-94f6-1e64f20414dd-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Feb 03 12:20:24 crc kubenswrapper[4820]: I0203 12:20:24.575292 4820 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cf99e305-aa5b-4171-94f6-1e64f20414dd-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 03 12:20:24 crc kubenswrapper[4820]: I0203 12:20:24.575301 4820 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/cf99e305-aa5b-4171-94f6-1e64f20414dd-host-kubelet\") on node \"crc\" DevicePath \"\"" Feb 03 12:20:24 crc kubenswrapper[4820]: I0203 12:20:24.575312 4820 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/cf99e305-aa5b-4171-94f6-1e64f20414dd-node-log\") on node \"crc\" DevicePath \"\"" Feb 03 12:20:24 crc kubenswrapper[4820]: I0203 12:20:24.577681 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf99e305-aa5b-4171-94f6-1e64f20414dd-kube-api-access-nk788" (OuterVolumeSpecName: "kube-api-access-nk788") pod "cf99e305-aa5b-4171-94f6-1e64f20414dd" (UID: "cf99e305-aa5b-4171-94f6-1e64f20414dd"). InnerVolumeSpecName "kube-api-access-nk788". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:20:24 crc kubenswrapper[4820]: I0203 12:20:24.577802 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf99e305-aa5b-4171-94f6-1e64f20414dd-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "cf99e305-aa5b-4171-94f6-1e64f20414dd" (UID: "cf99e305-aa5b-4171-94f6-1e64f20414dd"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:20:24 crc kubenswrapper[4820]: I0203 12:20:24.585484 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cf99e305-aa5b-4171-94f6-1e64f20414dd-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "cf99e305-aa5b-4171-94f6-1e64f20414dd" (UID: "cf99e305-aa5b-4171-94f6-1e64f20414dd"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 03 12:20:24 crc kubenswrapper[4820]: I0203 12:20:24.676625 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cs7zb\" (UniqueName: \"kubernetes.io/projected/ec715e76-5b28-4915-b5ff-5f0c6f69179d-kube-api-access-cs7zb\") pod \"ovnkube-node-9nz9t\" (UID: \"ec715e76-5b28-4915-b5ff-5f0c6f69179d\") " pod="openshift-ovn-kubernetes/ovnkube-node-9nz9t" Feb 03 12:20:24 crc kubenswrapper[4820]: I0203 12:20:24.676752 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/ec715e76-5b28-4915-b5ff-5f0c6f69179d-ovnkube-script-lib\") pod \"ovnkube-node-9nz9t\" (UID: \"ec715e76-5b28-4915-b5ff-5f0c6f69179d\") " pod="openshift-ovn-kubernetes/ovnkube-node-9nz9t" Feb 03 12:20:24 crc kubenswrapper[4820]: I0203 12:20:24.676780 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ec715e76-5b28-4915-b5ff-5f0c6f69179d-etc-openvswitch\") pod \"ovnkube-node-9nz9t\" (UID: \"ec715e76-5b28-4915-b5ff-5f0c6f69179d\") " pod="openshift-ovn-kubernetes/ovnkube-node-9nz9t" Feb 03 12:20:24 crc kubenswrapper[4820]: I0203 12:20:24.676819 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ec715e76-5b28-4915-b5ff-5f0c6f69179d-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-9nz9t\" (UID: \"ec715e76-5b28-4915-b5ff-5f0c6f69179d\") " pod="openshift-ovn-kubernetes/ovnkube-node-9nz9t" Feb 03 12:20:24 crc kubenswrapper[4820]: I0203 12:20:24.676848 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ec715e76-5b28-4915-b5ff-5f0c6f69179d-host-run-netns\") pod \"ovnkube-node-9nz9t\" (UID: \"ec715e76-5b28-4915-b5ff-5f0c6f69179d\") " pod="openshift-ovn-kubernetes/ovnkube-node-9nz9t" Feb 03 12:20:24 crc kubenswrapper[4820]: I0203 12:20:24.677019 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ec715e76-5b28-4915-b5ff-5f0c6f69179d-host-run-ovn-kubernetes\") pod \"ovnkube-node-9nz9t\" (UID: \"ec715e76-5b28-4915-b5ff-5f0c6f69179d\") " pod="openshift-ovn-kubernetes/ovnkube-node-9nz9t" Feb 03 12:20:24 crc kubenswrapper[4820]: I0203 12:20:24.677054 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/ec715e76-5b28-4915-b5ff-5f0c6f69179d-systemd-units\") pod \"ovnkube-node-9nz9t\" (UID: \"ec715e76-5b28-4915-b5ff-5f0c6f69179d\") " pod="openshift-ovn-kubernetes/ovnkube-node-9nz9t" Feb 03 12:20:24 crc kubenswrapper[4820]: I0203 12:20:24.677081 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/ec715e76-5b28-4915-b5ff-5f0c6f69179d-node-log\") pod \"ovnkube-node-9nz9t\" (UID: \"ec715e76-5b28-4915-b5ff-5f0c6f69179d\") " pod="openshift-ovn-kubernetes/ovnkube-node-9nz9t" Feb 03 12:20:24 crc kubenswrapper[4820]: I0203 12:20:24.677108 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/ec715e76-5b28-4915-b5ff-5f0c6f69179d-log-socket\") pod \"ovnkube-node-9nz9t\" (UID: \"ec715e76-5b28-4915-b5ff-5f0c6f69179d\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-9nz9t" Feb 03 12:20:24 crc kubenswrapper[4820]: I0203 12:20:24.677114 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ec715e76-5b28-4915-b5ff-5f0c6f69179d-host-run-netns\") pod \"ovnkube-node-9nz9t\" (UID: \"ec715e76-5b28-4915-b5ff-5f0c6f69179d\") " pod="openshift-ovn-kubernetes/ovnkube-node-9nz9t" Feb 03 12:20:24 crc kubenswrapper[4820]: I0203 12:20:24.677135 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ec715e76-5b28-4915-b5ff-5f0c6f69179d-run-openvswitch\") pod \"ovnkube-node-9nz9t\" (UID: \"ec715e76-5b28-4915-b5ff-5f0c6f69179d\") " pod="openshift-ovn-kubernetes/ovnkube-node-9nz9t" Feb 03 12:20:24 crc kubenswrapper[4820]: I0203 12:20:24.677178 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ec715e76-5b28-4915-b5ff-5f0c6f69179d-ovnkube-config\") pod \"ovnkube-node-9nz9t\" (UID: \"ec715e76-5b28-4915-b5ff-5f0c6f69179d\") " pod="openshift-ovn-kubernetes/ovnkube-node-9nz9t" Feb 03 12:20:24 crc kubenswrapper[4820]: I0203 12:20:24.677185 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ec715e76-5b28-4915-b5ff-5f0c6f69179d-run-openvswitch\") pod \"ovnkube-node-9nz9t\" (UID: \"ec715e76-5b28-4915-b5ff-5f0c6f69179d\") " pod="openshift-ovn-kubernetes/ovnkube-node-9nz9t" Feb 03 12:20:24 crc kubenswrapper[4820]: I0203 12:20:24.677208 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ec715e76-5b28-4915-b5ff-5f0c6f69179d-var-lib-openvswitch\") pod \"ovnkube-node-9nz9t\" (UID: \"ec715e76-5b28-4915-b5ff-5f0c6f69179d\") " pod="openshift-ovn-kubernetes/ovnkube-node-9nz9t" Feb 03 12:20:24 crc kubenswrapper[4820]: I0203 12:20:24.677227 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/ec715e76-5b28-4915-b5ff-5f0c6f69179d-host-kubelet\") pod \"ovnkube-node-9nz9t\" (UID: \"ec715e76-5b28-4915-b5ff-5f0c6f69179d\") " pod="openshift-ovn-kubernetes/ovnkube-node-9nz9t" Feb 03 12:20:24 crc kubenswrapper[4820]: I0203 12:20:24.677236 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ec715e76-5b28-4915-b5ff-5f0c6f69179d-host-run-ovn-kubernetes\") pod \"ovnkube-node-9nz9t\" (UID: \"ec715e76-5b28-4915-b5ff-5f0c6f69179d\") " pod="openshift-ovn-kubernetes/ovnkube-node-9nz9t" Feb 03 12:20:24 crc kubenswrapper[4820]: I0203 12:20:24.677242 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/ec715e76-5b28-4915-b5ff-5f0c6f69179d-host-slash\") pod \"ovnkube-node-9nz9t\" (UID: \"ec715e76-5b28-4915-b5ff-5f0c6f69179d\") " pod="openshift-ovn-kubernetes/ovnkube-node-9nz9t" Feb 03 12:20:24 crc kubenswrapper[4820]: I0203 12:20:24.677262 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/ec715e76-5b28-4915-b5ff-5f0c6f69179d-host-slash\") pod \"ovnkube-node-9nz9t\" (UID: \"ec715e76-5b28-4915-b5ff-5f0c6f69179d\") " pod="openshift-ovn-kubernetes/ovnkube-node-9nz9t" Feb 03 12:20:24 crc kubenswrapper[4820]: I0203 12:20:24.677287 4820 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ec715e76-5b28-4915-b5ff-5f0c6f69179d-etc-openvswitch\") pod \"ovnkube-node-9nz9t\" (UID: \"ec715e76-5b28-4915-b5ff-5f0c6f69179d\") " pod="openshift-ovn-kubernetes/ovnkube-node-9nz9t" Feb 03 12:20:24 crc kubenswrapper[4820]: I0203 12:20:24.677290 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/ec715e76-5b28-4915-b5ff-5f0c6f69179d-run-ovn\") pod \"ovnkube-node-9nz9t\" (UID: \"ec715e76-5b28-4915-b5ff-5f0c6f69179d\") " pod="openshift-ovn-kubernetes/ovnkube-node-9nz9t" Feb 03 12:20:24 crc kubenswrapper[4820]: I0203 12:20:24.677310 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ec715e76-5b28-4915-b5ff-5f0c6f69179d-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-9nz9t\" (UID: \"ec715e76-5b28-4915-b5ff-5f0c6f69179d\") " pod="openshift-ovn-kubernetes/ovnkube-node-9nz9t" Feb 03 12:20:24 crc kubenswrapper[4820]: I0203 12:20:24.677324 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ec715e76-5b28-4915-b5ff-5f0c6f69179d-env-overrides\") pod \"ovnkube-node-9nz9t\" (UID: \"ec715e76-5b28-4915-b5ff-5f0c6f69179d\") " pod="openshift-ovn-kubernetes/ovnkube-node-9nz9t" Feb 03 12:20:24 crc kubenswrapper[4820]: I0203 12:20:24.677364 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ec715e76-5b28-4915-b5ff-5f0c6f69179d-ovn-node-metrics-cert\") pod \"ovnkube-node-9nz9t\" (UID: \"ec715e76-5b28-4915-b5ff-5f0c6f69179d\") " pod="openshift-ovn-kubernetes/ovnkube-node-9nz9t" Feb 03 12:20:24 crc kubenswrapper[4820]: I0203 12:20:24.677389 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/ec715e76-5b28-4915-b5ff-5f0c6f69179d-run-systemd\") pod \"ovnkube-node-9nz9t\" (UID: \"ec715e76-5b28-4915-b5ff-5f0c6f69179d\") " pod="openshift-ovn-kubernetes/ovnkube-node-9nz9t" Feb 03 12:20:24 crc kubenswrapper[4820]: I0203 12:20:24.677412 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ec715e76-5b28-4915-b5ff-5f0c6f69179d-host-cni-netd\") pod \"ovnkube-node-9nz9t\" (UID: \"ec715e76-5b28-4915-b5ff-5f0c6f69179d\") " pod="openshift-ovn-kubernetes/ovnkube-node-9nz9t" Feb 03 12:20:24 crc kubenswrapper[4820]: I0203 12:20:24.677448 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/ec715e76-5b28-4915-b5ff-5f0c6f69179d-host-cni-bin\") pod \"ovnkube-node-9nz9t\" (UID: \"ec715e76-5b28-4915-b5ff-5f0c6f69179d\") " pod="openshift-ovn-kubernetes/ovnkube-node-9nz9t" Feb 03 12:20:24 crc kubenswrapper[4820]: I0203 12:20:24.677498 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nk788\" (UniqueName: \"kubernetes.io/projected/cf99e305-aa5b-4171-94f6-1e64f20414dd-kube-api-access-nk788\") on node \"crc\" DevicePath \"\"" Feb 03 12:20:24 crc kubenswrapper[4820]: I0203 12:20:24.677514 4820 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/cf99e305-aa5b-4171-94f6-1e64f20414dd-run-systemd\") on node \"crc\" 
DevicePath \"\"" Feb 03 12:20:24 crc kubenswrapper[4820]: I0203 12:20:24.677527 4820 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/cf99e305-aa5b-4171-94f6-1e64f20414dd-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 03 12:20:24 crc kubenswrapper[4820]: I0203 12:20:24.677561 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/ec715e76-5b28-4915-b5ff-5f0c6f69179d-host-cni-bin\") pod \"ovnkube-node-9nz9t\" (UID: \"ec715e76-5b28-4915-b5ff-5f0c6f69179d\") " pod="openshift-ovn-kubernetes/ovnkube-node-9nz9t" Feb 03 12:20:24 crc kubenswrapper[4820]: I0203 12:20:24.677594 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/ec715e76-5b28-4915-b5ff-5f0c6f69179d-systemd-units\") pod \"ovnkube-node-9nz9t\" (UID: \"ec715e76-5b28-4915-b5ff-5f0c6f69179d\") " pod="openshift-ovn-kubernetes/ovnkube-node-9nz9t" Feb 03 12:20:24 crc kubenswrapper[4820]: I0203 12:20:24.677629 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/ec715e76-5b28-4915-b5ff-5f0c6f69179d-node-log\") pod \"ovnkube-node-9nz9t\" (UID: \"ec715e76-5b28-4915-b5ff-5f0c6f69179d\") " pod="openshift-ovn-kubernetes/ovnkube-node-9nz9t" Feb 03 12:20:24 crc kubenswrapper[4820]: I0203 12:20:24.677659 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/ec715e76-5b28-4915-b5ff-5f0c6f69179d-log-socket\") pod \"ovnkube-node-9nz9t\" (UID: \"ec715e76-5b28-4915-b5ff-5f0c6f69179d\") " pod="openshift-ovn-kubernetes/ovnkube-node-9nz9t" Feb 03 12:20:24 crc kubenswrapper[4820]: I0203 12:20:24.677689 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/ec715e76-5b28-4915-b5ff-5f0c6f69179d-run-ovn\") pod \"ovnkube-node-9nz9t\" (UID: \"ec715e76-5b28-4915-b5ff-5f0c6f69179d\") " pod="openshift-ovn-kubernetes/ovnkube-node-9nz9t" Feb 03 12:20:24 crc kubenswrapper[4820]: I0203 12:20:24.678025 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/ec715e76-5b28-4915-b5ff-5f0c6f69179d-ovnkube-script-lib\") pod \"ovnkube-node-9nz9t\" (UID: \"ec715e76-5b28-4915-b5ff-5f0c6f69179d\") " pod="openshift-ovn-kubernetes/ovnkube-node-9nz9t" Feb 03 12:20:24 crc kubenswrapper[4820]: I0203 12:20:24.678059 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ec715e76-5b28-4915-b5ff-5f0c6f69179d-var-lib-openvswitch\") pod \"ovnkube-node-9nz9t\" (UID: \"ec715e76-5b28-4915-b5ff-5f0c6f69179d\") " pod="openshift-ovn-kubernetes/ovnkube-node-9nz9t" Feb 03 12:20:24 crc kubenswrapper[4820]: I0203 12:20:24.678034 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ec715e76-5b28-4915-b5ff-5f0c6f69179d-ovnkube-config\") pod \"ovnkube-node-9nz9t\" (UID: \"ec715e76-5b28-4915-b5ff-5f0c6f69179d\") " pod="openshift-ovn-kubernetes/ovnkube-node-9nz9t" Feb 03 12:20:24 crc kubenswrapper[4820]: I0203 12:20:24.678093 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/ec715e76-5b28-4915-b5ff-5f0c6f69179d-host-kubelet\") pod \"ovnkube-node-9nz9t\" 
(UID: \"ec715e76-5b28-4915-b5ff-5f0c6f69179d\") " pod="openshift-ovn-kubernetes/ovnkube-node-9nz9t" Feb 03 12:20:24 crc kubenswrapper[4820]: I0203 12:20:24.678109 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/ec715e76-5b28-4915-b5ff-5f0c6f69179d-run-systemd\") pod \"ovnkube-node-9nz9t\" (UID: \"ec715e76-5b28-4915-b5ff-5f0c6f69179d\") " pod="openshift-ovn-kubernetes/ovnkube-node-9nz9t" Feb 03 12:20:24 crc kubenswrapper[4820]: I0203 12:20:24.678176 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ec715e76-5b28-4915-b5ff-5f0c6f69179d-host-cni-netd\") pod \"ovnkube-node-9nz9t\" (UID: \"ec715e76-5b28-4915-b5ff-5f0c6f69179d\") " pod="openshift-ovn-kubernetes/ovnkube-node-9nz9t" Feb 03 12:20:24 crc kubenswrapper[4820]: I0203 12:20:24.678381 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ec715e76-5b28-4915-b5ff-5f0c6f69179d-env-overrides\") pod \"ovnkube-node-9nz9t\" (UID: \"ec715e76-5b28-4915-b5ff-5f0c6f69179d\") " pod="openshift-ovn-kubernetes/ovnkube-node-9nz9t" Feb 03 12:20:24 crc kubenswrapper[4820]: I0203 12:20:24.681560 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ec715e76-5b28-4915-b5ff-5f0c6f69179d-ovn-node-metrics-cert\") pod \"ovnkube-node-9nz9t\" (UID: \"ec715e76-5b28-4915-b5ff-5f0c6f69179d\") " pod="openshift-ovn-kubernetes/ovnkube-node-9nz9t" Feb 03 12:20:24 crc kubenswrapper[4820]: I0203 12:20:24.695086 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cs7zb\" (UniqueName: \"kubernetes.io/projected/ec715e76-5b28-4915-b5ff-5f0c6f69179d-kube-api-access-cs7zb\") pod \"ovnkube-node-9nz9t\" (UID: \"ec715e76-5b28-4915-b5ff-5f0c6f69179d\") " pod="openshift-ovn-kubernetes/ovnkube-node-9nz9t" Feb 03 12:20:24 crc kubenswrapper[4820]: I0203 12:20:24.816846 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-9nz9t" Feb 03 12:20:24 crc kubenswrapper[4820]: W0203 12:20:24.842148 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podec715e76_5b28_4915_b5ff_5f0c6f69179d.slice/crio-189204c74fa75ecb905d706ae4c490304c23b7f0386d02de103dd02ba71f2ff3 WatchSource:0}: Error finding container 189204c74fa75ecb905d706ae4c490304c23b7f0386d02de103dd02ba71f2ff3: Status 404 returned error can't find the container with id 189204c74fa75ecb905d706ae4c490304c23b7f0386d02de103dd02ba71f2ff3 Feb 03 12:20:25 crc kubenswrapper[4820]: I0203 12:20:25.353094 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-75mwm_cf99e305-aa5b-4171-94f6-1e64f20414dd/ovn-acl-logging/0.log" Feb 03 12:20:25 crc kubenswrapper[4820]: I0203 12:20:25.353649 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-75mwm_cf99e305-aa5b-4171-94f6-1e64f20414dd/ovn-controller/0.log" Feb 03 12:20:25 crc kubenswrapper[4820]: I0203 12:20:25.354375 4820 generic.go:334] "Generic (PLEG): container finished" podID="cf99e305-aa5b-4171-94f6-1e64f20414dd" containerID="4c9f050db727793c24f2f8bf3b387813e313f3b764be003c6849947f98a61d4f" exitCode=0 Feb 03 12:20:25 crc kubenswrapper[4820]: I0203 12:20:25.354469 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-75mwm" event={"ID":"cf99e305-aa5b-4171-94f6-1e64f20414dd","Type":"ContainerDied","Data":"4c9f050db727793c24f2f8bf3b387813e313f3b764be003c6849947f98a61d4f"} Feb 03 12:20:25 crc kubenswrapper[4820]: I0203 12:20:25.354515 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-75mwm" Feb 03 12:20:25 crc kubenswrapper[4820]: I0203 12:20:25.354531 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-75mwm" event={"ID":"cf99e305-aa5b-4171-94f6-1e64f20414dd","Type":"ContainerDied","Data":"56dd112bfcb1d6294c5e1231ef6bf898cf47881a19f6db9df36e8d4bb8cb4bd0"} Feb 03 12:20:25 crc kubenswrapper[4820]: I0203 12:20:25.354557 4820 scope.go:117] "RemoveContainer" containerID="b06b939e5a22068c297bee2452a40d7c0443725bc4b8fdf70cba52f1273121fa" Feb 03 12:20:25 crc kubenswrapper[4820]: I0203 12:20:25.356929 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-dkfwm_c6da6dd5-2847-482b-adc1-d82ead0af3e9/kube-multus/2.log" Feb 03 12:20:25 crc kubenswrapper[4820]: I0203 12:20:25.357015 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-dkfwm" event={"ID":"c6da6dd5-2847-482b-adc1-d82ead0af3e9","Type":"ContainerStarted","Data":"68532794313a31f12e3efa1b1cef42044fe66ab844c7aabb5b46b0f7b057bc37"} Feb 03 12:20:25 crc kubenswrapper[4820]: I0203 12:20:25.359875 4820 generic.go:334] "Generic (PLEG): container finished" podID="ec715e76-5b28-4915-b5ff-5f0c6f69179d" containerID="324f538b2a523e4a5b1e1a3c7e86374d6422155fa967126a9180cbb203de9b3e" exitCode=0 Feb 03 12:20:25 crc kubenswrapper[4820]: I0203 12:20:25.359917 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9nz9t" event={"ID":"ec715e76-5b28-4915-b5ff-5f0c6f69179d","Type":"ContainerDied","Data":"324f538b2a523e4a5b1e1a3c7e86374d6422155fa967126a9180cbb203de9b3e"} Feb 03 12:20:25 crc kubenswrapper[4820]: I0203 12:20:25.359957 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-9nz9t" event={"ID":"ec715e76-5b28-4915-b5ff-5f0c6f69179d","Type":"ContainerStarted","Data":"189204c74fa75ecb905d706ae4c490304c23b7f0386d02de103dd02ba71f2ff3"} Feb 03 12:20:25 crc kubenswrapper[4820]: I0203 12:20:25.373240 4820 scope.go:117] "RemoveContainer" containerID="4c9f050db727793c24f2f8bf3b387813e313f3b764be003c6849947f98a61d4f" Feb 03 12:20:25 crc kubenswrapper[4820]: I0203 12:20:25.390025 4820 scope.go:117] "RemoveContainer" containerID="4e8e47b3a2a50b4eccff391ccb45bf979a5fe46983b014cb03f465db928172ec" Feb 03 12:20:25 crc kubenswrapper[4820]: I0203 12:20:25.432071 4820 scope.go:117] "RemoveContainer" containerID="0defe98bc51b1f567900f2d90138ceec07b4cae414fd3b448df503dba9a9460f" Feb 03 12:20:25 crc kubenswrapper[4820]: I0203 12:20:25.461625 4820 scope.go:117] "RemoveContainer" containerID="f5b0072b2eccb422316702a70fb54f21fcd020e1627a625d1a7c63d46b71e48c" Feb 03 12:20:25 crc kubenswrapper[4820]: I0203 12:20:25.470202 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-75mwm"] Feb 03 12:20:25 crc kubenswrapper[4820]: I0203 12:20:25.476210 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-75mwm"] Feb 03 12:20:25 crc kubenswrapper[4820]: I0203 12:20:25.480684 4820 scope.go:117] "RemoveContainer" containerID="c998566abb62f49bb7527ea2c4b023c87a9e3db89ff5fa25b68896424924ea76" Feb 03 12:20:25 crc kubenswrapper[4820]: I0203 12:20:25.493881 4820 scope.go:117] "RemoveContainer" containerID="7a67f31bdc8d6710cfbca063e2a811d4fbace42a9d3c3a53fba0f47c30ee00e4" Feb 03 12:20:25 crc kubenswrapper[4820]: I0203 12:20:25.509139 4820 scope.go:117] "RemoveContainer" containerID="b8b54903e9478f26cfa1770a0e06298325e9b9f198a3aeaa8ed6dcc192c0cdb7" Feb 03 12:20:25 crc kubenswrapper[4820]: I0203 12:20:25.527913 4820 scope.go:117] "RemoveContainer" containerID="707b3fd62e0f11f4f22464594158f7f1bbcbb9b37c1e49c70e2a0d1a6b5b07c3" Feb 03 12:20:25 crc kubenswrapper[4820]: I0203 12:20:25.565824 4820 scope.go:117] "RemoveContainer" containerID="b06b939e5a22068c297bee2452a40d7c0443725bc4b8fdf70cba52f1273121fa" Feb 03 12:20:25 crc kubenswrapper[4820]: E0203 12:20:25.566516 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b06b939e5a22068c297bee2452a40d7c0443725bc4b8fdf70cba52f1273121fa\": container with ID starting with b06b939e5a22068c297bee2452a40d7c0443725bc4b8fdf70cba52f1273121fa not found: ID does not exist" containerID="b06b939e5a22068c297bee2452a40d7c0443725bc4b8fdf70cba52f1273121fa" Feb 03 12:20:25 crc kubenswrapper[4820]: I0203 12:20:25.566565 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b06b939e5a22068c297bee2452a40d7c0443725bc4b8fdf70cba52f1273121fa"} err="failed to get container status \"b06b939e5a22068c297bee2452a40d7c0443725bc4b8fdf70cba52f1273121fa\": rpc error: code = NotFound desc = could not find container \"b06b939e5a22068c297bee2452a40d7c0443725bc4b8fdf70cba52f1273121fa\": container with ID starting with b06b939e5a22068c297bee2452a40d7c0443725bc4b8fdf70cba52f1273121fa not found: ID does not exist" Feb 03 12:20:25 crc kubenswrapper[4820]: I0203 12:20:25.566589 4820 scope.go:117] "RemoveContainer" containerID="4c9f050db727793c24f2f8bf3b387813e313f3b764be003c6849947f98a61d4f" Feb 03 12:20:25 crc kubenswrapper[4820]: E0203 12:20:25.567129 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = 
could not find container \"4c9f050db727793c24f2f8bf3b387813e313f3b764be003c6849947f98a61d4f\": container with ID starting with 4c9f050db727793c24f2f8bf3b387813e313f3b764be003c6849947f98a61d4f not found: ID does not exist" containerID="4c9f050db727793c24f2f8bf3b387813e313f3b764be003c6849947f98a61d4f" Feb 03 12:20:25 crc kubenswrapper[4820]: I0203 12:20:25.567175 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c9f050db727793c24f2f8bf3b387813e313f3b764be003c6849947f98a61d4f"} err="failed to get container status \"4c9f050db727793c24f2f8bf3b387813e313f3b764be003c6849947f98a61d4f\": rpc error: code = NotFound desc = could not find container \"4c9f050db727793c24f2f8bf3b387813e313f3b764be003c6849947f98a61d4f\": container with ID starting with 4c9f050db727793c24f2f8bf3b387813e313f3b764be003c6849947f98a61d4f not found: ID does not exist" Feb 03 12:20:25 crc kubenswrapper[4820]: I0203 12:20:25.567209 4820 scope.go:117] "RemoveContainer" containerID="4e8e47b3a2a50b4eccff391ccb45bf979a5fe46983b014cb03f465db928172ec" Feb 03 12:20:25 crc kubenswrapper[4820]: E0203 12:20:25.567533 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4e8e47b3a2a50b4eccff391ccb45bf979a5fe46983b014cb03f465db928172ec\": container with ID starting with 4e8e47b3a2a50b4eccff391ccb45bf979a5fe46983b014cb03f465db928172ec not found: ID does not exist" containerID="4e8e47b3a2a50b4eccff391ccb45bf979a5fe46983b014cb03f465db928172ec" Feb 03 12:20:25 crc kubenswrapper[4820]: I0203 12:20:25.567562 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4e8e47b3a2a50b4eccff391ccb45bf979a5fe46983b014cb03f465db928172ec"} err="failed to get container status \"4e8e47b3a2a50b4eccff391ccb45bf979a5fe46983b014cb03f465db928172ec\": rpc error: code = NotFound desc = could not find container \"4e8e47b3a2a50b4eccff391ccb45bf979a5fe46983b014cb03f465db928172ec\": container with ID starting with 4e8e47b3a2a50b4eccff391ccb45bf979a5fe46983b014cb03f465db928172ec not found: ID does not exist" Feb 03 12:20:25 crc kubenswrapper[4820]: I0203 12:20:25.567580 4820 scope.go:117] "RemoveContainer" containerID="0defe98bc51b1f567900f2d90138ceec07b4cae414fd3b448df503dba9a9460f" Feb 03 12:20:25 crc kubenswrapper[4820]: E0203 12:20:25.568104 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0defe98bc51b1f567900f2d90138ceec07b4cae414fd3b448df503dba9a9460f\": container with ID starting with 0defe98bc51b1f567900f2d90138ceec07b4cae414fd3b448df503dba9a9460f not found: ID does not exist" containerID="0defe98bc51b1f567900f2d90138ceec07b4cae414fd3b448df503dba9a9460f" Feb 03 12:20:25 crc kubenswrapper[4820]: I0203 12:20:25.568144 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0defe98bc51b1f567900f2d90138ceec07b4cae414fd3b448df503dba9a9460f"} err="failed to get container status \"0defe98bc51b1f567900f2d90138ceec07b4cae414fd3b448df503dba9a9460f\": rpc error: code = NotFound desc = could not find container \"0defe98bc51b1f567900f2d90138ceec07b4cae414fd3b448df503dba9a9460f\": container with ID starting with 0defe98bc51b1f567900f2d90138ceec07b4cae414fd3b448df503dba9a9460f not found: ID does not exist" Feb 03 12:20:25 crc kubenswrapper[4820]: I0203 12:20:25.568196 4820 scope.go:117] "RemoveContainer" containerID="f5b0072b2eccb422316702a70fb54f21fcd020e1627a625d1a7c63d46b71e48c" Feb 03 
12:20:25 crc kubenswrapper[4820]: E0203 12:20:25.568543 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f5b0072b2eccb422316702a70fb54f21fcd020e1627a625d1a7c63d46b71e48c\": container with ID starting with f5b0072b2eccb422316702a70fb54f21fcd020e1627a625d1a7c63d46b71e48c not found: ID does not exist" containerID="f5b0072b2eccb422316702a70fb54f21fcd020e1627a625d1a7c63d46b71e48c" Feb 03 12:20:25 crc kubenswrapper[4820]: I0203 12:20:25.568578 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f5b0072b2eccb422316702a70fb54f21fcd020e1627a625d1a7c63d46b71e48c"} err="failed to get container status \"f5b0072b2eccb422316702a70fb54f21fcd020e1627a625d1a7c63d46b71e48c\": rpc error: code = NotFound desc = could not find container \"f5b0072b2eccb422316702a70fb54f21fcd020e1627a625d1a7c63d46b71e48c\": container with ID starting with f5b0072b2eccb422316702a70fb54f21fcd020e1627a625d1a7c63d46b71e48c not found: ID does not exist" Feb 03 12:20:25 crc kubenswrapper[4820]: I0203 12:20:25.568598 4820 scope.go:117] "RemoveContainer" containerID="c998566abb62f49bb7527ea2c4b023c87a9e3db89ff5fa25b68896424924ea76" Feb 03 12:20:25 crc kubenswrapper[4820]: E0203 12:20:25.568843 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c998566abb62f49bb7527ea2c4b023c87a9e3db89ff5fa25b68896424924ea76\": container with ID starting with c998566abb62f49bb7527ea2c4b023c87a9e3db89ff5fa25b68896424924ea76 not found: ID does not exist" containerID="c998566abb62f49bb7527ea2c4b023c87a9e3db89ff5fa25b68896424924ea76" Feb 03 12:20:25 crc kubenswrapper[4820]: I0203 12:20:25.568867 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c998566abb62f49bb7527ea2c4b023c87a9e3db89ff5fa25b68896424924ea76"} err="failed to get container status \"c998566abb62f49bb7527ea2c4b023c87a9e3db89ff5fa25b68896424924ea76\": rpc error: code = NotFound desc = could not find container \"c998566abb62f49bb7527ea2c4b023c87a9e3db89ff5fa25b68896424924ea76\": container with ID starting with c998566abb62f49bb7527ea2c4b023c87a9e3db89ff5fa25b68896424924ea76 not found: ID does not exist" Feb 03 12:20:25 crc kubenswrapper[4820]: I0203 12:20:25.568882 4820 scope.go:117] "RemoveContainer" containerID="7a67f31bdc8d6710cfbca063e2a811d4fbace42a9d3c3a53fba0f47c30ee00e4" Feb 03 12:20:25 crc kubenswrapper[4820]: E0203 12:20:25.569239 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7a67f31bdc8d6710cfbca063e2a811d4fbace42a9d3c3a53fba0f47c30ee00e4\": container with ID starting with 7a67f31bdc8d6710cfbca063e2a811d4fbace42a9d3c3a53fba0f47c30ee00e4 not found: ID does not exist" containerID="7a67f31bdc8d6710cfbca063e2a811d4fbace42a9d3c3a53fba0f47c30ee00e4" Feb 03 12:20:25 crc kubenswrapper[4820]: I0203 12:20:25.569262 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7a67f31bdc8d6710cfbca063e2a811d4fbace42a9d3c3a53fba0f47c30ee00e4"} err="failed to get container status \"7a67f31bdc8d6710cfbca063e2a811d4fbace42a9d3c3a53fba0f47c30ee00e4\": rpc error: code = NotFound desc = could not find container \"7a67f31bdc8d6710cfbca063e2a811d4fbace42a9d3c3a53fba0f47c30ee00e4\": container with ID starting with 7a67f31bdc8d6710cfbca063e2a811d4fbace42a9d3c3a53fba0f47c30ee00e4 not found: ID does not exist" Feb 03 12:20:25 crc 
kubenswrapper[4820]: I0203 12:20:25.569279 4820 scope.go:117] "RemoveContainer" containerID="b8b54903e9478f26cfa1770a0e06298325e9b9f198a3aeaa8ed6dcc192c0cdb7" Feb 03 12:20:25 crc kubenswrapper[4820]: E0203 12:20:25.569536 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b8b54903e9478f26cfa1770a0e06298325e9b9f198a3aeaa8ed6dcc192c0cdb7\": container with ID starting with b8b54903e9478f26cfa1770a0e06298325e9b9f198a3aeaa8ed6dcc192c0cdb7 not found: ID does not exist" containerID="b8b54903e9478f26cfa1770a0e06298325e9b9f198a3aeaa8ed6dcc192c0cdb7" Feb 03 12:20:25 crc kubenswrapper[4820]: I0203 12:20:25.569562 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b8b54903e9478f26cfa1770a0e06298325e9b9f198a3aeaa8ed6dcc192c0cdb7"} err="failed to get container status \"b8b54903e9478f26cfa1770a0e06298325e9b9f198a3aeaa8ed6dcc192c0cdb7\": rpc error: code = NotFound desc = could not find container \"b8b54903e9478f26cfa1770a0e06298325e9b9f198a3aeaa8ed6dcc192c0cdb7\": container with ID starting with b8b54903e9478f26cfa1770a0e06298325e9b9f198a3aeaa8ed6dcc192c0cdb7 not found: ID does not exist" Feb 03 12:20:25 crc kubenswrapper[4820]: I0203 12:20:25.569578 4820 scope.go:117] "RemoveContainer" containerID="707b3fd62e0f11f4f22464594158f7f1bbcbb9b37c1e49c70e2a0d1a6b5b07c3" Feb 03 12:20:25 crc kubenswrapper[4820]: E0203 12:20:25.569817 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"707b3fd62e0f11f4f22464594158f7f1bbcbb9b37c1e49c70e2a0d1a6b5b07c3\": container with ID starting with 707b3fd62e0f11f4f22464594158f7f1bbcbb9b37c1e49c70e2a0d1a6b5b07c3 not found: ID does not exist" containerID="707b3fd62e0f11f4f22464594158f7f1bbcbb9b37c1e49c70e2a0d1a6b5b07c3" Feb 03 12:20:25 crc kubenswrapper[4820]: I0203 12:20:25.569844 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"707b3fd62e0f11f4f22464594158f7f1bbcbb9b37c1e49c70e2a0d1a6b5b07c3"} err="failed to get container status \"707b3fd62e0f11f4f22464594158f7f1bbcbb9b37c1e49c70e2a0d1a6b5b07c3\": rpc error: code = NotFound desc = could not find container \"707b3fd62e0f11f4f22464594158f7f1bbcbb9b37c1e49c70e2a0d1a6b5b07c3\": container with ID starting with 707b3fd62e0f11f4f22464594158f7f1bbcbb9b37c1e49c70e2a0d1a6b5b07c3 not found: ID does not exist" Feb 03 12:20:26 crc kubenswrapper[4820]: I0203 12:20:26.372974 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9nz9t" event={"ID":"ec715e76-5b28-4915-b5ff-5f0c6f69179d","Type":"ContainerStarted","Data":"7dcc01e8bc17720070081db27d16012a9ffa7306f67391b72721ad72c436d22a"} Feb 03 12:20:26 crc kubenswrapper[4820]: I0203 12:20:26.373529 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9nz9t" event={"ID":"ec715e76-5b28-4915-b5ff-5f0c6f69179d","Type":"ContainerStarted","Data":"5415d452e27b3c06d735027958f2d9638180dcfef39dc10a3e0cfd0d1e6e2ece"} Feb 03 12:20:26 crc kubenswrapper[4820]: I0203 12:20:26.373548 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9nz9t" event={"ID":"ec715e76-5b28-4915-b5ff-5f0c6f69179d","Type":"ContainerStarted","Data":"72fc6228774b79f13e2ac91a7c735cd1f032b6dfcead4c2d22e5b33fbba3b20d"} Feb 03 12:20:26 crc kubenswrapper[4820]: I0203 12:20:26.373558 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-9nz9t" event={"ID":"ec715e76-5b28-4915-b5ff-5f0c6f69179d","Type":"ContainerStarted","Data":"694c3ef828bea45e1233ba2f889d6cbbdda800cbe6e2880ccce57892951e3d0c"} Feb 03 12:20:26 crc kubenswrapper[4820]: I0203 12:20:26.373576 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9nz9t" event={"ID":"ec715e76-5b28-4915-b5ff-5f0c6f69179d","Type":"ContainerStarted","Data":"e0a3cad5106962a0dd95b045c71f6a66be91d86ffa6c78bce6652f8ab6a85328"} Feb 03 12:20:26 crc kubenswrapper[4820]: I0203 12:20:26.373605 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9nz9t" event={"ID":"ec715e76-5b28-4915-b5ff-5f0c6f69179d","Type":"ContainerStarted","Data":"ff743be47352772c95f274b0f6cc47df0d5cf1807ee1369194c30830fa9a5ae3"} Feb 03 12:20:27 crc kubenswrapper[4820]: I0203 12:20:27.151703 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cf99e305-aa5b-4171-94f6-1e64f20414dd" path="/var/lib/kubelet/pods/cf99e305-aa5b-4171-94f6-1e64f20414dd/volumes" Feb 03 12:20:28 crc kubenswrapper[4820]: I0203 12:20:28.387816 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9nz9t" event={"ID":"ec715e76-5b28-4915-b5ff-5f0c6f69179d","Type":"ContainerStarted","Data":"57742ce889400c6ba43196b1a67545e38148ca1e3b95304467a59339c0bc8e7a"} Feb 03 12:20:31 crc kubenswrapper[4820]: I0203 12:20:31.365750 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qj7xr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 12:20:31 crc kubenswrapper[4820]: I0203 12:20:31.366309 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 12:20:31 crc kubenswrapper[4820]: I0203 12:20:31.407416 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9nz9t" event={"ID":"ec715e76-5b28-4915-b5ff-5f0c6f69179d","Type":"ContainerStarted","Data":"f35e66beb1094afa1b4680568d8cc56adb29130aa64a3538db6d21f77cd4c5de"} Feb 03 12:20:31 crc kubenswrapper[4820]: I0203 12:20:31.407719 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-9nz9t" Feb 03 12:20:31 crc kubenswrapper[4820]: I0203 12:20:31.407768 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-9nz9t" Feb 03 12:20:31 crc kubenswrapper[4820]: I0203 12:20:31.407778 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-9nz9t" Feb 03 12:20:31 crc kubenswrapper[4820]: I0203 12:20:31.437398 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-9nz9t" Feb 03 12:20:31 crc kubenswrapper[4820]: I0203 12:20:31.439542 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-9nz9t" Feb 03 12:20:31 crc kubenswrapper[4820]: I0203 12:20:31.448237 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-ovn-kubernetes/ovnkube-node-9nz9t" podStartSLOduration=7.448220011 podStartE2EDuration="7.448220011s" podCreationTimestamp="2026-02-03 12:20:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:20:31.440148597 +0000 UTC m=+948.963224481" watchObservedRunningTime="2026-02-03 12:20:31.448220011 +0000 UTC m=+948.971295865" Feb 03 12:20:45 crc kubenswrapper[4820]: I0203 12:20:45.684416 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xt4wt"] Feb 03 12:20:45 crc kubenswrapper[4820]: I0203 12:20:45.686082 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xt4wt" Feb 03 12:20:45 crc kubenswrapper[4820]: I0203 12:20:45.688498 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 03 12:20:45 crc kubenswrapper[4820]: I0203 12:20:45.697935 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xt4wt"] Feb 03 12:20:45 crc kubenswrapper[4820]: I0203 12:20:45.870814 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cptkl\" (UniqueName: \"kubernetes.io/projected/73a0ef2f-bdcb-4042-813c-597bd2694e20-kube-api-access-cptkl\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xt4wt\" (UID: \"73a0ef2f-bdcb-4042-813c-597bd2694e20\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xt4wt" Feb 03 12:20:45 crc kubenswrapper[4820]: I0203 12:20:45.871147 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/73a0ef2f-bdcb-4042-813c-597bd2694e20-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xt4wt\" (UID: \"73a0ef2f-bdcb-4042-813c-597bd2694e20\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xt4wt" Feb 03 12:20:45 crc kubenswrapper[4820]: I0203 12:20:45.871258 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/73a0ef2f-bdcb-4042-813c-597bd2694e20-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xt4wt\" (UID: \"73a0ef2f-bdcb-4042-813c-597bd2694e20\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xt4wt" Feb 03 12:20:45 crc kubenswrapper[4820]: I0203 12:20:45.972265 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/73a0ef2f-bdcb-4042-813c-597bd2694e20-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xt4wt\" (UID: \"73a0ef2f-bdcb-4042-813c-597bd2694e20\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xt4wt" Feb 03 12:20:45 crc kubenswrapper[4820]: I0203 12:20:45.972359 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/73a0ef2f-bdcb-4042-813c-597bd2694e20-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xt4wt\" (UID: \"73a0ef2f-bdcb-4042-813c-597bd2694e20\") " 
pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xt4wt" Feb 03 12:20:45 crc kubenswrapper[4820]: I0203 12:20:45.972409 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cptkl\" (UniqueName: \"kubernetes.io/projected/73a0ef2f-bdcb-4042-813c-597bd2694e20-kube-api-access-cptkl\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xt4wt\" (UID: \"73a0ef2f-bdcb-4042-813c-597bd2694e20\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xt4wt" Feb 03 12:20:45 crc kubenswrapper[4820]: I0203 12:20:45.973054 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/73a0ef2f-bdcb-4042-813c-597bd2694e20-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xt4wt\" (UID: \"73a0ef2f-bdcb-4042-813c-597bd2694e20\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xt4wt" Feb 03 12:20:45 crc kubenswrapper[4820]: I0203 12:20:45.975029 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/73a0ef2f-bdcb-4042-813c-597bd2694e20-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xt4wt\" (UID: \"73a0ef2f-bdcb-4042-813c-597bd2694e20\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xt4wt" Feb 03 12:20:45 crc kubenswrapper[4820]: I0203 12:20:45.993481 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cptkl\" (UniqueName: \"kubernetes.io/projected/73a0ef2f-bdcb-4042-813c-597bd2694e20-kube-api-access-cptkl\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xt4wt\" (UID: \"73a0ef2f-bdcb-4042-813c-597bd2694e20\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xt4wt" Feb 03 12:20:46 crc kubenswrapper[4820]: I0203 12:20:46.004601 4820 util.go:30] "No sandbox for pod can be found. 
Feb 03 12:20:46 crc kubenswrapper[4820]: I0203 12:20:46.004601 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xt4wt"
Feb 03 12:20:46 crc kubenswrapper[4820]: I0203 12:20:46.440331 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xt4wt"]
Feb 03 12:20:46 crc kubenswrapper[4820]: I0203 12:20:46.499942 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xt4wt" event={"ID":"73a0ef2f-bdcb-4042-813c-597bd2694e20","Type":"ContainerStarted","Data":"097af6c99f443f79ee7abaa630574fc9a9507f2c64edba075aadd27eb0b0f838"}
Feb 03 12:20:47 crc kubenswrapper[4820]: I0203 12:20:47.506544 4820 generic.go:334] "Generic (PLEG): container finished" podID="73a0ef2f-bdcb-4042-813c-597bd2694e20" containerID="f2cb54b6a168b4a0d992d879850df7a1aa4213606fb46c5274cd97cc4face40c" exitCode=0
Feb 03 12:20:47 crc kubenswrapper[4820]: I0203 12:20:47.506590 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xt4wt" event={"ID":"73a0ef2f-bdcb-4042-813c-597bd2694e20","Type":"ContainerDied","Data":"f2cb54b6a168b4a0d992d879850df7a1aa4213606fb46c5274cd97cc4face40c"}
Feb 03 12:20:48 crc kubenswrapper[4820]: I0203 12:20:48.036136 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-p5bj4"]
Feb 03 12:20:48 crc kubenswrapper[4820]: I0203 12:20:48.037701 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-p5bj4"
Feb 03 12:20:48 crc kubenswrapper[4820]: I0203 12:20:48.045425 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-p5bj4"]
Feb 03 12:20:48 crc kubenswrapper[4820]: I0203 12:20:48.211570 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ccvg7\" (UniqueName: \"kubernetes.io/projected/29aaf84c-c42d-486d-ab0e-13b63f35dcca-kube-api-access-ccvg7\") pod \"redhat-operators-p5bj4\" (UID: \"29aaf84c-c42d-486d-ab0e-13b63f35dcca\") " pod="openshift-marketplace/redhat-operators-p5bj4"
Feb 03 12:20:48 crc kubenswrapper[4820]: I0203 12:20:48.211641 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/29aaf84c-c42d-486d-ab0e-13b63f35dcca-utilities\") pod \"redhat-operators-p5bj4\" (UID: \"29aaf84c-c42d-486d-ab0e-13b63f35dcca\") " pod="openshift-marketplace/redhat-operators-p5bj4"
Feb 03 12:20:48 crc kubenswrapper[4820]: I0203 12:20:48.211673 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/29aaf84c-c42d-486d-ab0e-13b63f35dcca-catalog-content\") pod \"redhat-operators-p5bj4\" (UID: \"29aaf84c-c42d-486d-ab0e-13b63f35dcca\") " pod="openshift-marketplace/redhat-operators-p5bj4"
Feb 03 12:20:48 crc kubenswrapper[4820]: I0203 12:20:48.364302 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/29aaf84c-c42d-486d-ab0e-13b63f35dcca-utilities\") pod \"redhat-operators-p5bj4\" (UID: \"29aaf84c-c42d-486d-ab0e-13b63f35dcca\") " pod="openshift-marketplace/redhat-operators-p5bj4"
Feb 03 12:20:48 crc kubenswrapper[4820]: I0203 12:20:48.364380 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/29aaf84c-c42d-486d-ab0e-13b63f35dcca-catalog-content\") pod \"redhat-operators-p5bj4\" (UID: \"29aaf84c-c42d-486d-ab0e-13b63f35dcca\") " pod="openshift-marketplace/redhat-operators-p5bj4"
Feb 03 12:20:48 crc kubenswrapper[4820]: I0203 12:20:48.364448 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ccvg7\" (UniqueName: \"kubernetes.io/projected/29aaf84c-c42d-486d-ab0e-13b63f35dcca-kube-api-access-ccvg7\") pod \"redhat-operators-p5bj4\" (UID: \"29aaf84c-c42d-486d-ab0e-13b63f35dcca\") " pod="openshift-marketplace/redhat-operators-p5bj4"
Feb 03 12:20:48 crc kubenswrapper[4820]: I0203 12:20:48.365278 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/29aaf84c-c42d-486d-ab0e-13b63f35dcca-utilities\") pod \"redhat-operators-p5bj4\" (UID: \"29aaf84c-c42d-486d-ab0e-13b63f35dcca\") " pod="openshift-marketplace/redhat-operators-p5bj4"
Feb 03 12:20:48 crc kubenswrapper[4820]: I0203 12:20:48.365365 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/29aaf84c-c42d-486d-ab0e-13b63f35dcca-catalog-content\") pod \"redhat-operators-p5bj4\" (UID: \"29aaf84c-c42d-486d-ab0e-13b63f35dcca\") " pod="openshift-marketplace/redhat-operators-p5bj4"
Feb 03 12:20:48 crc kubenswrapper[4820]: I0203 12:20:48.407401 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ccvg7\" (UniqueName: \"kubernetes.io/projected/29aaf84c-c42d-486d-ab0e-13b63f35dcca-kube-api-access-ccvg7\") pod \"redhat-operators-p5bj4\" (UID: \"29aaf84c-c42d-486d-ab0e-13b63f35dcca\") " pod="openshift-marketplace/redhat-operators-p5bj4"
Feb 03 12:20:48 crc kubenswrapper[4820]: I0203 12:20:48.652194 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-p5bj4"
Feb 03 12:20:49 crc kubenswrapper[4820]: I0203 12:20:49.255470 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-p5bj4"]
Feb 03 12:20:49 crc kubenswrapper[4820]: W0203 12:20:49.275203 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod29aaf84c_c42d_486d_ab0e_13b63f35dcca.slice/crio-2d42c956a0a33d77cd03d5f6dcfbe8ced582afa3b495411d3487fa549ff99860 WatchSource:0}: Error finding container 2d42c956a0a33d77cd03d5f6dcfbe8ced582afa3b495411d3487fa549ff99860: Status 404 returned error can't find the container with id 2d42c956a0a33d77cd03d5f6dcfbe8ced582afa3b495411d3487fa549ff99860
Feb 03 12:20:49 crc kubenswrapper[4820]: I0203 12:20:49.526369 4820 generic.go:334] "Generic (PLEG): container finished" podID="29aaf84c-c42d-486d-ab0e-13b63f35dcca" containerID="76145b5ccbb69a3b6fb37ae1180fb80390f5820f567e8ba5b87e26724c3b35ea" exitCode=0
Feb 03 12:20:49 crc kubenswrapper[4820]: I0203 12:20:49.526433 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p5bj4" event={"ID":"29aaf84c-c42d-486d-ab0e-13b63f35dcca","Type":"ContainerDied","Data":"76145b5ccbb69a3b6fb37ae1180fb80390f5820f567e8ba5b87e26724c3b35ea"}
Feb 03 12:20:49 crc kubenswrapper[4820]: I0203 12:20:49.526674 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p5bj4" event={"ID":"29aaf84c-c42d-486d-ab0e-13b63f35dcca","Type":"ContainerStarted","Data":"2d42c956a0a33d77cd03d5f6dcfbe8ced582afa3b495411d3487fa549ff99860"}
Feb 03 12:20:49 crc kubenswrapper[4820]: I0203 12:20:49.530911 4820 generic.go:334] "Generic (PLEG): container finished" podID="73a0ef2f-bdcb-4042-813c-597bd2694e20" containerID="58df9a15c86521fb190cc9ea12f2cea06d33acf36fa59a2d495dc506fecbb4c5" exitCode=0
Feb 03 12:20:49 crc kubenswrapper[4820]: I0203 12:20:49.531376 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xt4wt" event={"ID":"73a0ef2f-bdcb-4042-813c-597bd2694e20","Type":"ContainerDied","Data":"58df9a15c86521fb190cc9ea12f2cea06d33acf36fa59a2d495dc506fecbb4c5"}
Feb 03 12:20:50 crc kubenswrapper[4820]: I0203 12:20:50.539751 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p5bj4" event={"ID":"29aaf84c-c42d-486d-ab0e-13b63f35dcca","Type":"ContainerStarted","Data":"8f2ca81931e0fca1dc8deba34742ee4915e9ab12f59a7674db3a08dc7b5bc78f"}
Feb 03 12:20:50 crc kubenswrapper[4820]: I0203 12:20:50.544131 4820 generic.go:334] "Generic (PLEG): container finished" podID="73a0ef2f-bdcb-4042-813c-597bd2694e20" containerID="6999a2df0942369ccb54ee942c9ec4c8f03b7007776458e6a21ce4a450762ae0" exitCode=0
Feb 03 12:20:50 crc kubenswrapper[4820]: I0203 12:20:50.544208 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xt4wt" event={"ID":"73a0ef2f-bdcb-4042-813c-597bd2694e20","Type":"ContainerDied","Data":"6999a2df0942369ccb54ee942c9ec4c8f03b7007776458e6a21ce4a450762ae0"}
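The manager.go:1169 warning above is cAdvisor reacting to a cgroup watch event before CRI-O has registered the new container, so the lookup by ID returns 404; the PLEG records that follow show the same container starting normally moments later. A toy retry loop illustrating why such a 404 is transient (assumed behavior for illustration, not cAdvisor code):

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // errNotFound stands in for the "Status 404 ... can't find the container
    // with id" failure seen in the watch-event warning above.
    var errNotFound = errors.New("can't find the container with id")

    // lookup fails until the runtime has registered the container.
    func lookup(attempt int) error {
        if attempt < 2 {
            return errNotFound
        }
        return nil
    }

    func main() {
        for attempt := 0; ; attempt++ {
            if err := lookup(attempt); err == nil {
                fmt.Println("container visible after", attempt, "retries")
                return
            } else if !errors.Is(err, errNotFound) {
                panic(err)
            }
            time.Sleep(10 * time.Millisecond) // wait for the runtime to catch up
        }
    }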
Feb 03 12:20:52 crc kubenswrapper[4820]: I0203 12:20:52.408901 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xt4wt"
Feb 03 12:20:52 crc kubenswrapper[4820]: I0203 12:20:52.448443 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/73a0ef2f-bdcb-4042-813c-597bd2694e20-util\") pod \"73a0ef2f-bdcb-4042-813c-597bd2694e20\" (UID: \"73a0ef2f-bdcb-4042-813c-597bd2694e20\") "
Feb 03 12:20:52 crc kubenswrapper[4820]: I0203 12:20:52.448591 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cptkl\" (UniqueName: \"kubernetes.io/projected/73a0ef2f-bdcb-4042-813c-597bd2694e20-kube-api-access-cptkl\") pod \"73a0ef2f-bdcb-4042-813c-597bd2694e20\" (UID: \"73a0ef2f-bdcb-4042-813c-597bd2694e20\") "
Feb 03 12:20:52 crc kubenswrapper[4820]: I0203 12:20:52.448630 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/73a0ef2f-bdcb-4042-813c-597bd2694e20-bundle\") pod \"73a0ef2f-bdcb-4042-813c-597bd2694e20\" (UID: \"73a0ef2f-bdcb-4042-813c-597bd2694e20\") "
Feb 03 12:20:52 crc kubenswrapper[4820]: I0203 12:20:52.450832 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/73a0ef2f-bdcb-4042-813c-597bd2694e20-bundle" (OuterVolumeSpecName: "bundle") pod "73a0ef2f-bdcb-4042-813c-597bd2694e20" (UID: "73a0ef2f-bdcb-4042-813c-597bd2694e20"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 03 12:20:52 crc kubenswrapper[4820]: I0203 12:20:52.456736 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/73a0ef2f-bdcb-4042-813c-597bd2694e20-kube-api-access-cptkl" (OuterVolumeSpecName: "kube-api-access-cptkl") pod "73a0ef2f-bdcb-4042-813c-597bd2694e20" (UID: "73a0ef2f-bdcb-4042-813c-597bd2694e20"). InnerVolumeSpecName "kube-api-access-cptkl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 12:20:52 crc kubenswrapper[4820]: I0203 12:20:52.527902 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/73a0ef2f-bdcb-4042-813c-597bd2694e20-util" (OuterVolumeSpecName: "util") pod "73a0ef2f-bdcb-4042-813c-597bd2694e20" (UID: "73a0ef2f-bdcb-4042-813c-597bd2694e20"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 03 12:20:52 crc kubenswrapper[4820]: I0203 12:20:52.549928 4820 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/73a0ef2f-bdcb-4042-813c-597bd2694e20-util\") on node \"crc\" DevicePath \"\""
Feb 03 12:20:52 crc kubenswrapper[4820]: I0203 12:20:52.549970 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cptkl\" (UniqueName: \"kubernetes.io/projected/73a0ef2f-bdcb-4042-813c-597bd2694e20-kube-api-access-cptkl\") on node \"crc\" DevicePath \"\""
Feb 03 12:20:52 crc kubenswrapper[4820]: I0203 12:20:52.549981 4820 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/73a0ef2f-bdcb-4042-813c-597bd2694e20-bundle\") on node \"crc\" DevicePath \"\""
Feb 03 12:20:52 crc kubenswrapper[4820]: I0203 12:20:52.558560 4820 generic.go:334] "Generic (PLEG): container finished" podID="29aaf84c-c42d-486d-ab0e-13b63f35dcca" containerID="8f2ca81931e0fca1dc8deba34742ee4915e9ab12f59a7674db3a08dc7b5bc78f" exitCode=0
Feb 03 12:20:52 crc kubenswrapper[4820]: I0203 12:20:52.558735 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p5bj4" event={"ID":"29aaf84c-c42d-486d-ab0e-13b63f35dcca","Type":"ContainerDied","Data":"8f2ca81931e0fca1dc8deba34742ee4915e9ab12f59a7674db3a08dc7b5bc78f"}
Feb 03 12:20:52 crc kubenswrapper[4820]: I0203 12:20:52.561003 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xt4wt" event={"ID":"73a0ef2f-bdcb-4042-813c-597bd2694e20","Type":"ContainerDied","Data":"097af6c99f443f79ee7abaa630574fc9a9507f2c64edba075aadd27eb0b0f838"}
Feb 03 12:20:52 crc kubenswrapper[4820]: I0203 12:20:52.561030 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="097af6c99f443f79ee7abaa630574fc9a9507f2c64edba075aadd27eb0b0f838"
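The unmount records above run teardown in the reverse direction of setup: UnmountVolume started, TearDown succeeded, then a "Volume detached" line with an empty DevicePath once the volume leaves the node's actual state of world (emptyDir and projected volumes have no backing device). A minimal model of that bookkeeping, assumed for illustration:

    package main

    import "fmt"

    func main() {
        // Actual-state bookkeeping: volume name -> device path; "" for
        // emptyDir/projected volumes, matching DevicePath "" above.
        mounted := map[string]string{"util": "", "bundle": "", "kube-api-access-cptkl": ""}
        for _, vol := range []string{"bundle", "kube-api-access-cptkl", "util"} {
            delete(mounted, vol) // UnmountVolume.TearDown succeeded
            fmt.Printf("Volume detached for volume %q on node %q DevicePath %q\n", vol, "crc", "")
        }
    }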
Feb 03 12:20:52 crc kubenswrapper[4820]: I0203 12:20:52.561119 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xt4wt"
Feb 03 12:20:53 crc kubenswrapper[4820]: I0203 12:20:53.612869 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p5bj4" event={"ID":"29aaf84c-c42d-486d-ab0e-13b63f35dcca","Type":"ContainerStarted","Data":"9f680d46e8c25e1a5a916dff9913ac55b736d039d6dcf57f99867bcf4d049f24"}
Feb 03 12:20:53 crc kubenswrapper[4820]: I0203 12:20:53.639233 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-p5bj4" podStartSLOduration=2.151459792 podStartE2EDuration="5.639200915s" podCreationTimestamp="2026-02-03 12:20:48 +0000 UTC" firstStartedPulling="2026-02-03 12:20:49.528993513 +0000 UTC m=+967.052069377" lastFinishedPulling="2026-02-03 12:20:53.016734636 +0000 UTC m=+970.539810500" observedRunningTime="2026-02-03 12:20:53.636320018 +0000 UTC m=+971.159395872" watchObservedRunningTime="2026-02-03 12:20:53.639200915 +0000 UTC m=+971.162276779"
Feb 03 12:20:54 crc kubenswrapper[4820]: I0203 12:20:54.845531 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-9nz9t"
Feb 03 12:20:58 crc kubenswrapper[4820]: I0203 12:20:58.652930 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-p5bj4"
Feb 03 12:20:58 crc kubenswrapper[4820]: I0203 12:20:58.653326 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-p5bj4"
Feb 03 12:20:59 crc kubenswrapper[4820]: I0203 12:20:59.981388 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-p5bj4" podUID="29aaf84c-c42d-486d-ab0e-13b63f35dcca" containerName="registry-server" probeResult="failure" output=<
Feb 03 12:20:59 crc kubenswrapper[4820]: timeout: failed to connect service ":50051" within 1s
Feb 03 12:20:59 crc kubenswrapper[4820]: >
Feb 03 12:21:01 crc kubenswrapper[4820]: I0203 12:21:01.554546 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qj7xr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 03 12:21:01 crc kubenswrapper[4820]: I0203 12:21:01.554608 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 03 12:21:07 crc kubenswrapper[4820]: I0203 12:21:07.516651 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-9jzgf"]
Feb 03 12:21:07 crc kubenswrapper[4820]: E0203 12:21:07.517279 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73a0ef2f-bdcb-4042-813c-597bd2694e20" containerName="extract"
Feb 03 12:21:07 crc kubenswrapper[4820]: I0203 12:21:07.517294 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="73a0ef2f-bdcb-4042-813c-597bd2694e20" containerName="extract"
Feb 03 12:21:07 crc kubenswrapper[4820]: E0203 12:21:07.517335 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73a0ef2f-bdcb-4042-813c-597bd2694e20" containerName="util"
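The startup-probe failure above (timeout: failed to connect service ":50051" within 1s) is a gRPC health check against the registry-server port while the catalog is still unpacking. Assuming the probe speaks the standard grpc.health.v1 protocol and a recent google.golang.org/grpc, an equivalent client looks like this:

    package main

    import (
        "context"
        "fmt"
        "time"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        healthpb "google.golang.org/grpc/health/grpc_health_v1"
    )

    func main() {
        // One-second budget, matching the "within 1s" in the probe output.
        ctx, cancel := context.WithTimeout(context.Background(), time.Second)
        defer cancel()

        conn, err := grpc.NewClient("localhost:50051",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            fmt.Println("probe failure:", err)
            return
        }
        defer conn.Close()

        // An empty Service name asks about overall server health.
        resp, err := healthpb.NewHealthClient(conn).Check(ctx, &healthpb.HealthCheckRequest{})
        if err != nil {
            fmt.Println("probe failure:", err) // e.g. deadline while the catalog extracts
            return
        }
        fmt.Println("probe status:", resp.Status)
    }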
Feb 03 12:21:07 crc kubenswrapper[4820]: I0203 12:21:07.517342 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="73a0ef2f-bdcb-4042-813c-597bd2694e20" containerName="util"
Feb 03 12:21:07 crc kubenswrapper[4820]: E0203 12:21:07.517353 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73a0ef2f-bdcb-4042-813c-597bd2694e20" containerName="pull"
Feb 03 12:21:07 crc kubenswrapper[4820]: I0203 12:21:07.517361 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="73a0ef2f-bdcb-4042-813c-597bd2694e20" containerName="pull"
Feb 03 12:21:07 crc kubenswrapper[4820]: I0203 12:21:07.517493 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="73a0ef2f-bdcb-4042-813c-597bd2694e20" containerName="extract"
Feb 03 12:21:07 crc kubenswrapper[4820]: I0203 12:21:07.518015 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-9jzgf"
Feb 03 12:21:07 crc kubenswrapper[4820]: I0203 12:21:07.519922 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-dockercfg-wczxs"
Feb 03 12:21:07 crc kubenswrapper[4820]: I0203 12:21:07.520462 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt"
Feb 03 12:21:07 crc kubenswrapper[4820]: I0203 12:21:07.520764 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt"
Feb 03 12:21:07 crc kubenswrapper[4820]: I0203 12:21:07.535427 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-9jzgf"]
Feb 03 12:21:07 crc kubenswrapper[4820]: I0203 12:21:07.577087 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xgbc7\" (UniqueName: \"kubernetes.io/projected/c1ad6c2d-5ab9-4904-9426-00ebf486a90d-kube-api-access-xgbc7\") pod \"obo-prometheus-operator-68bc856cb9-9jzgf\" (UID: \"c1ad6c2d-5ab9-4904-9426-00ebf486a90d\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-9jzgf"
Feb 03 12:21:07 crc kubenswrapper[4820]: I0203 12:21:07.923164 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xgbc7\" (UniqueName: \"kubernetes.io/projected/c1ad6c2d-5ab9-4904-9426-00ebf486a90d-kube-api-access-xgbc7\") pod \"obo-prometheus-operator-68bc856cb9-9jzgf\" (UID: \"c1ad6c2d-5ab9-4904-9426-00ebf486a90d\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-9jzgf"
Feb 03 12:21:07 crc kubenswrapper[4820]: I0203 12:21:07.971386 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-64dd7f56f9-p4b6m"]
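The cpu_manager/state_mem pairs above are stale-state cleanup for the finished bundle pod: every container that once held a CPUSet assignment is dropped from an in-memory map keyed by pod UID and container name. A sketch of that shape (assumed; the kubelet's real state is also checkpointed to disk):

    package main

    import "fmt"

    func main() {
        // podUID -> containerName -> assigned cpuset (empty in this sketch).
        assignments := map[string]map[string]string{
            "73a0ef2f-bdcb-4042-813c-597bd2694e20": {"extract": "", "util": "", "pull": ""},
        }
        active := map[string]bool{} // the bundle pod no longer exists

        for podUID, containers := range assignments {
            if active[podUID] {
                continue // only stale pods are cleaned up
            }
            for name := range containers {
                fmt.Printf("Deleted CPUSet assignment podUID=%q containerName=%q\n", podUID, name)
                delete(containers, name)
            }
            delete(assignments, podUID)
        }
    }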
Feb 03 12:21:07 crc kubenswrapper[4820]: I0203 12:21:07.977707 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-64dd7f56f9-p4b6m"
Feb 03 12:21:07 crc kubenswrapper[4820]: I0203 12:21:07.982619 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert"
Feb 03 12:21:07 crc kubenswrapper[4820]: I0203 12:21:07.982876 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-4khhp"
Feb 03 12:21:07 crc kubenswrapper[4820]: I0203 12:21:07.982991 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-64dd7f56f9-p4b6m"]
Feb 03 12:21:07 crc kubenswrapper[4820]: I0203 12:21:07.983878 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xgbc7\" (UniqueName: \"kubernetes.io/projected/c1ad6c2d-5ab9-4904-9426-00ebf486a90d-kube-api-access-xgbc7\") pod \"obo-prometheus-operator-68bc856cb9-9jzgf\" (UID: \"c1ad6c2d-5ab9-4904-9426-00ebf486a90d\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-9jzgf"
Feb 03 12:21:08 crc kubenswrapper[4820]: I0203 12:21:08.008940 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-64dd7f56f9-s8llv"]
Feb 03 12:21:08 crc kubenswrapper[4820]: I0203 12:21:08.010165 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-64dd7f56f9-s8llv"
Feb 03 12:21:08 crc kubenswrapper[4820]: I0203 12:21:08.016687 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-64dd7f56f9-s8llv"]
Feb 03 12:21:08 crc kubenswrapper[4820]: I0203 12:21:08.096034 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-lshn6"]
Feb 03 12:21:08 crc kubenswrapper[4820]: I0203 12:21:08.097141 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-lshn6"
Feb 03 12:21:08 crc kubenswrapper[4820]: I0203 12:21:08.099597 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-sa-dockercfg-z6ftl"
Feb 03 12:21:08 crc kubenswrapper[4820]: I0203 12:21:08.100119 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls"
Feb 03 12:21:08 crc kubenswrapper[4820]: I0203 12:21:08.131972 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/3202dd82-6cc2-478c-9eb1-7810a23ce4bb-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-64dd7f56f9-s8llv\" (UID: \"3202dd82-6cc2-478c-9eb1-7810a23ce4bb\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-64dd7f56f9-s8llv"
Feb 03 12:21:08 crc kubenswrapper[4820]: I0203 12:21:08.132025 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/67c9fe0e-5cc6-469b-90a0-11adfac994cc-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-64dd7f56f9-p4b6m\" (UID: \"67c9fe0e-5cc6-469b-90a0-11adfac994cc\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-64dd7f56f9-p4b6m"
Feb 03 12:21:08 crc kubenswrapper[4820]: I0203 12:21:08.132091 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/3202dd82-6cc2-478c-9eb1-7810a23ce4bb-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-64dd7f56f9-s8llv\" (UID: \"3202dd82-6cc2-478c-9eb1-7810a23ce4bb\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-64dd7f56f9-s8llv"
Feb 03 12:21:08 crc kubenswrapper[4820]: I0203 12:21:08.132142 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/67c9fe0e-5cc6-469b-90a0-11adfac994cc-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-64dd7f56f9-p4b6m\" (UID: \"67c9fe0e-5cc6-469b-90a0-11adfac994cc\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-64dd7f56f9-p4b6m"
Feb 03 12:21:08 crc kubenswrapper[4820]: I0203 12:21:08.139098 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-lshn6"]
Feb 03 12:21:08 crc kubenswrapper[4820]: I0203 12:21:08.139577 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-9jzgf"
Feb 03 12:21:08 crc kubenswrapper[4820]: I0203 12:21:08.233114 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cxdbj\" (UniqueName: \"kubernetes.io/projected/c22a4473-b3ac-4b33-9a20-320b76c330ab-kube-api-access-cxdbj\") pod \"observability-operator-59bdc8b94-lshn6\" (UID: \"c22a4473-b3ac-4b33-9a20-320b76c330ab\") " pod="openshift-operators/observability-operator-59bdc8b94-lshn6"
Feb 03 12:21:08 crc kubenswrapper[4820]: I0203 12:21:08.233201 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/3202dd82-6cc2-478c-9eb1-7810a23ce4bb-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-64dd7f56f9-s8llv\" (UID: \"3202dd82-6cc2-478c-9eb1-7810a23ce4bb\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-64dd7f56f9-s8llv"
Feb 03 12:21:08 crc kubenswrapper[4820]: I0203 12:21:08.233240 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/c22a4473-b3ac-4b33-9a20-320b76c330ab-observability-operator-tls\") pod \"observability-operator-59bdc8b94-lshn6\" (UID: \"c22a4473-b3ac-4b33-9a20-320b76c330ab\") " pod="openshift-operators/observability-operator-59bdc8b94-lshn6"
Feb 03 12:21:08 crc kubenswrapper[4820]: I0203 12:21:08.233276 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/67c9fe0e-5cc6-469b-90a0-11adfac994cc-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-64dd7f56f9-p4b6m\" (UID: \"67c9fe0e-5cc6-469b-90a0-11adfac994cc\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-64dd7f56f9-p4b6m"
Feb 03 12:21:08 crc kubenswrapper[4820]: I0203 12:21:08.233326 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/3202dd82-6cc2-478c-9eb1-7810a23ce4bb-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-64dd7f56f9-s8llv\" (UID: \"3202dd82-6cc2-478c-9eb1-7810a23ce4bb\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-64dd7f56f9-s8llv"
Feb 03 12:21:08 crc kubenswrapper[4820]: I0203 12:21:08.233363 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/67c9fe0e-5cc6-469b-90a0-11adfac994cc-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-64dd7f56f9-p4b6m\" (UID: \"67c9fe0e-5cc6-469b-90a0-11adfac994cc\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-64dd7f56f9-p4b6m"
Feb 03 12:21:08 crc kubenswrapper[4820]: I0203 12:21:08.240009 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/3202dd82-6cc2-478c-9eb1-7810a23ce4bb-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-64dd7f56f9-s8llv\" (UID: \"3202dd82-6cc2-478c-9eb1-7810a23ce4bb\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-64dd7f56f9-s8llv"
Feb 03 12:21:08 crc kubenswrapper[4820]: I0203 12:21:08.241014 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/67c9fe0e-5cc6-469b-90a0-11adfac994cc-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-64dd7f56f9-p4b6m\" (UID: \"67c9fe0e-5cc6-469b-90a0-11adfac994cc\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-64dd7f56f9-p4b6m"
Feb 03 12:21:08 crc kubenswrapper[4820]: I0203 12:21:08.256664 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/67c9fe0e-5cc6-469b-90a0-11adfac994cc-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-64dd7f56f9-p4b6m\" (UID: \"67c9fe0e-5cc6-469b-90a0-11adfac994cc\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-64dd7f56f9-p4b6m"
Feb 03 12:21:08 crc kubenswrapper[4820]: I0203 12:21:08.257133 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/3202dd82-6cc2-478c-9eb1-7810a23ce4bb-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-64dd7f56f9-s8llv\" (UID: \"3202dd82-6cc2-478c-9eb1-7810a23ce4bb\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-64dd7f56f9-s8llv"
Feb 03 12:21:08 crc kubenswrapper[4820]: I0203 12:21:08.305396 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-64dd7f56f9-p4b6m"
Feb 03 12:21:08 crc kubenswrapper[4820]: I0203 12:21:08.325766 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-64dd7f56f9-s8llv"
Feb 03 12:21:08 crc kubenswrapper[4820]: I0203 12:21:08.334686 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cxdbj\" (UniqueName: \"kubernetes.io/projected/c22a4473-b3ac-4b33-9a20-320b76c330ab-kube-api-access-cxdbj\") pod \"observability-operator-59bdc8b94-lshn6\" (UID: \"c22a4473-b3ac-4b33-9a20-320b76c330ab\") " pod="openshift-operators/observability-operator-59bdc8b94-lshn6"
Feb 03 12:21:08 crc kubenswrapper[4820]: I0203 12:21:08.334759 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/c22a4473-b3ac-4b33-9a20-320b76c330ab-observability-operator-tls\") pod \"observability-operator-59bdc8b94-lshn6\" (UID: \"c22a4473-b3ac-4b33-9a20-320b76c330ab\") " pod="openshift-operators/observability-operator-59bdc8b94-lshn6"
Feb 03 12:21:08 crc kubenswrapper[4820]: I0203 12:21:08.343401 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/c22a4473-b3ac-4b33-9a20-320b76c330ab-observability-operator-tls\") pod \"observability-operator-59bdc8b94-lshn6\" (UID: \"c22a4473-b3ac-4b33-9a20-320b76c330ab\") " pod="openshift-operators/observability-operator-59bdc8b94-lshn6"
Feb 03 12:21:08 crc kubenswrapper[4820]: I0203 12:21:08.372430 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cxdbj\" (UniqueName: \"kubernetes.io/projected/c22a4473-b3ac-4b33-9a20-320b76c330ab-kube-api-access-cxdbj\") pod \"observability-operator-59bdc8b94-lshn6\" (UID: \"c22a4473-b3ac-4b33-9a20-320b76c330ab\") " pod="openshift-operators/observability-operator-59bdc8b94-lshn6"
Feb 03 12:21:08 crc kubenswrapper[4820]: I0203 12:21:08.449322 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-lshn6"
Feb 03 12:21:08 crc kubenswrapper[4820]: I0203 12:21:08.481004 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-gx6fv"]
Feb 03 12:21:08 crc kubenswrapper[4820]: I0203 12:21:08.567952 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-gx6fv"
Feb 03 12:21:08 crc kubenswrapper[4820]: I0203 12:21:08.590564 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-dockercfg-jkvtd"
Feb 03 12:21:08 crc kubenswrapper[4820]: I0203 12:21:08.654690 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-gx6fv"]
Feb 03 12:21:08 crc kubenswrapper[4820]: I0203 12:21:08.775943 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/4f0df377-6a2b-4270-974f-3d178cdc47d9-openshift-service-ca\") pod \"perses-operator-5bf474d74f-gx6fv\" (UID: \"4f0df377-6a2b-4270-974f-3d178cdc47d9\") " pod="openshift-operators/perses-operator-5bf474d74f-gx6fv"
Feb 03 12:21:08 crc kubenswrapper[4820]: I0203 12:21:08.775984 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v92dr\" (UniqueName: \"kubernetes.io/projected/4f0df377-6a2b-4270-974f-3d178cdc47d9-kube-api-access-v92dr\") pod \"perses-operator-5bf474d74f-gx6fv\" (UID: \"4f0df377-6a2b-4270-974f-3d178cdc47d9\") " pod="openshift-operators/perses-operator-5bf474d74f-gx6fv"
Feb 03 12:21:08 crc kubenswrapper[4820]: I0203 12:21:08.900434 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/4f0df377-6a2b-4270-974f-3d178cdc47d9-openshift-service-ca\") pod \"perses-operator-5bf474d74f-gx6fv\" (UID: \"4f0df377-6a2b-4270-974f-3d178cdc47d9\") " pod="openshift-operators/perses-operator-5bf474d74f-gx6fv"
Feb 03 12:21:09 crc kubenswrapper[4820]: I0203 12:21:08.900528 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v92dr\" (UniqueName: \"kubernetes.io/projected/4f0df377-6a2b-4270-974f-3d178cdc47d9-kube-api-access-v92dr\") pod \"perses-operator-5bf474d74f-gx6fv\" (UID: \"4f0df377-6a2b-4270-974f-3d178cdc47d9\") " pod="openshift-operators/perses-operator-5bf474d74f-gx6fv"
Feb 03 12:21:09 crc kubenswrapper[4820]: I0203 12:21:08.902977 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/4f0df377-6a2b-4270-974f-3d178cdc47d9-openshift-service-ca\") pod \"perses-operator-5bf474d74f-gx6fv\" (UID: \"4f0df377-6a2b-4270-974f-3d178cdc47d9\") " pod="openshift-operators/perses-operator-5bf474d74f-gx6fv"
Feb 03 12:21:09 crc kubenswrapper[4820]: I0203 12:21:09.144633 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-p5bj4"
Feb 03 12:21:09 crc kubenswrapper[4820]: I0203 12:21:09.305330 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v92dr\" (UniqueName: \"kubernetes.io/projected/4f0df377-6a2b-4270-974f-3d178cdc47d9-kube-api-access-v92dr\") pod \"perses-operator-5bf474d74f-gx6fv\" (UID: \"4f0df377-6a2b-4270-974f-3d178cdc47d9\") " pod="openshift-operators/perses-operator-5bf474d74f-gx6fv"
Feb 03 12:21:09 crc kubenswrapper[4820]: I0203 12:21:09.406495 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-p5bj4"
Feb 03 12:21:09 crc kubenswrapper[4820]: I0203 12:21:09.548151 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-gx6fv"
Feb 03 12:21:09 crc kubenswrapper[4820]: I0203 12:21:09.549476 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-p5bj4"]
Feb 03 12:21:09 crc kubenswrapper[4820]: I0203 12:21:09.622956 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-9jzgf"]
Feb 03 12:21:09 crc kubenswrapper[4820]: I0203 12:21:09.699143 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-64dd7f56f9-p4b6m"]
Feb 03 12:21:09 crc kubenswrapper[4820]: W0203 12:21:09.729104 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod67c9fe0e_5cc6_469b_90a0_11adfac994cc.slice/crio-23e9c843225258947483f0496f81bdbc0b02bcb3bf21854ef04b59e6042ee856 WatchSource:0}: Error finding container 23e9c843225258947483f0496f81bdbc0b02bcb3bf21854ef04b59e6042ee856: Status 404 returned error can't find the container with id 23e9c843225258947483f0496f81bdbc0b02bcb3bf21854ef04b59e6042ee856
Feb 03 12:21:09 crc kubenswrapper[4820]: I0203 12:21:09.970429 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-64dd7f56f9-s8llv"]
Feb 03 12:21:09 crc kubenswrapper[4820]: W0203 12:21:09.989276 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3202dd82_6cc2_478c_9eb1_7810a23ce4bb.slice/crio-4f84f6e5a4b6a2b71885d4c4e60f2bca112ff280d2c690fca9c12bbc627a5fe6 WatchSource:0}: Error finding container 4f84f6e5a4b6a2b71885d4c4e60f2bca112ff280d2c690fca9c12bbc627a5fe6: Status 404 returned error can't find the container with id 4f84f6e5a4b6a2b71885d4c4e60f2bca112ff280d2c690fca9c12bbc627a5fe6
Feb 03 12:21:10 crc kubenswrapper[4820]: I0203 12:21:10.001510 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-lshn6"]
Feb 03 12:21:10 crc kubenswrapper[4820]: W0203 12:21:10.009543 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc22a4473_b3ac_4b33_9a20_320b76c330ab.slice/crio-24be4bff4eb1b82d91a499e39e05ebea8819b503fae6e4b5c4d060331b3acaf2 WatchSource:0}: Error finding container 24be4bff4eb1b82d91a499e39e05ebea8819b503fae6e4b5c4d060331b3acaf2: Status 404 returned error can't find the container with id 24be4bff4eb1b82d91a499e39e05ebea8819b503fae6e4b5c4d060331b3acaf2
Feb 03 12:21:10 crc kubenswrapper[4820]: I0203 12:21:10.126321 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-lshn6" event={"ID":"c22a4473-b3ac-4b33-9a20-320b76c330ab","Type":"ContainerStarted","Data":"24be4bff4eb1b82d91a499e39e05ebea8819b503fae6e4b5c4d060331b3acaf2"}
Feb 03 12:21:10 crc kubenswrapper[4820]: I0203 12:21:10.127308 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-64dd7f56f9-p4b6m" event={"ID":"67c9fe0e-5cc6-469b-90a0-11adfac994cc","Type":"ContainerStarted","Data":"23e9c843225258947483f0496f81bdbc0b02bcb3bf21854ef04b59e6042ee856"}
Feb 03 12:21:10 crc kubenswrapper[4820]: I0203 12:21:10.128140 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-9jzgf" event={"ID":"c1ad6c2d-5ab9-4904-9426-00ebf486a90d","Type":"ContainerStarted","Data":"90a033057a224011679466a6af18f0da1a5bc8ea13aa36fdf7e3fbeb0e54f0f7"}
Feb 03 12:21:10 crc kubenswrapper[4820]: I0203 12:21:10.129754 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-64dd7f56f9-s8llv" event={"ID":"3202dd82-6cc2-478c-9eb1-7810a23ce4bb","Type":"ContainerStarted","Data":"4f84f6e5a4b6a2b71885d4c4e60f2bca112ff280d2c690fca9c12bbc627a5fe6"}
Feb 03 12:21:10 crc kubenswrapper[4820]: I0203 12:21:10.298285 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-gx6fv"]
Feb 03 12:21:10 crc kubenswrapper[4820]: W0203 12:21:10.303713 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4f0df377_6a2b_4270_974f_3d178cdc47d9.slice/crio-4081d387665dcf9e93b57252d5e89b3ffdfc3e20b6a2c8b2d3028a30b8ccd855 WatchSource:0}: Error finding container 4081d387665dcf9e93b57252d5e89b3ffdfc3e20b6a2c8b2d3028a30b8ccd855: Status 404 returned error can't find the container with id 4081d387665dcf9e93b57252d5e89b3ffdfc3e20b6a2c8b2d3028a30b8ccd855
Feb 03 12:21:11 crc kubenswrapper[4820]: I0203 12:21:11.135132 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-gx6fv" event={"ID":"4f0df377-6a2b-4270-974f-3d178cdc47d9","Type":"ContainerStarted","Data":"4081d387665dcf9e93b57252d5e89b3ffdfc3e20b6a2c8b2d3028a30b8ccd855"}
Feb 03 12:21:11 crc kubenswrapper[4820]: I0203 12:21:11.135278 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-p5bj4" podUID="29aaf84c-c42d-486d-ab0e-13b63f35dcca" containerName="registry-server" containerID="cri-o://9f680d46e8c25e1a5a916dff9913ac55b736d039d6dcf57f99867bcf4d049f24" gracePeriod=2
Feb 03 12:21:12 crc kubenswrapper[4820]: I0203 12:21:12.159115 4820 generic.go:334] "Generic (PLEG): container finished" podID="29aaf84c-c42d-486d-ab0e-13b63f35dcca" containerID="9f680d46e8c25e1a5a916dff9913ac55b736d039d6dcf57f99867bcf4d049f24" exitCode=0
Feb 03 12:21:12 crc kubenswrapper[4820]: I0203 12:21:12.159568 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p5bj4" event={"ID":"29aaf84c-c42d-486d-ab0e-13b63f35dcca","Type":"ContainerDied","Data":"9f680d46e8c25e1a5a916dff9913ac55b736d039d6dcf57f99867bcf4d049f24"}
Need to start a new one" pod="openshift-marketplace/redhat-operators-p5bj4" Feb 03 12:21:12 crc kubenswrapper[4820]: I0203 12:21:12.357328 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/29aaf84c-c42d-486d-ab0e-13b63f35dcca-utilities\") pod \"29aaf84c-c42d-486d-ab0e-13b63f35dcca\" (UID: \"29aaf84c-c42d-486d-ab0e-13b63f35dcca\") " Feb 03 12:21:12 crc kubenswrapper[4820]: I0203 12:21:12.357501 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/29aaf84c-c42d-486d-ab0e-13b63f35dcca-catalog-content\") pod \"29aaf84c-c42d-486d-ab0e-13b63f35dcca\" (UID: \"29aaf84c-c42d-486d-ab0e-13b63f35dcca\") " Feb 03 12:21:12 crc kubenswrapper[4820]: I0203 12:21:12.357565 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ccvg7\" (UniqueName: \"kubernetes.io/projected/29aaf84c-c42d-486d-ab0e-13b63f35dcca-kube-api-access-ccvg7\") pod \"29aaf84c-c42d-486d-ab0e-13b63f35dcca\" (UID: \"29aaf84c-c42d-486d-ab0e-13b63f35dcca\") " Feb 03 12:21:12 crc kubenswrapper[4820]: I0203 12:21:12.370803 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/29aaf84c-c42d-486d-ab0e-13b63f35dcca-utilities" (OuterVolumeSpecName: "utilities") pod "29aaf84c-c42d-486d-ab0e-13b63f35dcca" (UID: "29aaf84c-c42d-486d-ab0e-13b63f35dcca"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 12:21:12 crc kubenswrapper[4820]: I0203 12:21:12.384849 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/29aaf84c-c42d-486d-ab0e-13b63f35dcca-kube-api-access-ccvg7" (OuterVolumeSpecName: "kube-api-access-ccvg7") pod "29aaf84c-c42d-486d-ab0e-13b63f35dcca" (UID: "29aaf84c-c42d-486d-ab0e-13b63f35dcca"). InnerVolumeSpecName "kube-api-access-ccvg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:21:12 crc kubenswrapper[4820]: I0203 12:21:12.458763 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ccvg7\" (UniqueName: \"kubernetes.io/projected/29aaf84c-c42d-486d-ab0e-13b63f35dcca-kube-api-access-ccvg7\") on node \"crc\" DevicePath \"\"" Feb 03 12:21:12 crc kubenswrapper[4820]: I0203 12:21:12.458802 4820 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/29aaf84c-c42d-486d-ab0e-13b63f35dcca-utilities\") on node \"crc\" DevicePath \"\"" Feb 03 12:21:12 crc kubenswrapper[4820]: I0203 12:21:12.547336 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/29aaf84c-c42d-486d-ab0e-13b63f35dcca-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "29aaf84c-c42d-486d-ab0e-13b63f35dcca" (UID: "29aaf84c-c42d-486d-ab0e-13b63f35dcca"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 12:21:12 crc kubenswrapper[4820]: I0203 12:21:12.559694 4820 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/29aaf84c-c42d-486d-ab0e-13b63f35dcca-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 03 12:21:13 crc kubenswrapper[4820]: I0203 12:21:13.187528 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p5bj4" event={"ID":"29aaf84c-c42d-486d-ab0e-13b63f35dcca","Type":"ContainerDied","Data":"2d42c956a0a33d77cd03d5f6dcfbe8ced582afa3b495411d3487fa549ff99860"} Feb 03 12:21:13 crc kubenswrapper[4820]: I0203 12:21:13.187584 4820 scope.go:117] "RemoveContainer" containerID="9f680d46e8c25e1a5a916dff9913ac55b736d039d6dcf57f99867bcf4d049f24" Feb 03 12:21:13 crc kubenswrapper[4820]: I0203 12:21:13.187727 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-p5bj4" Feb 03 12:21:13 crc kubenswrapper[4820]: I0203 12:21:13.495581 4820 scope.go:117] "RemoveContainer" containerID="8f2ca81931e0fca1dc8deba34742ee4915e9ab12f59a7674db3a08dc7b5bc78f" Feb 03 12:21:13 crc kubenswrapper[4820]: I0203 12:21:13.614783 4820 scope.go:117] "RemoveContainer" containerID="76145b5ccbb69a3b6fb37ae1180fb80390f5820f567e8ba5b87e26724c3b35ea" Feb 03 12:21:13 crc kubenswrapper[4820]: I0203 12:21:13.646798 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-p5bj4"] Feb 03 12:21:13 crc kubenswrapper[4820]: I0203 12:21:13.661177 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-p5bj4"] Feb 03 12:21:15 crc kubenswrapper[4820]: I0203 12:21:15.167755 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="29aaf84c-c42d-486d-ab0e-13b63f35dcca" path="/var/lib/kubelet/pods/29aaf84c-c42d-486d-ab0e-13b63f35dcca/volumes" Feb 03 12:21:16 crc kubenswrapper[4820]: I0203 12:21:16.560466 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-fpjzs"] Feb 03 12:21:16 crc kubenswrapper[4820]: E0203 12:21:16.560861 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29aaf84c-c42d-486d-ab0e-13b63f35dcca" containerName="extract-content" Feb 03 12:21:16 crc kubenswrapper[4820]: I0203 12:21:16.560882 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="29aaf84c-c42d-486d-ab0e-13b63f35dcca" containerName="extract-content" Feb 03 12:21:16 crc kubenswrapper[4820]: E0203 12:21:16.560926 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29aaf84c-c42d-486d-ab0e-13b63f35dcca" containerName="extract-utilities" Feb 03 12:21:16 crc kubenswrapper[4820]: I0203 12:21:16.560933 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="29aaf84c-c42d-486d-ab0e-13b63f35dcca" containerName="extract-utilities" Feb 03 12:21:16 crc kubenswrapper[4820]: E0203 12:21:16.560941 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29aaf84c-c42d-486d-ab0e-13b63f35dcca" containerName="registry-server" Feb 03 12:21:16 crc kubenswrapper[4820]: I0203 12:21:16.560948 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="29aaf84c-c42d-486d-ab0e-13b63f35dcca" containerName="registry-server" Feb 03 12:21:16 crc kubenswrapper[4820]: I0203 12:21:16.561091 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="29aaf84c-c42d-486d-ab0e-13b63f35dcca" containerName="registry-server" Feb 03 12:21:16 crc 
Feb 03 12:21:16 crc kubenswrapper[4820]: I0203 12:21:16.562309 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fpjzs"
Feb 03 12:21:16 crc kubenswrapper[4820]: I0203 12:21:16.588309 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fpjzs"]
Feb 03 12:21:16 crc kubenswrapper[4820]: I0203 12:21:16.671577 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa121a10-4a24-4b39-a08b-9c29374c3dab-utilities\") pod \"redhat-marketplace-fpjzs\" (UID: \"fa121a10-4a24-4b39-a08b-9c29374c3dab\") " pod="openshift-marketplace/redhat-marketplace-fpjzs"
Feb 03 12:21:16 crc kubenswrapper[4820]: I0203 12:21:16.671634 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ttd7c\" (UniqueName: \"kubernetes.io/projected/fa121a10-4a24-4b39-a08b-9c29374c3dab-kube-api-access-ttd7c\") pod \"redhat-marketplace-fpjzs\" (UID: \"fa121a10-4a24-4b39-a08b-9c29374c3dab\") " pod="openshift-marketplace/redhat-marketplace-fpjzs"
Feb 03 12:21:16 crc kubenswrapper[4820]: I0203 12:21:16.671710 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa121a10-4a24-4b39-a08b-9c29374c3dab-catalog-content\") pod \"redhat-marketplace-fpjzs\" (UID: \"fa121a10-4a24-4b39-a08b-9c29374c3dab\") " pod="openshift-marketplace/redhat-marketplace-fpjzs"
Feb 03 12:21:16 crc kubenswrapper[4820]: I0203 12:21:16.773235 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa121a10-4a24-4b39-a08b-9c29374c3dab-catalog-content\") pod \"redhat-marketplace-fpjzs\" (UID: \"fa121a10-4a24-4b39-a08b-9c29374c3dab\") " pod="openshift-marketplace/redhat-marketplace-fpjzs"
Feb 03 12:21:16 crc kubenswrapper[4820]: I0203 12:21:16.773343 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa121a10-4a24-4b39-a08b-9c29374c3dab-utilities\") pod \"redhat-marketplace-fpjzs\" (UID: \"fa121a10-4a24-4b39-a08b-9c29374c3dab\") " pod="openshift-marketplace/redhat-marketplace-fpjzs"
Feb 03 12:21:16 crc kubenswrapper[4820]: I0203 12:21:16.773372 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ttd7c\" (UniqueName: \"kubernetes.io/projected/fa121a10-4a24-4b39-a08b-9c29374c3dab-kube-api-access-ttd7c\") pod \"redhat-marketplace-fpjzs\" (UID: \"fa121a10-4a24-4b39-a08b-9c29374c3dab\") " pod="openshift-marketplace/redhat-marketplace-fpjzs"
Feb 03 12:21:16 crc kubenswrapper[4820]: I0203 12:21:16.774314 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa121a10-4a24-4b39-a08b-9c29374c3dab-catalog-content\") pod \"redhat-marketplace-fpjzs\" (UID: \"fa121a10-4a24-4b39-a08b-9c29374c3dab\") " pod="openshift-marketplace/redhat-marketplace-fpjzs"
Feb 03 12:21:16 crc kubenswrapper[4820]: I0203 12:21:16.774575 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa121a10-4a24-4b39-a08b-9c29374c3dab-utilities\") pod \"redhat-marketplace-fpjzs\" (UID: \"fa121a10-4a24-4b39-a08b-9c29374c3dab\") " pod="openshift-marketplace/redhat-marketplace-fpjzs"
Feb 03 12:21:17 crc kubenswrapper[4820]: I0203 12:21:17.018236 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ttd7c\" (UniqueName: \"kubernetes.io/projected/fa121a10-4a24-4b39-a08b-9c29374c3dab-kube-api-access-ttd7c\") pod \"redhat-marketplace-fpjzs\" (UID: \"fa121a10-4a24-4b39-a08b-9c29374c3dab\") " pod="openshift-marketplace/redhat-marketplace-fpjzs"
Feb 03 12:21:17 crc kubenswrapper[4820]: I0203 12:21:17.239630 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fpjzs"
Feb 03 12:21:17 crc kubenswrapper[4820]: I0203 12:21:17.884016 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fpjzs"]
Feb 03 12:21:17 crc kubenswrapper[4820]: W0203 12:21:17.897999 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfa121a10_4a24_4b39_a08b_9c29374c3dab.slice/crio-eb18c6d24c0e4d2762ea5b8ad8bb3587fd831e3cd3f773f4616345302a248a53 WatchSource:0}: Error finding container eb18c6d24c0e4d2762ea5b8ad8bb3587fd831e3cd3f773f4616345302a248a53: Status 404 returned error can't find the container with id eb18c6d24c0e4d2762ea5b8ad8bb3587fd831e3cd3f773f4616345302a248a53
Feb 03 12:21:18 crc kubenswrapper[4820]: I0203 12:21:18.612976 4820 generic.go:334] "Generic (PLEG): container finished" podID="fa121a10-4a24-4b39-a08b-9c29374c3dab" containerID="95ec5bd7063cf07b2703668552a45e103743ff7bfe7b9302a61ad3010dbeff4e" exitCode=0
Feb 03 12:21:18 crc kubenswrapper[4820]: I0203 12:21:18.613193 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fpjzs" event={"ID":"fa121a10-4a24-4b39-a08b-9c29374c3dab","Type":"ContainerDied","Data":"95ec5bd7063cf07b2703668552a45e103743ff7bfe7b9302a61ad3010dbeff4e"}
Feb 03 12:21:18 crc kubenswrapper[4820]: I0203 12:21:18.613263 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fpjzs" event={"ID":"fa121a10-4a24-4b39-a08b-9c29374c3dab","Type":"ContainerStarted","Data":"eb18c6d24c0e4d2762ea5b8ad8bb3587fd831e3cd3f773f4616345302a248a53"}
Feb 03 12:21:28 crc kubenswrapper[4820]: I0203 12:21:28.927055 4820 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-lbsmw container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.10:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 03 12:21:28 crc kubenswrapper[4820]: I0203 12:21:28.929279 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-lbsmw" podUID="c93c42c7-c9ff-42cc-b604-e36f7a063fcf" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.10:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 03 12:21:28 crc kubenswrapper[4820]: I0203 12:21:28.927152 4820 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-qr29p container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.72:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 03 12:21:28 crc kubenswrapper[4820]: I0203 12:21:28.929879 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-qr29p" podUID="5bd7dcaf-6cbd-4e0a-a9f4-9e3cc9a2a738" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.72:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
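Both readiness failures above are HTTP GETs against the container's /healthz endpoint that did not return response headers within the probe's one-second budget (TimeoutSeconds:1, as in the container specs dumped later in this log). A sketch of an equivalent one-shot check, assuming a plain net/http client rather than the kubelet's prober:

```go
package main

import (
	"fmt"
	"net/http"
	"time"
)

// probeOnce performs one HTTP readiness-style check with the same
// one-second budget the probes in this log use (TimeoutSeconds:1).
func probeOnce(url string) error {
	client := &http.Client{Timeout: 1 * time.Second}
	resp, err := client.Get(url)
	if err != nil {
		// This is the shape of the failures above: the request is
		// cancelled while still waiting for response headers.
		return fmt.Errorf("probe failed: %w", err)
	}
	defer resp.Body.Close()
	if resp.StatusCode < 200 || resp.StatusCode >= 400 {
		return fmt.Errorf("probe failed: status %d", resp.StatusCode)
	}
	return nil
}

func main() {
	if err := probeOnce("http://10.217.0.72:8080/healthz"); err != nil {
		fmt.Println(err)
	}
}
```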
probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-qr29p" podUID="5bd7dcaf-6cbd-4e0a-a9f4-9e3cc9a2a738" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.72:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 03 12:21:28 crc kubenswrapper[4820]: I0203 12:21:28.934782 4820 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-lbsmw container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.10:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 03 12:21:28 crc kubenswrapper[4820]: I0203 12:21:28.934917 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-lbsmw" podUID="c93c42c7-c9ff-42cc-b604-e36f7a063fcf" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.10:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 03 12:21:28 crc kubenswrapper[4820]: I0203 12:21:28.935327 4820 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-qr29p container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.217.0.72:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 03 12:21:28 crc kubenswrapper[4820]: I0203 12:21:28.935356 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/marketplace-operator-79b997595-qr29p" podUID="5bd7dcaf-6cbd-4e0a-a9f4-9e3cc9a2a738" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.72:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 03 12:21:30 crc kubenswrapper[4820]: E0203 12:21:30.879618 4820 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/cluster-observability-operator/perses-rhel9-operator@sha256:b5c8526d2ae660fe092dd8a7acf18ec4957d5c265890a222f55396fc2cdaeed8" Feb 03 12:21:30 crc kubenswrapper[4820]: E0203 12:21:30.879865 4820 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:perses-operator,Image:registry.redhat.io/cluster-observability-operator/perses-rhel9-operator@sha256:b5c8526d2ae660fe092dd8a7acf18ec4957d5c265890a222f55396fc2cdaeed8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:OPERATOR_CONDITION_NAME,Value:cluster-observability-operator.v1.3.1,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{100 -3} {} 100m DecimalSI},memory: {{134217728 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:openshift-service-ca,ReadOnly:true,MountPath:/ca,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-v92dr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000350000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod perses-operator-5bf474d74f-gx6fv_openshift-operators(4f0df377-6a2b-4270-974f-3d178cdc47d9): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 03 12:21:30 crc kubenswrapper[4820]: E0203 12:21:30.881070 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"perses-operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-operators/perses-operator-5bf474d74f-gx6fv" podUID="4f0df377-6a2b-4270-974f-3d178cdc47d9" Feb 03 12:21:30 crc kubenswrapper[4820]: E0203 12:21:30.983144 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"perses-operator\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/perses-rhel9-operator@sha256:b5c8526d2ae660fe092dd8a7acf18ec4957d5c265890a222f55396fc2cdaeed8\\\"\"" pod="openshift-operators/perses-operator-5bf474d74f-gx6fv" podUID="4f0df377-6a2b-4270-974f-3d178cdc47d9" Feb 03 12:21:31 crc kubenswrapper[4820]: I0203 12:21:31.365347 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qj7xr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 12:21:31 crc kubenswrapper[4820]: I0203 12:21:31.365419 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 12:21:31 crc kubenswrapper[4820]: I0203 12:21:31.365483 4820 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" Feb 03 12:21:31 crc kubenswrapper[4820]: I0203 12:21:31.366256 4820 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f961bb48cccbb18f37545a37a50be08f55d027f113f203a762d6ed87bcedcb42"} 
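The sequence above is the liveness-failure path: after the probe fails, the kubelet marks the container for restart and kills it with the pod's termination grace period (600s here; the exitCode=0 just below shows it exited cleanly within that window). A sketch of the underlying SIGTERM-then-SIGKILL pattern, assuming a local os/exec process rather than a CRI runtime:

```go
package main

import (
	"os/exec"
	"syscall"
	"time"
)

// killWithGracePeriod sends SIGTERM, waits up to gracePeriod for the
// process to exit on its own, then escalates to SIGKILL, mirroring
// how a runtime honors a pod's termination grace period.
func killWithGracePeriod(cmd *exec.Cmd, gracePeriod time.Duration) {
	done := make(chan error, 1)
	go func() { done <- cmd.Wait() }()

	_ = cmd.Process.Signal(syscall.SIGTERM)
	select {
	case <-done:
		// Exited within the grace period (exitCode=0 in the log above).
	case <-time.After(gracePeriod):
		_ = cmd.Process.Kill() // grace period exhausted: SIGKILL
		<-done
	}
}

func main() {
	cmd := exec.Command("sleep", "300")
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	killWithGracePeriod(cmd, 600*time.Second)
}
```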
pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 03 12:21:31 crc kubenswrapper[4820]: I0203 12:21:31.366338 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" containerName="machine-config-daemon" containerID="cri-o://f961bb48cccbb18f37545a37a50be08f55d027f113f203a762d6ed87bcedcb42" gracePeriod=600 Feb 03 12:21:32 crc kubenswrapper[4820]: I0203 12:21:32.047045 4820 generic.go:334] "Generic (PLEG): container finished" podID="2c02def6-29f2-448e-80ec-0c8ee290f053" containerID="f961bb48cccbb18f37545a37a50be08f55d027f113f203a762d6ed87bcedcb42" exitCode=0 Feb 03 12:21:32 crc kubenswrapper[4820]: I0203 12:21:32.047127 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" event={"ID":"2c02def6-29f2-448e-80ec-0c8ee290f053","Type":"ContainerDied","Data":"f961bb48cccbb18f37545a37a50be08f55d027f113f203a762d6ed87bcedcb42"} Feb 03 12:21:32 crc kubenswrapper[4820]: I0203 12:21:32.047437 4820 scope.go:117] "RemoveContainer" containerID="6776805bdea74d9ea3fdad5be16a8319bda906899f9d28fa7cc0a1b3ab400cbf" Feb 03 12:21:35 crc kubenswrapper[4820]: E0203 12:21:35.771131 4820 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/cluster-observability-operator/cluster-observability-rhel9-operator@sha256:2ecf763b02048d2cf4c17967a7b2cacc7afd6af0e963a39579d876f8f4170e3c" Feb 03 12:21:35 crc kubenswrapper[4820]: E0203 12:21:35.771829 4820 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:registry.redhat.io/cluster-observability-operator/cluster-observability-rhel9-operator@sha256:2ecf763b02048d2cf4c17967a7b2cacc7afd6af0e963a39579d876f8f4170e3c,Command:[],Args:[--namespace=$(NAMESPACE) --images=perses=$(RELATED_IMAGE_PERSES) --images=alertmanager=$(RELATED_IMAGE_ALERTMANAGER) --images=prometheus=$(RELATED_IMAGE_PROMETHEUS) --images=thanos=$(RELATED_IMAGE_THANOS) --images=ui-dashboards=$(RELATED_IMAGE_CONSOLE_DASHBOARDS_PLUGIN) --images=ui-distributed-tracing=$(RELATED_IMAGE_CONSOLE_DISTRIBUTED_TRACING_PLUGIN) --images=ui-distributed-tracing-pf5=$(RELATED_IMAGE_CONSOLE_DISTRIBUTED_TRACING_PLUGIN_PF5) --images=ui-distributed-tracing-pf4=$(RELATED_IMAGE_CONSOLE_DISTRIBUTED_TRACING_PLUGIN_PF4) --images=ui-logging=$(RELATED_IMAGE_CONSOLE_LOGGING_PLUGIN) --images=ui-logging-pf4=$(RELATED_IMAGE_CONSOLE_LOGGING_PLUGIN_PF4) --images=ui-troubleshooting-panel=$(RELATED_IMAGE_CONSOLE_TROUBLESHOOTING_PANEL_PLUGIN) --images=ui-monitoring=$(RELATED_IMAGE_CONSOLE_MONITORING_PLUGIN) --images=ui-monitoring-pf5=$(RELATED_IMAGE_CONSOLE_MONITORING_PLUGIN_PF5) --images=korrel8r=$(RELATED_IMAGE_KORREL8R) --images=health-analyzer=$(RELATED_IMAGE_CLUSTER_HEALTH_ANALYZER) 
--openshift.enabled=true],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:RELATED_IMAGE_ALERTMANAGER,Value:registry.redhat.io/cluster-observability-operator/alertmanager-rhel9@sha256:dc62889b883f597de91b5389cc52c84c607247d49a807693be2f688e4703dfc3,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_PROMETHEUS,Value:registry.redhat.io/cluster-observability-operator/prometheus-rhel9@sha256:1b555e21bba7c609111ace4380382a696d9aceeb6e9816bf9023b8f689b6c741,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_THANOS,Value:registry.redhat.io/cluster-observability-operator/thanos-rhel9@sha256:a223bab813b82d698992490bbb60927f6288a83ba52d539836c250e1471f6d34,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_PERSES,Value:registry.redhat.io/cluster-observability-operator/perses-rhel9@sha256:e797cdb47beef40b04da7b6d645bca3dc32e6247003c45b56b38efd9e13bf01c,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CONSOLE_DASHBOARDS_PLUGIN,Value:registry.redhat.io/cluster-observability-operator/dashboards-console-plugin-rhel9@sha256:093d2731ac848ed5fd57356b155a19d3bf7b8db96d95b09c5d0095e143f7254f,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CONSOLE_DISTRIBUTED_TRACING_PLUGIN,Value:registry.redhat.io/cluster-observability-operator/distributed-tracing-console-plugin-rhel9@sha256:7d662a120305e2528acc7e9142b770b5b6a7f4932ddfcadfa4ac953935124895,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CONSOLE_DISTRIBUTED_TRACING_PLUGIN_PF5,Value:registry.redhat.io/cluster-observability-operator/distributed-tracing-console-plugin-pf5-rhel9@sha256:75465aabb0aa427a5c531a8fcde463f6d119afbcc618ebcbf6b7ee9bc8aad160,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CONSOLE_DISTRIBUTED_TRACING_PLUGIN_PF4,Value:registry.redhat.io/cluster-observability-operator/distributed-tracing-console-plugin-pf4-rhel9@sha256:dc18c8d6a4a9a0a574a57cc5082c8a9b26023bd6d69b9732892d584c1dfe5070,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CONSOLE_LOGGING_PLUGIN,Value:registry.redhat.io/cluster-observability-operator/logging-console-plugin-rhel9@sha256:369729978cecdc13c99ef3d179f8eb8a450a4a0cb70b63c27a55a15d1710ba27,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CONSOLE_LOGGING_PLUGIN_PF4,Value:registry.redhat.io/cluster-observability-operator/logging-console-plugin-pf4-rhel9@sha256:d8c7a61d147f62b204d5c5f16864386025393453c9a81ea327bbd25d7765d611,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CONSOLE_TROUBLESHOOTING_PANEL_PLUGIN,Value:registry.redhat.io/cluster-observability-operator/troubleshooting-panel-console-plugin-rhel9@sha256:b4a6eb1cc118a4334b424614959d8b7f361ddd779b3a72690ca49b0a3f26d9b8,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CONSOLE_MONITORING_PLUGIN,Value:registry.redhat.io/cluster-observability-operator/monitoring-console-plugin-rhel9@sha256:21d4fff670893ba4b7fbc528cd49f8b71c8281cede9ef84f0697065bb6a7fc50,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CONSOLE_MONITORING_PLUGIN_PF5,Value:registry.redhat.io/cluster-observability-operator/monitoring-console-plugin-pf5-rhel9@sha256:12d9dbe297a1c3b9df671f21156992082bc483887d851fafe76e5d17321ff474,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_KORREL8R,Value:registry.redhat.io/cluster-observability-operator/korrel8r-rhel9@sha256:e65c37f04f6d76a0cbfe05edb3cddf6a8f14f859ee35cf3aebea8fcb991d2c19,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CLUSTER_HEALTH_ANALYZER,Value:registry.redhat.io/cluster-observability-operator/cluster-health-analyzer-rhel9@sha256:48e4e178c6eeaa9d5dd77a591
c185a311b4b4a5caadb7199d48463123e31dc9e,ValueFrom:nil,},EnvVar{Name:OPERATOR_CONDITION_NAME,Value:cluster-observability-operator.v1.3.1,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{400 -3} {} 400m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{100 -3} {} 100m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:observability-operator-tls,ReadOnly:true,MountPath:/etc/tls/private,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cxdbj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000350000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod observability-operator-59bdc8b94-lshn6_openshift-operators(c22a4473-b3ac-4b33-9a20-320b76c330ab): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 03 12:21:35 crc kubenswrapper[4820]: E0203 12:21:35.773121 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-operators/observability-operator-59bdc8b94-lshn6" podUID="c22a4473-b3ac-4b33-9a20-320b76c330ab" Feb 03 12:21:36 crc kubenswrapper[4820]: I0203 12:21:36.207321 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-64dd7f56f9-p4b6m" event={"ID":"67c9fe0e-5cc6-469b-90a0-11adfac994cc","Type":"ContainerStarted","Data":"45f0beb540004499806902d4978bd17bf69ba8015f7f3ab4ca3d9efb6bdd7448"} Feb 03 12:21:36 crc kubenswrapper[4820]: I0203 12:21:36.209412 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-9jzgf" event={"ID":"c1ad6c2d-5ab9-4904-9426-00ebf486a90d","Type":"ContainerStarted","Data":"fb3e740e786b8966319c69cf85fcb6426ac36b8a0809f91f65fb79b3d6f2d421"} Feb 03 12:21:36 crc kubenswrapper[4820]: I0203 12:21:36.211298 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-64dd7f56f9-s8llv" 
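Both operator containers above fail their pulls with ErrImagePull and are then parked in ImagePullBackOff instead of being retried immediately. A sketch of that exponential back-off between attempts; the initial delay and cap are illustrative values, not the kubelet's actual configuration:

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// pullWithBackoff retries an image pull with exponential back-off,
// the behavior behind the ErrImagePull -> ImagePullBackOff sequence.
func pullWithBackoff(pull func() error, attempts int) error {
	delay := 10 * time.Second
	const maxDelay = 5 * time.Minute
	for i := 0; i < attempts; i++ {
		if err := pull(); err != nil {
			fmt.Printf("ErrImagePull: %v; backing off %s before next attempt\n", err, delay)
			time.Sleep(delay)
			if delay *= 2; delay > maxDelay {
				delay = maxDelay // cap the back-off
			}
			continue
		}
		return nil
	}
	return errors.New("ImagePullBackOff: giving up for now")
}

func main() {
	failing := func() error { return errors.New("copying config: context canceled") }
	_ = pullWithBackoff(failing, 3)
}
```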
event={"ID":"3202dd82-6cc2-478c-9eb1-7810a23ce4bb","Type":"ContainerStarted","Data":"03e97c66dac9c22f0ec657ce82b3ea03070b70c4624276397ff7636a2d313804"} Feb 03 12:21:36 crc kubenswrapper[4820]: I0203 12:21:36.215991 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" event={"ID":"2c02def6-29f2-448e-80ec-0c8ee290f053","Type":"ContainerStarted","Data":"18df95791bb9a7f437d7d4ad2b5b03a9b5d2686ac3fa57d763f146b5d1397b25"} Feb 03 12:21:36 crc kubenswrapper[4820]: E0203 12:21:36.216399 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/cluster-observability-rhel9-operator@sha256:2ecf763b02048d2cf4c17967a7b2cacc7afd6af0e963a39579d876f8f4170e3c\\\"\"" pod="openshift-operators/observability-operator-59bdc8b94-lshn6" podUID="c22a4473-b3ac-4b33-9a20-320b76c330ab" Feb 03 12:21:36 crc kubenswrapper[4820]: I0203 12:21:36.231580 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-64dd7f56f9-p4b6m" podStartSLOduration=3.196607403 podStartE2EDuration="29.231561028s" podCreationTimestamp="2026-02-03 12:21:07 +0000 UTC" firstStartedPulling="2026-02-03 12:21:09.735647998 +0000 UTC m=+987.258723862" lastFinishedPulling="2026-02-03 12:21:35.770601623 +0000 UTC m=+1013.293677487" observedRunningTime="2026-02-03 12:21:36.227970033 +0000 UTC m=+1013.751045907" watchObservedRunningTime="2026-02-03 12:21:36.231561028 +0000 UTC m=+1013.754636892" Feb 03 12:21:36 crc kubenswrapper[4820]: I0203 12:21:36.279028 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-64dd7f56f9-s8llv" podStartSLOduration=3.476323967 podStartE2EDuration="29.279008373s" podCreationTimestamp="2026-02-03 12:21:07 +0000 UTC" firstStartedPulling="2026-02-03 12:21:09.992315318 +0000 UTC m=+987.515391182" lastFinishedPulling="2026-02-03 12:21:35.794999724 +0000 UTC m=+1013.318075588" observedRunningTime="2026-02-03 12:21:36.274443102 +0000 UTC m=+1013.797518976" watchObservedRunningTime="2026-02-03 12:21:36.279008373 +0000 UTC m=+1013.802084237" Feb 03 12:21:36 crc kubenswrapper[4820]: I0203 12:21:36.299361 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-9jzgf" podStartSLOduration=3.165115713 podStartE2EDuration="29.299341895s" podCreationTimestamp="2026-02-03 12:21:07 +0000 UTC" firstStartedPulling="2026-02-03 12:21:09.638408656 +0000 UTC m=+987.161484520" lastFinishedPulling="2026-02-03 12:21:35.772634838 +0000 UTC m=+1013.295710702" observedRunningTime="2026-02-03 12:21:36.294097445 +0000 UTC m=+1013.817173309" watchObservedRunningTime="2026-02-03 12:21:36.299341895 +0000 UTC m=+1013.822417749" Feb 03 12:21:37 crc kubenswrapper[4820]: I0203 12:21:37.222488 4820 generic.go:334] "Generic (PLEG): container finished" podID="fa121a10-4a24-4b39-a08b-9c29374c3dab" containerID="2d3f8150a94786c7b10a3818c40efce994e95d0243cab739746cbf27c3b60900" exitCode=0 Feb 03 12:21:37 crc kubenswrapper[4820]: I0203 12:21:37.222551 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fpjzs" event={"ID":"fa121a10-4a24-4b39-a08b-9c29374c3dab","Type":"ContainerDied","Data":"2d3f8150a94786c7b10a3818c40efce994e95d0243cab739746cbf27c3b60900"} Feb 03 
Feb 03 12:21:38 crc kubenswrapper[4820]: I0203 12:21:38.232086 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fpjzs" event={"ID":"fa121a10-4a24-4b39-a08b-9c29374c3dab","Type":"ContainerStarted","Data":"c820c204decc1bf5233b0c90c174eb4cc9f1fc16ffda3e11705268744af757ed"}
Feb 03 12:21:38 crc kubenswrapper[4820]: I0203 12:21:38.258939 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-fpjzs" podStartSLOduration=6.48404762 podStartE2EDuration="22.25891868s" podCreationTimestamp="2026-02-03 12:21:16 +0000 UTC" firstStartedPulling="2026-02-03 12:21:21.874740961 +0000 UTC m=+999.397816825" lastFinishedPulling="2026-02-03 12:21:37.649612031 +0000 UTC m=+1015.172687885" observedRunningTime="2026-02-03 12:21:38.250204038 +0000 UTC m=+1015.773279912" watchObservedRunningTime="2026-02-03 12:21:38.25891868 +0000 UTC m=+1015.781994564"
Feb 03 12:21:41 crc kubenswrapper[4820]: I0203 12:21:41.955488 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-mbr9t"]
Feb 03 12:21:41 crc kubenswrapper[4820]: I0203 12:21:41.958630 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mbr9t"
Feb 03 12:21:41 crc kubenswrapper[4820]: I0203 12:21:41.970910 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mbr9t"]
Feb 03 12:21:42 crc kubenswrapper[4820]: I0203 12:21:42.135571 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42463ed4-34a9-4219-af54-a17ef5f5e788-utilities\") pod \"certified-operators-mbr9t\" (UID: \"42463ed4-34a9-4219-af54-a17ef5f5e788\") " pod="openshift-marketplace/certified-operators-mbr9t"
Feb 03 12:21:42 crc kubenswrapper[4820]: I0203 12:21:42.135644 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42463ed4-34a9-4219-af54-a17ef5f5e788-catalog-content\") pod \"certified-operators-mbr9t\" (UID: \"42463ed4-34a9-4219-af54-a17ef5f5e788\") " pod="openshift-marketplace/certified-operators-mbr9t"
Feb 03 12:21:42 crc kubenswrapper[4820]: I0203 12:21:42.135692 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lt849\" (UniqueName: \"kubernetes.io/projected/42463ed4-34a9-4219-af54-a17ef5f5e788-kube-api-access-lt849\") pod \"certified-operators-mbr9t\" (UID: \"42463ed4-34a9-4219-af54-a17ef5f5e788\") " pod="openshift-marketplace/certified-operators-mbr9t"
Feb 03 12:21:42 crc kubenswrapper[4820]: I0203 12:21:42.237475 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42463ed4-34a9-4219-af54-a17ef5f5e788-utilities\") pod \"certified-operators-mbr9t\" (UID: \"42463ed4-34a9-4219-af54-a17ef5f5e788\") " pod="openshift-marketplace/certified-operators-mbr9t"
Feb 03 12:21:42 crc kubenswrapper[4820]: I0203 12:21:42.237857 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42463ed4-34a9-4219-af54-a17ef5f5e788-catalog-content\") pod \"certified-operators-mbr9t\" (UID: \"42463ed4-34a9-4219-af54-a17ef5f5e788\") " pod="openshift-marketplace/certified-operators-mbr9t"
Feb 03 12:21:42 crc kubenswrapper[4820]: I0203 12:21:42.238374 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lt849\" (UniqueName: \"kubernetes.io/projected/42463ed4-34a9-4219-af54-a17ef5f5e788-kube-api-access-lt849\") pod \"certified-operators-mbr9t\" (UID: \"42463ed4-34a9-4219-af54-a17ef5f5e788\") " pod="openshift-marketplace/certified-operators-mbr9t"
Feb 03 12:21:42 crc kubenswrapper[4820]: I0203 12:21:42.238317 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42463ed4-34a9-4219-af54-a17ef5f5e788-catalog-content\") pod \"certified-operators-mbr9t\" (UID: \"42463ed4-34a9-4219-af54-a17ef5f5e788\") " pod="openshift-marketplace/certified-operators-mbr9t"
Feb 03 12:21:42 crc kubenswrapper[4820]: I0203 12:21:42.238221 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42463ed4-34a9-4219-af54-a17ef5f5e788-utilities\") pod \"certified-operators-mbr9t\" (UID: \"42463ed4-34a9-4219-af54-a17ef5f5e788\") " pod="openshift-marketplace/certified-operators-mbr9t"
Feb 03 12:21:42 crc kubenswrapper[4820]: I0203 12:21:42.264949 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lt849\" (UniqueName: \"kubernetes.io/projected/42463ed4-34a9-4219-af54-a17ef5f5e788-kube-api-access-lt849\") pod \"certified-operators-mbr9t\" (UID: \"42463ed4-34a9-4219-af54-a17ef5f5e788\") " pod="openshift-marketplace/certified-operators-mbr9t"
Feb 03 12:21:42 crc kubenswrapper[4820]: I0203 12:21:42.287530 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mbr9t"
Feb 03 12:21:42 crc kubenswrapper[4820]: I0203 12:21:42.793701 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mbr9t"]
Feb 03 12:21:43 crc kubenswrapper[4820]: I0203 12:21:43.260260 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-gx6fv" event={"ID":"4f0df377-6a2b-4270-974f-3d178cdc47d9","Type":"ContainerStarted","Data":"c487526b777cec209f78c33bfb24703b2a82bfe30d8e0de1bd075244b9af9299"}
Feb 03 12:21:43 crc kubenswrapper[4820]: I0203 12:21:43.260496 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5bf474d74f-gx6fv"
Feb 03 12:21:43 crc kubenswrapper[4820]: I0203 12:21:43.262978 4820 generic.go:334] "Generic (PLEG): container finished" podID="42463ed4-34a9-4219-af54-a17ef5f5e788" containerID="734b6eb3013024223cb59e1e7d3fb6df39dcfa40bebd10ac512fd6d56182d9a0" exitCode=0
Feb 03 12:21:43 crc kubenswrapper[4820]: I0203 12:21:43.263026 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mbr9t" event={"ID":"42463ed4-34a9-4219-af54-a17ef5f5e788","Type":"ContainerDied","Data":"734b6eb3013024223cb59e1e7d3fb6df39dcfa40bebd10ac512fd6d56182d9a0"}
Feb 03 12:21:43 crc kubenswrapper[4820]: I0203 12:21:43.263051 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mbr9t" event={"ID":"42463ed4-34a9-4219-af54-a17ef5f5e788","Type":"ContainerStarted","Data":"6c000f3fcd3aa0138b2336eb12b558de2c12fcf7aeac882c0104c349a41214bf"}
Feb 03 12:21:43 crc kubenswrapper[4820]: I0203 12:21:43.288491 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-5bf474d74f-gx6fv" podStartSLOduration=2.549751433 podStartE2EDuration="35.288470035s" podCreationTimestamp="2026-02-03 12:21:08 +0000 UTC" firstStartedPulling="2026-02-03 12:21:10.311934265 +0000 UTC m=+987.835010129" lastFinishedPulling="2026-02-03 12:21:43.050652867 +0000 UTC m=+1020.573728731" observedRunningTime="2026-02-03 12:21:43.282311721 +0000 UTC m=+1020.805387595" watchObservedRunningTime="2026-02-03 12:21:43.288470035 +0000 UTC m=+1020.811545899"
Feb 03 12:21:44 crc kubenswrapper[4820]: I0203 12:21:44.272502 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mbr9t" event={"ID":"42463ed4-34a9-4219-af54-a17ef5f5e788","Type":"ContainerStarted","Data":"9a5658d0482aa59f70005b094eda0ce25b97fecc2651e86df520d24008901ce1"}
Feb 03 12:21:45 crc kubenswrapper[4820]: I0203 12:21:45.280326 4820 generic.go:334] "Generic (PLEG): container finished" podID="42463ed4-34a9-4219-af54-a17ef5f5e788" containerID="9a5658d0482aa59f70005b094eda0ce25b97fecc2651e86df520d24008901ce1" exitCode=0
Feb 03 12:21:45 crc kubenswrapper[4820]: I0203 12:21:45.280607 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mbr9t" event={"ID":"42463ed4-34a9-4219-af54-a17ef5f5e788","Type":"ContainerDied","Data":"9a5658d0482aa59f70005b094eda0ce25b97fecc2651e86df520d24008901ce1"}
Feb 03 12:21:46 crc kubenswrapper[4820]: I0203 12:21:46.288963 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mbr9t" event={"ID":"42463ed4-34a9-4219-af54-a17ef5f5e788","Type":"ContainerStarted","Data":"80cb36695ad51ea96833e4e89a00447dfed210c5689a3f0518dca25410645f8e"}
Feb 03 12:21:46 crc kubenswrapper[4820]: I0203 12:21:46.309405 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-mbr9t" podStartSLOduration=2.6636788940000002 podStartE2EDuration="5.309389476s" podCreationTimestamp="2026-02-03 12:21:41 +0000 UTC" firstStartedPulling="2026-02-03 12:21:43.264162787 +0000 UTC m=+1020.787238661" lastFinishedPulling="2026-02-03 12:21:45.909873379 +0000 UTC m=+1023.432949243" observedRunningTime="2026-02-03 12:21:46.308645427 +0000 UTC m=+1023.831721291" watchObservedRunningTime="2026-02-03 12:21:46.309389476 +0000 UTC m=+1023.832465340"
Feb 03 12:21:46 crc kubenswrapper[4820]: I0203 12:21:46.749340 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-5kkw4"]
Feb 03 12:21:46 crc kubenswrapper[4820]: I0203 12:21:46.750765 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5kkw4"
Feb 03 12:21:46 crc kubenswrapper[4820]: I0203 12:21:46.765104 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5kkw4"]
Feb 03 12:21:46 crc kubenswrapper[4820]: I0203 12:21:46.798051 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-276tv\" (UniqueName: \"kubernetes.io/projected/3f44091b-5eff-4787-88ca-5fe14742234e-kube-api-access-276tv\") pod \"community-operators-5kkw4\" (UID: \"3f44091b-5eff-4787-88ca-5fe14742234e\") " pod="openshift-marketplace/community-operators-5kkw4"
Feb 03 12:21:46 crc kubenswrapper[4820]: I0203 12:21:46.798151 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f44091b-5eff-4787-88ca-5fe14742234e-utilities\") pod \"community-operators-5kkw4\" (UID: \"3f44091b-5eff-4787-88ca-5fe14742234e\") " pod="openshift-marketplace/community-operators-5kkw4"
Feb 03 12:21:46 crc kubenswrapper[4820]: I0203 12:21:46.798250 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f44091b-5eff-4787-88ca-5fe14742234e-catalog-content\") pod \"community-operators-5kkw4\" (UID: \"3f44091b-5eff-4787-88ca-5fe14742234e\") " pod="openshift-marketplace/community-operators-5kkw4"
Feb 03 12:21:46 crc kubenswrapper[4820]: I0203 12:21:46.899111 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f44091b-5eff-4787-88ca-5fe14742234e-catalog-content\") pod \"community-operators-5kkw4\" (UID: \"3f44091b-5eff-4787-88ca-5fe14742234e\") " pod="openshift-marketplace/community-operators-5kkw4"
Feb 03 12:21:46 crc kubenswrapper[4820]: I0203 12:21:46.899210 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-276tv\" (UniqueName: \"kubernetes.io/projected/3f44091b-5eff-4787-88ca-5fe14742234e-kube-api-access-276tv\") pod \"community-operators-5kkw4\" (UID: \"3f44091b-5eff-4787-88ca-5fe14742234e\") " pod="openshift-marketplace/community-operators-5kkw4"
Feb 03 12:21:46 crc kubenswrapper[4820]: I0203 12:21:46.899252 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f44091b-5eff-4787-88ca-5fe14742234e-utilities\") pod \"community-operators-5kkw4\" (UID: \"3f44091b-5eff-4787-88ca-5fe14742234e\") " pod="openshift-marketplace/community-operators-5kkw4"
Feb 03 12:21:46 crc kubenswrapper[4820]: I0203 12:21:46.899776 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f44091b-5eff-4787-88ca-5fe14742234e-catalog-content\") pod \"community-operators-5kkw4\" (UID: \"3f44091b-5eff-4787-88ca-5fe14742234e\") " pod="openshift-marketplace/community-operators-5kkw4"
Feb 03 12:21:46 crc kubenswrapper[4820]: I0203 12:21:46.899783 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f44091b-5eff-4787-88ca-5fe14742234e-utilities\") pod \"community-operators-5kkw4\" (UID: \"3f44091b-5eff-4787-88ca-5fe14742234e\") " pod="openshift-marketplace/community-operators-5kkw4"
Feb 03 12:21:46 crc kubenswrapper[4820]: I0203 12:21:46.921188 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-276tv\" (UniqueName: \"kubernetes.io/projected/3f44091b-5eff-4787-88ca-5fe14742234e-kube-api-access-276tv\") pod \"community-operators-5kkw4\" (UID: \"3f44091b-5eff-4787-88ca-5fe14742234e\") " pod="openshift-marketplace/community-operators-5kkw4"
Feb 03 12:21:47 crc kubenswrapper[4820]: I0203 12:21:47.066523 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5kkw4"
Feb 03 12:21:47 crc kubenswrapper[4820]: I0203 12:21:47.242653 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-fpjzs"
Feb 03 12:21:47 crc kubenswrapper[4820]: I0203 12:21:47.243293 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-fpjzs"
Feb 03 12:21:47 crc kubenswrapper[4820]: I0203 12:21:47.329328 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-fpjzs"
Feb 03 12:21:47 crc kubenswrapper[4820]: I0203 12:21:47.514762 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-fpjzs"
Feb 03 12:21:47 crc kubenswrapper[4820]: I0203 12:21:47.603372 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5kkw4"]
Feb 03 12:21:48 crc kubenswrapper[4820]: I0203 12:21:48.312846 4820 generic.go:334] "Generic (PLEG): container finished" podID="3f44091b-5eff-4787-88ca-5fe14742234e" containerID="4c88200867b5bdb7f32fbf9deaa067cbda7743170d5c356849c4283845e48882" exitCode=0
Feb 03 12:21:48 crc kubenswrapper[4820]: I0203 12:21:48.312914 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5kkw4" event={"ID":"3f44091b-5eff-4787-88ca-5fe14742234e","Type":"ContainerDied","Data":"4c88200867b5bdb7f32fbf9deaa067cbda7743170d5c356849c4283845e48882"}
Feb 03 12:21:48 crc kubenswrapper[4820]: I0203 12:21:48.313429 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5kkw4" event={"ID":"3f44091b-5eff-4787-88ca-5fe14742234e","Type":"ContainerStarted","Data":"1ad092eaa77575b11d178327ec9e8a220b46a8704f273c97c7aa3e59fead9d01"}
Feb 03 12:21:49 crc kubenswrapper[4820]: I0203 12:21:49.136267 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-fpjzs"]
Feb 03 12:21:49 crc kubenswrapper[4820]: I0203 12:21:49.321660 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-lshn6" event={"ID":"c22a4473-b3ac-4b33-9a20-320b76c330ab","Type":"ContainerStarted","Data":"070655001bd13bcbfeba3a48d863f5e835b268b350a8a48909e9b475b8ddba43"}
Feb 03 12:21:49 crc kubenswrapper[4820]: I0203 12:21:49.321763 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-fpjzs" podUID="fa121a10-4a24-4b39-a08b-9c29374c3dab" containerName="registry-server" containerID="cri-o://c820c204decc1bf5233b0c90c174eb4cc9f1fc16ffda3e11705268744af757ed" gracePeriod=2
Feb 03 12:21:49 crc kubenswrapper[4820]: I0203 12:21:49.322136 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-59bdc8b94-lshn6"
Feb 03 12:21:49 crc kubenswrapper[4820]: I0203 12:21:49.341619 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-59bdc8b94-lshn6"
pod="openshift-operators/observability-operator-59bdc8b94-lshn6" Feb 03 12:21:49 crc kubenswrapper[4820]: I0203 12:21:49.355589 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-59bdc8b94-lshn6" podStartSLOduration=2.960944963 podStartE2EDuration="41.35557s" podCreationTimestamp="2026-02-03 12:21:08 +0000 UTC" firstStartedPulling="2026-02-03 12:21:10.017819097 +0000 UTC m=+987.540894961" lastFinishedPulling="2026-02-03 12:21:48.412444134 +0000 UTC m=+1025.935519998" observedRunningTime="2026-02-03 12:21:49.354867941 +0000 UTC m=+1026.877943825" watchObservedRunningTime="2026-02-03 12:21:49.35557 +0000 UTC m=+1026.878645864" Feb 03 12:21:49 crc kubenswrapper[4820]: I0203 12:21:49.554772 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5bf474d74f-gx6fv" Feb 03 12:21:49 crc kubenswrapper[4820]: I0203 12:21:49.793853 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fpjzs" Feb 03 12:21:49 crc kubenswrapper[4820]: I0203 12:21:49.902834 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa121a10-4a24-4b39-a08b-9c29374c3dab-utilities\") pod \"fa121a10-4a24-4b39-a08b-9c29374c3dab\" (UID: \"fa121a10-4a24-4b39-a08b-9c29374c3dab\") " Feb 03 12:21:49 crc kubenswrapper[4820]: I0203 12:21:49.903026 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ttd7c\" (UniqueName: \"kubernetes.io/projected/fa121a10-4a24-4b39-a08b-9c29374c3dab-kube-api-access-ttd7c\") pod \"fa121a10-4a24-4b39-a08b-9c29374c3dab\" (UID: \"fa121a10-4a24-4b39-a08b-9c29374c3dab\") " Feb 03 12:21:49 crc kubenswrapper[4820]: I0203 12:21:49.903070 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa121a10-4a24-4b39-a08b-9c29374c3dab-catalog-content\") pod \"fa121a10-4a24-4b39-a08b-9c29374c3dab\" (UID: \"fa121a10-4a24-4b39-a08b-9c29374c3dab\") " Feb 03 12:21:49 crc kubenswrapper[4820]: I0203 12:21:49.903600 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fa121a10-4a24-4b39-a08b-9c29374c3dab-utilities" (OuterVolumeSpecName: "utilities") pod "fa121a10-4a24-4b39-a08b-9c29374c3dab" (UID: "fa121a10-4a24-4b39-a08b-9c29374c3dab"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 12:21:49 crc kubenswrapper[4820]: I0203 12:21:49.915113 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa121a10-4a24-4b39-a08b-9c29374c3dab-kube-api-access-ttd7c" (OuterVolumeSpecName: "kube-api-access-ttd7c") pod "fa121a10-4a24-4b39-a08b-9c29374c3dab" (UID: "fa121a10-4a24-4b39-a08b-9c29374c3dab"). InnerVolumeSpecName "kube-api-access-ttd7c". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:21:49 crc kubenswrapper[4820]: I0203 12:21:49.930613 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fa121a10-4a24-4b39-a08b-9c29374c3dab-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fa121a10-4a24-4b39-a08b-9c29374c3dab" (UID: "fa121a10-4a24-4b39-a08b-9c29374c3dab"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 12:21:50 crc kubenswrapper[4820]: I0203 12:21:50.004792 4820 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa121a10-4a24-4b39-a08b-9c29374c3dab-utilities\") on node \"crc\" DevicePath \"\"" Feb 03 12:21:50 crc kubenswrapper[4820]: I0203 12:21:50.004848 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ttd7c\" (UniqueName: \"kubernetes.io/projected/fa121a10-4a24-4b39-a08b-9c29374c3dab-kube-api-access-ttd7c\") on node \"crc\" DevicePath \"\"" Feb 03 12:21:50 crc kubenswrapper[4820]: I0203 12:21:50.004863 4820 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa121a10-4a24-4b39-a08b-9c29374c3dab-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 03 12:21:50 crc kubenswrapper[4820]: I0203 12:21:50.329833 4820 generic.go:334] "Generic (PLEG): container finished" podID="fa121a10-4a24-4b39-a08b-9c29374c3dab" containerID="c820c204decc1bf5233b0c90c174eb4cc9f1fc16ffda3e11705268744af757ed" exitCode=0 Feb 03 12:21:50 crc kubenswrapper[4820]: I0203 12:21:50.329924 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fpjzs" Feb 03 12:21:50 crc kubenswrapper[4820]: I0203 12:21:50.329948 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fpjzs" event={"ID":"fa121a10-4a24-4b39-a08b-9c29374c3dab","Type":"ContainerDied","Data":"c820c204decc1bf5233b0c90c174eb4cc9f1fc16ffda3e11705268744af757ed"} Feb 03 12:21:50 crc kubenswrapper[4820]: I0203 12:21:50.329983 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fpjzs" event={"ID":"fa121a10-4a24-4b39-a08b-9c29374c3dab","Type":"ContainerDied","Data":"eb18c6d24c0e4d2762ea5b8ad8bb3587fd831e3cd3f773f4616345302a248a53"} Feb 03 12:21:50 crc kubenswrapper[4820]: I0203 12:21:50.330005 4820 scope.go:117] "RemoveContainer" containerID="c820c204decc1bf5233b0c90c174eb4cc9f1fc16ffda3e11705268744af757ed" Feb 03 12:21:50 crc kubenswrapper[4820]: I0203 12:21:50.335188 4820 generic.go:334] "Generic (PLEG): container finished" podID="3f44091b-5eff-4787-88ca-5fe14742234e" containerID="1bb9a0df538f632ccd6d46e0488eb848b8efe71b2985067aa6596fb5a9ee7a46" exitCode=0 Feb 03 12:21:50 crc kubenswrapper[4820]: I0203 12:21:50.335476 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5kkw4" event={"ID":"3f44091b-5eff-4787-88ca-5fe14742234e","Type":"ContainerDied","Data":"1bb9a0df538f632ccd6d46e0488eb848b8efe71b2985067aa6596fb5a9ee7a46"} Feb 03 12:21:50 crc kubenswrapper[4820]: I0203 12:21:50.369841 4820 scope.go:117] "RemoveContainer" containerID="2d3f8150a94786c7b10a3818c40efce994e95d0243cab739746cbf27c3b60900" Feb 03 12:21:50 crc kubenswrapper[4820]: I0203 12:21:50.381344 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-fpjzs"] Feb 03 12:21:50 crc kubenswrapper[4820]: I0203 12:21:50.386171 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-fpjzs"] Feb 03 12:21:50 crc kubenswrapper[4820]: I0203 12:21:50.395092 4820 scope.go:117] "RemoveContainer" containerID="95ec5bd7063cf07b2703668552a45e103743ff7bfe7b9302a61ad3010dbeff4e" Feb 03 12:21:50 crc kubenswrapper[4820]: I0203 12:21:50.417098 4820 scope.go:117] "RemoveContainer" 
containerID="c820c204decc1bf5233b0c90c174eb4cc9f1fc16ffda3e11705268744af757ed" Feb 03 12:21:50 crc kubenswrapper[4820]: E0203 12:21:50.417600 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c820c204decc1bf5233b0c90c174eb4cc9f1fc16ffda3e11705268744af757ed\": container with ID starting with c820c204decc1bf5233b0c90c174eb4cc9f1fc16ffda3e11705268744af757ed not found: ID does not exist" containerID="c820c204decc1bf5233b0c90c174eb4cc9f1fc16ffda3e11705268744af757ed" Feb 03 12:21:50 crc kubenswrapper[4820]: I0203 12:21:50.417649 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c820c204decc1bf5233b0c90c174eb4cc9f1fc16ffda3e11705268744af757ed"} err="failed to get container status \"c820c204decc1bf5233b0c90c174eb4cc9f1fc16ffda3e11705268744af757ed\": rpc error: code = NotFound desc = could not find container \"c820c204decc1bf5233b0c90c174eb4cc9f1fc16ffda3e11705268744af757ed\": container with ID starting with c820c204decc1bf5233b0c90c174eb4cc9f1fc16ffda3e11705268744af757ed not found: ID does not exist" Feb 03 12:21:50 crc kubenswrapper[4820]: I0203 12:21:50.417677 4820 scope.go:117] "RemoveContainer" containerID="2d3f8150a94786c7b10a3818c40efce994e95d0243cab739746cbf27c3b60900" Feb 03 12:21:50 crc kubenswrapper[4820]: E0203 12:21:50.418296 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2d3f8150a94786c7b10a3818c40efce994e95d0243cab739746cbf27c3b60900\": container with ID starting with 2d3f8150a94786c7b10a3818c40efce994e95d0243cab739746cbf27c3b60900 not found: ID does not exist" containerID="2d3f8150a94786c7b10a3818c40efce994e95d0243cab739746cbf27c3b60900" Feb 03 12:21:50 crc kubenswrapper[4820]: I0203 12:21:50.418329 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2d3f8150a94786c7b10a3818c40efce994e95d0243cab739746cbf27c3b60900"} err="failed to get container status \"2d3f8150a94786c7b10a3818c40efce994e95d0243cab739746cbf27c3b60900\": rpc error: code = NotFound desc = could not find container \"2d3f8150a94786c7b10a3818c40efce994e95d0243cab739746cbf27c3b60900\": container with ID starting with 2d3f8150a94786c7b10a3818c40efce994e95d0243cab739746cbf27c3b60900 not found: ID does not exist" Feb 03 12:21:50 crc kubenswrapper[4820]: I0203 12:21:50.418347 4820 scope.go:117] "RemoveContainer" containerID="95ec5bd7063cf07b2703668552a45e103743ff7bfe7b9302a61ad3010dbeff4e" Feb 03 12:21:50 crc kubenswrapper[4820]: E0203 12:21:50.418821 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"95ec5bd7063cf07b2703668552a45e103743ff7bfe7b9302a61ad3010dbeff4e\": container with ID starting with 95ec5bd7063cf07b2703668552a45e103743ff7bfe7b9302a61ad3010dbeff4e not found: ID does not exist" containerID="95ec5bd7063cf07b2703668552a45e103743ff7bfe7b9302a61ad3010dbeff4e" Feb 03 12:21:50 crc kubenswrapper[4820]: I0203 12:21:50.418849 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"95ec5bd7063cf07b2703668552a45e103743ff7bfe7b9302a61ad3010dbeff4e"} err="failed to get container status \"95ec5bd7063cf07b2703668552a45e103743ff7bfe7b9302a61ad3010dbeff4e\": rpc error: code = NotFound desc = could not find container \"95ec5bd7063cf07b2703668552a45e103743ff7bfe7b9302a61ad3010dbeff4e\": container with ID starting with 
95ec5bd7063cf07b2703668552a45e103743ff7bfe7b9302a61ad3010dbeff4e not found: ID does not exist" Feb 03 12:21:51 crc kubenswrapper[4820]: I0203 12:21:51.151643 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fa121a10-4a24-4b39-a08b-9c29374c3dab" path="/var/lib/kubelet/pods/fa121a10-4a24-4b39-a08b-9c29374c3dab/volumes" Feb 03 12:21:52 crc kubenswrapper[4820]: I0203 12:21:52.288400 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-mbr9t" Feb 03 12:21:52 crc kubenswrapper[4820]: I0203 12:21:52.288455 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-mbr9t" Feb 03 12:21:52 crc kubenswrapper[4820]: I0203 12:21:52.369964 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-mbr9t" Feb 03 12:21:53 crc kubenswrapper[4820]: I0203 12:21:53.360832 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5kkw4" event={"ID":"3f44091b-5eff-4787-88ca-5fe14742234e","Type":"ContainerStarted","Data":"66a0a6be5768eb9c47b97d7e8533bcc991afb439cc7bbb600c525d6c90e60dab"} Feb 03 12:21:53 crc kubenswrapper[4820]: I0203 12:21:53.396960 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-5kkw4" podStartSLOduration=2.781795099 podStartE2EDuration="7.396930867s" podCreationTimestamp="2026-02-03 12:21:46 +0000 UTC" firstStartedPulling="2026-02-03 12:21:48.314348851 +0000 UTC m=+1025.837424705" lastFinishedPulling="2026-02-03 12:21:52.929484609 +0000 UTC m=+1030.452560473" observedRunningTime="2026-02-03 12:21:53.393866496 +0000 UTC m=+1030.916942380" watchObservedRunningTime="2026-02-03 12:21:53.396930867 +0000 UTC m=+1030.920006751" Feb 03 12:21:53 crc kubenswrapper[4820]: I0203 12:21:53.439566 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-mbr9t" Feb 03 12:21:55 crc kubenswrapper[4820]: I0203 12:21:55.536185 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mbr9t"] Feb 03 12:21:55 crc kubenswrapper[4820]: I0203 12:21:55.536672 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-mbr9t" podUID="42463ed4-34a9-4219-af54-a17ef5f5e788" containerName="registry-server" containerID="cri-o://80cb36695ad51ea96833e4e89a00447dfed210c5689a3f0518dca25410645f8e" gracePeriod=2 Feb 03 12:21:56 crc kubenswrapper[4820]: I0203 12:21:56.379768 4820 generic.go:334] "Generic (PLEG): container finished" podID="42463ed4-34a9-4219-af54-a17ef5f5e788" containerID="80cb36695ad51ea96833e4e89a00447dfed210c5689a3f0518dca25410645f8e" exitCode=0 Feb 03 12:21:56 crc kubenswrapper[4820]: I0203 12:21:56.379842 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mbr9t" event={"ID":"42463ed4-34a9-4219-af54-a17ef5f5e788","Type":"ContainerDied","Data":"80cb36695ad51ea96833e4e89a00447dfed210c5689a3f0518dca25410645f8e"} Feb 03 12:21:56 crc kubenswrapper[4820]: I0203 12:21:56.432366 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-mbr9t" Feb 03 12:21:56 crc kubenswrapper[4820]: I0203 12:21:56.540463 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lt849\" (UniqueName: \"kubernetes.io/projected/42463ed4-34a9-4219-af54-a17ef5f5e788-kube-api-access-lt849\") pod \"42463ed4-34a9-4219-af54-a17ef5f5e788\" (UID: \"42463ed4-34a9-4219-af54-a17ef5f5e788\") " Feb 03 12:21:56 crc kubenswrapper[4820]: I0203 12:21:56.540565 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42463ed4-34a9-4219-af54-a17ef5f5e788-catalog-content\") pod \"42463ed4-34a9-4219-af54-a17ef5f5e788\" (UID: \"42463ed4-34a9-4219-af54-a17ef5f5e788\") " Feb 03 12:21:56 crc kubenswrapper[4820]: I0203 12:21:56.540617 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42463ed4-34a9-4219-af54-a17ef5f5e788-utilities\") pod \"42463ed4-34a9-4219-af54-a17ef5f5e788\" (UID: \"42463ed4-34a9-4219-af54-a17ef5f5e788\") " Feb 03 12:21:56 crc kubenswrapper[4820]: I0203 12:21:56.541406 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/42463ed4-34a9-4219-af54-a17ef5f5e788-utilities" (OuterVolumeSpecName: "utilities") pod "42463ed4-34a9-4219-af54-a17ef5f5e788" (UID: "42463ed4-34a9-4219-af54-a17ef5f5e788"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 12:21:56 crc kubenswrapper[4820]: I0203 12:21:56.545834 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42463ed4-34a9-4219-af54-a17ef5f5e788-kube-api-access-lt849" (OuterVolumeSpecName: "kube-api-access-lt849") pod "42463ed4-34a9-4219-af54-a17ef5f5e788" (UID: "42463ed4-34a9-4219-af54-a17ef5f5e788"). InnerVolumeSpecName "kube-api-access-lt849". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:21:56 crc kubenswrapper[4820]: I0203 12:21:56.586355 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/42463ed4-34a9-4219-af54-a17ef5f5e788-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "42463ed4-34a9-4219-af54-a17ef5f5e788" (UID: "42463ed4-34a9-4219-af54-a17ef5f5e788"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 12:21:56 crc kubenswrapper[4820]: I0203 12:21:56.642113 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lt849\" (UniqueName: \"kubernetes.io/projected/42463ed4-34a9-4219-af54-a17ef5f5e788-kube-api-access-lt849\") on node \"crc\" DevicePath \"\"" Feb 03 12:21:56 crc kubenswrapper[4820]: I0203 12:21:56.642154 4820 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42463ed4-34a9-4219-af54-a17ef5f5e788-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 03 12:21:56 crc kubenswrapper[4820]: I0203 12:21:56.642163 4820 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42463ed4-34a9-4219-af54-a17ef5f5e788-utilities\") on node \"crc\" DevicePath \"\"" Feb 03 12:21:57 crc kubenswrapper[4820]: I0203 12:21:57.067128 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-5kkw4" Feb 03 12:21:57 crc kubenswrapper[4820]: I0203 12:21:57.067431 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-5kkw4" Feb 03 12:21:57 crc kubenswrapper[4820]: I0203 12:21:57.116160 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-5kkw4" Feb 03 12:21:57 crc kubenswrapper[4820]: I0203 12:21:57.388705 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mbr9t" Feb 03 12:21:57 crc kubenswrapper[4820]: I0203 12:21:57.389122 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mbr9t" event={"ID":"42463ed4-34a9-4219-af54-a17ef5f5e788","Type":"ContainerDied","Data":"6c000f3fcd3aa0138b2336eb12b558de2c12fcf7aeac882c0104c349a41214bf"} Feb 03 12:21:57 crc kubenswrapper[4820]: I0203 12:21:57.389159 4820 scope.go:117] "RemoveContainer" containerID="80cb36695ad51ea96833e4e89a00447dfed210c5689a3f0518dca25410645f8e" Feb 03 12:21:57 crc kubenswrapper[4820]: I0203 12:21:57.410137 4820 scope.go:117] "RemoveContainer" containerID="9a5658d0482aa59f70005b094eda0ce25b97fecc2651e86df520d24008901ce1" Feb 03 12:21:57 crc kubenswrapper[4820]: I0203 12:21:57.412957 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mbr9t"] Feb 03 12:21:57 crc kubenswrapper[4820]: I0203 12:21:57.417061 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-mbr9t"] Feb 03 12:21:57 crc kubenswrapper[4820]: I0203 12:21:57.438595 4820 scope.go:117] "RemoveContainer" containerID="734b6eb3013024223cb59e1e7d3fb6df39dcfa40bebd10ac512fd6d56182d9a0" Feb 03 12:21:59 crc kubenswrapper[4820]: I0203 12:21:59.151387 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42463ed4-34a9-4219-af54-a17ef5f5e788" path="/var/lib/kubelet/pods/42463ed4-34a9-4219-af54-a17ef5f5e788/volumes" Feb 03 12:22:07 crc kubenswrapper[4820]: I0203 12:22:07.125128 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-5kkw4" Feb 03 12:22:07 crc kubenswrapper[4820]: I0203 12:22:07.178010 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5kkw4"] Feb 03 12:22:07 crc kubenswrapper[4820]: I0203 12:22:07.552209 4820 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="openshift-marketplace/community-operators-5kkw4" podUID="3f44091b-5eff-4787-88ca-5fe14742234e" containerName="registry-server" containerID="cri-o://66a0a6be5768eb9c47b97d7e8533bcc991afb439cc7bbb600c525d6c90e60dab" gracePeriod=2 Feb 03 12:22:08 crc kubenswrapper[4820]: I0203 12:22:08.236748 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5kkw4" Feb 03 12:22:08 crc kubenswrapper[4820]: I0203 12:22:08.258693 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-276tv\" (UniqueName: \"kubernetes.io/projected/3f44091b-5eff-4787-88ca-5fe14742234e-kube-api-access-276tv\") pod \"3f44091b-5eff-4787-88ca-5fe14742234e\" (UID: \"3f44091b-5eff-4787-88ca-5fe14742234e\") " Feb 03 12:22:08 crc kubenswrapper[4820]: I0203 12:22:08.258766 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f44091b-5eff-4787-88ca-5fe14742234e-catalog-content\") pod \"3f44091b-5eff-4787-88ca-5fe14742234e\" (UID: \"3f44091b-5eff-4787-88ca-5fe14742234e\") " Feb 03 12:22:08 crc kubenswrapper[4820]: I0203 12:22:08.258814 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f44091b-5eff-4787-88ca-5fe14742234e-utilities\") pod \"3f44091b-5eff-4787-88ca-5fe14742234e\" (UID: \"3f44091b-5eff-4787-88ca-5fe14742234e\") " Feb 03 12:22:08 crc kubenswrapper[4820]: I0203 12:22:08.260042 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3f44091b-5eff-4787-88ca-5fe14742234e-utilities" (OuterVolumeSpecName: "utilities") pod "3f44091b-5eff-4787-88ca-5fe14742234e" (UID: "3f44091b-5eff-4787-88ca-5fe14742234e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 12:22:08 crc kubenswrapper[4820]: I0203 12:22:08.266743 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f44091b-5eff-4787-88ca-5fe14742234e-kube-api-access-276tv" (OuterVolumeSpecName: "kube-api-access-276tv") pod "3f44091b-5eff-4787-88ca-5fe14742234e" (UID: "3f44091b-5eff-4787-88ca-5fe14742234e"). InnerVolumeSpecName "kube-api-access-276tv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:22:08 crc kubenswrapper[4820]: I0203 12:22:08.318639 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3f44091b-5eff-4787-88ca-5fe14742234e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3f44091b-5eff-4787-88ca-5fe14742234e" (UID: "3f44091b-5eff-4787-88ca-5fe14742234e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 12:22:08 crc kubenswrapper[4820]: I0203 12:22:08.361136 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-276tv\" (UniqueName: \"kubernetes.io/projected/3f44091b-5eff-4787-88ca-5fe14742234e-kube-api-access-276tv\") on node \"crc\" DevicePath \"\"" Feb 03 12:22:08 crc kubenswrapper[4820]: I0203 12:22:08.361213 4820 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f44091b-5eff-4787-88ca-5fe14742234e-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 03 12:22:08 crc kubenswrapper[4820]: I0203 12:22:08.361232 4820 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f44091b-5eff-4787-88ca-5fe14742234e-utilities\") on node \"crc\" DevicePath \"\"" Feb 03 12:22:08 crc kubenswrapper[4820]: I0203 12:22:08.561870 4820 generic.go:334] "Generic (PLEG): container finished" podID="3f44091b-5eff-4787-88ca-5fe14742234e" containerID="66a0a6be5768eb9c47b97d7e8533bcc991afb439cc7bbb600c525d6c90e60dab" exitCode=0 Feb 03 12:22:08 crc kubenswrapper[4820]: I0203 12:22:08.561930 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5kkw4" event={"ID":"3f44091b-5eff-4787-88ca-5fe14742234e","Type":"ContainerDied","Data":"66a0a6be5768eb9c47b97d7e8533bcc991afb439cc7bbb600c525d6c90e60dab"} Feb 03 12:22:08 crc kubenswrapper[4820]: I0203 12:22:08.561958 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5kkw4" event={"ID":"3f44091b-5eff-4787-88ca-5fe14742234e","Type":"ContainerDied","Data":"1ad092eaa77575b11d178327ec9e8a220b46a8704f273c97c7aa3e59fead9d01"} Feb 03 12:22:08 crc kubenswrapper[4820]: I0203 12:22:08.561975 4820 scope.go:117] "RemoveContainer" containerID="66a0a6be5768eb9c47b97d7e8533bcc991afb439cc7bbb600c525d6c90e60dab" Feb 03 12:22:08 crc kubenswrapper[4820]: I0203 12:22:08.562124 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5kkw4" Feb 03 12:22:08 crc kubenswrapper[4820]: I0203 12:22:08.586412 4820 scope.go:117] "RemoveContainer" containerID="1bb9a0df538f632ccd6d46e0488eb848b8efe71b2985067aa6596fb5a9ee7a46" Feb 03 12:22:08 crc kubenswrapper[4820]: I0203 12:22:08.595705 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5kkw4"] Feb 03 12:22:08 crc kubenswrapper[4820]: I0203 12:22:08.602946 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-5kkw4"] Feb 03 12:22:08 crc kubenswrapper[4820]: I0203 12:22:08.613757 4820 scope.go:117] "RemoveContainer" containerID="4c88200867b5bdb7f32fbf9deaa067cbda7743170d5c356849c4283845e48882" Feb 03 12:22:08 crc kubenswrapper[4820]: I0203 12:22:08.629085 4820 scope.go:117] "RemoveContainer" containerID="66a0a6be5768eb9c47b97d7e8533bcc991afb439cc7bbb600c525d6c90e60dab" Feb 03 12:22:08 crc kubenswrapper[4820]: E0203 12:22:08.629642 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"66a0a6be5768eb9c47b97d7e8533bcc991afb439cc7bbb600c525d6c90e60dab\": container with ID starting with 66a0a6be5768eb9c47b97d7e8533bcc991afb439cc7bbb600c525d6c90e60dab not found: ID does not exist" containerID="66a0a6be5768eb9c47b97d7e8533bcc991afb439cc7bbb600c525d6c90e60dab" Feb 03 12:22:08 crc kubenswrapper[4820]: I0203 12:22:08.629688 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"66a0a6be5768eb9c47b97d7e8533bcc991afb439cc7bbb600c525d6c90e60dab"} err="failed to get container status \"66a0a6be5768eb9c47b97d7e8533bcc991afb439cc7bbb600c525d6c90e60dab\": rpc error: code = NotFound desc = could not find container \"66a0a6be5768eb9c47b97d7e8533bcc991afb439cc7bbb600c525d6c90e60dab\": container with ID starting with 66a0a6be5768eb9c47b97d7e8533bcc991afb439cc7bbb600c525d6c90e60dab not found: ID does not exist" Feb 03 12:22:08 crc kubenswrapper[4820]: I0203 12:22:08.629719 4820 scope.go:117] "RemoveContainer" containerID="1bb9a0df538f632ccd6d46e0488eb848b8efe71b2985067aa6596fb5a9ee7a46" Feb 03 12:22:08 crc kubenswrapper[4820]: E0203 12:22:08.630110 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1bb9a0df538f632ccd6d46e0488eb848b8efe71b2985067aa6596fb5a9ee7a46\": container with ID starting with 1bb9a0df538f632ccd6d46e0488eb848b8efe71b2985067aa6596fb5a9ee7a46 not found: ID does not exist" containerID="1bb9a0df538f632ccd6d46e0488eb848b8efe71b2985067aa6596fb5a9ee7a46" Feb 03 12:22:08 crc kubenswrapper[4820]: I0203 12:22:08.630149 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1bb9a0df538f632ccd6d46e0488eb848b8efe71b2985067aa6596fb5a9ee7a46"} err="failed to get container status \"1bb9a0df538f632ccd6d46e0488eb848b8efe71b2985067aa6596fb5a9ee7a46\": rpc error: code = NotFound desc = could not find container \"1bb9a0df538f632ccd6d46e0488eb848b8efe71b2985067aa6596fb5a9ee7a46\": container with ID starting with 1bb9a0df538f632ccd6d46e0488eb848b8efe71b2985067aa6596fb5a9ee7a46 not found: ID does not exist" Feb 03 12:22:08 crc kubenswrapper[4820]: I0203 12:22:08.630174 4820 scope.go:117] "RemoveContainer" containerID="4c88200867b5bdb7f32fbf9deaa067cbda7743170d5c356849c4283845e48882" Feb 03 12:22:08 crc kubenswrapper[4820]: E0203 12:22:08.630566 4820 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"4c88200867b5bdb7f32fbf9deaa067cbda7743170d5c356849c4283845e48882\": container with ID starting with 4c88200867b5bdb7f32fbf9deaa067cbda7743170d5c356849c4283845e48882 not found: ID does not exist" containerID="4c88200867b5bdb7f32fbf9deaa067cbda7743170d5c356849c4283845e48882" Feb 03 12:22:08 crc kubenswrapper[4820]: I0203 12:22:08.630638 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c88200867b5bdb7f32fbf9deaa067cbda7743170d5c356849c4283845e48882"} err="failed to get container status \"4c88200867b5bdb7f32fbf9deaa067cbda7743170d5c356849c4283845e48882\": rpc error: code = NotFound desc = could not find container \"4c88200867b5bdb7f32fbf9deaa067cbda7743170d5c356849c4283845e48882\": container with ID starting with 4c88200867b5bdb7f32fbf9deaa067cbda7743170d5c356849c4283845e48882 not found: ID does not exist" Feb 03 12:22:09 crc kubenswrapper[4820]: I0203 12:22:09.150668 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f44091b-5eff-4787-88ca-5fe14742234e" path="/var/lib/kubelet/pods/3f44091b-5eff-4787-88ca-5fe14742234e/volumes" Feb 03 12:22:11 crc kubenswrapper[4820]: I0203 12:22:11.160237 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713dz8hh"] Feb 03 12:22:11 crc kubenswrapper[4820]: E0203 12:22:11.160577 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f44091b-5eff-4787-88ca-5fe14742234e" containerName="registry-server" Feb 03 12:22:11 crc kubenswrapper[4820]: I0203 12:22:11.160592 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f44091b-5eff-4787-88ca-5fe14742234e" containerName="registry-server" Feb 03 12:22:11 crc kubenswrapper[4820]: E0203 12:22:11.160613 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42463ed4-34a9-4219-af54-a17ef5f5e788" containerName="extract-utilities" Feb 03 12:22:11 crc kubenswrapper[4820]: I0203 12:22:11.160620 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="42463ed4-34a9-4219-af54-a17ef5f5e788" containerName="extract-utilities" Feb 03 12:22:11 crc kubenswrapper[4820]: E0203 12:22:11.160638 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42463ed4-34a9-4219-af54-a17ef5f5e788" containerName="extract-content" Feb 03 12:22:11 crc kubenswrapper[4820]: I0203 12:22:11.160644 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="42463ed4-34a9-4219-af54-a17ef5f5e788" containerName="extract-content" Feb 03 12:22:11 crc kubenswrapper[4820]: E0203 12:22:11.160661 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f44091b-5eff-4787-88ca-5fe14742234e" containerName="extract-content" Feb 03 12:22:11 crc kubenswrapper[4820]: I0203 12:22:11.160668 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f44091b-5eff-4787-88ca-5fe14742234e" containerName="extract-content" Feb 03 12:22:11 crc kubenswrapper[4820]: E0203 12:22:11.160684 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa121a10-4a24-4b39-a08b-9c29374c3dab" containerName="extract-utilities" Feb 03 12:22:11 crc kubenswrapper[4820]: I0203 12:22:11.160693 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa121a10-4a24-4b39-a08b-9c29374c3dab" containerName="extract-utilities" Feb 03 12:22:11 crc kubenswrapper[4820]: E0203 12:22:11.160709 4820 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="3f44091b-5eff-4787-88ca-5fe14742234e" containerName="extract-utilities" Feb 03 12:22:11 crc kubenswrapper[4820]: I0203 12:22:11.160720 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f44091b-5eff-4787-88ca-5fe14742234e" containerName="extract-utilities" Feb 03 12:22:11 crc kubenswrapper[4820]: E0203 12:22:11.160731 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42463ed4-34a9-4219-af54-a17ef5f5e788" containerName="registry-server" Feb 03 12:22:11 crc kubenswrapper[4820]: I0203 12:22:11.160742 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="42463ed4-34a9-4219-af54-a17ef5f5e788" containerName="registry-server" Feb 03 12:22:11 crc kubenswrapper[4820]: E0203 12:22:11.160756 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa121a10-4a24-4b39-a08b-9c29374c3dab" containerName="extract-content" Feb 03 12:22:11 crc kubenswrapper[4820]: I0203 12:22:11.160762 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa121a10-4a24-4b39-a08b-9c29374c3dab" containerName="extract-content" Feb 03 12:22:11 crc kubenswrapper[4820]: E0203 12:22:11.160779 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa121a10-4a24-4b39-a08b-9c29374c3dab" containerName="registry-server" Feb 03 12:22:11 crc kubenswrapper[4820]: I0203 12:22:11.160787 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa121a10-4a24-4b39-a08b-9c29374c3dab" containerName="registry-server" Feb 03 12:22:11 crc kubenswrapper[4820]: I0203 12:22:11.160955 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="42463ed4-34a9-4219-af54-a17ef5f5e788" containerName="registry-server" Feb 03 12:22:11 crc kubenswrapper[4820]: I0203 12:22:11.160985 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f44091b-5eff-4787-88ca-5fe14742234e" containerName="registry-server" Feb 03 12:22:11 crc kubenswrapper[4820]: I0203 12:22:11.161001 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa121a10-4a24-4b39-a08b-9c29374c3dab" containerName="registry-server" Feb 03 12:22:11 crc kubenswrapper[4820]: I0203 12:22:11.162096 4820 util.go:30] "No sandbox for pod can be found. 
Feb 03 12:22:11 crc kubenswrapper[4820]: I0203 12:22:11.166119 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc"
Feb 03 12:22:11 crc kubenswrapper[4820]: I0203 12:22:11.184405 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713dz8hh"]
Feb 03 12:22:11 crc kubenswrapper[4820]: I0203 12:22:11.245760 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5dpm9\" (UniqueName: \"kubernetes.io/projected/dd3be9c9-9970-4055-b150-fb5ad093ef1e-kube-api-access-5dpm9\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713dz8hh\" (UID: \"dd3be9c9-9970-4055-b150-fb5ad093ef1e\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713dz8hh"
Feb 03 12:22:11 crc kubenswrapper[4820]: I0203 12:22:11.246149 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/dd3be9c9-9970-4055-b150-fb5ad093ef1e-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713dz8hh\" (UID: \"dd3be9c9-9970-4055-b150-fb5ad093ef1e\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713dz8hh"
Feb 03 12:22:11 crc kubenswrapper[4820]: I0203 12:22:11.246218 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/dd3be9c9-9970-4055-b150-fb5ad093ef1e-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713dz8hh\" (UID: \"dd3be9c9-9970-4055-b150-fb5ad093ef1e\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713dz8hh"
Feb 03 12:22:11 crc kubenswrapper[4820]: I0203 12:22:11.348793 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/dd3be9c9-9970-4055-b150-fb5ad093ef1e-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713dz8hh\" (UID: \"dd3be9c9-9970-4055-b150-fb5ad093ef1e\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713dz8hh"
Feb 03 12:22:11 crc kubenswrapper[4820]: I0203 12:22:11.348903 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/dd3be9c9-9970-4055-b150-fb5ad093ef1e-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713dz8hh\" (UID: \"dd3be9c9-9970-4055-b150-fb5ad093ef1e\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713dz8hh"
Feb 03 12:22:11 crc kubenswrapper[4820]: I0203 12:22:11.349013 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5dpm9\" (UniqueName: \"kubernetes.io/projected/dd3be9c9-9970-4055-b150-fb5ad093ef1e-kube-api-access-5dpm9\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713dz8hh\" (UID: \"dd3be9c9-9970-4055-b150-fb5ad093ef1e\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713dz8hh"
Feb 03 12:22:11 crc kubenswrapper[4820]: I0203 12:22:11.350130 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/dd3be9c9-9970-4055-b150-fb5ad093ef1e-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713dz8hh\" (UID: \"dd3be9c9-9970-4055-b150-fb5ad093ef1e\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713dz8hh"
Feb 03 12:22:11 crc kubenswrapper[4820]: I0203 12:22:11.350162 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/dd3be9c9-9970-4055-b150-fb5ad093ef1e-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713dz8hh\" (UID: \"dd3be9c9-9970-4055-b150-fb5ad093ef1e\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713dz8hh"
Feb 03 12:22:11 crc kubenswrapper[4820]: I0203 12:22:11.384770 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5dpm9\" (UniqueName: \"kubernetes.io/projected/dd3be9c9-9970-4055-b150-fb5ad093ef1e-kube-api-access-5dpm9\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713dz8hh\" (UID: \"dd3be9c9-9970-4055-b150-fb5ad093ef1e\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713dz8hh"
Feb 03 12:22:11 crc kubenswrapper[4820]: I0203 12:22:11.488743 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713dz8hh"
Feb 03 12:22:12 crc kubenswrapper[4820]: I0203 12:22:12.476580 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713dz8hh"]
Feb 03 12:22:12 crc kubenswrapper[4820]: I0203 12:22:12.600481 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713dz8hh" event={"ID":"dd3be9c9-9970-4055-b150-fb5ad093ef1e","Type":"ContainerStarted","Data":"ec73da1a8602936d7ca27e1a340c2afccbb871600a18241cc0d2c86442ed771d"}
Feb 03 12:22:13 crc kubenswrapper[4820]: I0203 12:22:13.609369 4820 generic.go:334] "Generic (PLEG): container finished" podID="dd3be9c9-9970-4055-b150-fb5ad093ef1e" containerID="e64c4b9ef5067fd59dadd0f2cf45c474e698016d8f2aa070acbedf73edb55dc3" exitCode=0
Feb 03 12:22:13 crc kubenswrapper[4820]: I0203 12:22:13.609439 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713dz8hh" event={"ID":"dd3be9c9-9970-4055-b150-fb5ad093ef1e","Type":"ContainerDied","Data":"e64c4b9ef5067fd59dadd0f2cf45c474e698016d8f2aa070acbedf73edb55dc3"}
Feb 03 12:22:16 crc kubenswrapper[4820]: I0203 12:22:16.629228 4820 generic.go:334] "Generic (PLEG): container finished" podID="dd3be9c9-9970-4055-b150-fb5ad093ef1e" containerID="d0299789b8aea253efa06e99b6e4e740f86fa0583f74b47559a558e5cbbdd99b" exitCode=0
Feb 03 12:22:16 crc kubenswrapper[4820]: I0203 12:22:16.629274 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713dz8hh" event={"ID":"dd3be9c9-9970-4055-b150-fb5ad093ef1e","Type":"ContainerDied","Data":"d0299789b8aea253efa06e99b6e4e740f86fa0583f74b47559a558e5cbbdd99b"}
Feb 03 12:22:17 crc kubenswrapper[4820]: I0203 12:22:17.638847 4820 generic.go:334] "Generic (PLEG): container finished" podID="dd3be9c9-9970-4055-b150-fb5ad093ef1e" containerID="c87be60689ee767337b31eea66f51d793ef8d1f353030fdef3b6b25262238577" exitCode=0
Feb 03 12:22:17 crc kubenswrapper[4820]: I0203 12:22:17.638927 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713dz8hh" event={"ID":"dd3be9c9-9970-4055-b150-fb5ad093ef1e","Type":"ContainerDied","Data":"c87be60689ee767337b31eea66f51d793ef8d1f353030fdef3b6b25262238577"}
Feb 03 12:22:19 crc kubenswrapper[4820]: I0203 12:22:19.002984 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713dz8hh"
Feb 03 12:22:19 crc kubenswrapper[4820]: I0203 12:22:19.043210 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5dpm9\" (UniqueName: \"kubernetes.io/projected/dd3be9c9-9970-4055-b150-fb5ad093ef1e-kube-api-access-5dpm9\") pod \"dd3be9c9-9970-4055-b150-fb5ad093ef1e\" (UID: \"dd3be9c9-9970-4055-b150-fb5ad093ef1e\") "
Feb 03 12:22:19 crc kubenswrapper[4820]: I0203 12:22:19.043310 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/dd3be9c9-9970-4055-b150-fb5ad093ef1e-bundle\") pod \"dd3be9c9-9970-4055-b150-fb5ad093ef1e\" (UID: \"dd3be9c9-9970-4055-b150-fb5ad093ef1e\") "
Feb 03 12:22:19 crc kubenswrapper[4820]: I0203 12:22:19.043343 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/dd3be9c9-9970-4055-b150-fb5ad093ef1e-util\") pod \"dd3be9c9-9970-4055-b150-fb5ad093ef1e\" (UID: \"dd3be9c9-9970-4055-b150-fb5ad093ef1e\") "
Feb 03 12:22:19 crc kubenswrapper[4820]: I0203 12:22:19.043994 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dd3be9c9-9970-4055-b150-fb5ad093ef1e-bundle" (OuterVolumeSpecName: "bundle") pod "dd3be9c9-9970-4055-b150-fb5ad093ef1e" (UID: "dd3be9c9-9970-4055-b150-fb5ad093ef1e"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 03 12:22:19 crc kubenswrapper[4820]: I0203 12:22:19.049753 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd3be9c9-9970-4055-b150-fb5ad093ef1e-kube-api-access-5dpm9" (OuterVolumeSpecName: "kube-api-access-5dpm9") pod "dd3be9c9-9970-4055-b150-fb5ad093ef1e" (UID: "dd3be9c9-9970-4055-b150-fb5ad093ef1e"). InnerVolumeSpecName "kube-api-access-5dpm9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 12:22:19 crc kubenswrapper[4820]: I0203 12:22:19.068119 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dd3be9c9-9970-4055-b150-fb5ad093ef1e-util" (OuterVolumeSpecName: "util") pod "dd3be9c9-9970-4055-b150-fb5ad093ef1e" (UID: "dd3be9c9-9970-4055-b150-fb5ad093ef1e"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 03 12:22:19 crc kubenswrapper[4820]: I0203 12:22:19.144238 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5dpm9\" (UniqueName: \"kubernetes.io/projected/dd3be9c9-9970-4055-b150-fb5ad093ef1e-kube-api-access-5dpm9\") on node \"crc\" DevicePath \"\""
Feb 03 12:22:19 crc kubenswrapper[4820]: I0203 12:22:19.144288 4820 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/dd3be9c9-9970-4055-b150-fb5ad093ef1e-bundle\") on node \"crc\" DevicePath \"\""
Feb 03 12:22:19 crc kubenswrapper[4820]: I0203 12:22:19.144300 4820 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/dd3be9c9-9970-4055-b150-fb5ad093ef1e-util\") on node \"crc\" DevicePath \"\""
Feb 03 12:22:19 crc kubenswrapper[4820]: I0203 12:22:19.654841 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713dz8hh" event={"ID":"dd3be9c9-9970-4055-b150-fb5ad093ef1e","Type":"ContainerDied","Data":"ec73da1a8602936d7ca27e1a340c2afccbb871600a18241cc0d2c86442ed771d"}
Feb 03 12:22:19 crc kubenswrapper[4820]: I0203 12:22:19.654904 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ec73da1a8602936d7ca27e1a340c2afccbb871600a18241cc0d2c86442ed771d"
Feb 03 12:22:19 crc kubenswrapper[4820]: I0203 12:22:19.654926 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713dz8hh"
Feb 03 12:22:22 crc kubenswrapper[4820]: I0203 12:22:22.496329 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-gcnrh"]
Feb 03 12:22:22 crc kubenswrapper[4820]: E0203 12:22:22.496921 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd3be9c9-9970-4055-b150-fb5ad093ef1e" containerName="extract"
Feb 03 12:22:22 crc kubenswrapper[4820]: I0203 12:22:22.496935 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd3be9c9-9970-4055-b150-fb5ad093ef1e" containerName="extract"
Feb 03 12:22:22 crc kubenswrapper[4820]: E0203 12:22:22.496956 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd3be9c9-9970-4055-b150-fb5ad093ef1e" containerName="pull"
Feb 03 12:22:22 crc kubenswrapper[4820]: I0203 12:22:22.496963 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd3be9c9-9970-4055-b150-fb5ad093ef1e" containerName="pull"
Feb 03 12:22:22 crc kubenswrapper[4820]: E0203 12:22:22.496978 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd3be9c9-9970-4055-b150-fb5ad093ef1e" containerName="util"
Feb 03 12:22:22 crc kubenswrapper[4820]: I0203 12:22:22.496985 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd3be9c9-9970-4055-b150-fb5ad093ef1e" containerName="util"
Feb 03 12:22:22 crc kubenswrapper[4820]: I0203 12:22:22.497115 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd3be9c9-9970-4055-b150-fb5ad093ef1e" containerName="extract"
Feb 03 12:22:22 crc kubenswrapper[4820]: I0203 12:22:22.497605 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-gcnrh"
Feb 03 12:22:22 crc kubenswrapper[4820]: I0203 12:22:22.499617 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt"
Feb 03 12:22:22 crc kubenswrapper[4820]: I0203 12:22:22.500795 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt"
Feb 03 12:22:22 crc kubenswrapper[4820]: I0203 12:22:22.501467 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-ddx5c"
Feb 03 12:22:22 crc kubenswrapper[4820]: I0203 12:22:22.509051 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-gcnrh"]
Feb 03 12:22:22 crc kubenswrapper[4820]: I0203 12:22:22.667231 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dwrmt\" (UniqueName: \"kubernetes.io/projected/3cc69a01-8e9a-4d98-9568-841c499eb0f0-kube-api-access-dwrmt\") pod \"nmstate-operator-646758c888-gcnrh\" (UID: \"3cc69a01-8e9a-4d98-9568-841c499eb0f0\") " pod="openshift-nmstate/nmstate-operator-646758c888-gcnrh"
Feb 03 12:22:22 crc kubenswrapper[4820]: I0203 12:22:22.769008 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dwrmt\" (UniqueName: \"kubernetes.io/projected/3cc69a01-8e9a-4d98-9568-841c499eb0f0-kube-api-access-dwrmt\") pod \"nmstate-operator-646758c888-gcnrh\" (UID: \"3cc69a01-8e9a-4d98-9568-841c499eb0f0\") " pod="openshift-nmstate/nmstate-operator-646758c888-gcnrh"
Feb 03 12:22:22 crc kubenswrapper[4820]: I0203 12:22:22.974832 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dwrmt\" (UniqueName: \"kubernetes.io/projected/3cc69a01-8e9a-4d98-9568-841c499eb0f0-kube-api-access-dwrmt\") pod \"nmstate-operator-646758c888-gcnrh\" (UID: \"3cc69a01-8e9a-4d98-9568-841c499eb0f0\") " pod="openshift-nmstate/nmstate-operator-646758c888-gcnrh"
Feb 03 12:22:23 crc kubenswrapper[4820]: I0203 12:22:23.114239 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-gcnrh"
Feb 03 12:22:23 crc kubenswrapper[4820]: I0203 12:22:23.359421 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-gcnrh"]
Feb 03 12:22:23 crc kubenswrapper[4820]: I0203 12:22:23.680943 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-gcnrh" event={"ID":"3cc69a01-8e9a-4d98-9568-841c499eb0f0","Type":"ContainerStarted","Data":"467c34abd79e64d740b13863ceac2482c5e6a10cc8865ecb6a2709d268026182"}
Feb 03 12:22:26 crc kubenswrapper[4820]: I0203 12:22:26.702536 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-gcnrh" event={"ID":"3cc69a01-8e9a-4d98-9568-841c499eb0f0","Type":"ContainerStarted","Data":"0abab53dbe1a6c3c1a8afbfe33f7dbaeaba25cfd538aa69640f89533ce364403"}
Feb 03 12:22:26 crc kubenswrapper[4820]: I0203 12:22:26.733930 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-646758c888-gcnrh" podStartSLOduration=1.816185127 podStartE2EDuration="4.733869229s" podCreationTimestamp="2026-02-03 12:22:22 +0000 UTC" firstStartedPulling="2026-02-03 12:22:23.372766619 +0000 UTC m=+1060.895842473" lastFinishedPulling="2026-02-03 12:22:26.290450711 +0000 UTC m=+1063.813526575" observedRunningTime="2026-02-03 12:22:26.727088876 +0000 UTC m=+1064.250164760" watchObservedRunningTime="2026-02-03 12:22:26.733869229 +0000 UTC m=+1064.256945103"
Feb 03 12:22:32 crc kubenswrapper[4820]: I0203 12:22:32.710858 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-vprcr"]
Feb 03 12:22:32 crc kubenswrapper[4820]: I0203 12:22:32.712805 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-vprcr"
Feb 03 12:22:32 crc kubenswrapper[4820]: I0203 12:22:32.720000 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-brvpb"
Feb 03 12:22:32 crc kubenswrapper[4820]: I0203 12:22:32.730481 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-vprcr"]
Feb 03 12:22:32 crc kubenswrapper[4820]: I0203 12:22:32.736631 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-2tnxr"]
Feb 03 12:22:32 crc kubenswrapper[4820]: I0203 12:22:32.738541 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-2tnxr"
Feb 03 12:22:32 crc kubenswrapper[4820]: I0203 12:22:32.856077 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vk6h4\" (UniqueName: \"kubernetes.io/projected/25a587ed-7ff6-4ffd-b2ad-5a88a81c7867-kube-api-access-vk6h4\") pod \"nmstate-metrics-54757c584b-vprcr\" (UID: \"25a587ed-7ff6-4ffd-b2ad-5a88a81c7867\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-vprcr"
Feb 03 12:22:33 crc kubenswrapper[4820]: I0203 12:22:32.856147 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-75x5t\" (UniqueName: \"kubernetes.io/projected/23a0cc00-e454-4afc-82bb-0d79c0b76324-kube-api-access-75x5t\") pod \"nmstate-webhook-8474b5b9d8-2tnxr\" (UID: \"23a0cc00-e454-4afc-82bb-0d79c0b76324\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-2tnxr"
Feb 03 12:22:33 crc kubenswrapper[4820]: I0203 12:22:32.856238 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/23a0cc00-e454-4afc-82bb-0d79c0b76324-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-2tnxr\" (UID: \"23a0cc00-e454-4afc-82bb-0d79c0b76324\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-2tnxr"
Feb 03 12:22:33 crc kubenswrapper[4820]: I0203 12:22:33.029761 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook"
Feb 03 12:22:33 crc kubenswrapper[4820]: E0203 12:22:33.031113 4820 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found
Feb 03 12:22:33 crc kubenswrapper[4820]: E0203 12:22:33.031210 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/23a0cc00-e454-4afc-82bb-0d79c0b76324-tls-key-pair podName:23a0cc00-e454-4afc-82bb-0d79c0b76324 nodeName:}" failed. No retries permitted until 2026-02-03 12:22:33.531184392 +0000 UTC m=+1071.054260266 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/23a0cc00-e454-4afc-82bb-0d79c0b76324-tls-key-pair") pod "nmstate-webhook-8474b5b9d8-2tnxr" (UID: "23a0cc00-e454-4afc-82bb-0d79c0b76324") : secret "openshift-nmstate-webhook" not found
Feb 03 12:22:33 crc kubenswrapper[4820]: I0203 12:22:32.958075 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/23a0cc00-e454-4afc-82bb-0d79c0b76324-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-2tnxr\" (UID: \"23a0cc00-e454-4afc-82bb-0d79c0b76324\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-2tnxr"
Feb 03 12:22:33 crc kubenswrapper[4820]: I0203 12:22:33.031744 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vk6h4\" (UniqueName: \"kubernetes.io/projected/25a587ed-7ff6-4ffd-b2ad-5a88a81c7867-kube-api-access-vk6h4\") pod \"nmstate-metrics-54757c584b-vprcr\" (UID: \"25a587ed-7ff6-4ffd-b2ad-5a88a81c7867\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-vprcr"
Feb 03 12:22:33 crc kubenswrapper[4820]: I0203 12:22:33.031795 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-75x5t\" (UniqueName: \"kubernetes.io/projected/23a0cc00-e454-4afc-82bb-0d79c0b76324-kube-api-access-75x5t\") pod \"nmstate-webhook-8474b5b9d8-2tnxr\" (UID: \"23a0cc00-e454-4afc-82bb-0d79c0b76324\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-2tnxr"
Feb 03 12:22:33 crc kubenswrapper[4820]: I0203 12:22:33.056389 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-sbsh5"]
Feb 03 12:22:33 crc kubenswrapper[4820]: I0203 12:22:33.057523 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-sbsh5"
Feb 03 12:22:33 crc kubenswrapper[4820]: I0203 12:22:33.069218 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-2tnxr"]
Feb 03 12:22:33 crc kubenswrapper[4820]: I0203 12:22:33.090046 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-75x5t\" (UniqueName: \"kubernetes.io/projected/23a0cc00-e454-4afc-82bb-0d79c0b76324-kube-api-access-75x5t\") pod \"nmstate-webhook-8474b5b9d8-2tnxr\" (UID: \"23a0cc00-e454-4afc-82bb-0d79c0b76324\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-2tnxr"
Feb 03 12:22:33 crc kubenswrapper[4820]: I0203 12:22:33.093030 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vk6h4\" (UniqueName: \"kubernetes.io/projected/25a587ed-7ff6-4ffd-b2ad-5a88a81c7867-kube-api-access-vk6h4\") pod \"nmstate-metrics-54757c584b-vprcr\" (UID: \"25a587ed-7ff6-4ffd-b2ad-5a88a81c7867\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-vprcr"
Feb 03 12:22:33 crc kubenswrapper[4820]: I0203 12:22:33.235805 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f4k7v\" (UniqueName: \"kubernetes.io/projected/afaac7f6-f06f-4eb6-b2a5-85c9c2f927d3-kube-api-access-f4k7v\") pod \"nmstate-handler-sbsh5\" (UID: \"afaac7f6-f06f-4eb6-b2a5-85c9c2f927d3\") " pod="openshift-nmstate/nmstate-handler-sbsh5"
Feb 03 12:22:33 crc kubenswrapper[4820]: I0203 12:22:33.236097 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/afaac7f6-f06f-4eb6-b2a5-85c9c2f927d3-dbus-socket\") pod \"nmstate-handler-sbsh5\" (UID: \"afaac7f6-f06f-4eb6-b2a5-85c9c2f927d3\") " pod="openshift-nmstate/nmstate-handler-sbsh5"
Feb 03 12:22:33 crc kubenswrapper[4820]: I0203 12:22:33.236410 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/afaac7f6-f06f-4eb6-b2a5-85c9c2f927d3-ovs-socket\") pod \"nmstate-handler-sbsh5\" (UID: \"afaac7f6-f06f-4eb6-b2a5-85c9c2f927d3\") " pod="openshift-nmstate/nmstate-handler-sbsh5"
Feb 03 12:22:33 crc kubenswrapper[4820]: I0203 12:22:33.236474 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/afaac7f6-f06f-4eb6-b2a5-85c9c2f927d3-nmstate-lock\") pod \"nmstate-handler-sbsh5\" (UID: \"afaac7f6-f06f-4eb6-b2a5-85c9c2f927d3\") " pod="openshift-nmstate/nmstate-handler-sbsh5"
Feb 03 12:22:33 crc kubenswrapper[4820]: I0203 12:22:33.274920 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-hcd62"]
Feb 03 12:22:33 crc kubenswrapper[4820]: I0203 12:22:33.276011 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-hcd62"
Feb 03 12:22:33 crc kubenswrapper[4820]: I0203 12:22:33.278622 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert"
Feb 03 12:22:33 crc kubenswrapper[4820]: I0203 12:22:33.280132 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-tft92"
Feb 03 12:22:33 crc kubenswrapper[4820]: I0203 12:22:33.282063 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf"
Feb 03 12:22:33 crc kubenswrapper[4820]: I0203 12:22:33.296569 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-hcd62"]
Feb 03 12:22:33 crc kubenswrapper[4820]: I0203 12:22:33.333351 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-vprcr"
Feb 03 12:22:33 crc kubenswrapper[4820]: I0203 12:22:33.337319 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/3f652654-b0e0-47f3-b1db-9930c6b681c6-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-hcd62\" (UID: \"3f652654-b0e0-47f3-b1db-9930c6b681c6\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-hcd62"
Feb 03 12:22:33 crc kubenswrapper[4820]: I0203 12:22:33.337398 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f4k7v\" (UniqueName: \"kubernetes.io/projected/afaac7f6-f06f-4eb6-b2a5-85c9c2f927d3-kube-api-access-f4k7v\") pod \"nmstate-handler-sbsh5\" (UID: \"afaac7f6-f06f-4eb6-b2a5-85c9c2f927d3\") " pod="openshift-nmstate/nmstate-handler-sbsh5"
Feb 03 12:22:33 crc kubenswrapper[4820]: I0203 12:22:33.337429 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/afaac7f6-f06f-4eb6-b2a5-85c9c2f927d3-dbus-socket\") pod \"nmstate-handler-sbsh5\" (UID: \"afaac7f6-f06f-4eb6-b2a5-85c9c2f927d3\") " pod="openshift-nmstate/nmstate-handler-sbsh5"
Feb 03 12:22:33 crc kubenswrapper[4820]: I0203 12:22:33.337474 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zmpx4\" (UniqueName: \"kubernetes.io/projected/3f652654-b0e0-47f3-b1db-9930c6b681c6-kube-api-access-zmpx4\") pod \"nmstate-console-plugin-7754f76f8b-hcd62\" (UID: \"3f652654-b0e0-47f3-b1db-9930c6b681c6\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-hcd62"
Feb 03 12:22:33 crc kubenswrapper[4820]: I0203 12:22:33.337517 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/afaac7f6-f06f-4eb6-b2a5-85c9c2f927d3-ovs-socket\") pod \"nmstate-handler-sbsh5\" (UID: \"afaac7f6-f06f-4eb6-b2a5-85c9c2f927d3\") " pod="openshift-nmstate/nmstate-handler-sbsh5"
Feb 03 12:22:33 crc kubenswrapper[4820]: I0203 12:22:33.337541 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/afaac7f6-f06f-4eb6-b2a5-85c9c2f927d3-nmstate-lock\") pod \"nmstate-handler-sbsh5\" (UID: \"afaac7f6-f06f-4eb6-b2a5-85c9c2f927d3\") " pod="openshift-nmstate/nmstate-handler-sbsh5"
Feb 03 12:22:33 crc kubenswrapper[4820]: I0203 12:22:33.337591 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/3f652654-b0e0-47f3-b1db-9930c6b681c6-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-hcd62\" (UID: \"3f652654-b0e0-47f3-b1db-9930c6b681c6\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-hcd62"
\"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/3f652654-b0e0-47f3-b1db-9930c6b681c6-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-hcd62\" (UID: \"3f652654-b0e0-47f3-b1db-9930c6b681c6\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-hcd62" Feb 03 12:22:33 crc kubenswrapper[4820]: I0203 12:22:33.338086 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/afaac7f6-f06f-4eb6-b2a5-85c9c2f927d3-ovs-socket\") pod \"nmstate-handler-sbsh5\" (UID: \"afaac7f6-f06f-4eb6-b2a5-85c9c2f927d3\") " pod="openshift-nmstate/nmstate-handler-sbsh5" Feb 03 12:22:33 crc kubenswrapper[4820]: I0203 12:22:33.338137 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/afaac7f6-f06f-4eb6-b2a5-85c9c2f927d3-nmstate-lock\") pod \"nmstate-handler-sbsh5\" (UID: \"afaac7f6-f06f-4eb6-b2a5-85c9c2f927d3\") " pod="openshift-nmstate/nmstate-handler-sbsh5" Feb 03 12:22:33 crc kubenswrapper[4820]: I0203 12:22:33.338362 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/afaac7f6-f06f-4eb6-b2a5-85c9c2f927d3-dbus-socket\") pod \"nmstate-handler-sbsh5\" (UID: \"afaac7f6-f06f-4eb6-b2a5-85c9c2f927d3\") " pod="openshift-nmstate/nmstate-handler-sbsh5" Feb 03 12:22:33 crc kubenswrapper[4820]: I0203 12:22:33.361804 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f4k7v\" (UniqueName: \"kubernetes.io/projected/afaac7f6-f06f-4eb6-b2a5-85c9c2f927d3-kube-api-access-f4k7v\") pod \"nmstate-handler-sbsh5\" (UID: \"afaac7f6-f06f-4eb6-b2a5-85c9c2f927d3\") " pod="openshift-nmstate/nmstate-handler-sbsh5" Feb 03 12:22:33 crc kubenswrapper[4820]: I0203 12:22:33.390446 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-sbsh5" Feb 03 12:22:33 crc kubenswrapper[4820]: I0203 12:22:33.438338 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zmpx4\" (UniqueName: \"kubernetes.io/projected/3f652654-b0e0-47f3-b1db-9930c6b681c6-kube-api-access-zmpx4\") pod \"nmstate-console-plugin-7754f76f8b-hcd62\" (UID: \"3f652654-b0e0-47f3-b1db-9930c6b681c6\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-hcd62" Feb 03 12:22:33 crc kubenswrapper[4820]: I0203 12:22:33.438837 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/3f652654-b0e0-47f3-b1db-9930c6b681c6-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-hcd62\" (UID: \"3f652654-b0e0-47f3-b1db-9930c6b681c6\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-hcd62" Feb 03 12:22:33 crc kubenswrapper[4820]: E0203 12:22:33.438949 4820 secret.go:188] Couldn't get secret openshift-nmstate/plugin-serving-cert: secret "plugin-serving-cert" not found Feb 03 12:22:33 crc kubenswrapper[4820]: E0203 12:22:33.438996 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3f652654-b0e0-47f3-b1db-9930c6b681c6-plugin-serving-cert podName:3f652654-b0e0-47f3-b1db-9930c6b681c6 nodeName:}" failed. No retries permitted until 2026-02-03 12:22:33.938983858 +0000 UTC m=+1071.462059722 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "plugin-serving-cert" (UniqueName: "kubernetes.io/secret/3f652654-b0e0-47f3-b1db-9930c6b681c6-plugin-serving-cert") pod "nmstate-console-plugin-7754f76f8b-hcd62" (UID: "3f652654-b0e0-47f3-b1db-9930c6b681c6") : secret "plugin-serving-cert" not found Feb 03 12:22:33 crc kubenswrapper[4820]: I0203 12:22:33.439173 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/3f652654-b0e0-47f3-b1db-9930c6b681c6-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-hcd62\" (UID: \"3f652654-b0e0-47f3-b1db-9930c6b681c6\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-hcd62" Feb 03 12:22:33 crc kubenswrapper[4820]: I0203 12:22:33.440383 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/3f652654-b0e0-47f3-b1db-9930c6b681c6-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-hcd62\" (UID: \"3f652654-b0e0-47f3-b1db-9930c6b681c6\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-hcd62" Feb 03 12:22:33 crc kubenswrapper[4820]: I0203 12:22:33.458598 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zmpx4\" (UniqueName: \"kubernetes.io/projected/3f652654-b0e0-47f3-b1db-9930c6b681c6-kube-api-access-zmpx4\") pod \"nmstate-console-plugin-7754f76f8b-hcd62\" (UID: \"3f652654-b0e0-47f3-b1db-9930c6b681c6\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-hcd62" Feb 03 12:22:33 crc kubenswrapper[4820]: I0203 12:22:33.490135 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-5d574cd74d-c5zfm"] Feb 03 12:22:33 crc kubenswrapper[4820]: I0203 12:22:33.491020 4820 util.go:30] "No sandbox for pod can be found. 
Feb 03 12:22:33 crc kubenswrapper[4820]: I0203 12:22:33.519347 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5d574cd74d-c5zfm"]
Feb 03 12:22:33 crc kubenswrapper[4820]: I0203 12:22:33.541996 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/23a0cc00-e454-4afc-82bb-0d79c0b76324-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-2tnxr\" (UID: \"23a0cc00-e454-4afc-82bb-0d79c0b76324\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-2tnxr"
Feb 03 12:22:33 crc kubenswrapper[4820]: I0203 12:22:33.546540 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/23a0cc00-e454-4afc-82bb-0d79c0b76324-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-2tnxr\" (UID: \"23a0cc00-e454-4afc-82bb-0d79c0b76324\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-2tnxr"
Feb 03 12:22:33 crc kubenswrapper[4820]: I0203 12:22:33.643817 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/2f196666-08b9-4107-a5b5-76f477a0d441-console-config\") pod \"console-5d574cd74d-c5zfm\" (UID: \"2f196666-08b9-4107-a5b5-76f477a0d441\") " pod="openshift-console/console-5d574cd74d-c5zfm"
Feb 03 12:22:33 crc kubenswrapper[4820]: I0203 12:22:33.643915 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/2f196666-08b9-4107-a5b5-76f477a0d441-console-serving-cert\") pod \"console-5d574cd74d-c5zfm\" (UID: \"2f196666-08b9-4107-a5b5-76f477a0d441\") " pod="openshift-console/console-5d574cd74d-c5zfm"
Feb 03 12:22:33 crc kubenswrapper[4820]: I0203 12:22:33.643951 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/2f196666-08b9-4107-a5b5-76f477a0d441-console-oauth-config\") pod \"console-5d574cd74d-c5zfm\" (UID: \"2f196666-08b9-4107-a5b5-76f477a0d441\") " pod="openshift-console/console-5d574cd74d-c5zfm"
Feb 03 12:22:33 crc kubenswrapper[4820]: I0203 12:22:33.643976 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/2f196666-08b9-4107-a5b5-76f477a0d441-oauth-serving-cert\") pod \"console-5d574cd74d-c5zfm\" (UID: \"2f196666-08b9-4107-a5b5-76f477a0d441\") " pod="openshift-console/console-5d574cd74d-c5zfm"
Feb 03 12:22:33 crc kubenswrapper[4820]: I0203 12:22:33.644093 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2f196666-08b9-4107-a5b5-76f477a0d441-trusted-ca-bundle\") pod \"console-5d574cd74d-c5zfm\" (UID: \"2f196666-08b9-4107-a5b5-76f477a0d441\") " pod="openshift-console/console-5d574cd74d-c5zfm"
Feb 03 12:22:33 crc kubenswrapper[4820]: I0203 12:22:33.644125 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p69g2\" (UniqueName: \"kubernetes.io/projected/2f196666-08b9-4107-a5b5-76f477a0d441-kube-api-access-p69g2\") pod \"console-5d574cd74d-c5zfm\" (UID: \"2f196666-08b9-4107-a5b5-76f477a0d441\") " pod="openshift-console/console-5d574cd74d-c5zfm"
Feb 03 12:22:33 crc kubenswrapper[4820]: I0203 12:22:33.644218 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/2f196666-08b9-4107-a5b5-76f477a0d441-service-ca\") pod \"console-5d574cd74d-c5zfm\" (UID: \"2f196666-08b9-4107-a5b5-76f477a0d441\") " pod="openshift-console/console-5d574cd74d-c5zfm"
Feb 03 12:22:33 crc kubenswrapper[4820]: I0203 12:22:33.661365 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-2tnxr"
Feb 03 12:22:33 crc kubenswrapper[4820]: I0203 12:22:33.747858 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/2f196666-08b9-4107-a5b5-76f477a0d441-service-ca\") pod \"console-5d574cd74d-c5zfm\" (UID: \"2f196666-08b9-4107-a5b5-76f477a0d441\") " pod="openshift-console/console-5d574cd74d-c5zfm"
Feb 03 12:22:33 crc kubenswrapper[4820]: I0203 12:22:33.747958 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/2f196666-08b9-4107-a5b5-76f477a0d441-console-config\") pod \"console-5d574cd74d-c5zfm\" (UID: \"2f196666-08b9-4107-a5b5-76f477a0d441\") " pod="openshift-console/console-5d574cd74d-c5zfm"
Feb 03 12:22:33 crc kubenswrapper[4820]: I0203 12:22:33.748025 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/2f196666-08b9-4107-a5b5-76f477a0d441-console-serving-cert\") pod \"console-5d574cd74d-c5zfm\" (UID: \"2f196666-08b9-4107-a5b5-76f477a0d441\") " pod="openshift-console/console-5d574cd74d-c5zfm"
Feb 03 12:22:33 crc kubenswrapper[4820]: I0203 12:22:33.748067 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/2f196666-08b9-4107-a5b5-76f477a0d441-console-oauth-config\") pod \"console-5d574cd74d-c5zfm\" (UID: \"2f196666-08b9-4107-a5b5-76f477a0d441\") " pod="openshift-console/console-5d574cd74d-c5zfm"
Feb 03 12:22:33 crc kubenswrapper[4820]: I0203 12:22:33.748096 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/2f196666-08b9-4107-a5b5-76f477a0d441-oauth-serving-cert\") pod \"console-5d574cd74d-c5zfm\" (UID: \"2f196666-08b9-4107-a5b5-76f477a0d441\") " pod="openshift-console/console-5d574cd74d-c5zfm"
Feb 03 12:22:33 crc kubenswrapper[4820]: I0203 12:22:33.748165 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2f196666-08b9-4107-a5b5-76f477a0d441-trusted-ca-bundle\") pod \"console-5d574cd74d-c5zfm\" (UID: \"2f196666-08b9-4107-a5b5-76f477a0d441\") " pod="openshift-console/console-5d574cd74d-c5zfm"
Feb 03 12:22:33 crc kubenswrapper[4820]: I0203 12:22:33.748200 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p69g2\" (UniqueName: \"kubernetes.io/projected/2f196666-08b9-4107-a5b5-76f477a0d441-kube-api-access-p69g2\") pod \"console-5d574cd74d-c5zfm\" (UID: \"2f196666-08b9-4107-a5b5-76f477a0d441\") " pod="openshift-console/console-5d574cd74d-c5zfm"
Feb 03 12:22:33 crc kubenswrapper[4820]: I0203 12:22:33.750178 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/2f196666-08b9-4107-a5b5-76f477a0d441-service-ca\") pod \"console-5d574cd74d-c5zfm\" (UID: \"2f196666-08b9-4107-a5b5-76f477a0d441\") " pod="openshift-console/console-5d574cd74d-c5zfm"
\"kubernetes.io/configmap/2f196666-08b9-4107-a5b5-76f477a0d441-service-ca\") pod \"console-5d574cd74d-c5zfm\" (UID: \"2f196666-08b9-4107-a5b5-76f477a0d441\") " pod="openshift-console/console-5d574cd74d-c5zfm" Feb 03 12:22:33 crc kubenswrapper[4820]: I0203 12:22:33.751085 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/2f196666-08b9-4107-a5b5-76f477a0d441-console-config\") pod \"console-5d574cd74d-c5zfm\" (UID: \"2f196666-08b9-4107-a5b5-76f477a0d441\") " pod="openshift-console/console-5d574cd74d-c5zfm" Feb 03 12:22:33 crc kubenswrapper[4820]: I0203 12:22:33.751857 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/2f196666-08b9-4107-a5b5-76f477a0d441-oauth-serving-cert\") pod \"console-5d574cd74d-c5zfm\" (UID: \"2f196666-08b9-4107-a5b5-76f477a0d441\") " pod="openshift-console/console-5d574cd74d-c5zfm" Feb 03 12:22:33 crc kubenswrapper[4820]: I0203 12:22:33.752841 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2f196666-08b9-4107-a5b5-76f477a0d441-trusted-ca-bundle\") pod \"console-5d574cd74d-c5zfm\" (UID: \"2f196666-08b9-4107-a5b5-76f477a0d441\") " pod="openshift-console/console-5d574cd74d-c5zfm" Feb 03 12:22:33 crc kubenswrapper[4820]: I0203 12:22:33.772525 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/2f196666-08b9-4107-a5b5-76f477a0d441-console-serving-cert\") pod \"console-5d574cd74d-c5zfm\" (UID: \"2f196666-08b9-4107-a5b5-76f477a0d441\") " pod="openshift-console/console-5d574cd74d-c5zfm" Feb 03 12:22:33 crc kubenswrapper[4820]: I0203 12:22:33.821556 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/2f196666-08b9-4107-a5b5-76f477a0d441-console-oauth-config\") pod \"console-5d574cd74d-c5zfm\" (UID: \"2f196666-08b9-4107-a5b5-76f477a0d441\") " pod="openshift-console/console-5d574cd74d-c5zfm" Feb 03 12:22:33 crc kubenswrapper[4820]: I0203 12:22:33.839861 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p69g2\" (UniqueName: \"kubernetes.io/projected/2f196666-08b9-4107-a5b5-76f477a0d441-kube-api-access-p69g2\") pod \"console-5d574cd74d-c5zfm\" (UID: \"2f196666-08b9-4107-a5b5-76f477a0d441\") " pod="openshift-console/console-5d574cd74d-c5zfm" Feb 03 12:22:33 crc kubenswrapper[4820]: I0203 12:22:33.949450 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/3f652654-b0e0-47f3-b1db-9930c6b681c6-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-hcd62\" (UID: \"3f652654-b0e0-47f3-b1db-9930c6b681c6\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-hcd62" Feb 03 12:22:33 crc kubenswrapper[4820]: I0203 12:22:33.953489 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/3f652654-b0e0-47f3-b1db-9930c6b681c6-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-hcd62\" (UID: \"3f652654-b0e0-47f3-b1db-9930c6b681c6\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-hcd62" Feb 03 12:22:34 crc kubenswrapper[4820]: I0203 12:22:34.087608 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-nmstate/nmstate-metrics-54757c584b-vprcr"] Feb 03 12:22:34 crc kubenswrapper[4820]: I0203 12:22:34.094172 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-sbsh5" event={"ID":"afaac7f6-f06f-4eb6-b2a5-85c9c2f927d3","Type":"ContainerStarted","Data":"31fd6cf52dafdd0d958cc923e320bd3781e61c243ae0617abdee2be0966792ed"} Feb 03 12:22:34 crc kubenswrapper[4820]: I0203 12:22:34.099258 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-vprcr" event={"ID":"25a587ed-7ff6-4ffd-b2ad-5a88a81c7867","Type":"ContainerStarted","Data":"6ba0a5d3219e51497db64602c073932df03a2112198d9b7d4fc09e33a98c8448"} Feb 03 12:22:34 crc kubenswrapper[4820]: I0203 12:22:34.118484 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-5d574cd74d-c5zfm" Feb 03 12:22:34 crc kubenswrapper[4820]: I0203 12:22:34.197294 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-hcd62" Feb 03 12:22:34 crc kubenswrapper[4820]: I0203 12:22:34.362827 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-2tnxr"] Feb 03 12:22:34 crc kubenswrapper[4820]: W0203 12:22:34.389127 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod23a0cc00_e454_4afc_82bb_0d79c0b76324.slice/crio-ec9cac1f16864c576dc76be8e440adea9ef6a487ed948a51d9444d4bf99a9a53 WatchSource:0}: Error finding container ec9cac1f16864c576dc76be8e440adea9ef6a487ed948a51d9444d4bf99a9a53: Status 404 returned error can't find the container with id ec9cac1f16864c576dc76be8e440adea9ef6a487ed948a51d9444d4bf99a9a53 Feb 03 12:22:34 crc kubenswrapper[4820]: I0203 12:22:34.551109 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5d574cd74d-c5zfm"] Feb 03 12:22:34 crc kubenswrapper[4820]: W0203 12:22:34.562086 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2f196666_08b9_4107_a5b5_76f477a0d441.slice/crio-09b3b9d4a4848c64a93eb2599fb2e263123724e0547bcf40736cc529bbd46b02 WatchSource:0}: Error finding container 09b3b9d4a4848c64a93eb2599fb2e263123724e0547bcf40736cc529bbd46b02: Status 404 returned error can't find the container with id 09b3b9d4a4848c64a93eb2599fb2e263123724e0547bcf40736cc529bbd46b02 Feb 03 12:22:34 crc kubenswrapper[4820]: I0203 12:22:34.787294 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-hcd62"] Feb 03 12:22:34 crc kubenswrapper[4820]: W0203 12:22:34.796571 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3f652654_b0e0_47f3_b1db_9930c6b681c6.slice/crio-b4933be1a6747bed1c6ac88053b1e99fd88c78511beefb674a8c26c4ec861bc7 WatchSource:0}: Error finding container b4933be1a6747bed1c6ac88053b1e99fd88c78511beefb674a8c26c4ec861bc7: Status 404 returned error can't find the container with id b4933be1a6747bed1c6ac88053b1e99fd88c78511beefb674a8c26c4ec861bc7 Feb 03 12:22:35 crc kubenswrapper[4820]: I0203 12:22:35.190337 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5d574cd74d-c5zfm" 
event={"ID":"2f196666-08b9-4107-a5b5-76f477a0d441","Type":"ContainerStarted","Data":"56f053d20c3f334a19cefab08cdbb3dd90371ef5c05f8fd6ab4953195284353f"} Feb 03 12:22:35 crc kubenswrapper[4820]: I0203 12:22:35.190735 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5d574cd74d-c5zfm" event={"ID":"2f196666-08b9-4107-a5b5-76f477a0d441","Type":"ContainerStarted","Data":"09b3b9d4a4848c64a93eb2599fb2e263123724e0547bcf40736cc529bbd46b02"} Feb 03 12:22:35 crc kubenswrapper[4820]: I0203 12:22:35.191365 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-2tnxr" event={"ID":"23a0cc00-e454-4afc-82bb-0d79c0b76324","Type":"ContainerStarted","Data":"ec9cac1f16864c576dc76be8e440adea9ef6a487ed948a51d9444d4bf99a9a53"} Feb 03 12:22:35 crc kubenswrapper[4820]: I0203 12:22:35.192286 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-hcd62" event={"ID":"3f652654-b0e0-47f3-b1db-9930c6b681c6","Type":"ContainerStarted","Data":"b4933be1a6747bed1c6ac88053b1e99fd88c78511beefb674a8c26c4ec861bc7"} Feb 03 12:22:39 crc kubenswrapper[4820]: I0203 12:22:39.441501 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-2tnxr" event={"ID":"23a0cc00-e454-4afc-82bb-0d79c0b76324","Type":"ContainerStarted","Data":"37c782e43bd3d482c54d7c028b838e46722a0219e937c43025a1146f13fba0ef"} Feb 03 12:22:39 crc kubenswrapper[4820]: I0203 12:22:39.442165 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-2tnxr" Feb 03 12:22:39 crc kubenswrapper[4820]: I0203 12:22:39.444274 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-vprcr" event={"ID":"25a587ed-7ff6-4ffd-b2ad-5a88a81c7867","Type":"ContainerStarted","Data":"a895b7b65dffbb4b24267d05733b824e015db17733cc3951e4a374d5c97e3cd0"} Feb 03 12:22:39 crc kubenswrapper[4820]: I0203 12:22:39.445506 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-hcd62" event={"ID":"3f652654-b0e0-47f3-b1db-9930c6b681c6","Type":"ContainerStarted","Data":"281159630d819798ce182256a0afbe337515db2cd54f183ad92ee4eb82e9c63d"} Feb 03 12:22:39 crc kubenswrapper[4820]: I0203 12:22:39.447911 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-sbsh5" event={"ID":"afaac7f6-f06f-4eb6-b2a5-85c9c2f927d3","Type":"ContainerStarted","Data":"feec0f2594ea9da03a00b4204a5639ed56718844b835bb63630e163ece12ce40"} Feb 03 12:22:39 crc kubenswrapper[4820]: I0203 12:22:39.448044 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-sbsh5" Feb 03 12:22:39 crc kubenswrapper[4820]: I0203 12:22:39.464669 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-2tnxr" podStartSLOduration=3.594422342 podStartE2EDuration="7.464649007s" podCreationTimestamp="2026-02-03 12:22:32 +0000 UTC" firstStartedPulling="2026-02-03 12:22:34.394335887 +0000 UTC m=+1071.917411751" lastFinishedPulling="2026-02-03 12:22:38.264562562 +0000 UTC m=+1075.787638416" observedRunningTime="2026-02-03 12:22:39.45954408 +0000 UTC m=+1076.982619954" watchObservedRunningTime="2026-02-03 12:22:39.464649007 +0000 UTC m=+1076.987724871" Feb 03 12:22:39 crc kubenswrapper[4820]: I0203 12:22:39.467717 4820 pod_startup_latency_tracker.go:104] "Observed 
pod startup duration" pod="openshift-console/console-5d574cd74d-c5zfm" podStartSLOduration=6.46770598 podStartE2EDuration="6.46770598s" podCreationTimestamp="2026-02-03 12:22:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:22:35.255459438 +0000 UTC m=+1072.778535302" watchObservedRunningTime="2026-02-03 12:22:39.46770598 +0000 UTC m=+1076.990781844" Feb 03 12:22:39 crc kubenswrapper[4820]: I0203 12:22:39.517426 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-sbsh5" podStartSLOduration=2.744628917 podStartE2EDuration="7.517408164s" podCreationTimestamp="2026-02-03 12:22:32 +0000 UTC" firstStartedPulling="2026-02-03 12:22:33.459531293 +0000 UTC m=+1070.982607157" lastFinishedPulling="2026-02-03 12:22:38.23231054 +0000 UTC m=+1075.755386404" observedRunningTime="2026-02-03 12:22:39.516390406 +0000 UTC m=+1077.039466300" watchObservedRunningTime="2026-02-03 12:22:39.517408164 +0000 UTC m=+1077.040484028" Feb 03 12:22:39 crc kubenswrapper[4820]: I0203 12:22:39.551590 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-hcd62" podStartSLOduration=3.117979237 podStartE2EDuration="6.551560647s" podCreationTimestamp="2026-02-03 12:22:33 +0000 UTC" firstStartedPulling="2026-02-03 12:22:34.799199052 +0000 UTC m=+1072.322274916" lastFinishedPulling="2026-02-03 12:22:38.232780472 +0000 UTC m=+1075.755856326" observedRunningTime="2026-02-03 12:22:39.549116282 +0000 UTC m=+1077.072192166" watchObservedRunningTime="2026-02-03 12:22:39.551560647 +0000 UTC m=+1077.074636521" Feb 03 12:22:42 crc kubenswrapper[4820]: I0203 12:22:42.501498 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-vprcr" event={"ID":"25a587ed-7ff6-4ffd-b2ad-5a88a81c7867","Type":"ContainerStarted","Data":"43ab2408af2e6f94986f9c02ca5c37d3688d6313bb2168cc9abddc1eabdc5580"} Feb 03 12:22:43 crc kubenswrapper[4820]: I0203 12:22:43.415764 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-sbsh5" Feb 03 12:22:43 crc kubenswrapper[4820]: I0203 12:22:43.433598 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-54757c584b-vprcr" podStartSLOduration=3.558366227 podStartE2EDuration="11.433575572s" podCreationTimestamp="2026-02-03 12:22:32 +0000 UTC" firstStartedPulling="2026-02-03 12:22:34.073709398 +0000 UTC m=+1071.596785262" lastFinishedPulling="2026-02-03 12:22:41.948918743 +0000 UTC m=+1079.471994607" observedRunningTime="2026-02-03 12:22:42.542557752 +0000 UTC m=+1080.065633616" watchObservedRunningTime="2026-02-03 12:22:43.433575572 +0000 UTC m=+1080.956651436" Feb 03 12:22:44 crc kubenswrapper[4820]: I0203 12:22:44.118823 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-5d574cd74d-c5zfm" Feb 03 12:22:44 crc kubenswrapper[4820]: I0203 12:22:44.119860 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-5d574cd74d-c5zfm" Feb 03 12:22:44 crc kubenswrapper[4820]: I0203 12:22:44.125099 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-5d574cd74d-c5zfm" Feb 03 12:22:44 crc kubenswrapper[4820]: I0203 12:22:44.561399 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openshift-console/console-5d574cd74d-c5zfm" Feb 03 12:22:44 crc kubenswrapper[4820]: I0203 12:22:44.626071 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-tw2nt"] Feb 03 12:22:53 crc kubenswrapper[4820]: I0203 12:22:53.669188 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-2tnxr" Feb 03 12:23:08 crc kubenswrapper[4820]: I0203 12:23:08.824978 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcjjbzx"] Feb 03 12:23:08 crc kubenswrapper[4820]: I0203 12:23:08.827310 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcjjbzx" Feb 03 12:23:08 crc kubenswrapper[4820]: I0203 12:23:08.832393 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 03 12:23:08 crc kubenswrapper[4820]: I0203 12:23:08.841805 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/da1615c1-bd74-4ac2-91ca-4a00a31366e6-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcjjbzx\" (UID: \"da1615c1-bd74-4ac2-91ca-4a00a31366e6\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcjjbzx" Feb 03 12:23:08 crc kubenswrapper[4820]: I0203 12:23:08.841908 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-drssk\" (UniqueName: \"kubernetes.io/projected/da1615c1-bd74-4ac2-91ca-4a00a31366e6-kube-api-access-drssk\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcjjbzx\" (UID: \"da1615c1-bd74-4ac2-91ca-4a00a31366e6\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcjjbzx" Feb 03 12:23:08 crc kubenswrapper[4820]: I0203 12:23:08.841947 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/da1615c1-bd74-4ac2-91ca-4a00a31366e6-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcjjbzx\" (UID: \"da1615c1-bd74-4ac2-91ca-4a00a31366e6\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcjjbzx" Feb 03 12:23:08 crc kubenswrapper[4820]: I0203 12:23:08.849391 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcjjbzx"] Feb 03 12:23:08 crc kubenswrapper[4820]: I0203 12:23:08.943407 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/da1615c1-bd74-4ac2-91ca-4a00a31366e6-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcjjbzx\" (UID: \"da1615c1-bd74-4ac2-91ca-4a00a31366e6\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcjjbzx" Feb 03 12:23:08 crc kubenswrapper[4820]: I0203 12:23:08.943653 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-drssk\" (UniqueName: \"kubernetes.io/projected/da1615c1-bd74-4ac2-91ca-4a00a31366e6-kube-api-access-drssk\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcjjbzx\" (UID: \"da1615c1-bd74-4ac2-91ca-4a00a31366e6\") " 
pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcjjbzx" Feb 03 12:23:08 crc kubenswrapper[4820]: I0203 12:23:08.943685 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/da1615c1-bd74-4ac2-91ca-4a00a31366e6-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcjjbzx\" (UID: \"da1615c1-bd74-4ac2-91ca-4a00a31366e6\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcjjbzx" Feb 03 12:23:08 crc kubenswrapper[4820]: I0203 12:23:08.944121 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/da1615c1-bd74-4ac2-91ca-4a00a31366e6-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcjjbzx\" (UID: \"da1615c1-bd74-4ac2-91ca-4a00a31366e6\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcjjbzx" Feb 03 12:23:08 crc kubenswrapper[4820]: I0203 12:23:08.944155 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/da1615c1-bd74-4ac2-91ca-4a00a31366e6-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcjjbzx\" (UID: \"da1615c1-bd74-4ac2-91ca-4a00a31366e6\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcjjbzx" Feb 03 12:23:08 crc kubenswrapper[4820]: I0203 12:23:08.986027 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-drssk\" (UniqueName: \"kubernetes.io/projected/da1615c1-bd74-4ac2-91ca-4a00a31366e6-kube-api-access-drssk\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcjjbzx\" (UID: \"da1615c1-bd74-4ac2-91ca-4a00a31366e6\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcjjbzx" Feb 03 12:23:09 crc kubenswrapper[4820]: I0203 12:23:09.151313 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcjjbzx" Feb 03 12:23:09 crc kubenswrapper[4820]: I0203 12:23:09.408099 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcjjbzx"] Feb 03 12:23:09 crc kubenswrapper[4820]: I0203 12:23:09.669103 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcjjbzx" event={"ID":"da1615c1-bd74-4ac2-91ca-4a00a31366e6","Type":"ContainerStarted","Data":"cd42e807c79abd939fadf894458bdd4f0d56708a48ac14cebbc26783ffa5a638"} Feb 03 12:23:09 crc kubenswrapper[4820]: I0203 12:23:09.689063 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-tw2nt" podUID="b06753a3-652a-4acc-b294-3ccaa5b0cb99" containerName="console" containerID="cri-o://81295f2fff4c34ff47ab98869ba2efd1664b5586bdc415d379e316b46aa29a2a" gracePeriod=15 Feb 03 12:23:10 crc kubenswrapper[4820]: I0203 12:23:10.246847 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-tw2nt_b06753a3-652a-4acc-b294-3ccaa5b0cb99/console/0.log" Feb 03 12:23:10 crc kubenswrapper[4820]: I0203 12:23:10.247136 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-tw2nt" Feb 03 12:23:10 crc kubenswrapper[4820]: I0203 12:23:10.363290 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c8frp\" (UniqueName: \"kubernetes.io/projected/b06753a3-652a-4acc-b294-3ccaa5b0cb99-kube-api-access-c8frp\") pod \"b06753a3-652a-4acc-b294-3ccaa5b0cb99\" (UID: \"b06753a3-652a-4acc-b294-3ccaa5b0cb99\") " Feb 03 12:23:10 crc kubenswrapper[4820]: I0203 12:23:10.363352 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b06753a3-652a-4acc-b294-3ccaa5b0cb99-trusted-ca-bundle\") pod \"b06753a3-652a-4acc-b294-3ccaa5b0cb99\" (UID: \"b06753a3-652a-4acc-b294-3ccaa5b0cb99\") " Feb 03 12:23:10 crc kubenswrapper[4820]: I0203 12:23:10.363471 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b06753a3-652a-4acc-b294-3ccaa5b0cb99-console-oauth-config\") pod \"b06753a3-652a-4acc-b294-3ccaa5b0cb99\" (UID: \"b06753a3-652a-4acc-b294-3ccaa5b0cb99\") " Feb 03 12:23:10 crc kubenswrapper[4820]: I0203 12:23:10.363502 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b06753a3-652a-4acc-b294-3ccaa5b0cb99-oauth-serving-cert\") pod \"b06753a3-652a-4acc-b294-3ccaa5b0cb99\" (UID: \"b06753a3-652a-4acc-b294-3ccaa5b0cb99\") " Feb 03 12:23:10 crc kubenswrapper[4820]: I0203 12:23:10.363546 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b06753a3-652a-4acc-b294-3ccaa5b0cb99-service-ca\") pod \"b06753a3-652a-4acc-b294-3ccaa5b0cb99\" (UID: \"b06753a3-652a-4acc-b294-3ccaa5b0cb99\") " Feb 03 12:23:10 crc kubenswrapper[4820]: I0203 12:23:10.363572 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b06753a3-652a-4acc-b294-3ccaa5b0cb99-console-config\") pod \"b06753a3-652a-4acc-b294-3ccaa5b0cb99\" (UID: \"b06753a3-652a-4acc-b294-3ccaa5b0cb99\") " Feb 03 12:23:10 crc kubenswrapper[4820]: I0203 12:23:10.363615 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b06753a3-652a-4acc-b294-3ccaa5b0cb99-console-serving-cert\") pod \"b06753a3-652a-4acc-b294-3ccaa5b0cb99\" (UID: \"b06753a3-652a-4acc-b294-3ccaa5b0cb99\") " Feb 03 12:23:10 crc kubenswrapper[4820]: I0203 12:23:10.365641 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b06753a3-652a-4acc-b294-3ccaa5b0cb99-console-config" (OuterVolumeSpecName: "console-config") pod "b06753a3-652a-4acc-b294-3ccaa5b0cb99" (UID: "b06753a3-652a-4acc-b294-3ccaa5b0cb99"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:23:10 crc kubenswrapper[4820]: I0203 12:23:10.365729 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b06753a3-652a-4acc-b294-3ccaa5b0cb99-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "b06753a3-652a-4acc-b294-3ccaa5b0cb99" (UID: "b06753a3-652a-4acc-b294-3ccaa5b0cb99"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:23:10 crc kubenswrapper[4820]: I0203 12:23:10.365672 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b06753a3-652a-4acc-b294-3ccaa5b0cb99-service-ca" (OuterVolumeSpecName: "service-ca") pod "b06753a3-652a-4acc-b294-3ccaa5b0cb99" (UID: "b06753a3-652a-4acc-b294-3ccaa5b0cb99"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:23:10 crc kubenswrapper[4820]: I0203 12:23:10.366073 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b06753a3-652a-4acc-b294-3ccaa5b0cb99-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "b06753a3-652a-4acc-b294-3ccaa5b0cb99" (UID: "b06753a3-652a-4acc-b294-3ccaa5b0cb99"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:23:10 crc kubenswrapper[4820]: I0203 12:23:10.370400 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b06753a3-652a-4acc-b294-3ccaa5b0cb99-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "b06753a3-652a-4acc-b294-3ccaa5b0cb99" (UID: "b06753a3-652a-4acc-b294-3ccaa5b0cb99"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:23:10 crc kubenswrapper[4820]: I0203 12:23:10.370593 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b06753a3-652a-4acc-b294-3ccaa5b0cb99-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "b06753a3-652a-4acc-b294-3ccaa5b0cb99" (UID: "b06753a3-652a-4acc-b294-3ccaa5b0cb99"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:23:10 crc kubenswrapper[4820]: I0203 12:23:10.370729 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b06753a3-652a-4acc-b294-3ccaa5b0cb99-kube-api-access-c8frp" (OuterVolumeSpecName: "kube-api-access-c8frp") pod "b06753a3-652a-4acc-b294-3ccaa5b0cb99" (UID: "b06753a3-652a-4acc-b294-3ccaa5b0cb99"). InnerVolumeSpecName "kube-api-access-c8frp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:23:10 crc kubenswrapper[4820]: I0203 12:23:10.465694 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c8frp\" (UniqueName: \"kubernetes.io/projected/b06753a3-652a-4acc-b294-3ccaa5b0cb99-kube-api-access-c8frp\") on node \"crc\" DevicePath \"\"" Feb 03 12:23:10 crc kubenswrapper[4820]: I0203 12:23:10.465751 4820 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b06753a3-652a-4acc-b294-3ccaa5b0cb99-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 12:23:10 crc kubenswrapper[4820]: I0203 12:23:10.465771 4820 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b06753a3-652a-4acc-b294-3ccaa5b0cb99-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 03 12:23:10 crc kubenswrapper[4820]: I0203 12:23:10.465788 4820 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b06753a3-652a-4acc-b294-3ccaa5b0cb99-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 03 12:23:10 crc kubenswrapper[4820]: I0203 12:23:10.465807 4820 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b06753a3-652a-4acc-b294-3ccaa5b0cb99-service-ca\") on node \"crc\" DevicePath \"\"" Feb 03 12:23:10 crc kubenswrapper[4820]: I0203 12:23:10.465823 4820 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b06753a3-652a-4acc-b294-3ccaa5b0cb99-console-config\") on node \"crc\" DevicePath \"\"" Feb 03 12:23:10 crc kubenswrapper[4820]: I0203 12:23:10.465841 4820 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b06753a3-652a-4acc-b294-3ccaa5b0cb99-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 03 12:23:10 crc kubenswrapper[4820]: I0203 12:23:10.676182 4820 generic.go:334] "Generic (PLEG): container finished" podID="da1615c1-bd74-4ac2-91ca-4a00a31366e6" containerID="6402cc7f9b90ee7a6b47295d78a6ae2adc492c04a29bae2623b3aefbc96a691c" exitCode=0 Feb 03 12:23:10 crc kubenswrapper[4820]: I0203 12:23:10.676254 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcjjbzx" event={"ID":"da1615c1-bd74-4ac2-91ca-4a00a31366e6","Type":"ContainerDied","Data":"6402cc7f9b90ee7a6b47295d78a6ae2adc492c04a29bae2623b3aefbc96a691c"} Feb 03 12:23:10 crc kubenswrapper[4820]: I0203 12:23:10.678066 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-tw2nt_b06753a3-652a-4acc-b294-3ccaa5b0cb99/console/0.log" Feb 03 12:23:10 crc kubenswrapper[4820]: I0203 12:23:10.678098 4820 generic.go:334] "Generic (PLEG): container finished" podID="b06753a3-652a-4acc-b294-3ccaa5b0cb99" containerID="81295f2fff4c34ff47ab98869ba2efd1664b5586bdc415d379e316b46aa29a2a" exitCode=2 Feb 03 12:23:10 crc kubenswrapper[4820]: I0203 12:23:10.678116 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-tw2nt" event={"ID":"b06753a3-652a-4acc-b294-3ccaa5b0cb99","Type":"ContainerDied","Data":"81295f2fff4c34ff47ab98869ba2efd1664b5586bdc415d379e316b46aa29a2a"} Feb 03 12:23:10 crc kubenswrapper[4820]: I0203 12:23:10.678135 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-tw2nt" 
event={"ID":"b06753a3-652a-4acc-b294-3ccaa5b0cb99","Type":"ContainerDied","Data":"7ba84dcbcc9cf553c89692e751e7595ff3e12e510dcf499433f8238f212c4c13"} Feb 03 12:23:10 crc kubenswrapper[4820]: I0203 12:23:10.678155 4820 scope.go:117] "RemoveContainer" containerID="81295f2fff4c34ff47ab98869ba2efd1664b5586bdc415d379e316b46aa29a2a" Feb 03 12:23:10 crc kubenswrapper[4820]: I0203 12:23:10.678289 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-tw2nt" Feb 03 12:23:10 crc kubenswrapper[4820]: I0203 12:23:10.716678 4820 scope.go:117] "RemoveContainer" containerID="81295f2fff4c34ff47ab98869ba2efd1664b5586bdc415d379e316b46aa29a2a" Feb 03 12:23:10 crc kubenswrapper[4820]: E0203 12:23:10.720041 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"81295f2fff4c34ff47ab98869ba2efd1664b5586bdc415d379e316b46aa29a2a\": container with ID starting with 81295f2fff4c34ff47ab98869ba2efd1664b5586bdc415d379e316b46aa29a2a not found: ID does not exist" containerID="81295f2fff4c34ff47ab98869ba2efd1664b5586bdc415d379e316b46aa29a2a" Feb 03 12:23:10 crc kubenswrapper[4820]: I0203 12:23:10.720098 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"81295f2fff4c34ff47ab98869ba2efd1664b5586bdc415d379e316b46aa29a2a"} err="failed to get container status \"81295f2fff4c34ff47ab98869ba2efd1664b5586bdc415d379e316b46aa29a2a\": rpc error: code = NotFound desc = could not find container \"81295f2fff4c34ff47ab98869ba2efd1664b5586bdc415d379e316b46aa29a2a\": container with ID starting with 81295f2fff4c34ff47ab98869ba2efd1664b5586bdc415d379e316b46aa29a2a not found: ID does not exist" Feb 03 12:23:10 crc kubenswrapper[4820]: I0203 12:23:10.724656 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-tw2nt"] Feb 03 12:23:10 crc kubenswrapper[4820]: I0203 12:23:10.729688 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-tw2nt"] Feb 03 12:23:10 crc kubenswrapper[4820]: E0203 12:23:10.813610 4820 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb06753a3_652a_4acc_b294_3ccaa5b0cb99.slice/crio-7ba84dcbcc9cf553c89692e751e7595ff3e12e510dcf499433f8238f212c4c13\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb06753a3_652a_4acc_b294_3ccaa5b0cb99.slice\": RecentStats: unable to find data in memory cache]" Feb 03 12:23:11 crc kubenswrapper[4820]: I0203 12:23:11.150787 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b06753a3-652a-4acc-b294-3ccaa5b0cb99" path="/var/lib/kubelet/pods/b06753a3-652a-4acc-b294-3ccaa5b0cb99/volumes" Feb 03 12:23:12 crc kubenswrapper[4820]: I0203 12:23:12.693693 4820 generic.go:334] "Generic (PLEG): container finished" podID="da1615c1-bd74-4ac2-91ca-4a00a31366e6" containerID="05545161e95de460b352e1c69d8262a950601e919f9e00e1c9a8e62ba7053986" exitCode=0 Feb 03 12:23:12 crc kubenswrapper[4820]: I0203 12:23:12.693779 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcjjbzx" event={"ID":"da1615c1-bd74-4ac2-91ca-4a00a31366e6","Type":"ContainerDied","Data":"05545161e95de460b352e1c69d8262a950601e919f9e00e1c9a8e62ba7053986"} Feb 03 12:23:13 crc 
kubenswrapper[4820]: I0203 12:23:13.703417 4820 generic.go:334] "Generic (PLEG): container finished" podID="da1615c1-bd74-4ac2-91ca-4a00a31366e6" containerID="8bc54b3ac189cb3ee070826794c459490f69fdfd8617e469a79ea832d23b5376" exitCode=0 Feb 03 12:23:13 crc kubenswrapper[4820]: I0203 12:23:13.703489 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcjjbzx" event={"ID":"da1615c1-bd74-4ac2-91ca-4a00a31366e6","Type":"ContainerDied","Data":"8bc54b3ac189cb3ee070826794c459490f69fdfd8617e469a79ea832d23b5376"} Feb 03 12:23:14 crc kubenswrapper[4820]: I0203 12:23:14.958728 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcjjbzx" Feb 03 12:23:15 crc kubenswrapper[4820]: I0203 12:23:15.124869 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/da1615c1-bd74-4ac2-91ca-4a00a31366e6-util\") pod \"da1615c1-bd74-4ac2-91ca-4a00a31366e6\" (UID: \"da1615c1-bd74-4ac2-91ca-4a00a31366e6\") " Feb 03 12:23:15 crc kubenswrapper[4820]: I0203 12:23:15.124959 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-drssk\" (UniqueName: \"kubernetes.io/projected/da1615c1-bd74-4ac2-91ca-4a00a31366e6-kube-api-access-drssk\") pod \"da1615c1-bd74-4ac2-91ca-4a00a31366e6\" (UID: \"da1615c1-bd74-4ac2-91ca-4a00a31366e6\") " Feb 03 12:23:15 crc kubenswrapper[4820]: I0203 12:23:15.125000 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/da1615c1-bd74-4ac2-91ca-4a00a31366e6-bundle\") pod \"da1615c1-bd74-4ac2-91ca-4a00a31366e6\" (UID: \"da1615c1-bd74-4ac2-91ca-4a00a31366e6\") " Feb 03 12:23:15 crc kubenswrapper[4820]: I0203 12:23:15.126684 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/da1615c1-bd74-4ac2-91ca-4a00a31366e6-bundle" (OuterVolumeSpecName: "bundle") pod "da1615c1-bd74-4ac2-91ca-4a00a31366e6" (UID: "da1615c1-bd74-4ac2-91ca-4a00a31366e6"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 12:23:15 crc kubenswrapper[4820]: I0203 12:23:15.131270 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da1615c1-bd74-4ac2-91ca-4a00a31366e6-kube-api-access-drssk" (OuterVolumeSpecName: "kube-api-access-drssk") pod "da1615c1-bd74-4ac2-91ca-4a00a31366e6" (UID: "da1615c1-bd74-4ac2-91ca-4a00a31366e6"). InnerVolumeSpecName "kube-api-access-drssk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:23:15 crc kubenswrapper[4820]: I0203 12:23:15.161916 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/da1615c1-bd74-4ac2-91ca-4a00a31366e6-util" (OuterVolumeSpecName: "util") pod "da1615c1-bd74-4ac2-91ca-4a00a31366e6" (UID: "da1615c1-bd74-4ac2-91ca-4a00a31366e6"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 12:23:15 crc kubenswrapper[4820]: I0203 12:23:15.227193 4820 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/da1615c1-bd74-4ac2-91ca-4a00a31366e6-util\") on node \"crc\" DevicePath \"\"" Feb 03 12:23:15 crc kubenswrapper[4820]: I0203 12:23:15.227250 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-drssk\" (UniqueName: \"kubernetes.io/projected/da1615c1-bd74-4ac2-91ca-4a00a31366e6-kube-api-access-drssk\") on node \"crc\" DevicePath \"\"" Feb 03 12:23:15 crc kubenswrapper[4820]: I0203 12:23:15.227265 4820 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/da1615c1-bd74-4ac2-91ca-4a00a31366e6-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 12:23:15 crc kubenswrapper[4820]: I0203 12:23:15.720119 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcjjbzx" event={"ID":"da1615c1-bd74-4ac2-91ca-4a00a31366e6","Type":"ContainerDied","Data":"cd42e807c79abd939fadf894458bdd4f0d56708a48ac14cebbc26783ffa5a638"} Feb 03 12:23:15 crc kubenswrapper[4820]: I0203 12:23:15.720159 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cd42e807c79abd939fadf894458bdd4f0d56708a48ac14cebbc26783ffa5a638" Feb 03 12:23:15 crc kubenswrapper[4820]: I0203 12:23:15.720182 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcjjbzx" Feb 03 12:23:24 crc kubenswrapper[4820]: I0203 12:23:24.076754 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-7cbbb967bd-w5q2v"] Feb 03 12:23:24 crc kubenswrapper[4820]: E0203 12:23:24.077509 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da1615c1-bd74-4ac2-91ca-4a00a31366e6" containerName="pull" Feb 03 12:23:24 crc kubenswrapper[4820]: I0203 12:23:24.077549 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="da1615c1-bd74-4ac2-91ca-4a00a31366e6" containerName="pull" Feb 03 12:23:24 crc kubenswrapper[4820]: E0203 12:23:24.077564 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b06753a3-652a-4acc-b294-3ccaa5b0cb99" containerName="console" Feb 03 12:23:24 crc kubenswrapper[4820]: I0203 12:23:24.077570 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="b06753a3-652a-4acc-b294-3ccaa5b0cb99" containerName="console" Feb 03 12:23:24 crc kubenswrapper[4820]: E0203 12:23:24.077589 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da1615c1-bd74-4ac2-91ca-4a00a31366e6" containerName="extract" Feb 03 12:23:24 crc kubenswrapper[4820]: I0203 12:23:24.077596 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="da1615c1-bd74-4ac2-91ca-4a00a31366e6" containerName="extract" Feb 03 12:23:24 crc kubenswrapper[4820]: E0203 12:23:24.077604 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da1615c1-bd74-4ac2-91ca-4a00a31366e6" containerName="util" Feb 03 12:23:24 crc kubenswrapper[4820]: I0203 12:23:24.077609 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="da1615c1-bd74-4ac2-91ca-4a00a31366e6" containerName="util" Feb 03 12:23:24 crc kubenswrapper[4820]: I0203 12:23:24.077740 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="da1615c1-bd74-4ac2-91ca-4a00a31366e6" containerName="extract" Feb 
03 12:23:24 crc kubenswrapper[4820]: I0203 12:23:24.077755 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="b06753a3-652a-4acc-b294-3ccaa5b0cb99" containerName="console" Feb 03 12:23:24 crc kubenswrapper[4820]: I0203 12:23:24.078193 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-7cbbb967bd-w5q2v" Feb 03 12:23:24 crc kubenswrapper[4820]: I0203 12:23:24.080728 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Feb 03 12:23:24 crc kubenswrapper[4820]: I0203 12:23:24.080823 4820 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-hn2zv" Feb 03 12:23:24 crc kubenswrapper[4820]: I0203 12:23:24.080915 4820 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Feb 03 12:23:24 crc kubenswrapper[4820]: I0203 12:23:24.081865 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Feb 03 12:23:24 crc kubenswrapper[4820]: I0203 12:23:24.085850 4820 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Feb 03 12:23:24 crc kubenswrapper[4820]: I0203 12:23:24.107919 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-7cbbb967bd-w5q2v"] Feb 03 12:23:24 crc kubenswrapper[4820]: I0203 12:23:24.274604 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/15d57aea-1890-4499-9c6b-ab4af2e3715c-apiservice-cert\") pod \"metallb-operator-controller-manager-7cbbb967bd-w5q2v\" (UID: \"15d57aea-1890-4499-9c6b-ab4af2e3715c\") " pod="metallb-system/metallb-operator-controller-manager-7cbbb967bd-w5q2v" Feb 03 12:23:24 crc kubenswrapper[4820]: I0203 12:23:24.274661 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cfhh9\" (UniqueName: \"kubernetes.io/projected/15d57aea-1890-4499-9c6b-ab4af2e3715c-kube-api-access-cfhh9\") pod \"metallb-operator-controller-manager-7cbbb967bd-w5q2v\" (UID: \"15d57aea-1890-4499-9c6b-ab4af2e3715c\") " pod="metallb-system/metallb-operator-controller-manager-7cbbb967bd-w5q2v" Feb 03 12:23:24 crc kubenswrapper[4820]: I0203 12:23:24.274785 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/15d57aea-1890-4499-9c6b-ab4af2e3715c-webhook-cert\") pod \"metallb-operator-controller-manager-7cbbb967bd-w5q2v\" (UID: \"15d57aea-1890-4499-9c6b-ab4af2e3715c\") " pod="metallb-system/metallb-operator-controller-manager-7cbbb967bd-w5q2v" Feb 03 12:23:24 crc kubenswrapper[4820]: I0203 12:23:24.376335 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/15d57aea-1890-4499-9c6b-ab4af2e3715c-apiservice-cert\") pod \"metallb-operator-controller-manager-7cbbb967bd-w5q2v\" (UID: \"15d57aea-1890-4499-9c6b-ab4af2e3715c\") " pod="metallb-system/metallb-operator-controller-manager-7cbbb967bd-w5q2v" Feb 03 12:23:24 crc kubenswrapper[4820]: I0203 12:23:24.376388 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cfhh9\" (UniqueName: 
\"kubernetes.io/projected/15d57aea-1890-4499-9c6b-ab4af2e3715c-kube-api-access-cfhh9\") pod \"metallb-operator-controller-manager-7cbbb967bd-w5q2v\" (UID: \"15d57aea-1890-4499-9c6b-ab4af2e3715c\") " pod="metallb-system/metallb-operator-controller-manager-7cbbb967bd-w5q2v" Feb 03 12:23:24 crc kubenswrapper[4820]: I0203 12:23:24.376450 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/15d57aea-1890-4499-9c6b-ab4af2e3715c-webhook-cert\") pod \"metallb-operator-controller-manager-7cbbb967bd-w5q2v\" (UID: \"15d57aea-1890-4499-9c6b-ab4af2e3715c\") " pod="metallb-system/metallb-operator-controller-manager-7cbbb967bd-w5q2v" Feb 03 12:23:24 crc kubenswrapper[4820]: I0203 12:23:24.383219 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/15d57aea-1890-4499-9c6b-ab4af2e3715c-apiservice-cert\") pod \"metallb-operator-controller-manager-7cbbb967bd-w5q2v\" (UID: \"15d57aea-1890-4499-9c6b-ab4af2e3715c\") " pod="metallb-system/metallb-operator-controller-manager-7cbbb967bd-w5q2v" Feb 03 12:23:24 crc kubenswrapper[4820]: I0203 12:23:24.386481 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/15d57aea-1890-4499-9c6b-ab4af2e3715c-webhook-cert\") pod \"metallb-operator-controller-manager-7cbbb967bd-w5q2v\" (UID: \"15d57aea-1890-4499-9c6b-ab4af2e3715c\") " pod="metallb-system/metallb-operator-controller-manager-7cbbb967bd-w5q2v" Feb 03 12:23:24 crc kubenswrapper[4820]: I0203 12:23:24.394382 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cfhh9\" (UniqueName: \"kubernetes.io/projected/15d57aea-1890-4499-9c6b-ab4af2e3715c-kube-api-access-cfhh9\") pod \"metallb-operator-controller-manager-7cbbb967bd-w5q2v\" (UID: \"15d57aea-1890-4499-9c6b-ab4af2e3715c\") " pod="metallb-system/metallb-operator-controller-manager-7cbbb967bd-w5q2v" Feb 03 12:23:24 crc kubenswrapper[4820]: I0203 12:23:24.394770 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-7cbbb967bd-w5q2v" Feb 03 12:23:24 crc kubenswrapper[4820]: I0203 12:23:24.442400 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-84b6f7d797-4wm8w"] Feb 03 12:23:24 crc kubenswrapper[4820]: I0203 12:23:24.443208 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-84b6f7d797-4wm8w" Feb 03 12:23:24 crc kubenswrapper[4820]: I0203 12:23:24.449282 4820 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Feb 03 12:23:24 crc kubenswrapper[4820]: I0203 12:23:24.449457 4820 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-d2k8k" Feb 03 12:23:24 crc kubenswrapper[4820]: I0203 12:23:24.449740 4820 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Feb 03 12:23:24 crc kubenswrapper[4820]: I0203 12:23:24.468752 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-84b6f7d797-4wm8w"] Feb 03 12:23:24 crc kubenswrapper[4820]: I0203 12:23:24.501818 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/50906228-b0d7-4552-916a-b4a010b7b346-webhook-cert\") pod \"metallb-operator-webhook-server-84b6f7d797-4wm8w\" (UID: \"50906228-b0d7-4552-916a-b4a010b7b346\") " pod="metallb-system/metallb-operator-webhook-server-84b6f7d797-4wm8w" Feb 03 12:23:24 crc kubenswrapper[4820]: I0203 12:23:24.501874 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/50906228-b0d7-4552-916a-b4a010b7b346-apiservice-cert\") pod \"metallb-operator-webhook-server-84b6f7d797-4wm8w\" (UID: \"50906228-b0d7-4552-916a-b4a010b7b346\") " pod="metallb-system/metallb-operator-webhook-server-84b6f7d797-4wm8w" Feb 03 12:23:24 crc kubenswrapper[4820]: I0203 12:23:24.502010 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k85mw\" (UniqueName: \"kubernetes.io/projected/50906228-b0d7-4552-916a-b4a010b7b346-kube-api-access-k85mw\") pod \"metallb-operator-webhook-server-84b6f7d797-4wm8w\" (UID: \"50906228-b0d7-4552-916a-b4a010b7b346\") " pod="metallb-system/metallb-operator-webhook-server-84b6f7d797-4wm8w" Feb 03 12:23:24 crc kubenswrapper[4820]: I0203 12:23:24.604238 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/50906228-b0d7-4552-916a-b4a010b7b346-webhook-cert\") pod \"metallb-operator-webhook-server-84b6f7d797-4wm8w\" (UID: \"50906228-b0d7-4552-916a-b4a010b7b346\") " pod="metallb-system/metallb-operator-webhook-server-84b6f7d797-4wm8w" Feb 03 12:23:24 crc kubenswrapper[4820]: I0203 12:23:24.604297 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/50906228-b0d7-4552-916a-b4a010b7b346-apiservice-cert\") pod \"metallb-operator-webhook-server-84b6f7d797-4wm8w\" (UID: \"50906228-b0d7-4552-916a-b4a010b7b346\") " pod="metallb-system/metallb-operator-webhook-server-84b6f7d797-4wm8w" Feb 03 12:23:24 crc kubenswrapper[4820]: I0203 12:23:24.604352 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k85mw\" (UniqueName: \"kubernetes.io/projected/50906228-b0d7-4552-916a-b4a010b7b346-kube-api-access-k85mw\") pod \"metallb-operator-webhook-server-84b6f7d797-4wm8w\" (UID: \"50906228-b0d7-4552-916a-b4a010b7b346\") " pod="metallb-system/metallb-operator-webhook-server-84b6f7d797-4wm8w" Feb 03 12:23:24 crc kubenswrapper[4820]: I0203 
12:23:24.610621 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/50906228-b0d7-4552-916a-b4a010b7b346-webhook-cert\") pod \"metallb-operator-webhook-server-84b6f7d797-4wm8w\" (UID: \"50906228-b0d7-4552-916a-b4a010b7b346\") " pod="metallb-system/metallb-operator-webhook-server-84b6f7d797-4wm8w" Feb 03 12:23:24 crc kubenswrapper[4820]: I0203 12:23:24.627808 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/50906228-b0d7-4552-916a-b4a010b7b346-apiservice-cert\") pod \"metallb-operator-webhook-server-84b6f7d797-4wm8w\" (UID: \"50906228-b0d7-4552-916a-b4a010b7b346\") " pod="metallb-system/metallb-operator-webhook-server-84b6f7d797-4wm8w" Feb 03 12:23:24 crc kubenswrapper[4820]: I0203 12:23:24.633763 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k85mw\" (UniqueName: \"kubernetes.io/projected/50906228-b0d7-4552-916a-b4a010b7b346-kube-api-access-k85mw\") pod \"metallb-operator-webhook-server-84b6f7d797-4wm8w\" (UID: \"50906228-b0d7-4552-916a-b4a010b7b346\") " pod="metallb-system/metallb-operator-webhook-server-84b6f7d797-4wm8w" Feb 03 12:23:24 crc kubenswrapper[4820]: I0203 12:23:24.814537 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-84b6f7d797-4wm8w" Feb 03 12:23:24 crc kubenswrapper[4820]: I0203 12:23:24.846091 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-7cbbb967bd-w5q2v"] Feb 03 12:23:24 crc kubenswrapper[4820]: W0203 12:23:24.862137 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod15d57aea_1890_4499_9c6b_ab4af2e3715c.slice/crio-61a36bde829653e193b8c7d122ff5929d49a81b1cc81486b23d3411f9194e120 WatchSource:0}: Error finding container 61a36bde829653e193b8c7d122ff5929d49a81b1cc81486b23d3411f9194e120: Status 404 returned error can't find the container with id 61a36bde829653e193b8c7d122ff5929d49a81b1cc81486b23d3411f9194e120 Feb 03 12:23:25 crc kubenswrapper[4820]: I0203 12:23:25.243845 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-84b6f7d797-4wm8w"] Feb 03 12:23:25 crc kubenswrapper[4820]: W0203 12:23:25.247103 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod50906228_b0d7_4552_916a_b4a010b7b346.slice/crio-e7699ea31f5bf5cec0ed03d59505bb3a08df2af13a830a825ecbb80a9ded07b1 WatchSource:0}: Error finding container e7699ea31f5bf5cec0ed03d59505bb3a08df2af13a830a825ecbb80a9ded07b1: Status 404 returned error can't find the container with id e7699ea31f5bf5cec0ed03d59505bb3a08df2af13a830a825ecbb80a9ded07b1 Feb 03 12:23:25 crc kubenswrapper[4820]: I0203 12:23:25.804317 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-84b6f7d797-4wm8w" event={"ID":"50906228-b0d7-4552-916a-b4a010b7b346","Type":"ContainerStarted","Data":"e7699ea31f5bf5cec0ed03d59505bb3a08df2af13a830a825ecbb80a9ded07b1"} Feb 03 12:23:25 crc kubenswrapper[4820]: I0203 12:23:25.806335 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-7cbbb967bd-w5q2v" 
event={"ID":"15d57aea-1890-4499-9c6b-ab4af2e3715c","Type":"ContainerStarted","Data":"61a36bde829653e193b8c7d122ff5929d49a81b1cc81486b23d3411f9194e120"} Feb 03 12:23:34 crc kubenswrapper[4820]: I0203 12:23:34.918427 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-84b6f7d797-4wm8w" event={"ID":"50906228-b0d7-4552-916a-b4a010b7b346","Type":"ContainerStarted","Data":"6fcfaa97aae1503fae559217d3a9226f2123645049684e387eb6b9238a741392"} Feb 03 12:23:34 crc kubenswrapper[4820]: I0203 12:23:34.919043 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-84b6f7d797-4wm8w" Feb 03 12:23:34 crc kubenswrapper[4820]: I0203 12:23:34.921240 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-7cbbb967bd-w5q2v" event={"ID":"15d57aea-1890-4499-9c6b-ab4af2e3715c","Type":"ContainerStarted","Data":"b83d68418ba7bb915987eacbc5b1a39ead07d84069d231de6f97e64e6d6b4379"} Feb 03 12:23:34 crc kubenswrapper[4820]: I0203 12:23:34.921388 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-7cbbb967bd-w5q2v" Feb 03 12:23:34 crc kubenswrapper[4820]: I0203 12:23:34.942961 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-84b6f7d797-4wm8w" podStartSLOduration=2.4106570290000002 podStartE2EDuration="10.942942268s" podCreationTimestamp="2026-02-03 12:23:24 +0000 UTC" firstStartedPulling="2026-02-03 12:23:25.250107692 +0000 UTC m=+1122.773183556" lastFinishedPulling="2026-02-03 12:23:33.782392931 +0000 UTC m=+1131.305468795" observedRunningTime="2026-02-03 12:23:34.940222374 +0000 UTC m=+1132.463298238" watchObservedRunningTime="2026-02-03 12:23:34.942942268 +0000 UTC m=+1132.466018142" Feb 03 12:23:34 crc kubenswrapper[4820]: I0203 12:23:34.961679 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-7cbbb967bd-w5q2v" podStartSLOduration=2.064022557 podStartE2EDuration="10.961661913s" podCreationTimestamp="2026-02-03 12:23:24 +0000 UTC" firstStartedPulling="2026-02-03 12:23:24.866317436 +0000 UTC m=+1122.389393300" lastFinishedPulling="2026-02-03 12:23:33.763956792 +0000 UTC m=+1131.287032656" observedRunningTime="2026-02-03 12:23:34.959536365 +0000 UTC m=+1132.482612229" watchObservedRunningTime="2026-02-03 12:23:34.961661913 +0000 UTC m=+1132.484737777" Feb 03 12:23:44 crc kubenswrapper[4820]: I0203 12:23:44.826173 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-84b6f7d797-4wm8w" Feb 03 12:24:01 crc kubenswrapper[4820]: I0203 12:24:01.388304 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qj7xr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 12:24:01 crc kubenswrapper[4820]: I0203 12:24:01.388995 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 12:24:04 crc kubenswrapper[4820]: 
I0203 12:24:04.398280 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-7cbbb967bd-w5q2v" Feb 03 12:24:05 crc kubenswrapper[4820]: I0203 12:24:05.270903 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-d9c5m"] Feb 03 12:24:05 crc kubenswrapper[4820]: I0203 12:24:05.272197 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-d9c5m" Feb 03 12:24:05 crc kubenswrapper[4820]: I0203 12:24:05.279278 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-48tvq"] Feb 03 12:24:05 crc kubenswrapper[4820]: I0203 12:24:05.282312 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-48tvq" Feb 03 12:24:05 crc kubenswrapper[4820]: I0203 12:24:05.295986 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-d9c5m"] Feb 03 12:24:05 crc kubenswrapper[4820]: I0203 12:24:05.409025 4820 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Feb 03 12:24:05 crc kubenswrapper[4820]: I0203 12:24:05.409238 4820 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-mxrdb" Feb 03 12:24:05 crc kubenswrapper[4820]: I0203 12:24:05.409637 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Feb 03 12:24:05 crc kubenswrapper[4820]: I0203 12:24:05.412075 4820 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Feb 03 12:24:05 crc kubenswrapper[4820]: I0203 12:24:05.445000 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/21bb4d95-ca5f-4d31-b7e6-45e04a4e84f0-metrics-certs\") pod \"frr-k8s-48tvq\" (UID: \"21bb4d95-ca5f-4d31-b7e6-45e04a4e84f0\") " pod="metallb-system/frr-k8s-48tvq" Feb 03 12:24:05 crc kubenswrapper[4820]: I0203 12:24:05.445098 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/21bb4d95-ca5f-4d31-b7e6-45e04a4e84f0-reloader\") pod \"frr-k8s-48tvq\" (UID: \"21bb4d95-ca5f-4d31-b7e6-45e04a4e84f0\") " pod="metallb-system/frr-k8s-48tvq" Feb 03 12:24:05 crc kubenswrapper[4820]: I0203 12:24:05.445125 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/21bb4d95-ca5f-4d31-b7e6-45e04a4e84f0-frr-conf\") pod \"frr-k8s-48tvq\" (UID: \"21bb4d95-ca5f-4d31-b7e6-45e04a4e84f0\") " pod="metallb-system/frr-k8s-48tvq" Feb 03 12:24:05 crc kubenswrapper[4820]: I0203 12:24:05.445149 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/11969ac0-96d5-4195-bfe8-f619e11db963-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-d9c5m\" (UID: \"11969ac0-96d5-4195-bfe8-f619e11db963\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-d9c5m" Feb 03 12:24:05 crc kubenswrapper[4820]: I0203 12:24:05.445193 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/21bb4d95-ca5f-4d31-b7e6-45e04a4e84f0-frr-sockets\") pod 
\"frr-k8s-48tvq\" (UID: \"21bb4d95-ca5f-4d31-b7e6-45e04a4e84f0\") " pod="metallb-system/frr-k8s-48tvq" Feb 03 12:24:05 crc kubenswrapper[4820]: I0203 12:24:05.445237 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/21bb4d95-ca5f-4d31-b7e6-45e04a4e84f0-metrics\") pod \"frr-k8s-48tvq\" (UID: \"21bb4d95-ca5f-4d31-b7e6-45e04a4e84f0\") " pod="metallb-system/frr-k8s-48tvq" Feb 03 12:24:05 crc kubenswrapper[4820]: I0203 12:24:05.445269 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2n9sc\" (UniqueName: \"kubernetes.io/projected/21bb4d95-ca5f-4d31-b7e6-45e04a4e84f0-kube-api-access-2n9sc\") pod \"frr-k8s-48tvq\" (UID: \"21bb4d95-ca5f-4d31-b7e6-45e04a4e84f0\") " pod="metallb-system/frr-k8s-48tvq" Feb 03 12:24:05 crc kubenswrapper[4820]: I0203 12:24:05.445302 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/21bb4d95-ca5f-4d31-b7e6-45e04a4e84f0-frr-startup\") pod \"frr-k8s-48tvq\" (UID: \"21bb4d95-ca5f-4d31-b7e6-45e04a4e84f0\") " pod="metallb-system/frr-k8s-48tvq" Feb 03 12:24:05 crc kubenswrapper[4820]: I0203 12:24:05.445456 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pktc6\" (UniqueName: \"kubernetes.io/projected/11969ac0-96d5-4195-bfe8-f619e11db963-kube-api-access-pktc6\") pod \"frr-k8s-webhook-server-7df86c4f6c-d9c5m\" (UID: \"11969ac0-96d5-4195-bfe8-f619e11db963\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-d9c5m" Feb 03 12:24:05 crc kubenswrapper[4820]: I0203 12:24:05.521782 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-scj8c"] Feb 03 12:24:05 crc kubenswrapper[4820]: I0203 12:24:05.523120 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-scj8c" Feb 03 12:24:05 crc kubenswrapper[4820]: I0203 12:24:05.531197 4820 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Feb 03 12:24:05 crc kubenswrapper[4820]: I0203 12:24:05.531310 4820 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Feb 03 12:24:05 crc kubenswrapper[4820]: I0203 12:24:05.531871 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Feb 03 12:24:05 crc kubenswrapper[4820]: I0203 12:24:05.532253 4820 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-vschs" Feb 03 12:24:05 crc kubenswrapper[4820]: I0203 12:24:05.543369 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-6968d8fdc4-bl7d9"] Feb 03 12:24:05 crc kubenswrapper[4820]: I0203 12:24:05.544616 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6968d8fdc4-bl7d9" Feb 03 12:24:05 crc kubenswrapper[4820]: I0203 12:24:05.548137 4820 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Feb 03 12:24:05 crc kubenswrapper[4820]: I0203 12:24:05.548714 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/21bb4d95-ca5f-4d31-b7e6-45e04a4e84f0-metrics-certs\") pod \"frr-k8s-48tvq\" (UID: \"21bb4d95-ca5f-4d31-b7e6-45e04a4e84f0\") " pod="metallb-system/frr-k8s-48tvq" Feb 03 12:24:05 crc kubenswrapper[4820]: I0203 12:24:05.548775 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/21bb4d95-ca5f-4d31-b7e6-45e04a4e84f0-reloader\") pod \"frr-k8s-48tvq\" (UID: \"21bb4d95-ca5f-4d31-b7e6-45e04a4e84f0\") " pod="metallb-system/frr-k8s-48tvq" Feb 03 12:24:05 crc kubenswrapper[4820]: I0203 12:24:05.548821 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/21bb4d95-ca5f-4d31-b7e6-45e04a4e84f0-frr-conf\") pod \"frr-k8s-48tvq\" (UID: \"21bb4d95-ca5f-4d31-b7e6-45e04a4e84f0\") " pod="metallb-system/frr-k8s-48tvq" Feb 03 12:24:05 crc kubenswrapper[4820]: I0203 12:24:05.549064 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/11969ac0-96d5-4195-bfe8-f619e11db963-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-d9c5m\" (UID: \"11969ac0-96d5-4195-bfe8-f619e11db963\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-d9c5m" Feb 03 12:24:05 crc kubenswrapper[4820]: E0203 12:24:05.549110 4820 secret.go:188] Couldn't get secret metallb-system/frr-k8s-certs-secret: secret "frr-k8s-certs-secret" not found Feb 03 12:24:05 crc kubenswrapper[4820]: I0203 12:24:05.549157 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/21bb4d95-ca5f-4d31-b7e6-45e04a4e84f0-frr-sockets\") pod \"frr-k8s-48tvq\" (UID: \"21bb4d95-ca5f-4d31-b7e6-45e04a4e84f0\") " pod="metallb-system/frr-k8s-48tvq" Feb 03 12:24:05 crc kubenswrapper[4820]: E0203 12:24:05.549218 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/21bb4d95-ca5f-4d31-b7e6-45e04a4e84f0-metrics-certs podName:21bb4d95-ca5f-4d31-b7e6-45e04a4e84f0 nodeName:}" failed. No retries permitted until 2026-02-03 12:24:06.049192717 +0000 UTC m=+1163.572268641 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/21bb4d95-ca5f-4d31-b7e6-45e04a4e84f0-metrics-certs") pod "frr-k8s-48tvq" (UID: "21bb4d95-ca5f-4d31-b7e6-45e04a4e84f0") : secret "frr-k8s-certs-secret" not found Feb 03 12:24:05 crc kubenswrapper[4820]: I0203 12:24:05.549253 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/21bb4d95-ca5f-4d31-b7e6-45e04a4e84f0-metrics\") pod \"frr-k8s-48tvq\" (UID: \"21bb4d95-ca5f-4d31-b7e6-45e04a4e84f0\") " pod="metallb-system/frr-k8s-48tvq" Feb 03 12:24:05 crc kubenswrapper[4820]: I0203 12:24:05.549302 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2n9sc\" (UniqueName: \"kubernetes.io/projected/21bb4d95-ca5f-4d31-b7e6-45e04a4e84f0-kube-api-access-2n9sc\") pod \"frr-k8s-48tvq\" (UID: \"21bb4d95-ca5f-4d31-b7e6-45e04a4e84f0\") " pod="metallb-system/frr-k8s-48tvq" Feb 03 12:24:05 crc kubenswrapper[4820]: I0203 12:24:05.549339 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/21bb4d95-ca5f-4d31-b7e6-45e04a4e84f0-frr-startup\") pod \"frr-k8s-48tvq\" (UID: \"21bb4d95-ca5f-4d31-b7e6-45e04a4e84f0\") " pod="metallb-system/frr-k8s-48tvq" Feb 03 12:24:05 crc kubenswrapper[4820]: I0203 12:24:05.549384 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pktc6\" (UniqueName: \"kubernetes.io/projected/11969ac0-96d5-4195-bfe8-f619e11db963-kube-api-access-pktc6\") pod \"frr-k8s-webhook-server-7df86c4f6c-d9c5m\" (UID: \"11969ac0-96d5-4195-bfe8-f619e11db963\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-d9c5m" Feb 03 12:24:05 crc kubenswrapper[4820]: I0203 12:24:05.549681 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/21bb4d95-ca5f-4d31-b7e6-45e04a4e84f0-frr-sockets\") pod \"frr-k8s-48tvq\" (UID: \"21bb4d95-ca5f-4d31-b7e6-45e04a4e84f0\") " pod="metallb-system/frr-k8s-48tvq" Feb 03 12:24:05 crc kubenswrapper[4820]: I0203 12:24:05.549783 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/21bb4d95-ca5f-4d31-b7e6-45e04a4e84f0-frr-conf\") pod \"frr-k8s-48tvq\" (UID: \"21bb4d95-ca5f-4d31-b7e6-45e04a4e84f0\") " pod="metallb-system/frr-k8s-48tvq" Feb 03 12:24:05 crc kubenswrapper[4820]: I0203 12:24:05.549940 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/21bb4d95-ca5f-4d31-b7e6-45e04a4e84f0-reloader\") pod \"frr-k8s-48tvq\" (UID: \"21bb4d95-ca5f-4d31-b7e6-45e04a4e84f0\") " pod="metallb-system/frr-k8s-48tvq" Feb 03 12:24:05 crc kubenswrapper[4820]: I0203 12:24:05.550355 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/21bb4d95-ca5f-4d31-b7e6-45e04a4e84f0-metrics\") pod \"frr-k8s-48tvq\" (UID: \"21bb4d95-ca5f-4d31-b7e6-45e04a4e84f0\") " pod="metallb-system/frr-k8s-48tvq" Feb 03 12:24:05 crc kubenswrapper[4820]: I0203 12:24:05.550773 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/21bb4d95-ca5f-4d31-b7e6-45e04a4e84f0-frr-startup\") pod \"frr-k8s-48tvq\" (UID: \"21bb4d95-ca5f-4d31-b7e6-45e04a4e84f0\") " pod="metallb-system/frr-k8s-48tvq" Feb 03 12:24:05 crc kubenswrapper[4820]: I0203 
12:24:05.559055 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/11969ac0-96d5-4195-bfe8-f619e11db963-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-d9c5m\" (UID: \"11969ac0-96d5-4195-bfe8-f619e11db963\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-d9c5m" Feb 03 12:24:05 crc kubenswrapper[4820]: I0203 12:24:05.577575 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pktc6\" (UniqueName: \"kubernetes.io/projected/11969ac0-96d5-4195-bfe8-f619e11db963-kube-api-access-pktc6\") pod \"frr-k8s-webhook-server-7df86c4f6c-d9c5m\" (UID: \"11969ac0-96d5-4195-bfe8-f619e11db963\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-d9c5m" Feb 03 12:24:05 crc kubenswrapper[4820]: I0203 12:24:05.579339 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-bl7d9"] Feb 03 12:24:05 crc kubenswrapper[4820]: I0203 12:24:05.583443 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2n9sc\" (UniqueName: \"kubernetes.io/projected/21bb4d95-ca5f-4d31-b7e6-45e04a4e84f0-kube-api-access-2n9sc\") pod \"frr-k8s-48tvq\" (UID: \"21bb4d95-ca5f-4d31-b7e6-45e04a4e84f0\") " pod="metallb-system/frr-k8s-48tvq" Feb 03 12:24:05 crc kubenswrapper[4820]: I0203 12:24:05.651252 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kzgck\" (UniqueName: \"kubernetes.io/projected/8bc51efb-561f-4e59-960c-99f18a5ef7d8-kube-api-access-kzgck\") pod \"speaker-scj8c\" (UID: \"8bc51efb-561f-4e59-960c-99f18a5ef7d8\") " pod="metallb-system/speaker-scj8c" Feb 03 12:24:05 crc kubenswrapper[4820]: I0203 12:24:05.651304 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8bc51efb-561f-4e59-960c-99f18a5ef7d8-metrics-certs\") pod \"speaker-scj8c\" (UID: \"8bc51efb-561f-4e59-960c-99f18a5ef7d8\") " pod="metallb-system/speaker-scj8c" Feb 03 12:24:05 crc kubenswrapper[4820]: I0203 12:24:05.651470 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9sfd2\" (UniqueName: \"kubernetes.io/projected/a8856687-50aa-469b-acca-0c2e83d3a95a-kube-api-access-9sfd2\") pod \"controller-6968d8fdc4-bl7d9\" (UID: \"a8856687-50aa-469b-acca-0c2e83d3a95a\") " pod="metallb-system/controller-6968d8fdc4-bl7d9" Feb 03 12:24:05 crc kubenswrapper[4820]: I0203 12:24:05.651533 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a8856687-50aa-469b-acca-0c2e83d3a95a-cert\") pod \"controller-6968d8fdc4-bl7d9\" (UID: \"a8856687-50aa-469b-acca-0c2e83d3a95a\") " pod="metallb-system/controller-6968d8fdc4-bl7d9" Feb 03 12:24:05 crc kubenswrapper[4820]: I0203 12:24:05.651604 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/8bc51efb-561f-4e59-960c-99f18a5ef7d8-memberlist\") pod \"speaker-scj8c\" (UID: \"8bc51efb-561f-4e59-960c-99f18a5ef7d8\") " pod="metallb-system/speaker-scj8c" Feb 03 12:24:05 crc kubenswrapper[4820]: I0203 12:24:05.651656 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a8856687-50aa-469b-acca-0c2e83d3a95a-metrics-certs\") pod 
\"controller-6968d8fdc4-bl7d9\" (UID: \"a8856687-50aa-469b-acca-0c2e83d3a95a\") " pod="metallb-system/controller-6968d8fdc4-bl7d9" Feb 03 12:24:05 crc kubenswrapper[4820]: I0203 12:24:05.651681 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/8bc51efb-561f-4e59-960c-99f18a5ef7d8-metallb-excludel2\") pod \"speaker-scj8c\" (UID: \"8bc51efb-561f-4e59-960c-99f18a5ef7d8\") " pod="metallb-system/speaker-scj8c" Feb 03 12:24:05 crc kubenswrapper[4820]: I0203 12:24:05.724841 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-d9c5m" Feb 03 12:24:05 crc kubenswrapper[4820]: I0203 12:24:05.752641 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kzgck\" (UniqueName: \"kubernetes.io/projected/8bc51efb-561f-4e59-960c-99f18a5ef7d8-kube-api-access-kzgck\") pod \"speaker-scj8c\" (UID: \"8bc51efb-561f-4e59-960c-99f18a5ef7d8\") " pod="metallb-system/speaker-scj8c" Feb 03 12:24:05 crc kubenswrapper[4820]: I0203 12:24:05.752691 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8bc51efb-561f-4e59-960c-99f18a5ef7d8-metrics-certs\") pod \"speaker-scj8c\" (UID: \"8bc51efb-561f-4e59-960c-99f18a5ef7d8\") " pod="metallb-system/speaker-scj8c" Feb 03 12:24:05 crc kubenswrapper[4820]: I0203 12:24:05.752735 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9sfd2\" (UniqueName: \"kubernetes.io/projected/a8856687-50aa-469b-acca-0c2e83d3a95a-kube-api-access-9sfd2\") pod \"controller-6968d8fdc4-bl7d9\" (UID: \"a8856687-50aa-469b-acca-0c2e83d3a95a\") " pod="metallb-system/controller-6968d8fdc4-bl7d9" Feb 03 12:24:05 crc kubenswrapper[4820]: I0203 12:24:05.752769 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a8856687-50aa-469b-acca-0c2e83d3a95a-cert\") pod \"controller-6968d8fdc4-bl7d9\" (UID: \"a8856687-50aa-469b-acca-0c2e83d3a95a\") " pod="metallb-system/controller-6968d8fdc4-bl7d9" Feb 03 12:24:05 crc kubenswrapper[4820]: I0203 12:24:05.752792 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/8bc51efb-561f-4e59-960c-99f18a5ef7d8-memberlist\") pod \"speaker-scj8c\" (UID: \"8bc51efb-561f-4e59-960c-99f18a5ef7d8\") " pod="metallb-system/speaker-scj8c" Feb 03 12:24:05 crc kubenswrapper[4820]: I0203 12:24:05.752822 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a8856687-50aa-469b-acca-0c2e83d3a95a-metrics-certs\") pod \"controller-6968d8fdc4-bl7d9\" (UID: \"a8856687-50aa-469b-acca-0c2e83d3a95a\") " pod="metallb-system/controller-6968d8fdc4-bl7d9" Feb 03 12:24:05 crc kubenswrapper[4820]: I0203 12:24:05.752843 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/8bc51efb-561f-4e59-960c-99f18a5ef7d8-metallb-excludel2\") pod \"speaker-scj8c\" (UID: \"8bc51efb-561f-4e59-960c-99f18a5ef7d8\") " pod="metallb-system/speaker-scj8c" Feb 03 12:24:05 crc kubenswrapper[4820]: E0203 12:24:05.752968 4820 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Feb 03 12:24:05 crc 
kubenswrapper[4820]: E0203 12:24:05.753049 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8bc51efb-561f-4e59-960c-99f18a5ef7d8-memberlist podName:8bc51efb-561f-4e59-960c-99f18a5ef7d8 nodeName:}" failed. No retries permitted until 2026-02-03 12:24:06.253029998 +0000 UTC m=+1163.776105862 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/8bc51efb-561f-4e59-960c-99f18a5ef7d8-memberlist") pod "speaker-scj8c" (UID: "8bc51efb-561f-4e59-960c-99f18a5ef7d8") : secret "metallb-memberlist" not found Feb 03 12:24:05 crc kubenswrapper[4820]: E0203 12:24:05.753102 4820 secret.go:188] Couldn't get secret metallb-system/controller-certs-secret: secret "controller-certs-secret" not found Feb 03 12:24:05 crc kubenswrapper[4820]: E0203 12:24:05.753160 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a8856687-50aa-469b-acca-0c2e83d3a95a-metrics-certs podName:a8856687-50aa-469b-acca-0c2e83d3a95a nodeName:}" failed. No retries permitted until 2026-02-03 12:24:06.253143832 +0000 UTC m=+1163.776219696 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a8856687-50aa-469b-acca-0c2e83d3a95a-metrics-certs") pod "controller-6968d8fdc4-bl7d9" (UID: "a8856687-50aa-469b-acca-0c2e83d3a95a") : secret "controller-certs-secret" not found Feb 03 12:24:05 crc kubenswrapper[4820]: I0203 12:24:05.753763 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/8bc51efb-561f-4e59-960c-99f18a5ef7d8-metallb-excludel2\") pod \"speaker-scj8c\" (UID: \"8bc51efb-561f-4e59-960c-99f18a5ef7d8\") " pod="metallb-system/speaker-scj8c" Feb 03 12:24:05 crc kubenswrapper[4820]: I0203 12:24:05.756546 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8bc51efb-561f-4e59-960c-99f18a5ef7d8-metrics-certs\") pod \"speaker-scj8c\" (UID: \"8bc51efb-561f-4e59-960c-99f18a5ef7d8\") " pod="metallb-system/speaker-scj8c" Feb 03 12:24:05 crc kubenswrapper[4820]: I0203 12:24:05.761416 4820 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Feb 03 12:24:05 crc kubenswrapper[4820]: I0203 12:24:05.768289 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a8856687-50aa-469b-acca-0c2e83d3a95a-cert\") pod \"controller-6968d8fdc4-bl7d9\" (UID: \"a8856687-50aa-469b-acca-0c2e83d3a95a\") " pod="metallb-system/controller-6968d8fdc4-bl7d9" Feb 03 12:24:06 crc kubenswrapper[4820]: I0203 12:24:06.027285 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kzgck\" (UniqueName: \"kubernetes.io/projected/8bc51efb-561f-4e59-960c-99f18a5ef7d8-kube-api-access-kzgck\") pod \"speaker-scj8c\" (UID: \"8bc51efb-561f-4e59-960c-99f18a5ef7d8\") " pod="metallb-system/speaker-scj8c" Feb 03 12:24:06 crc kubenswrapper[4820]: I0203 12:24:06.041019 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9sfd2\" (UniqueName: \"kubernetes.io/projected/a8856687-50aa-469b-acca-0c2e83d3a95a-kube-api-access-9sfd2\") pod \"controller-6968d8fdc4-bl7d9\" (UID: \"a8856687-50aa-469b-acca-0c2e83d3a95a\") " pod="metallb-system/controller-6968d8fdc4-bl7d9" Feb 03 12:24:06 crc kubenswrapper[4820]: I0203 12:24:06.108744 4820 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/21bb4d95-ca5f-4d31-b7e6-45e04a4e84f0-metrics-certs\") pod \"frr-k8s-48tvq\" (UID: \"21bb4d95-ca5f-4d31-b7e6-45e04a4e84f0\") " pod="metallb-system/frr-k8s-48tvq" Feb 03 12:24:06 crc kubenswrapper[4820]: I0203 12:24:06.115052 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/21bb4d95-ca5f-4d31-b7e6-45e04a4e84f0-metrics-certs\") pod \"frr-k8s-48tvq\" (UID: \"21bb4d95-ca5f-4d31-b7e6-45e04a4e84f0\") " pod="metallb-system/frr-k8s-48tvq" Feb 03 12:24:06 crc kubenswrapper[4820]: I0203 12:24:06.322484 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/8bc51efb-561f-4e59-960c-99f18a5ef7d8-memberlist\") pod \"speaker-scj8c\" (UID: \"8bc51efb-561f-4e59-960c-99f18a5ef7d8\") " pod="metallb-system/speaker-scj8c" Feb 03 12:24:06 crc kubenswrapper[4820]: I0203 12:24:06.322802 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a8856687-50aa-469b-acca-0c2e83d3a95a-metrics-certs\") pod \"controller-6968d8fdc4-bl7d9\" (UID: \"a8856687-50aa-469b-acca-0c2e83d3a95a\") " pod="metallb-system/controller-6968d8fdc4-bl7d9" Feb 03 12:24:06 crc kubenswrapper[4820]: E0203 12:24:06.324667 4820 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Feb 03 12:24:06 crc kubenswrapper[4820]: E0203 12:24:06.324717 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8bc51efb-561f-4e59-960c-99f18a5ef7d8-memberlist podName:8bc51efb-561f-4e59-960c-99f18a5ef7d8 nodeName:}" failed. No retries permitted until 2026-02-03 12:24:07.324701664 +0000 UTC m=+1164.847777528 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/8bc51efb-561f-4e59-960c-99f18a5ef7d8-memberlist") pod "speaker-scj8c" (UID: "8bc51efb-561f-4e59-960c-99f18a5ef7d8") : secret "metallb-memberlist" not found Feb 03 12:24:06 crc kubenswrapper[4820]: E0203 12:24:06.325157 4820 secret.go:188] Couldn't get secret metallb-system/controller-certs-secret: secret "controller-certs-secret" not found Feb 03 12:24:06 crc kubenswrapper[4820]: E0203 12:24:06.325189 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a8856687-50aa-469b-acca-0c2e83d3a95a-metrics-certs podName:a8856687-50aa-469b-acca-0c2e83d3a95a nodeName:}" failed. No retries permitted until 2026-02-03 12:24:07.325179477 +0000 UTC m=+1164.848255341 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a8856687-50aa-469b-acca-0c2e83d3a95a-metrics-certs") pod "controller-6968d8fdc4-bl7d9" (UID: "a8856687-50aa-469b-acca-0c2e83d3a95a") : secret "controller-certs-secret" not found Feb 03 12:24:06 crc kubenswrapper[4820]: I0203 12:24:06.336961 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-48tvq" Feb 03 12:24:06 crc kubenswrapper[4820]: I0203 12:24:06.474884 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-d9c5m"] Feb 03 12:24:07 crc kubenswrapper[4820]: I0203 12:24:07.353758 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/8bc51efb-561f-4e59-960c-99f18a5ef7d8-memberlist\") pod \"speaker-scj8c\" (UID: \"8bc51efb-561f-4e59-960c-99f18a5ef7d8\") " pod="metallb-system/speaker-scj8c" Feb 03 12:24:07 crc kubenswrapper[4820]: I0203 12:24:07.353830 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a8856687-50aa-469b-acca-0c2e83d3a95a-metrics-certs\") pod \"controller-6968d8fdc4-bl7d9\" (UID: \"a8856687-50aa-469b-acca-0c2e83d3a95a\") " pod="metallb-system/controller-6968d8fdc4-bl7d9" Feb 03 12:24:07 crc kubenswrapper[4820]: E0203 12:24:07.353949 4820 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Feb 03 12:24:07 crc kubenswrapper[4820]: E0203 12:24:07.354016 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8bc51efb-561f-4e59-960c-99f18a5ef7d8-memberlist podName:8bc51efb-561f-4e59-960c-99f18a5ef7d8 nodeName:}" failed. No retries permitted until 2026-02-03 12:24:09.353999222 +0000 UTC m=+1166.877075076 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/8bc51efb-561f-4e59-960c-99f18a5ef7d8-memberlist") pod "speaker-scj8c" (UID: "8bc51efb-561f-4e59-960c-99f18a5ef7d8") : secret "metallb-memberlist" not found Feb 03 12:24:07 crc kubenswrapper[4820]: I0203 12:24:07.361487 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a8856687-50aa-469b-acca-0c2e83d3a95a-metrics-certs\") pod \"controller-6968d8fdc4-bl7d9\" (UID: \"a8856687-50aa-469b-acca-0c2e83d3a95a\") " pod="metallb-system/controller-6968d8fdc4-bl7d9" Feb 03 12:24:07 crc kubenswrapper[4820]: I0203 12:24:07.418782 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6968d8fdc4-bl7d9" Feb 03 12:24:07 crc kubenswrapper[4820]: I0203 12:24:07.472999 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-48tvq" event={"ID":"21bb4d95-ca5f-4d31-b7e6-45e04a4e84f0","Type":"ContainerStarted","Data":"0fc2d421fbc6c14f0976df451d4a07e6f8b584d076d3655166b6fca9333e212e"} Feb 03 12:24:07 crc kubenswrapper[4820]: I0203 12:24:07.474469 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-d9c5m" event={"ID":"11969ac0-96d5-4195-bfe8-f619e11db963","Type":"ContainerStarted","Data":"5b7910b5b355973df244b859e387708a27bb4bc99c912f62757ed4edfebe333d"} Feb 03 12:24:07 crc kubenswrapper[4820]: I0203 12:24:07.947679 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-bl7d9"] Feb 03 12:24:08 crc kubenswrapper[4820]: I0203 12:24:08.678068 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-bl7d9" event={"ID":"a8856687-50aa-469b-acca-0c2e83d3a95a","Type":"ContainerStarted","Data":"3b30fd41ae28402963721d6d8b39b27f03175f220a36e0cdcd51b2f748f5c5e6"} Feb 03 12:24:08 crc kubenswrapper[4820]: I0203 12:24:08.678384 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-bl7d9" event={"ID":"a8856687-50aa-469b-acca-0c2e83d3a95a","Type":"ContainerStarted","Data":"24f2ac4ffe88b130d60214778abba6e21f16a96560617294bce812fa28a5cf5a"} Feb 03 12:24:08 crc kubenswrapper[4820]: I0203 12:24:08.678395 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-bl7d9" event={"ID":"a8856687-50aa-469b-acca-0c2e83d3a95a","Type":"ContainerStarted","Data":"ff29087be4e85f8e0857e75dac276baa9ef16f51c7d0ff96af3795b3ef6c05eb"} Feb 03 12:24:08 crc kubenswrapper[4820]: I0203 12:24:08.679690 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6968d8fdc4-bl7d9" Feb 03 12:24:08 crc kubenswrapper[4820]: I0203 12:24:08.729283 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-6968d8fdc4-bl7d9" podStartSLOduration=3.729257393 podStartE2EDuration="3.729257393s" podCreationTimestamp="2026-02-03 12:24:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:24:08.728357869 +0000 UTC m=+1166.251433763" watchObservedRunningTime="2026-02-03 12:24:08.729257393 +0000 UTC m=+1166.252333257" Feb 03 12:24:09 crc kubenswrapper[4820]: I0203 12:24:09.371982 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/8bc51efb-561f-4e59-960c-99f18a5ef7d8-memberlist\") pod \"speaker-scj8c\" (UID: \"8bc51efb-561f-4e59-960c-99f18a5ef7d8\") " pod="metallb-system/speaker-scj8c" Feb 03 12:24:09 crc kubenswrapper[4820]: I0203 12:24:09.377847 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/8bc51efb-561f-4e59-960c-99f18a5ef7d8-memberlist\") pod \"speaker-scj8c\" (UID: \"8bc51efb-561f-4e59-960c-99f18a5ef7d8\") " pod="metallb-system/speaker-scj8c" Feb 03 12:24:09 crc kubenswrapper[4820]: I0203 12:24:09.439284 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-scj8c" Feb 03 12:24:09 crc kubenswrapper[4820]: W0203 12:24:09.505414 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8bc51efb_561f_4e59_960c_99f18a5ef7d8.slice/crio-52bf72a156b417c2aaf7405a71ed506f4c437843bc76a4d41c847eb3c21e772e WatchSource:0}: Error finding container 52bf72a156b417c2aaf7405a71ed506f4c437843bc76a4d41c847eb3c21e772e: Status 404 returned error can't find the container with id 52bf72a156b417c2aaf7405a71ed506f4c437843bc76a4d41c847eb3c21e772e Feb 03 12:24:09 crc kubenswrapper[4820]: I0203 12:24:09.707754 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-scj8c" event={"ID":"8bc51efb-561f-4e59-960c-99f18a5ef7d8","Type":"ContainerStarted","Data":"52bf72a156b417c2aaf7405a71ed506f4c437843bc76a4d41c847eb3c21e772e"} Feb 03 12:24:10 crc kubenswrapper[4820]: I0203 12:24:10.719280 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-scj8c" event={"ID":"8bc51efb-561f-4e59-960c-99f18a5ef7d8","Type":"ContainerStarted","Data":"0d059a81569f355b3984427dec8309eddb483565d16017220e84419acf2073b6"} Feb 03 12:24:11 crc kubenswrapper[4820]: I0203 12:24:11.740308 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-scj8c" event={"ID":"8bc51efb-561f-4e59-960c-99f18a5ef7d8","Type":"ContainerStarted","Data":"41596f708e6ccf7a514a03d122e464cb822d9371477ef87d90b32eda377011a7"} Feb 03 12:24:11 crc kubenswrapper[4820]: I0203 12:24:11.740968 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-scj8c" Feb 03 12:24:11 crc kubenswrapper[4820]: I0203 12:24:11.769017 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-scj8c" podStartSLOduration=6.768995485 podStartE2EDuration="6.768995485s" podCreationTimestamp="2026-02-03 12:24:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:24:11.766499478 +0000 UTC m=+1169.289575352" watchObservedRunningTime="2026-02-03 12:24:11.768995485 +0000 UTC m=+1169.292071369" Feb 03 12:24:18 crc kubenswrapper[4820]: I0203 12:24:18.837253 4820 generic.go:334] "Generic (PLEG): container finished" podID="21bb4d95-ca5f-4d31-b7e6-45e04a4e84f0" containerID="6d3b1d9ef07446544aa19231d2fe5d0c9bb61bcda480138a9eec8340bd88d451" exitCode=0 Feb 03 12:24:18 crc kubenswrapper[4820]: I0203 12:24:18.837324 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-48tvq" event={"ID":"21bb4d95-ca5f-4d31-b7e6-45e04a4e84f0","Type":"ContainerDied","Data":"6d3b1d9ef07446544aa19231d2fe5d0c9bb61bcda480138a9eec8340bd88d451"} Feb 03 12:24:18 crc kubenswrapper[4820]: I0203 12:24:18.855289 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-d9c5m" event={"ID":"11969ac0-96d5-4195-bfe8-f619e11db963","Type":"ContainerStarted","Data":"601e826db8555aed64c1a3781a4f2a866c1baa4640e5206a7483c80f1248ce1e"} Feb 03 12:24:18 crc kubenswrapper[4820]: I0203 12:24:18.855488 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-d9c5m" Feb 03 12:24:18 crc kubenswrapper[4820]: I0203 12:24:18.900327 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-d9c5m" podStartSLOduration=2.129491473 
podStartE2EDuration="13.900306811s" podCreationTimestamp="2026-02-03 12:24:05 +0000 UTC" firstStartedPulling="2026-02-03 12:24:06.487119785 +0000 UTC m=+1164.010195639" lastFinishedPulling="2026-02-03 12:24:18.257935103 +0000 UTC m=+1175.781010977" observedRunningTime="2026-02-03 12:24:18.896126678 +0000 UTC m=+1176.419202562" watchObservedRunningTime="2026-02-03 12:24:18.900306811 +0000 UTC m=+1176.423382675" Feb 03 12:24:19 crc kubenswrapper[4820]: I0203 12:24:19.865319 4820 generic.go:334] "Generic (PLEG): container finished" podID="21bb4d95-ca5f-4d31-b7e6-45e04a4e84f0" containerID="6e8455e01c96457d0e6def64c2b0d084a37fdf77bff9ce8b81878dc18f3965b5" exitCode=0 Feb 03 12:24:19 crc kubenswrapper[4820]: I0203 12:24:19.865441 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-48tvq" event={"ID":"21bb4d95-ca5f-4d31-b7e6-45e04a4e84f0","Type":"ContainerDied","Data":"6e8455e01c96457d0e6def64c2b0d084a37fdf77bff9ce8b81878dc18f3965b5"} Feb 03 12:24:20 crc kubenswrapper[4820]: I0203 12:24:20.876533 4820 generic.go:334] "Generic (PLEG): container finished" podID="21bb4d95-ca5f-4d31-b7e6-45e04a4e84f0" containerID="ee20ad7dd942c09d494239473c2fd175e321d47db419cfa83bc1b02f25c984f9" exitCode=0 Feb 03 12:24:20 crc kubenswrapper[4820]: I0203 12:24:20.876607 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-48tvq" event={"ID":"21bb4d95-ca5f-4d31-b7e6-45e04a4e84f0","Type":"ContainerDied","Data":"ee20ad7dd942c09d494239473c2fd175e321d47db419cfa83bc1b02f25c984f9"} Feb 03 12:24:21 crc kubenswrapper[4820]: I0203 12:24:21.898686 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-48tvq" event={"ID":"21bb4d95-ca5f-4d31-b7e6-45e04a4e84f0","Type":"ContainerStarted","Data":"d951c9dfdfd56bbdf2ee01efde8c3d047a11396700031743b8db149347fbc2bf"} Feb 03 12:24:21 crc kubenswrapper[4820]: I0203 12:24:21.899022 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-48tvq" event={"ID":"21bb4d95-ca5f-4d31-b7e6-45e04a4e84f0","Type":"ContainerStarted","Data":"3ffa1e7c643bd2a84479975bedc53c096e7bdd9244aac9917955d5426fd12e15"} Feb 03 12:24:21 crc kubenswrapper[4820]: I0203 12:24:21.899037 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-48tvq" event={"ID":"21bb4d95-ca5f-4d31-b7e6-45e04a4e84f0","Type":"ContainerStarted","Data":"61826f7326727b15c065467df65338dc9650a5e05eb62ec603062acc990ad133"} Feb 03 12:24:21 crc kubenswrapper[4820]: I0203 12:24:21.899046 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-48tvq" event={"ID":"21bb4d95-ca5f-4d31-b7e6-45e04a4e84f0","Type":"ContainerStarted","Data":"9736504013c99e643fa508b2c7cdbb72e3f26cbd177e40a5c50584050fe88f4d"} Feb 03 12:24:22 crc kubenswrapper[4820]: I0203 12:24:22.911109 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-48tvq" event={"ID":"21bb4d95-ca5f-4d31-b7e6-45e04a4e84f0","Type":"ContainerStarted","Data":"6dfa571a2f078f30c77a3b4db6982af8293109c83d0255c83764cfa30bafdf4d"} Feb 03 12:24:22 crc kubenswrapper[4820]: I0203 12:24:22.911167 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-48tvq" event={"ID":"21bb4d95-ca5f-4d31-b7e6-45e04a4e84f0","Type":"ContainerStarted","Data":"62a0a3e6a35544c31e9592ad26e74d63019587c666888723bba499aeba08467e"} Feb 03 12:24:22 crc kubenswrapper[4820]: I0203 12:24:22.912176 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-48tvq" Feb 03 12:24:22 crc 
kubenswrapper[4820]: I0203 12:24:22.939649 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-48tvq" podStartSLOduration=6.208427359 podStartE2EDuration="17.939605632s" podCreationTimestamp="2026-02-03 12:24:05 +0000 UTC" firstStartedPulling="2026-02-03 12:24:06.499380906 +0000 UTC m=+1164.022456770" lastFinishedPulling="2026-02-03 12:24:18.230559179 +0000 UTC m=+1175.753635043" observedRunningTime="2026-02-03 12:24:22.933932508 +0000 UTC m=+1180.457008402" watchObservedRunningTime="2026-02-03 12:24:22.939605632 +0000 UTC m=+1180.462681496" Feb 03 12:24:26 crc kubenswrapper[4820]: I0203 12:24:26.338069 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-48tvq" Feb 03 12:24:26 crc kubenswrapper[4820]: I0203 12:24:26.386708 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-48tvq" Feb 03 12:24:27 crc kubenswrapper[4820]: I0203 12:24:27.428036 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-6968d8fdc4-bl7d9" Feb 03 12:24:29 crc kubenswrapper[4820]: I0203 12:24:29.445311 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-scj8c" Feb 03 12:24:31 crc kubenswrapper[4820]: I0203 12:24:31.366374 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qj7xr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 12:24:31 crc kubenswrapper[4820]: I0203 12:24:31.366448 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 12:24:32 crc kubenswrapper[4820]: I0203 12:24:32.925750 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-wm9x4"] Feb 03 12:24:32 crc kubenswrapper[4820]: I0203 12:24:32.926861 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-wm9x4" Feb 03 12:24:32 crc kubenswrapper[4820]: I0203 12:24:32.930675 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-26hzr" Feb 03 12:24:32 crc kubenswrapper[4820]: I0203 12:24:32.931067 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Feb 03 12:24:32 crc kubenswrapper[4820]: I0203 12:24:32.933030 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Feb 03 12:24:32 crc kubenswrapper[4820]: I0203 12:24:32.949434 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-wm9x4"] Feb 03 12:24:33 crc kubenswrapper[4820]: I0203 12:24:33.109783 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dtxdn\" (UniqueName: \"kubernetes.io/projected/ca81d0d7-9e45-4d78-a14e-0296c34a17ef-kube-api-access-dtxdn\") pod \"openstack-operator-index-wm9x4\" (UID: \"ca81d0d7-9e45-4d78-a14e-0296c34a17ef\") " pod="openstack-operators/openstack-operator-index-wm9x4" Feb 03 12:24:33 crc kubenswrapper[4820]: I0203 12:24:33.211569 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dtxdn\" (UniqueName: \"kubernetes.io/projected/ca81d0d7-9e45-4d78-a14e-0296c34a17ef-kube-api-access-dtxdn\") pod \"openstack-operator-index-wm9x4\" (UID: \"ca81d0d7-9e45-4d78-a14e-0296c34a17ef\") " pod="openstack-operators/openstack-operator-index-wm9x4" Feb 03 12:24:33 crc kubenswrapper[4820]: I0203 12:24:33.233435 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dtxdn\" (UniqueName: \"kubernetes.io/projected/ca81d0d7-9e45-4d78-a14e-0296c34a17ef-kube-api-access-dtxdn\") pod \"openstack-operator-index-wm9x4\" (UID: \"ca81d0d7-9e45-4d78-a14e-0296c34a17ef\") " pod="openstack-operators/openstack-operator-index-wm9x4" Feb 03 12:24:33 crc kubenswrapper[4820]: I0203 12:24:33.246348 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-wm9x4" Feb 03 12:24:33 crc kubenswrapper[4820]: I0203 12:24:33.889162 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-wm9x4"] Feb 03 12:24:33 crc kubenswrapper[4820]: W0203 12:24:33.895053 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podca81d0d7_9e45_4d78_a14e_0296c34a17ef.slice/crio-bbde1174d10f990a1dd2da125585fd467a39c165166166e87154d0f29893c24d WatchSource:0}: Error finding container bbde1174d10f990a1dd2da125585fd467a39c165166166e87154d0f29893c24d: Status 404 returned error can't find the container with id bbde1174d10f990a1dd2da125585fd467a39c165166166e87154d0f29893c24d Feb 03 12:24:34 crc kubenswrapper[4820]: I0203 12:24:34.023851 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-wm9x4" event={"ID":"ca81d0d7-9e45-4d78-a14e-0296c34a17ef","Type":"ContainerStarted","Data":"bbde1174d10f990a1dd2da125585fd467a39c165166166e87154d0f29893c24d"} Feb 03 12:24:35 crc kubenswrapper[4820]: I0203 12:24:35.302560 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-wm9x4"] Feb 03 12:24:35 crc kubenswrapper[4820]: I0203 12:24:35.707137 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-bpd2f"] Feb 03 12:24:35 crc kubenswrapper[4820]: I0203 12:24:35.708207 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-bpd2f" Feb 03 12:24:35 crc kubenswrapper[4820]: I0203 12:24:35.721219 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-bpd2f"] Feb 03 12:24:35 crc kubenswrapper[4820]: I0203 12:24:35.735216 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-d9c5m" Feb 03 12:24:35 crc kubenswrapper[4820]: I0203 12:24:35.852999 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-db6cq\" (UniqueName: \"kubernetes.io/projected/5432ffce-4da9-4a8a-9738-4e9dd0ee9a6a-kube-api-access-db6cq\") pod \"openstack-operator-index-bpd2f\" (UID: \"5432ffce-4da9-4a8a-9738-4e9dd0ee9a6a\") " pod="openstack-operators/openstack-operator-index-bpd2f" Feb 03 12:24:35 crc kubenswrapper[4820]: I0203 12:24:35.954629 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-db6cq\" (UniqueName: \"kubernetes.io/projected/5432ffce-4da9-4a8a-9738-4e9dd0ee9a6a-kube-api-access-db6cq\") pod \"openstack-operator-index-bpd2f\" (UID: \"5432ffce-4da9-4a8a-9738-4e9dd0ee9a6a\") " pod="openstack-operators/openstack-operator-index-bpd2f" Feb 03 12:24:35 crc kubenswrapper[4820]: I0203 12:24:35.977408 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-db6cq\" (UniqueName: \"kubernetes.io/projected/5432ffce-4da9-4a8a-9738-4e9dd0ee9a6a-kube-api-access-db6cq\") pod \"openstack-operator-index-bpd2f\" (UID: \"5432ffce-4da9-4a8a-9738-4e9dd0ee9a6a\") " pod="openstack-operators/openstack-operator-index-bpd2f" Feb 03 12:24:36 crc kubenswrapper[4820]: I0203 12:24:36.047662 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-bpd2f" Feb 03 12:24:36 crc kubenswrapper[4820]: I0203 12:24:36.341280 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-48tvq" Feb 03 12:24:37 crc kubenswrapper[4820]: I0203 12:24:37.299277 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-bpd2f"] Feb 03 12:24:37 crc kubenswrapper[4820]: W0203 12:24:37.308075 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5432ffce_4da9_4a8a_9738_4e9dd0ee9a6a.slice/crio-65181870d06db0d5baac1d09868834226eeda59ec2b28b37841b31eb8fc2323b WatchSource:0}: Error finding container 65181870d06db0d5baac1d09868834226eeda59ec2b28b37841b31eb8fc2323b: Status 404 returned error can't find the container with id 65181870d06db0d5baac1d09868834226eeda59ec2b28b37841b31eb8fc2323b Feb 03 12:24:38 crc kubenswrapper[4820]: I0203 12:24:38.057165 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-bpd2f" event={"ID":"5432ffce-4da9-4a8a-9738-4e9dd0ee9a6a","Type":"ContainerStarted","Data":"8344ea8bcee8ab6833d933e47b299653c8efeff595ac5018a42d2dab8d0026bd"} Feb 03 12:24:38 crc kubenswrapper[4820]: I0203 12:24:38.057524 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-bpd2f" event={"ID":"5432ffce-4da9-4a8a-9738-4e9dd0ee9a6a","Type":"ContainerStarted","Data":"65181870d06db0d5baac1d09868834226eeda59ec2b28b37841b31eb8fc2323b"} Feb 03 12:24:38 crc kubenswrapper[4820]: I0203 12:24:38.060915 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-wm9x4" event={"ID":"ca81d0d7-9e45-4d78-a14e-0296c34a17ef","Type":"ContainerStarted","Data":"e5952524f9594d89944413b2361571ebf8028853ea2c09b2fa5881cb05aa4560"} Feb 03 12:24:38 crc kubenswrapper[4820]: I0203 12:24:38.061175 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-wm9x4" podUID="ca81d0d7-9e45-4d78-a14e-0296c34a17ef" containerName="registry-server" containerID="cri-o://e5952524f9594d89944413b2361571ebf8028853ea2c09b2fa5881cb05aa4560" gracePeriod=2 Feb 03 12:24:38 crc kubenswrapper[4820]: I0203 12:24:38.086345 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-bpd2f" podStartSLOduration=3.017062944 podStartE2EDuration="3.086321746s" podCreationTimestamp="2026-02-03 12:24:35 +0000 UTC" firstStartedPulling="2026-02-03 12:24:37.312298961 +0000 UTC m=+1194.835374825" lastFinishedPulling="2026-02-03 12:24:37.381557763 +0000 UTC m=+1194.904633627" observedRunningTime="2026-02-03 12:24:38.078526695 +0000 UTC m=+1195.601602589" watchObservedRunningTime="2026-02-03 12:24:38.086321746 +0000 UTC m=+1195.609397610" Feb 03 12:24:38 crc kubenswrapper[4820]: I0203 12:24:38.460201 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-wm9x4" Feb 03 12:24:38 crc kubenswrapper[4820]: I0203 12:24:38.595099 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dtxdn\" (UniqueName: \"kubernetes.io/projected/ca81d0d7-9e45-4d78-a14e-0296c34a17ef-kube-api-access-dtxdn\") pod \"ca81d0d7-9e45-4d78-a14e-0296c34a17ef\" (UID: \"ca81d0d7-9e45-4d78-a14e-0296c34a17ef\") " Feb 03 12:24:38 crc kubenswrapper[4820]: I0203 12:24:38.602749 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca81d0d7-9e45-4d78-a14e-0296c34a17ef-kube-api-access-dtxdn" (OuterVolumeSpecName: "kube-api-access-dtxdn") pod "ca81d0d7-9e45-4d78-a14e-0296c34a17ef" (UID: "ca81d0d7-9e45-4d78-a14e-0296c34a17ef"). InnerVolumeSpecName "kube-api-access-dtxdn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:24:38 crc kubenswrapper[4820]: I0203 12:24:38.696973 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dtxdn\" (UniqueName: \"kubernetes.io/projected/ca81d0d7-9e45-4d78-a14e-0296c34a17ef-kube-api-access-dtxdn\") on node \"crc\" DevicePath \"\"" Feb 03 12:24:39 crc kubenswrapper[4820]: I0203 12:24:39.069296 4820 generic.go:334] "Generic (PLEG): container finished" podID="ca81d0d7-9e45-4d78-a14e-0296c34a17ef" containerID="e5952524f9594d89944413b2361571ebf8028853ea2c09b2fa5881cb05aa4560" exitCode=0 Feb 03 12:24:39 crc kubenswrapper[4820]: I0203 12:24:39.069348 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-wm9x4" event={"ID":"ca81d0d7-9e45-4d78-a14e-0296c34a17ef","Type":"ContainerDied","Data":"e5952524f9594d89944413b2361571ebf8028853ea2c09b2fa5881cb05aa4560"} Feb 03 12:24:39 crc kubenswrapper[4820]: I0203 12:24:39.069362 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-wm9x4" Feb 03 12:24:39 crc kubenswrapper[4820]: I0203 12:24:39.069724 4820 scope.go:117] "RemoveContainer" containerID="e5952524f9594d89944413b2361571ebf8028853ea2c09b2fa5881cb05aa4560" Feb 03 12:24:39 crc kubenswrapper[4820]: I0203 12:24:39.069705 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-wm9x4" event={"ID":"ca81d0d7-9e45-4d78-a14e-0296c34a17ef","Type":"ContainerDied","Data":"bbde1174d10f990a1dd2da125585fd467a39c165166166e87154d0f29893c24d"} Feb 03 12:24:39 crc kubenswrapper[4820]: I0203 12:24:39.096175 4820 scope.go:117] "RemoveContainer" containerID="e5952524f9594d89944413b2361571ebf8028853ea2c09b2fa5881cb05aa4560" Feb 03 12:24:39 crc kubenswrapper[4820]: E0203 12:24:39.096645 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e5952524f9594d89944413b2361571ebf8028853ea2c09b2fa5881cb05aa4560\": container with ID starting with e5952524f9594d89944413b2361571ebf8028853ea2c09b2fa5881cb05aa4560 not found: ID does not exist" containerID="e5952524f9594d89944413b2361571ebf8028853ea2c09b2fa5881cb05aa4560" Feb 03 12:24:39 crc kubenswrapper[4820]: I0203 12:24:39.096775 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e5952524f9594d89944413b2361571ebf8028853ea2c09b2fa5881cb05aa4560"} err="failed to get container status \"e5952524f9594d89944413b2361571ebf8028853ea2c09b2fa5881cb05aa4560\": rpc error: code = NotFound desc = could not find container \"e5952524f9594d89944413b2361571ebf8028853ea2c09b2fa5881cb05aa4560\": container with ID starting with e5952524f9594d89944413b2361571ebf8028853ea2c09b2fa5881cb05aa4560 not found: ID does not exist" Feb 03 12:24:39 crc kubenswrapper[4820]: I0203 12:24:39.104345 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-wm9x4"] Feb 03 12:24:39 crc kubenswrapper[4820]: I0203 12:24:39.109345 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-wm9x4"] Feb 03 12:24:39 crc kubenswrapper[4820]: I0203 12:24:39.153569 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ca81d0d7-9e45-4d78-a14e-0296c34a17ef" path="/var/lib/kubelet/pods/ca81d0d7-9e45-4d78-a14e-0296c34a17ef/volumes" Feb 03 12:24:46 crc kubenswrapper[4820]: I0203 12:24:46.048799 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-bpd2f" Feb 03 12:24:46 crc kubenswrapper[4820]: I0203 12:24:46.049433 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-bpd2f" Feb 03 12:24:46 crc kubenswrapper[4820]: I0203 12:24:46.081093 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-bpd2f" Feb 03 12:24:46 crc kubenswrapper[4820]: I0203 12:24:46.150745 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-bpd2f" Feb 03 12:24:53 crc kubenswrapper[4820]: I0203 12:24:53.848757 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/06b2e162f0c49a09bca642c96fba7f5ff7d8911993e14c444810042e45lq6nl"] Feb 03 12:24:53 crc kubenswrapper[4820]: E0203 12:24:53.849666 4820 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="ca81d0d7-9e45-4d78-a14e-0296c34a17ef" containerName="registry-server" Feb 03 12:24:53 crc kubenswrapper[4820]: I0203 12:24:53.849683 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca81d0d7-9e45-4d78-a14e-0296c34a17ef" containerName="registry-server" Feb 03 12:24:53 crc kubenswrapper[4820]: I0203 12:24:53.849815 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca81d0d7-9e45-4d78-a14e-0296c34a17ef" containerName="registry-server" Feb 03 12:24:53 crc kubenswrapper[4820]: I0203 12:24:53.850969 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/06b2e162f0c49a09bca642c96fba7f5ff7d8911993e14c444810042e45lq6nl" Feb 03 12:24:53 crc kubenswrapper[4820]: I0203 12:24:53.856553 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-7qq8k" Feb 03 12:24:53 crc kubenswrapper[4820]: I0203 12:24:53.860711 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/06b2e162f0c49a09bca642c96fba7f5ff7d8911993e14c444810042e45lq6nl"] Feb 03 12:24:53 crc kubenswrapper[4820]: I0203 12:24:53.911529 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7a0d0284-7ac0-4e09-ba63-1fa33dbbb574-util\") pod \"06b2e162f0c49a09bca642c96fba7f5ff7d8911993e14c444810042e45lq6nl\" (UID: \"7a0d0284-7ac0-4e09-ba63-1fa33dbbb574\") " pod="openstack-operators/06b2e162f0c49a09bca642c96fba7f5ff7d8911993e14c444810042e45lq6nl" Feb 03 12:24:53 crc kubenswrapper[4820]: I0203 12:24:53.911579 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7a0d0284-7ac0-4e09-ba63-1fa33dbbb574-bundle\") pod \"06b2e162f0c49a09bca642c96fba7f5ff7d8911993e14c444810042e45lq6nl\" (UID: \"7a0d0284-7ac0-4e09-ba63-1fa33dbbb574\") " pod="openstack-operators/06b2e162f0c49a09bca642c96fba7f5ff7d8911993e14c444810042e45lq6nl" Feb 03 12:24:53 crc kubenswrapper[4820]: I0203 12:24:53.911736 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-499kz\" (UniqueName: \"kubernetes.io/projected/7a0d0284-7ac0-4e09-ba63-1fa33dbbb574-kube-api-access-499kz\") pod \"06b2e162f0c49a09bca642c96fba7f5ff7d8911993e14c444810042e45lq6nl\" (UID: \"7a0d0284-7ac0-4e09-ba63-1fa33dbbb574\") " pod="openstack-operators/06b2e162f0c49a09bca642c96fba7f5ff7d8911993e14c444810042e45lq6nl" Feb 03 12:24:54 crc kubenswrapper[4820]: I0203 12:24:54.012584 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7a0d0284-7ac0-4e09-ba63-1fa33dbbb574-util\") pod \"06b2e162f0c49a09bca642c96fba7f5ff7d8911993e14c444810042e45lq6nl\" (UID: \"7a0d0284-7ac0-4e09-ba63-1fa33dbbb574\") " pod="openstack-operators/06b2e162f0c49a09bca642c96fba7f5ff7d8911993e14c444810042e45lq6nl" Feb 03 12:24:54 crc kubenswrapper[4820]: I0203 12:24:54.012944 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7a0d0284-7ac0-4e09-ba63-1fa33dbbb574-bundle\") pod \"06b2e162f0c49a09bca642c96fba7f5ff7d8911993e14c444810042e45lq6nl\" (UID: \"7a0d0284-7ac0-4e09-ba63-1fa33dbbb574\") " pod="openstack-operators/06b2e162f0c49a09bca642c96fba7f5ff7d8911993e14c444810042e45lq6nl" Feb 03 12:24:54 crc kubenswrapper[4820]: I0203 12:24:54.013023 4820 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-499kz\" (UniqueName: \"kubernetes.io/projected/7a0d0284-7ac0-4e09-ba63-1fa33dbbb574-kube-api-access-499kz\") pod \"06b2e162f0c49a09bca642c96fba7f5ff7d8911993e14c444810042e45lq6nl\" (UID: \"7a0d0284-7ac0-4e09-ba63-1fa33dbbb574\") " pod="openstack-operators/06b2e162f0c49a09bca642c96fba7f5ff7d8911993e14c444810042e45lq6nl" Feb 03 12:24:54 crc kubenswrapper[4820]: I0203 12:24:54.013248 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7a0d0284-7ac0-4e09-ba63-1fa33dbbb574-util\") pod \"06b2e162f0c49a09bca642c96fba7f5ff7d8911993e14c444810042e45lq6nl\" (UID: \"7a0d0284-7ac0-4e09-ba63-1fa33dbbb574\") " pod="openstack-operators/06b2e162f0c49a09bca642c96fba7f5ff7d8911993e14c444810042e45lq6nl" Feb 03 12:24:54 crc kubenswrapper[4820]: I0203 12:24:54.013295 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7a0d0284-7ac0-4e09-ba63-1fa33dbbb574-bundle\") pod \"06b2e162f0c49a09bca642c96fba7f5ff7d8911993e14c444810042e45lq6nl\" (UID: \"7a0d0284-7ac0-4e09-ba63-1fa33dbbb574\") " pod="openstack-operators/06b2e162f0c49a09bca642c96fba7f5ff7d8911993e14c444810042e45lq6nl" Feb 03 12:24:54 crc kubenswrapper[4820]: I0203 12:24:54.034559 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-499kz\" (UniqueName: \"kubernetes.io/projected/7a0d0284-7ac0-4e09-ba63-1fa33dbbb574-kube-api-access-499kz\") pod \"06b2e162f0c49a09bca642c96fba7f5ff7d8911993e14c444810042e45lq6nl\" (UID: \"7a0d0284-7ac0-4e09-ba63-1fa33dbbb574\") " pod="openstack-operators/06b2e162f0c49a09bca642c96fba7f5ff7d8911993e14c444810042e45lq6nl" Feb 03 12:24:54 crc kubenswrapper[4820]: I0203 12:24:54.184144 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/06b2e162f0c49a09bca642c96fba7f5ff7d8911993e14c444810042e45lq6nl" Feb 03 12:24:55 crc kubenswrapper[4820]: I0203 12:24:55.179654 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/06b2e162f0c49a09bca642c96fba7f5ff7d8911993e14c444810042e45lq6nl"] Feb 03 12:24:55 crc kubenswrapper[4820]: I0203 12:24:55.283632 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/06b2e162f0c49a09bca642c96fba7f5ff7d8911993e14c444810042e45lq6nl" event={"ID":"7a0d0284-7ac0-4e09-ba63-1fa33dbbb574","Type":"ContainerStarted","Data":"053598e7977eb4aea3de688ef0496a15ae51164e11d5a0257157139dd950ab3a"} Feb 03 12:24:56 crc kubenswrapper[4820]: I0203 12:24:56.293726 4820 generic.go:334] "Generic (PLEG): container finished" podID="7a0d0284-7ac0-4e09-ba63-1fa33dbbb574" containerID="523cad5ae8815e98164831e5094cc5cb9006b1bb2490907ddeeeede3af2ef802" exitCode=0 Feb 03 12:24:56 crc kubenswrapper[4820]: I0203 12:24:56.293829 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/06b2e162f0c49a09bca642c96fba7f5ff7d8911993e14c444810042e45lq6nl" event={"ID":"7a0d0284-7ac0-4e09-ba63-1fa33dbbb574","Type":"ContainerDied","Data":"523cad5ae8815e98164831e5094cc5cb9006b1bb2490907ddeeeede3af2ef802"} Feb 03 12:24:57 crc kubenswrapper[4820]: I0203 12:24:57.304175 4820 generic.go:334] "Generic (PLEG): container finished" podID="7a0d0284-7ac0-4e09-ba63-1fa33dbbb574" containerID="8f82658a6fb83d44f14d7a0eef9f01d036291e8558dd051779801db2c1c14bac" exitCode=0 Feb 03 12:24:57 crc kubenswrapper[4820]: I0203 12:24:57.304267 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/06b2e162f0c49a09bca642c96fba7f5ff7d8911993e14c444810042e45lq6nl" event={"ID":"7a0d0284-7ac0-4e09-ba63-1fa33dbbb574","Type":"ContainerDied","Data":"8f82658a6fb83d44f14d7a0eef9f01d036291e8558dd051779801db2c1c14bac"} Feb 03 12:24:58 crc kubenswrapper[4820]: I0203 12:24:58.334020 4820 generic.go:334] "Generic (PLEG): container finished" podID="7a0d0284-7ac0-4e09-ba63-1fa33dbbb574" containerID="1caa3a06d16c4eac851253d7a7d225b678ecf42aab020162634e99c9b10f57f8" exitCode=0 Feb 03 12:24:58 crc kubenswrapper[4820]: I0203 12:24:58.334090 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/06b2e162f0c49a09bca642c96fba7f5ff7d8911993e14c444810042e45lq6nl" event={"ID":"7a0d0284-7ac0-4e09-ba63-1fa33dbbb574","Type":"ContainerDied","Data":"1caa3a06d16c4eac851253d7a7d225b678ecf42aab020162634e99c9b10f57f8"} Feb 03 12:24:59 crc kubenswrapper[4820]: I0203 12:24:59.762897 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/06b2e162f0c49a09bca642c96fba7f5ff7d8911993e14c444810042e45lq6nl" Feb 03 12:24:59 crc kubenswrapper[4820]: I0203 12:24:59.935612 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7a0d0284-7ac0-4e09-ba63-1fa33dbbb574-bundle\") pod \"7a0d0284-7ac0-4e09-ba63-1fa33dbbb574\" (UID: \"7a0d0284-7ac0-4e09-ba63-1fa33dbbb574\") " Feb 03 12:24:59 crc kubenswrapper[4820]: I0203 12:24:59.936138 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7a0d0284-7ac0-4e09-ba63-1fa33dbbb574-util\") pod \"7a0d0284-7ac0-4e09-ba63-1fa33dbbb574\" (UID: \"7a0d0284-7ac0-4e09-ba63-1fa33dbbb574\") " Feb 03 12:24:59 crc kubenswrapper[4820]: I0203 12:24:59.936265 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-499kz\" (UniqueName: \"kubernetes.io/projected/7a0d0284-7ac0-4e09-ba63-1fa33dbbb574-kube-api-access-499kz\") pod \"7a0d0284-7ac0-4e09-ba63-1fa33dbbb574\" (UID: \"7a0d0284-7ac0-4e09-ba63-1fa33dbbb574\") " Feb 03 12:24:59 crc kubenswrapper[4820]: I0203 12:24:59.936973 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7a0d0284-7ac0-4e09-ba63-1fa33dbbb574-bundle" (OuterVolumeSpecName: "bundle") pod "7a0d0284-7ac0-4e09-ba63-1fa33dbbb574" (UID: "7a0d0284-7ac0-4e09-ba63-1fa33dbbb574"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 12:24:59 crc kubenswrapper[4820]: I0203 12:24:59.942248 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a0d0284-7ac0-4e09-ba63-1fa33dbbb574-kube-api-access-499kz" (OuterVolumeSpecName: "kube-api-access-499kz") pod "7a0d0284-7ac0-4e09-ba63-1fa33dbbb574" (UID: "7a0d0284-7ac0-4e09-ba63-1fa33dbbb574"). InnerVolumeSpecName "kube-api-access-499kz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:24:59 crc kubenswrapper[4820]: I0203 12:24:59.951841 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7a0d0284-7ac0-4e09-ba63-1fa33dbbb574-util" (OuterVolumeSpecName: "util") pod "7a0d0284-7ac0-4e09-ba63-1fa33dbbb574" (UID: "7a0d0284-7ac0-4e09-ba63-1fa33dbbb574"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 12:25:00 crc kubenswrapper[4820]: I0203 12:25:00.037557 4820 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7a0d0284-7ac0-4e09-ba63-1fa33dbbb574-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 12:25:00 crc kubenswrapper[4820]: I0203 12:25:00.037836 4820 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7a0d0284-7ac0-4e09-ba63-1fa33dbbb574-util\") on node \"crc\" DevicePath \"\"" Feb 03 12:25:00 crc kubenswrapper[4820]: I0203 12:25:00.037848 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-499kz\" (UniqueName: \"kubernetes.io/projected/7a0d0284-7ac0-4e09-ba63-1fa33dbbb574-kube-api-access-499kz\") on node \"crc\" DevicePath \"\"" Feb 03 12:25:00 crc kubenswrapper[4820]: I0203 12:25:00.348449 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/06b2e162f0c49a09bca642c96fba7f5ff7d8911993e14c444810042e45lq6nl" event={"ID":"7a0d0284-7ac0-4e09-ba63-1fa33dbbb574","Type":"ContainerDied","Data":"053598e7977eb4aea3de688ef0496a15ae51164e11d5a0257157139dd950ab3a"} Feb 03 12:25:00 crc kubenswrapper[4820]: I0203 12:25:00.348494 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="053598e7977eb4aea3de688ef0496a15ae51164e11d5a0257157139dd950ab3a" Feb 03 12:25:00 crc kubenswrapper[4820]: I0203 12:25:00.348562 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/06b2e162f0c49a09bca642c96fba7f5ff7d8911993e14c444810042e45lq6nl" Feb 03 12:25:01 crc kubenswrapper[4820]: I0203 12:25:01.365717 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qj7xr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 12:25:01 crc kubenswrapper[4820]: I0203 12:25:01.365780 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 12:25:01 crc kubenswrapper[4820]: I0203 12:25:01.365831 4820 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" Feb 03 12:25:01 crc kubenswrapper[4820]: I0203 12:25:01.366556 4820 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"18df95791bb9a7f437d7d4ad2b5b03a9b5d2686ac3fa57d763f146b5d1397b25"} pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 03 12:25:01 crc kubenswrapper[4820]: I0203 12:25:01.366622 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" containerName="machine-config-daemon" containerID="cri-o://18df95791bb9a7f437d7d4ad2b5b03a9b5d2686ac3fa57d763f146b5d1397b25" gracePeriod=600 Feb 03 12:25:02 crc kubenswrapper[4820]: I0203 12:25:02.366134 4820 generic.go:334] "Generic (PLEG): container finished" 
podID="2c02def6-29f2-448e-80ec-0c8ee290f053" containerID="18df95791bb9a7f437d7d4ad2b5b03a9b5d2686ac3fa57d763f146b5d1397b25" exitCode=0 Feb 03 12:25:02 crc kubenswrapper[4820]: I0203 12:25:02.366207 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" event={"ID":"2c02def6-29f2-448e-80ec-0c8ee290f053","Type":"ContainerDied","Data":"18df95791bb9a7f437d7d4ad2b5b03a9b5d2686ac3fa57d763f146b5d1397b25"} Feb 03 12:25:02 crc kubenswrapper[4820]: I0203 12:25:02.366767 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" event={"ID":"2c02def6-29f2-448e-80ec-0c8ee290f053","Type":"ContainerStarted","Data":"f5b6fb38e3a772864bd8a30bd0acd2c8340ca496b3ae218013d45718e5286b56"} Feb 03 12:25:02 crc kubenswrapper[4820]: I0203 12:25:02.366804 4820 scope.go:117] "RemoveContainer" containerID="f961bb48cccbb18f37545a37a50be08f55d027f113f203a762d6ed87bcedcb42" Feb 03 12:25:06 crc kubenswrapper[4820]: I0203 12:25:06.291654 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-8c5c9674b-tdfgs"] Feb 03 12:25:06 crc kubenswrapper[4820]: E0203 12:25:06.292594 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a0d0284-7ac0-4e09-ba63-1fa33dbbb574" containerName="pull" Feb 03 12:25:06 crc kubenswrapper[4820]: I0203 12:25:06.292614 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a0d0284-7ac0-4e09-ba63-1fa33dbbb574" containerName="pull" Feb 03 12:25:06 crc kubenswrapper[4820]: E0203 12:25:06.292640 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a0d0284-7ac0-4e09-ba63-1fa33dbbb574" containerName="extract" Feb 03 12:25:06 crc kubenswrapper[4820]: I0203 12:25:06.292647 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a0d0284-7ac0-4e09-ba63-1fa33dbbb574" containerName="extract" Feb 03 12:25:06 crc kubenswrapper[4820]: E0203 12:25:06.292662 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a0d0284-7ac0-4e09-ba63-1fa33dbbb574" containerName="util" Feb 03 12:25:06 crc kubenswrapper[4820]: I0203 12:25:06.292669 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a0d0284-7ac0-4e09-ba63-1fa33dbbb574" containerName="util" Feb 03 12:25:06 crc kubenswrapper[4820]: I0203 12:25:06.292835 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a0d0284-7ac0-4e09-ba63-1fa33dbbb574" containerName="extract" Feb 03 12:25:06 crc kubenswrapper[4820]: I0203 12:25:06.293562 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-8c5c9674b-tdfgs" Feb 03 12:25:06 crc kubenswrapper[4820]: I0203 12:25:06.297783 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-mcpk7" Feb 03 12:25:06 crc kubenswrapper[4820]: I0203 12:25:06.345337 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-8c5c9674b-tdfgs"] Feb 03 12:25:06 crc kubenswrapper[4820]: I0203 12:25:06.407755 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bjp2d\" (UniqueName: \"kubernetes.io/projected/a781fb7c-cb52-4076-aa3c-5792d8ab7e42-kube-api-access-bjp2d\") pod \"openstack-operator-controller-init-8c5c9674b-tdfgs\" (UID: \"a781fb7c-cb52-4076-aa3c-5792d8ab7e42\") " pod="openstack-operators/openstack-operator-controller-init-8c5c9674b-tdfgs" Feb 03 12:25:06 crc kubenswrapper[4820]: I0203 12:25:06.509619 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bjp2d\" (UniqueName: \"kubernetes.io/projected/a781fb7c-cb52-4076-aa3c-5792d8ab7e42-kube-api-access-bjp2d\") pod \"openstack-operator-controller-init-8c5c9674b-tdfgs\" (UID: \"a781fb7c-cb52-4076-aa3c-5792d8ab7e42\") " pod="openstack-operators/openstack-operator-controller-init-8c5c9674b-tdfgs" Feb 03 12:25:06 crc kubenswrapper[4820]: I0203 12:25:06.626193 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bjp2d\" (UniqueName: \"kubernetes.io/projected/a781fb7c-cb52-4076-aa3c-5792d8ab7e42-kube-api-access-bjp2d\") pod \"openstack-operator-controller-init-8c5c9674b-tdfgs\" (UID: \"a781fb7c-cb52-4076-aa3c-5792d8ab7e42\") " pod="openstack-operators/openstack-operator-controller-init-8c5c9674b-tdfgs" Feb 03 12:25:06 crc kubenswrapper[4820]: I0203 12:25:06.918494 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-8c5c9674b-tdfgs" Feb 03 12:25:07 crc kubenswrapper[4820]: I0203 12:25:07.476208 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-8c5c9674b-tdfgs"] Feb 03 12:25:07 crc kubenswrapper[4820]: I0203 12:25:07.501954 4820 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 03 12:25:08 crc kubenswrapper[4820]: I0203 12:25:08.499759 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-8c5c9674b-tdfgs" event={"ID":"a781fb7c-cb52-4076-aa3c-5792d8ab7e42","Type":"ContainerStarted","Data":"3c73f1ad1ddcab543a9a181ef78835d868e731f337c2fcd6e96657e9f7a9d946"} Feb 03 12:25:14 crc kubenswrapper[4820]: I0203 12:25:14.609748 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-8c5c9674b-tdfgs" event={"ID":"a781fb7c-cb52-4076-aa3c-5792d8ab7e42","Type":"ContainerStarted","Data":"b7c9ab5e358d66e49d1945266f7fe6d22f536f9f7326396f543e467a05f43c22"} Feb 03 12:25:14 crc kubenswrapper[4820]: I0203 12:25:14.610054 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-8c5c9674b-tdfgs" Feb 03 12:25:26 crc kubenswrapper[4820]: I0203 12:25:26.922352 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-8c5c9674b-tdfgs" Feb 03 12:25:26 crc kubenswrapper[4820]: I0203 12:25:26.964784 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-8c5c9674b-tdfgs" podStartSLOduration=14.359036285 podStartE2EDuration="20.964756369s" podCreationTimestamp="2026-02-03 12:25:06 +0000 UTC" firstStartedPulling="2026-02-03 12:25:07.501584353 +0000 UTC m=+1225.024660217" lastFinishedPulling="2026-02-03 12:25:14.107304447 +0000 UTC m=+1231.630380301" observedRunningTime="2026-02-03 12:25:14.640402298 +0000 UTC m=+1232.163478172" watchObservedRunningTime="2026-02-03 12:25:26.964756369 +0000 UTC m=+1244.487832243" Feb 03 12:25:58 crc kubenswrapper[4820]: I0203 12:25:58.605364 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-qnh2k"] Feb 03 12:25:58 crc kubenswrapper[4820]: I0203 12:25:58.606942 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-qnh2k" Feb 03 12:25:58 crc kubenswrapper[4820]: I0203 12:25:58.609185 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-8tfvj" Feb 03 12:25:58 crc kubenswrapper[4820]: I0203 12:25:58.615989 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-8d874c8fc-z8jk7"] Feb 03 12:25:58 crc kubenswrapper[4820]: I0203 12:25:58.617100 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-z8jk7" Feb 03 12:25:58 crc kubenswrapper[4820]: I0203 12:25:58.621227 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-tpppq" Feb 03 12:25:58 crc kubenswrapper[4820]: I0203 12:25:58.635461 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d9697b7f4-wsb7r"] Feb 03 12:25:58 crc kubenswrapper[4820]: I0203 12:25:58.636901 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-wsb7r" Feb 03 12:25:58 crc kubenswrapper[4820]: I0203 12:25:58.641378 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-cxcpl" Feb 03 12:25:58 crc kubenswrapper[4820]: I0203 12:25:58.647571 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-qnh2k"] Feb 03 12:25:58 crc kubenswrapper[4820]: I0203 12:25:58.657664 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d9697b7f4-wsb7r"] Feb 03 12:25:58 crc kubenswrapper[4820]: I0203 12:25:58.677076 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-8d874c8fc-z8jk7"] Feb 03 12:25:58 crc kubenswrapper[4820]: I0203 12:25:58.684615 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-8886f4c47-5dmwb"] Feb 03 12:25:58 crc kubenswrapper[4820]: I0203 12:25:58.685480 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-5dmwb" Feb 03 12:25:58 crc kubenswrapper[4820]: I0203 12:25:58.690369 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-b57fb" Feb 03 12:25:58 crc kubenswrapper[4820]: I0203 12:25:58.699465 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mdk6q\" (UniqueName: \"kubernetes.io/projected/cde1eaee-12a0-47f7-b88a-b1b97d0ed74b-kube-api-access-mdk6q\") pod \"barbican-operator-controller-manager-7b6c4d8c5f-qnh2k\" (UID: \"cde1eaee-12a0-47f7-b88a-b1b97d0ed74b\") " pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-qnh2k" Feb 03 12:25:58 crc kubenswrapper[4820]: I0203 12:25:58.734952 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-8886f4c47-5dmwb"] Feb 03 12:25:58 crc kubenswrapper[4820]: I0203 12:25:58.747663 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-69d6db494d-6fw2d"] Feb 03 12:25:58 crc kubenswrapper[4820]: I0203 12:25:58.749019 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-6fw2d" Feb 03 12:25:58 crc kubenswrapper[4820]: I0203 12:25:58.751587 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-2sbvc" Feb 03 12:25:58 crc kubenswrapper[4820]: I0203 12:25:58.759998 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5fb775575f-t5mj4"] Feb 03 12:25:58 crc kubenswrapper[4820]: I0203 12:25:58.761241 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-t5mj4" Feb 03 12:25:58 crc kubenswrapper[4820]: I0203 12:25:58.767341 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69d6db494d-6fw2d"] Feb 03 12:25:58 crc kubenswrapper[4820]: I0203 12:25:58.771130 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-hmfgz" Feb 03 12:25:58 crc kubenswrapper[4820]: I0203 12:25:58.797494 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5fb775575f-t5mj4"] Feb 03 12:25:58 crc kubenswrapper[4820]: I0203 12:25:58.804262 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mdk6q\" (UniqueName: \"kubernetes.io/projected/cde1eaee-12a0-47f7-b88a-b1b97d0ed74b-kube-api-access-mdk6q\") pod \"barbican-operator-controller-manager-7b6c4d8c5f-qnh2k\" (UID: \"cde1eaee-12a0-47f7-b88a-b1b97d0ed74b\") " pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-qnh2k" Feb 03 12:25:58 crc kubenswrapper[4820]: I0203 12:25:58.804338 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2xmhg\" (UniqueName: \"kubernetes.io/projected/51c967b2-8f1a-4d0d-a3f9-745e72863b84-kube-api-access-2xmhg\") pod \"designate-operator-controller-manager-6d9697b7f4-wsb7r\" (UID: \"51c967b2-8f1a-4d0d-a3f9-745e72863b84\") " pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-wsb7r" Feb 03 12:25:58 crc kubenswrapper[4820]: I0203 12:25:58.804419 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q6q4f\" (UniqueName: \"kubernetes.io/projected/88eb8fcd-4721-45c2-bb00-23b1dc962283-kube-api-access-q6q4f\") pod \"glance-operator-controller-manager-8886f4c47-5dmwb\" (UID: \"88eb8fcd-4721-45c2-bb00-23b1dc962283\") " pod="openstack-operators/glance-operator-controller-manager-8886f4c47-5dmwb" Feb 03 12:25:58 crc kubenswrapper[4820]: I0203 12:25:58.804481 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5rg7s\" (UniqueName: \"kubernetes.io/projected/4d0ea57e-5eb3-4624-a299-b9a7ad6f2bb0-kube-api-access-5rg7s\") pod \"cinder-operator-controller-manager-8d874c8fc-z8jk7\" (UID: \"4d0ea57e-5eb3-4624-a299-b9a7ad6f2bb0\") " pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-z8jk7" Feb 03 12:25:58 crc kubenswrapper[4820]: I0203 12:25:58.819951 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-79955696d6-22gr9"] Feb 03 12:25:58 crc kubenswrapper[4820]: I0203 12:25:58.825910 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79955696d6-22gr9" Feb 03 12:25:58 crc kubenswrapper[4820]: I0203 12:25:58.829768 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-ns5fc" Feb 03 12:25:58 crc kubenswrapper[4820]: I0203 12:25:58.842932 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Feb 03 12:25:58 crc kubenswrapper[4820]: I0203 12:25:58.867020 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mdk6q\" (UniqueName: \"kubernetes.io/projected/cde1eaee-12a0-47f7-b88a-b1b97d0ed74b-kube-api-access-mdk6q\") pod \"barbican-operator-controller-manager-7b6c4d8c5f-qnh2k\" (UID: \"cde1eaee-12a0-47f7-b88a-b1b97d0ed74b\") " pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-qnh2k" Feb 03 12:25:58 crc kubenswrapper[4820]: I0203 12:25:58.890159 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79955696d6-22gr9"] Feb 03 12:25:58 crc kubenswrapper[4820]: I0203 12:25:58.915860 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-trhpx\" (UniqueName: \"kubernetes.io/projected/101ca31b-ff08-4a49-9cc1-f48fd8679116-kube-api-access-trhpx\") pod \"horizon-operator-controller-manager-5fb775575f-t5mj4\" (UID: \"101ca31b-ff08-4a49-9cc1-f48fd8679116\") " pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-t5mj4" Feb 03 12:25:58 crc kubenswrapper[4820]: I0203 12:25:58.916051 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2xmhg\" (UniqueName: \"kubernetes.io/projected/51c967b2-8f1a-4d0d-a3f9-745e72863b84-kube-api-access-2xmhg\") pod \"designate-operator-controller-manager-6d9697b7f4-wsb7r\" (UID: \"51c967b2-8f1a-4d0d-a3f9-745e72863b84\") " pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-wsb7r" Feb 03 12:25:58 crc kubenswrapper[4820]: I0203 12:25:58.916202 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q6q4f\" (UniqueName: \"kubernetes.io/projected/88eb8fcd-4721-45c2-bb00-23b1dc962283-kube-api-access-q6q4f\") pod \"glance-operator-controller-manager-8886f4c47-5dmwb\" (UID: \"88eb8fcd-4721-45c2-bb00-23b1dc962283\") " pod="openstack-operators/glance-operator-controller-manager-8886f4c47-5dmwb" Feb 03 12:25:58 crc kubenswrapper[4820]: I0203 12:25:58.916257 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gv2tm\" (UniqueName: \"kubernetes.io/projected/7f5efd7c-09f4-42b0-ba17-7a7dc609d914-kube-api-access-gv2tm\") pod \"heat-operator-controller-manager-69d6db494d-6fw2d\" (UID: \"7f5efd7c-09f4-42b0-ba17-7a7dc609d914\") " pod="openstack-operators/heat-operator-controller-manager-69d6db494d-6fw2d" Feb 03 12:25:58 crc kubenswrapper[4820]: I0203 12:25:58.916298 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5rg7s\" (UniqueName: \"kubernetes.io/projected/4d0ea57e-5eb3-4624-a299-b9a7ad6f2bb0-kube-api-access-5rg7s\") pod \"cinder-operator-controller-manager-8d874c8fc-z8jk7\" (UID: \"4d0ea57e-5eb3-4624-a299-b9a7ad6f2bb0\") " pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-z8jk7" Feb 03 12:25:58 crc kubenswrapper[4820]: I0203 12:25:58.918988 4820 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-xkj2j"] Feb 03 12:25:58 crc kubenswrapper[4820]: I0203 12:25:58.920119 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-xkj2j" Feb 03 12:25:58 crc kubenswrapper[4820]: I0203 12:25:58.941953 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-4nhbz" Feb 03 12:25:58 crc kubenswrapper[4820]: I0203 12:25:58.942983 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-84f48565d4-9rprq"] Feb 03 12:25:58 crc kubenswrapper[4820]: I0203 12:25:58.944788 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-9rprq" Feb 03 12:25:58 crc kubenswrapper[4820]: I0203 12:25:58.956454 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-qnh2k" Feb 03 12:25:58 crc kubenswrapper[4820]: I0203 12:25:58.957359 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-xkj2j"] Feb 03 12:25:58 crc kubenswrapper[4820]: I0203 12:25:58.961830 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-xbk7x" Feb 03 12:25:58 crc kubenswrapper[4820]: I0203 12:25:58.983402 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2xmhg\" (UniqueName: \"kubernetes.io/projected/51c967b2-8f1a-4d0d-a3f9-745e72863b84-kube-api-access-2xmhg\") pod \"designate-operator-controller-manager-6d9697b7f4-wsb7r\" (UID: \"51c967b2-8f1a-4d0d-a3f9-745e72863b84\") " pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-wsb7r" Feb 03 12:25:58 crc kubenswrapper[4820]: I0203 12:25:58.991902 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-84f48565d4-9rprq"] Feb 03 12:25:59 crc kubenswrapper[4820]: I0203 12:25:59.003629 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5rg7s\" (UniqueName: \"kubernetes.io/projected/4d0ea57e-5eb3-4624-a299-b9a7ad6f2bb0-kube-api-access-5rg7s\") pod \"cinder-operator-controller-manager-8d874c8fc-z8jk7\" (UID: \"4d0ea57e-5eb3-4624-a299-b9a7ad6f2bb0\") " pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-z8jk7" Feb 03 12:25:59 crc kubenswrapper[4820]: I0203 12:25:59.007796 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-7dd968899f-rdbrk"] Feb 03 12:25:59 crc kubenswrapper[4820]: I0203 12:25:59.008861 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-rdbrk" Feb 03 12:25:59 crc kubenswrapper[4820]: I0203 12:25:59.019330 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-xwvpc" Feb 03 12:25:59 crc kubenswrapper[4820]: I0203 12:25:59.023873 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q6q4f\" (UniqueName: \"kubernetes.io/projected/88eb8fcd-4721-45c2-bb00-23b1dc962283-kube-api-access-q6q4f\") pod \"glance-operator-controller-manager-8886f4c47-5dmwb\" (UID: \"88eb8fcd-4721-45c2-bb00-23b1dc962283\") " pod="openstack-operators/glance-operator-controller-manager-8886f4c47-5dmwb" Feb 03 12:25:59 crc kubenswrapper[4820]: I0203 12:25:59.025376 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gv2tm\" (UniqueName: \"kubernetes.io/projected/7f5efd7c-09f4-42b0-ba17-7a7dc609d914-kube-api-access-gv2tm\") pod \"heat-operator-controller-manager-69d6db494d-6fw2d\" (UID: \"7f5efd7c-09f4-42b0-ba17-7a7dc609d914\") " pod="openstack-operators/heat-operator-controller-manager-69d6db494d-6fw2d" Feb 03 12:25:59 crc kubenswrapper[4820]: I0203 12:25:59.025461 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7ad36bba-9140-4660-b4ed-e873264c9e22-cert\") pod \"infra-operator-controller-manager-79955696d6-22gr9\" (UID: \"7ad36bba-9140-4660-b4ed-e873264c9e22\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-22gr9" Feb 03 12:25:59 crc kubenswrapper[4820]: I0203 12:25:59.025970 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-trhpx\" (UniqueName: \"kubernetes.io/projected/101ca31b-ff08-4a49-9cc1-f48fd8679116-kube-api-access-trhpx\") pod \"horizon-operator-controller-manager-5fb775575f-t5mj4\" (UID: \"101ca31b-ff08-4a49-9cc1-f48fd8679116\") " pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-t5mj4" Feb 03 12:25:59 crc kubenswrapper[4820]: I0203 12:25:59.026059 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rpfs\" (UniqueName: \"kubernetes.io/projected/7ad36bba-9140-4660-b4ed-e873264c9e22-kube-api-access-2rpfs\") pod \"infra-operator-controller-manager-79955696d6-22gr9\" (UID: \"7ad36bba-9140-4660-b4ed-e873264c9e22\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-22gr9" Feb 03 12:25:59 crc kubenswrapper[4820]: I0203 12:25:59.026111 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xh254\" (UniqueName: \"kubernetes.io/projected/29dd9257-532e-48a4-9500-adfc5584ebe0-kube-api-access-xh254\") pod \"ironic-operator-controller-manager-5f4b8bd54d-xkj2j\" (UID: \"29dd9257-532e-48a4-9500-adfc5584ebe0\") " pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-xkj2j" Feb 03 12:25:59 crc kubenswrapper[4820]: I0203 12:25:59.043697 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-7dd968899f-rdbrk"] Feb 03 12:25:59 crc kubenswrapper[4820]: I0203 12:25:59.067322 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-585dbc889-4tmqm"] Feb 03 12:25:59 crc kubenswrapper[4820]: I0203 12:25:59.068490 4820 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-4tmqm" Feb 03 12:25:59 crc kubenswrapper[4820]: I0203 12:25:59.082678 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-79bm9" Feb 03 12:25:59 crc kubenswrapper[4820]: I0203 12:25:59.083642 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67bf948998-qnmp6"] Feb 03 12:25:59 crc kubenswrapper[4820]: I0203 12:25:59.085007 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-qnmp6" Feb 03 12:25:59 crc kubenswrapper[4820]: I0203 12:25:59.093253 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gv2tm\" (UniqueName: \"kubernetes.io/projected/7f5efd7c-09f4-42b0-ba17-7a7dc609d914-kube-api-access-gv2tm\") pod \"heat-operator-controller-manager-69d6db494d-6fw2d\" (UID: \"7f5efd7c-09f4-42b0-ba17-7a7dc609d914\") " pod="openstack-operators/heat-operator-controller-manager-69d6db494d-6fw2d" Feb 03 12:25:59 crc kubenswrapper[4820]: I0203 12:25:59.094315 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-lzr4r" Feb 03 12:25:59 crc kubenswrapper[4820]: I0203 12:25:59.106208 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-trhpx\" (UniqueName: \"kubernetes.io/projected/101ca31b-ff08-4a49-9cc1-f48fd8679116-kube-api-access-trhpx\") pod \"horizon-operator-controller-manager-5fb775575f-t5mj4\" (UID: \"101ca31b-ff08-4a49-9cc1-f48fd8679116\") " pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-t5mj4" Feb 03 12:25:59 crc kubenswrapper[4820]: I0203 12:25:59.120802 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67bf948998-qnmp6"] Feb 03 12:25:59 crc kubenswrapper[4820]: I0203 12:25:59.133917 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4vdbr\" (UniqueName: \"kubernetes.io/projected/614c5412-875d-40b1-ad5f-445a941285af-kube-api-access-4vdbr\") pod \"keystone-operator-controller-manager-84f48565d4-9rprq\" (UID: \"614c5412-875d-40b1-ad5f-445a941285af\") " pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-9rprq" Feb 03 12:25:59 crc kubenswrapper[4820]: I0203 12:25:59.133989 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2rpfs\" (UniqueName: \"kubernetes.io/projected/7ad36bba-9140-4660-b4ed-e873264c9e22-kube-api-access-2rpfs\") pod \"infra-operator-controller-manager-79955696d6-22gr9\" (UID: \"7ad36bba-9140-4660-b4ed-e873264c9e22\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-22gr9" Feb 03 12:25:59 crc kubenswrapper[4820]: I0203 12:25:59.134039 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xh254\" (UniqueName: \"kubernetes.io/projected/29dd9257-532e-48a4-9500-adfc5584ebe0-kube-api-access-xh254\") pod \"ironic-operator-controller-manager-5f4b8bd54d-xkj2j\" (UID: \"29dd9257-532e-48a4-9500-adfc5584ebe0\") " pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-xkj2j" Feb 03 12:25:59 crc kubenswrapper[4820]: I0203 12:25:59.134108 4820 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7ad36bba-9140-4660-b4ed-e873264c9e22-cert\") pod \"infra-operator-controller-manager-79955696d6-22gr9\" (UID: \"7ad36bba-9140-4660-b4ed-e873264c9e22\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-22gr9" Feb 03 12:25:59 crc kubenswrapper[4820]: I0203 12:25:59.134142 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gt9d9\" (UniqueName: \"kubernetes.io/projected/4ebad58b-3e3b-4bcb-9a80-dedd97e940d0-kube-api-access-gt9d9\") pod \"manila-operator-controller-manager-7dd968899f-rdbrk\" (UID: \"4ebad58b-3e3b-4bcb-9a80-dedd97e940d0\") " pod="openstack-operators/manila-operator-controller-manager-7dd968899f-rdbrk" Feb 03 12:25:59 crc kubenswrapper[4820]: E0203 12:25:59.135384 4820 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 03 12:25:59 crc kubenswrapper[4820]: E0203 12:25:59.135449 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7ad36bba-9140-4660-b4ed-e873264c9e22-cert podName:7ad36bba-9140-4660-b4ed-e873264c9e22 nodeName:}" failed. No retries permitted until 2026-02-03 12:25:59.635423512 +0000 UTC m=+1277.158499376 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/7ad36bba-9140-4660-b4ed-e873264c9e22-cert") pod "infra-operator-controller-manager-79955696d6-22gr9" (UID: "7ad36bba-9140-4660-b4ed-e873264c9e22") : secret "infra-operator-webhook-server-cert" not found Feb 03 12:25:59 crc kubenswrapper[4820]: I0203 12:25:59.136464 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-t5mj4" Feb 03 12:25:59 crc kubenswrapper[4820]: I0203 12:25:59.192765 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2rpfs\" (UniqueName: \"kubernetes.io/projected/7ad36bba-9140-4660-b4ed-e873264c9e22-kube-api-access-2rpfs\") pod \"infra-operator-controller-manager-79955696d6-22gr9\" (UID: \"7ad36bba-9140-4660-b4ed-e873264c9e22\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-22gr9" Feb 03 12:25:59 crc kubenswrapper[4820]: I0203 12:25:59.212437 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-585dbc889-4tmqm"] Feb 03 12:25:59 crc kubenswrapper[4820]: I0203 12:25:59.212500 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-55bff696bd-4fgnl"] Feb 03 12:25:59 crc kubenswrapper[4820]: I0203 12:25:59.213556 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-4fgnl" Feb 03 12:25:59 crc kubenswrapper[4820]: I0203 12:25:59.217394 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-rv5hb" Feb 03 12:25:59 crc kubenswrapper[4820]: I0203 12:25:59.222381 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-6687f8d877-brrn4"] Feb 03 12:25:59 crc kubenswrapper[4820]: I0203 12:25:59.223608 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-brrn4" Feb 03 12:25:59 crc kubenswrapper[4820]: I0203 12:25:59.228928 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-nzhhl" Feb 03 12:25:59 crc kubenswrapper[4820]: I0203 12:25:59.239618 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5nd5l\" (UniqueName: \"kubernetes.io/projected/40fd2238-8148-4aa3-8f4e-54ffc1de0805-kube-api-access-5nd5l\") pod \"mariadb-operator-controller-manager-67bf948998-qnmp6\" (UID: \"40fd2238-8148-4aa3-8f4e-54ffc1de0805\") " pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-qnmp6" Feb 03 12:25:59 crc kubenswrapper[4820]: I0203 12:25:59.239759 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-blsl9\" (UniqueName: \"kubernetes.io/projected/81450158-204d-45f5-a1bc-de63e889445d-kube-api-access-blsl9\") pod \"neutron-operator-controller-manager-585dbc889-4tmqm\" (UID: \"81450158-204d-45f5-a1bc-de63e889445d\") " pod="openstack-operators/neutron-operator-controller-manager-585dbc889-4tmqm" Feb 03 12:25:59 crc kubenswrapper[4820]: I0203 12:25:59.239847 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gt9d9\" (UniqueName: \"kubernetes.io/projected/4ebad58b-3e3b-4bcb-9a80-dedd97e940d0-kube-api-access-gt9d9\") pod \"manila-operator-controller-manager-7dd968899f-rdbrk\" (UID: \"4ebad58b-3e3b-4bcb-9a80-dedd97e940d0\") " pod="openstack-operators/manila-operator-controller-manager-7dd968899f-rdbrk" Feb 03 12:25:59 crc kubenswrapper[4820]: I0203 12:25:59.240124 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4vdbr\" (UniqueName: \"kubernetes.io/projected/614c5412-875d-40b1-ad5f-445a941285af-kube-api-access-4vdbr\") pod \"keystone-operator-controller-manager-84f48565d4-9rprq\" (UID: \"614c5412-875d-40b1-ad5f-445a941285af\") " pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-9rprq" Feb 03 12:25:59 crc kubenswrapper[4820]: I0203 12:25:59.241202 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xh254\" (UniqueName: \"kubernetes.io/projected/29dd9257-532e-48a4-9500-adfc5584ebe0-kube-api-access-xh254\") pod \"ironic-operator-controller-manager-5f4b8bd54d-xkj2j\" (UID: \"29dd9257-532e-48a4-9500-adfc5584ebe0\") " pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-xkj2j" Feb 03 12:25:59 crc kubenswrapper[4820]: I0203 12:25:59.249742 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-xkj2j" Feb 03 12:25:59 crc kubenswrapper[4820]: I0203 12:25:59.255467 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-z8jk7" Feb 03 12:25:59 crc kubenswrapper[4820]: I0203 12:25:59.263816 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-wsb7r" Feb 03 12:25:59 crc kubenswrapper[4820]: I0203 12:25:59.268948 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-55bff696bd-4fgnl"] Feb 03 12:25:59 crc kubenswrapper[4820]: I0203 12:25:59.279193 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gt9d9\" (UniqueName: \"kubernetes.io/projected/4ebad58b-3e3b-4bcb-9a80-dedd97e940d0-kube-api-access-gt9d9\") pod \"manila-operator-controller-manager-7dd968899f-rdbrk\" (UID: \"4ebad58b-3e3b-4bcb-9a80-dedd97e940d0\") " pod="openstack-operators/manila-operator-controller-manager-7dd968899f-rdbrk" Feb 03 12:25:59 crc kubenswrapper[4820]: I0203 12:25:59.286381 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4vdbr\" (UniqueName: \"kubernetes.io/projected/614c5412-875d-40b1-ad5f-445a941285af-kube-api-access-4vdbr\") pod \"keystone-operator-controller-manager-84f48565d4-9rprq\" (UID: \"614c5412-875d-40b1-ad5f-445a941285af\") " pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-9rprq" Feb 03 12:25:59 crc kubenswrapper[4820]: I0203 12:25:59.303986 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dqz6p9"] Feb 03 12:25:59 crc kubenswrapper[4820]: I0203 12:25:59.305228 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dqz6p9" Feb 03 12:25:59 crc kubenswrapper[4820]: I0203 12:25:59.308069 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-rsxvh" Feb 03 12:25:59 crc kubenswrapper[4820]: I0203 12:25:59.308154 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-5dmwb" Feb 03 12:25:59 crc kubenswrapper[4820]: I0203 12:25:59.308346 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Feb 03 12:25:59 crc kubenswrapper[4820]: I0203 12:25:59.322620 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-6687f8d877-brrn4"] Feb 03 12:25:59 crc kubenswrapper[4820]: I0203 12:25:59.342346 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qzbq9\" (UniqueName: \"kubernetes.io/projected/56b6c45e-8879-4a30-8b7c-d6c7df8ac6ae-kube-api-access-qzbq9\") pod \"octavia-operator-controller-manager-6687f8d877-brrn4\" (UID: \"56b6c45e-8879-4a30-8b7c-d6c7df8ac6ae\") " pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-brrn4" Feb 03 12:25:59 crc kubenswrapper[4820]: I0203 12:25:59.342487 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5nd5l\" (UniqueName: \"kubernetes.io/projected/40fd2238-8148-4aa3-8f4e-54ffc1de0805-kube-api-access-5nd5l\") pod \"mariadb-operator-controller-manager-67bf948998-qnmp6\" (UID: \"40fd2238-8148-4aa3-8f4e-54ffc1de0805\") " pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-qnmp6" Feb 03 12:25:59 crc kubenswrapper[4820]: I0203 12:25:59.342566 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-srxd6\" (UniqueName: \"kubernetes.io/projected/7a515408-dc44-4fba-bbe9-8b5f36fbc1d0-kube-api-access-srxd6\") pod \"nova-operator-controller-manager-55bff696bd-4fgnl\" (UID: \"7a515408-dc44-4fba-bbe9-8b5f36fbc1d0\") " pod="openstack-operators/nova-operator-controller-manager-55bff696bd-4fgnl" Feb 03 12:25:59 crc kubenswrapper[4820]: I0203 12:25:59.342627 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-blsl9\" (UniqueName: \"kubernetes.io/projected/81450158-204d-45f5-a1bc-de63e889445d-kube-api-access-blsl9\") pod \"neutron-operator-controller-manager-585dbc889-4tmqm\" (UID: \"81450158-204d-45f5-a1bc-de63e889445d\") " pod="openstack-operators/neutron-operator-controller-manager-585dbc889-4tmqm" Feb 03 12:25:59 crc kubenswrapper[4820]: I0203 12:25:59.364291 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-788c46999f-5lxrd"] Feb 03 12:25:59 crc kubenswrapper[4820]: I0203 12:25:59.365813 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-5lxrd" Feb 03 12:25:59 crc kubenswrapper[4820]: I0203 12:25:59.377097 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-6fw2d" Feb 03 12:25:59 crc kubenswrapper[4820]: I0203 12:25:59.379225 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-blsl9\" (UniqueName: \"kubernetes.io/projected/81450158-204d-45f5-a1bc-de63e889445d-kube-api-access-blsl9\") pod \"neutron-operator-controller-manager-585dbc889-4tmqm\" (UID: \"81450158-204d-45f5-a1bc-de63e889445d\") " pod="openstack-operators/neutron-operator-controller-manager-585dbc889-4tmqm" Feb 03 12:25:59 crc kubenswrapper[4820]: I0203 12:25:59.383092 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-2thbg" Feb 03 12:25:59 crc kubenswrapper[4820]: I0203 12:25:59.389988 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b964cf4cd-x567r"] Feb 03 12:25:59 crc kubenswrapper[4820]: I0203 12:25:59.391547 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5nd5l\" (UniqueName: \"kubernetes.io/projected/40fd2238-8148-4aa3-8f4e-54ffc1de0805-kube-api-access-5nd5l\") pod \"mariadb-operator-controller-manager-67bf948998-qnmp6\" (UID: \"40fd2238-8148-4aa3-8f4e-54ffc1de0805\") " pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-qnmp6" Feb 03 12:25:59 crc kubenswrapper[4820]: I0203 12:25:59.393050 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-x567r" Feb 03 12:25:59 crc kubenswrapper[4820]: I0203 12:25:59.397282 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-8j85v" Feb 03 12:25:59 crc kubenswrapper[4820]: I0203 12:25:59.412078 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-788c46999f-5lxrd"] Feb 03 12:25:59 crc kubenswrapper[4820]: I0203 12:25:59.418198 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-9rprq" Feb 03 12:25:59 crc kubenswrapper[4820]: I0203 12:25:59.442359 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dqz6p9"] Feb 03 12:25:59 crc kubenswrapper[4820]: I0203 12:25:59.444174 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d4nlx\" (UniqueName: \"kubernetes.io/projected/591b67aa-03c7-4cf7-8918-17e2f7a428b0-kube-api-access-d4nlx\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dqz6p9\" (UID: \"591b67aa-03c7-4cf7-8918-17e2f7a428b0\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dqz6p9" Feb 03 12:25:59 crc kubenswrapper[4820]: I0203 12:25:59.444239 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qzbq9\" (UniqueName: \"kubernetes.io/projected/56b6c45e-8879-4a30-8b7c-d6c7df8ac6ae-kube-api-access-qzbq9\") pod \"octavia-operator-controller-manager-6687f8d877-brrn4\" (UID: \"56b6c45e-8879-4a30-8b7c-d6c7df8ac6ae\") " pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-brrn4" Feb 03 12:25:59 crc kubenswrapper[4820]: I0203 12:25:59.444338 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-srxd6\" (UniqueName: \"kubernetes.io/projected/7a515408-dc44-4fba-bbe9-8b5f36fbc1d0-kube-api-access-srxd6\") pod \"nova-operator-controller-manager-55bff696bd-4fgnl\" (UID: \"7a515408-dc44-4fba-bbe9-8b5f36fbc1d0\") " pod="openstack-operators/nova-operator-controller-manager-55bff696bd-4fgnl" Feb 03 12:25:59 crc kubenswrapper[4820]: I0203 12:25:59.444388 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/591b67aa-03c7-4cf7-8918-17e2f7a428b0-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dqz6p9\" (UID: \"591b67aa-03c7-4cf7-8918-17e2f7a428b0\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dqz6p9" Feb 03 12:25:59 crc kubenswrapper[4820]: I0203 12:25:59.452237 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-rdbrk" Feb 03 12:25:59 crc kubenswrapper[4820]: I0203 12:25:59.477099 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-68fc8c869-dr7hd"] Feb 03 12:25:59 crc kubenswrapper[4820]: I0203 12:25:59.484455 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-dr7hd" Feb 03 12:25:59 crc kubenswrapper[4820]: I0203 12:25:59.487042 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-sbfn8" Feb 03 12:25:59 crc kubenswrapper[4820]: I0203 12:25:59.492255 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-4tmqm" Feb 03 12:25:59 crc kubenswrapper[4820]: I0203 12:25:59.501437 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-qnmp6" Feb 03 12:25:59 crc kubenswrapper[4820]: I0203 12:25:59.533390 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-srxd6\" (UniqueName: \"kubernetes.io/projected/7a515408-dc44-4fba-bbe9-8b5f36fbc1d0-kube-api-access-srxd6\") pod \"nova-operator-controller-manager-55bff696bd-4fgnl\" (UID: \"7a515408-dc44-4fba-bbe9-8b5f36fbc1d0\") " pod="openstack-operators/nova-operator-controller-manager-55bff696bd-4fgnl" Feb 03 12:25:59 crc kubenswrapper[4820]: I0203 12:25:59.534355 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qzbq9\" (UniqueName: \"kubernetes.io/projected/56b6c45e-8879-4a30-8b7c-d6c7df8ac6ae-kube-api-access-qzbq9\") pod \"octavia-operator-controller-manager-6687f8d877-brrn4\" (UID: \"56b6c45e-8879-4a30-8b7c-d6c7df8ac6ae\") " pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-brrn4" Feb 03 12:25:59 crc kubenswrapper[4820]: I0203 12:25:59.553472 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/591b67aa-03c7-4cf7-8918-17e2f7a428b0-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dqz6p9\" (UID: \"591b67aa-03c7-4cf7-8918-17e2f7a428b0\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dqz6p9" Feb 03 12:25:59 crc kubenswrapper[4820]: I0203 12:25:59.553653 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d4nlx\" (UniqueName: \"kubernetes.io/projected/591b67aa-03c7-4cf7-8918-17e2f7a428b0-kube-api-access-d4nlx\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dqz6p9\" (UID: \"591b67aa-03c7-4cf7-8918-17e2f7a428b0\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dqz6p9" Feb 03 12:25:59 crc kubenswrapper[4820]: I0203 12:25:59.553701 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-649jv\" (UniqueName: \"kubernetes.io/projected/b12dfc88-bdbd-4874-b397-9273a669e57f-kube-api-access-649jv\") pod \"ovn-operator-controller-manager-788c46999f-5lxrd\" (UID: \"b12dfc88-bdbd-4874-b397-9273a669e57f\") " pod="openstack-operators/ovn-operator-controller-manager-788c46999f-5lxrd" Feb 03 12:25:59 crc kubenswrapper[4820]: I0203 12:25:59.553754 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-62hv9\" (UniqueName: \"kubernetes.io/projected/851ed64f-f147-45d0-a33b-eea29903ec0a-kube-api-access-62hv9\") pod \"placement-operator-controller-manager-5b964cf4cd-x567r\" (UID: \"851ed64f-f147-45d0-a33b-eea29903ec0a\") " pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-x567r" Feb 03 12:25:59 crc kubenswrapper[4820]: E0203 12:25:59.554807 4820 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 03 12:25:59 crc kubenswrapper[4820]: E0203 12:25:59.555030 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/591b67aa-03c7-4cf7-8918-17e2f7a428b0-cert podName:591b67aa-03c7-4cf7-8918-17e2f7a428b0 nodeName:}" failed. No retries permitted until 2026-02-03 12:26:00.054868406 +0000 UTC m=+1277.577944360 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/591b67aa-03c7-4cf7-8918-17e2f7a428b0-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dqz6p9" (UID: "591b67aa-03c7-4cf7-8918-17e2f7a428b0") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 03 12:25:59 crc kubenswrapper[4820]: I0203 12:25:59.557358 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-4fgnl" Feb 03 12:25:59 crc kubenswrapper[4820]: I0203 12:25:59.564933 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b964cf4cd-x567r"] Feb 03 12:25:59 crc kubenswrapper[4820]: I0203 12:25:59.613350 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-brrn4" Feb 03 12:25:59 crc kubenswrapper[4820]: I0203 12:25:59.617709 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d4nlx\" (UniqueName: \"kubernetes.io/projected/591b67aa-03c7-4cf7-8918-17e2f7a428b0-kube-api-access-d4nlx\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dqz6p9\" (UID: \"591b67aa-03c7-4cf7-8918-17e2f7a428b0\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dqz6p9" Feb 03 12:25:59 crc kubenswrapper[4820]: I0203 12:25:59.641315 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-64b5b76f97-xw4mq"] Feb 03 12:25:59 crc kubenswrapper[4820]: I0203 12:25:59.669027 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7ad36bba-9140-4660-b4ed-e873264c9e22-cert\") pod \"infra-operator-controller-manager-79955696d6-22gr9\" (UID: \"7ad36bba-9140-4660-b4ed-e873264c9e22\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-22gr9" Feb 03 12:25:59 crc kubenswrapper[4820]: I0203 12:25:59.673197 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-649jv\" (UniqueName: \"kubernetes.io/projected/b12dfc88-bdbd-4874-b397-9273a669e57f-kube-api-access-649jv\") pod \"ovn-operator-controller-manager-788c46999f-5lxrd\" (UID: \"b12dfc88-bdbd-4874-b397-9273a669e57f\") " pod="openstack-operators/ovn-operator-controller-manager-788c46999f-5lxrd" Feb 03 12:25:59 crc kubenswrapper[4820]: I0203 12:25:59.673598 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-62hv9\" (UniqueName: \"kubernetes.io/projected/851ed64f-f147-45d0-a33b-eea29903ec0a-kube-api-access-62hv9\") pod \"placement-operator-controller-manager-5b964cf4cd-x567r\" (UID: \"851ed64f-f147-45d0-a33b-eea29903ec0a\") " pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-x567r" Feb 03 12:25:59 crc kubenswrapper[4820]: I0203 12:25:59.675302 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dcj2n\" (UniqueName: \"kubernetes.io/projected/21f3efdd-0c83-42cb-8b54-b0554534bfb7-kube-api-access-dcj2n\") pod \"swift-operator-controller-manager-68fc8c869-dr7hd\" (UID: \"21f3efdd-0c83-42cb-8b54-b0554534bfb7\") " pod="openstack-operators/swift-operator-controller-manager-68fc8c869-dr7hd" Feb 03 12:25:59 crc kubenswrapper[4820]: E0203 12:25:59.669569 4820 secret.go:188] Couldn't get secret 
openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 03 12:25:59 crc kubenswrapper[4820]: E0203 12:25:59.681674 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7ad36bba-9140-4660-b4ed-e873264c9e22-cert podName:7ad36bba-9140-4660-b4ed-e873264c9e22 nodeName:}" failed. No retries permitted until 2026-02-03 12:26:00.68163288 +0000 UTC m=+1278.204708734 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/7ad36bba-9140-4660-b4ed-e873264c9e22-cert") pod "infra-operator-controller-manager-79955696d6-22gr9" (UID: "7ad36bba-9140-4660-b4ed-e873264c9e22") : secret "infra-operator-webhook-server-cert" not found Feb 03 12:25:59 crc kubenswrapper[4820]: I0203 12:25:59.681996 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-xw4mq" Feb 03 12:25:59 crc kubenswrapper[4820]: I0203 12:25:59.689472 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-jcqk4" Feb 03 12:25:59 crc kubenswrapper[4820]: I0203 12:25:59.716035 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68fc8c869-dr7hd"] Feb 03 12:25:59 crc kubenswrapper[4820]: I0203 12:25:59.738261 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-649jv\" (UniqueName: \"kubernetes.io/projected/b12dfc88-bdbd-4874-b397-9273a669e57f-kube-api-access-649jv\") pod \"ovn-operator-controller-manager-788c46999f-5lxrd\" (UID: \"b12dfc88-bdbd-4874-b397-9273a669e57f\") " pod="openstack-operators/ovn-operator-controller-manager-788c46999f-5lxrd" Feb 03 12:25:59 crc kubenswrapper[4820]: I0203 12:25:59.764164 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-62hv9\" (UniqueName: \"kubernetes.io/projected/851ed64f-f147-45d0-a33b-eea29903ec0a-kube-api-access-62hv9\") pod \"placement-operator-controller-manager-5b964cf4cd-x567r\" (UID: \"851ed64f-f147-45d0-a33b-eea29903ec0a\") " pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-x567r" Feb 03 12:25:59 crc kubenswrapper[4820]: I0203 12:25:59.778650 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-849t9\" (UniqueName: \"kubernetes.io/projected/96838cc3-1b9b-41b3-b20e-476319c65436-kube-api-access-849t9\") pod \"telemetry-operator-controller-manager-64b5b76f97-xw4mq\" (UID: \"96838cc3-1b9b-41b3-b20e-476319c65436\") " pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-xw4mq" Feb 03 12:25:59 crc kubenswrapper[4820]: I0203 12:25:59.778766 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dcj2n\" (UniqueName: \"kubernetes.io/projected/21f3efdd-0c83-42cb-8b54-b0554534bfb7-kube-api-access-dcj2n\") pod \"swift-operator-controller-manager-68fc8c869-dr7hd\" (UID: \"21f3efdd-0c83-42cb-8b54-b0554534bfb7\") " pod="openstack-operators/swift-operator-controller-manager-68fc8c869-dr7hd" Feb 03 12:25:59 crc kubenswrapper[4820]: I0203 12:25:59.820971 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-x567r" Feb 03 12:25:59 crc kubenswrapper[4820]: I0203 12:25:59.822631 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-5lxrd" Feb 03 12:25:59 crc kubenswrapper[4820]: I0203 12:25:59.867226 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dcj2n\" (UniqueName: \"kubernetes.io/projected/21f3efdd-0c83-42cb-8b54-b0554534bfb7-kube-api-access-dcj2n\") pod \"swift-operator-controller-manager-68fc8c869-dr7hd\" (UID: \"21f3efdd-0c83-42cb-8b54-b0554534bfb7\") " pod="openstack-operators/swift-operator-controller-manager-68fc8c869-dr7hd" Feb 03 12:25:59 crc kubenswrapper[4820]: I0203 12:25:59.878954 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-56f8bfcd9f-22hg8"] Feb 03 12:25:59 crc kubenswrapper[4820]: I0203 12:25:59.880424 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-22hg8" Feb 03 12:25:59 crc kubenswrapper[4820]: I0203 12:25:59.880557 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-849t9\" (UniqueName: \"kubernetes.io/projected/96838cc3-1b9b-41b3-b20e-476319c65436-kube-api-access-849t9\") pod \"telemetry-operator-controller-manager-64b5b76f97-xw4mq\" (UID: \"96838cc3-1b9b-41b3-b20e-476319c65436\") " pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-xw4mq" Feb 03 12:25:59 crc kubenswrapper[4820]: I0203 12:25:59.892210 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-8ht5k" Feb 03 12:25:59 crc kubenswrapper[4820]: I0203 12:25:59.903329 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-64b5b76f97-xw4mq"] Feb 03 12:25:59 crc kubenswrapper[4820]: I0203 12:25:59.909761 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-56f8bfcd9f-22hg8"] Feb 03 12:25:59 crc kubenswrapper[4820]: I0203 12:25:59.924787 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-6d49495bcf-pflss"] Feb 03 12:25:59 crc kubenswrapper[4820]: I0203 12:25:59.926453 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-6d49495bcf-pflss" Feb 03 12:25:59 crc kubenswrapper[4820]: I0203 12:25:59.930385 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-fgp94" Feb 03 12:25:59 crc kubenswrapper[4820]: I0203 12:25:59.938160 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-6d49495bcf-pflss"] Feb 03 12:25:59 crc kubenswrapper[4820]: I0203 12:25:59.940415 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-849t9\" (UniqueName: \"kubernetes.io/projected/96838cc3-1b9b-41b3-b20e-476319c65436-kube-api-access-849t9\") pod \"telemetry-operator-controller-manager-64b5b76f97-xw4mq\" (UID: \"96838cc3-1b9b-41b3-b20e-476319c65436\") " pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-xw4mq" Feb 03 12:25:59 crc kubenswrapper[4820]: I0203 12:25:59.975952 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-855575688d-cl9c5"] Feb 03 12:25:59 crc kubenswrapper[4820]: I0203 12:25:59.976921 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-855575688d-cl9c5" Feb 03 12:25:59 crc kubenswrapper[4820]: I0203 12:25:59.981386 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Feb 03 12:25:59 crc kubenswrapper[4820]: I0203 12:25:59.981557 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-qrhjh" Feb 03 12:25:59 crc kubenswrapper[4820]: I0203 12:25:59.981707 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Feb 03 12:25:59 crc kubenswrapper[4820]: I0203 12:25:59.984101 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bt24r\" (UniqueName: \"kubernetes.io/projected/1058185d-f11d-4a87-9fe6-005f60186329-kube-api-access-bt24r\") pod \"test-operator-controller-manager-56f8bfcd9f-22hg8\" (UID: \"1058185d-f11d-4a87-9fe6-005f60186329\") " pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-22hg8" Feb 03 12:25:59 crc kubenswrapper[4820]: I0203 12:25:59.990337 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-855575688d-cl9c5"] Feb 03 12:26:00 crc kubenswrapper[4820]: I0203 12:26:00.001003 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-8l49q"] Feb 03 12:26:00 crc kubenswrapper[4820]: I0203 12:26:00.002365 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-8l49q" Feb 03 12:26:00 crc kubenswrapper[4820]: I0203 12:26:00.006577 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-6gdfz" Feb 03 12:26:00 crc kubenswrapper[4820]: I0203 12:26:00.008951 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-8l49q"] Feb 03 12:26:00 crc kubenswrapper[4820]: I0203 12:26:00.019075 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-qnh2k"] Feb 03 12:26:00 crc kubenswrapper[4820]: I0203 12:26:00.072452 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-dr7hd" Feb 03 12:26:00 crc kubenswrapper[4820]: I0203 12:26:00.095974 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xlxvh\" (UniqueName: \"kubernetes.io/projected/ffe7d059-602c-4fbc-bd5e-4c092cc6f3db-kube-api-access-xlxvh\") pod \"openstack-operator-controller-manager-855575688d-cl9c5\" (UID: \"ffe7d059-602c-4fbc-bd5e-4c092cc6f3db\") " pod="openstack-operators/openstack-operator-controller-manager-855575688d-cl9c5" Feb 03 12:26:00 crc kubenswrapper[4820]: I0203 12:26:00.096054 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bt24r\" (UniqueName: \"kubernetes.io/projected/1058185d-f11d-4a87-9fe6-005f60186329-kube-api-access-bt24r\") pod \"test-operator-controller-manager-56f8bfcd9f-22hg8\" (UID: \"1058185d-f11d-4a87-9fe6-005f60186329\") " pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-22hg8" Feb 03 12:26:00 crc kubenswrapper[4820]: I0203 12:26:00.096085 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z2bsk\" (UniqueName: \"kubernetes.io/projected/18a84695-492b-42ae-9d72-6e582316ce55-kube-api-access-z2bsk\") pod \"watcher-operator-controller-manager-6d49495bcf-pflss\" (UID: \"18a84695-492b-42ae-9d72-6e582316ce55\") " pod="openstack-operators/watcher-operator-controller-manager-6d49495bcf-pflss" Feb 03 12:26:00 crc kubenswrapper[4820]: I0203 12:26:00.096147 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nfwns\" (UniqueName: \"kubernetes.io/projected/8560a157-03d5-4135-a5e1-32acc68b6e4e-kube-api-access-nfwns\") pod \"rabbitmq-cluster-operator-manager-668c99d594-8l49q\" (UID: \"8560a157-03d5-4135-a5e1-32acc68b6e4e\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-8l49q" Feb 03 12:26:00 crc kubenswrapper[4820]: I0203 12:26:00.096201 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/591b67aa-03c7-4cf7-8918-17e2f7a428b0-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dqz6p9\" (UID: \"591b67aa-03c7-4cf7-8918-17e2f7a428b0\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dqz6p9" Feb 03 12:26:00 crc kubenswrapper[4820]: I0203 12:26:00.096262 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ffe7d059-602c-4fbc-bd5e-4c092cc6f3db-webhook-certs\") pod 
\"openstack-operator-controller-manager-855575688d-cl9c5\" (UID: \"ffe7d059-602c-4fbc-bd5e-4c092cc6f3db\") " pod="openstack-operators/openstack-operator-controller-manager-855575688d-cl9c5" Feb 03 12:26:00 crc kubenswrapper[4820]: I0203 12:26:00.096295 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ffe7d059-602c-4fbc-bd5e-4c092cc6f3db-metrics-certs\") pod \"openstack-operator-controller-manager-855575688d-cl9c5\" (UID: \"ffe7d059-602c-4fbc-bd5e-4c092cc6f3db\") " pod="openstack-operators/openstack-operator-controller-manager-855575688d-cl9c5" Feb 03 12:26:00 crc kubenswrapper[4820]: E0203 12:26:00.096960 4820 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 03 12:26:00 crc kubenswrapper[4820]: E0203 12:26:00.097018 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/591b67aa-03c7-4cf7-8918-17e2f7a428b0-cert podName:591b67aa-03c7-4cf7-8918-17e2f7a428b0 nodeName:}" failed. No retries permitted until 2026-02-03 12:26:01.096998382 +0000 UTC m=+1278.620074246 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/591b67aa-03c7-4cf7-8918-17e2f7a428b0-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dqz6p9" (UID: "591b67aa-03c7-4cf7-8918-17e2f7a428b0") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 03 12:26:00 crc kubenswrapper[4820]: I0203 12:26:00.148095 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bt24r\" (UniqueName: \"kubernetes.io/projected/1058185d-f11d-4a87-9fe6-005f60186329-kube-api-access-bt24r\") pod \"test-operator-controller-manager-56f8bfcd9f-22hg8\" (UID: \"1058185d-f11d-4a87-9fe6-005f60186329\") " pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-22hg8" Feb 03 12:26:00 crc kubenswrapper[4820]: I0203 12:26:00.156390 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-xw4mq" Feb 03 12:26:00 crc kubenswrapper[4820]: I0203 12:26:00.203002 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xlxvh\" (UniqueName: \"kubernetes.io/projected/ffe7d059-602c-4fbc-bd5e-4c092cc6f3db-kube-api-access-xlxvh\") pod \"openstack-operator-controller-manager-855575688d-cl9c5\" (UID: \"ffe7d059-602c-4fbc-bd5e-4c092cc6f3db\") " pod="openstack-operators/openstack-operator-controller-manager-855575688d-cl9c5" Feb 03 12:26:00 crc kubenswrapper[4820]: I0203 12:26:00.203621 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z2bsk\" (UniqueName: \"kubernetes.io/projected/18a84695-492b-42ae-9d72-6e582316ce55-kube-api-access-z2bsk\") pod \"watcher-operator-controller-manager-6d49495bcf-pflss\" (UID: \"18a84695-492b-42ae-9d72-6e582316ce55\") " pod="openstack-operators/watcher-operator-controller-manager-6d49495bcf-pflss" Feb 03 12:26:00 crc kubenswrapper[4820]: I0203 12:26:00.203864 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nfwns\" (UniqueName: \"kubernetes.io/projected/8560a157-03d5-4135-a5e1-32acc68b6e4e-kube-api-access-nfwns\") pod \"rabbitmq-cluster-operator-manager-668c99d594-8l49q\" (UID: \"8560a157-03d5-4135-a5e1-32acc68b6e4e\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-8l49q" Feb 03 12:26:00 crc kubenswrapper[4820]: I0203 12:26:00.204100 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ffe7d059-602c-4fbc-bd5e-4c092cc6f3db-webhook-certs\") pod \"openstack-operator-controller-manager-855575688d-cl9c5\" (UID: \"ffe7d059-602c-4fbc-bd5e-4c092cc6f3db\") " pod="openstack-operators/openstack-operator-controller-manager-855575688d-cl9c5" Feb 03 12:26:00 crc kubenswrapper[4820]: I0203 12:26:00.204176 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ffe7d059-602c-4fbc-bd5e-4c092cc6f3db-metrics-certs\") pod \"openstack-operator-controller-manager-855575688d-cl9c5\" (UID: \"ffe7d059-602c-4fbc-bd5e-4c092cc6f3db\") " pod="openstack-operators/openstack-operator-controller-manager-855575688d-cl9c5" Feb 03 12:26:00 crc kubenswrapper[4820]: E0203 12:26:00.204549 4820 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 03 12:26:00 crc kubenswrapper[4820]: E0203 12:26:00.204606 4820 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 03 12:26:00 crc kubenswrapper[4820]: E0203 12:26:00.204673 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ffe7d059-602c-4fbc-bd5e-4c092cc6f3db-metrics-certs podName:ffe7d059-602c-4fbc-bd5e-4c092cc6f3db nodeName:}" failed. No retries permitted until 2026-02-03 12:26:00.704637006 +0000 UTC m=+1278.227712870 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ffe7d059-602c-4fbc-bd5e-4c092cc6f3db-metrics-certs") pod "openstack-operator-controller-manager-855575688d-cl9c5" (UID: "ffe7d059-602c-4fbc-bd5e-4c092cc6f3db") : secret "metrics-server-cert" not found Feb 03 12:26:00 crc kubenswrapper[4820]: E0203 12:26:00.204742 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ffe7d059-602c-4fbc-bd5e-4c092cc6f3db-webhook-certs podName:ffe7d059-602c-4fbc-bd5e-4c092cc6f3db nodeName:}" failed. No retries permitted until 2026-02-03 12:26:00.704717908 +0000 UTC m=+1278.227793772 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/ffe7d059-602c-4fbc-bd5e-4c092cc6f3db-webhook-certs") pod "openstack-operator-controller-manager-855575688d-cl9c5" (UID: "ffe7d059-602c-4fbc-bd5e-4c092cc6f3db") : secret "webhook-server-cert" not found Feb 03 12:26:00 crc kubenswrapper[4820]: I0203 12:26:00.251871 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xlxvh\" (UniqueName: \"kubernetes.io/projected/ffe7d059-602c-4fbc-bd5e-4c092cc6f3db-kube-api-access-xlxvh\") pod \"openstack-operator-controller-manager-855575688d-cl9c5\" (UID: \"ffe7d059-602c-4fbc-bd5e-4c092cc6f3db\") " pod="openstack-operators/openstack-operator-controller-manager-855575688d-cl9c5" Feb 03 12:26:00 crc kubenswrapper[4820]: I0203 12:26:00.264131 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nfwns\" (UniqueName: \"kubernetes.io/projected/8560a157-03d5-4135-a5e1-32acc68b6e4e-kube-api-access-nfwns\") pod \"rabbitmq-cluster-operator-manager-668c99d594-8l49q\" (UID: \"8560a157-03d5-4135-a5e1-32acc68b6e4e\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-8l49q" Feb 03 12:26:00 crc kubenswrapper[4820]: I0203 12:26:00.266053 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z2bsk\" (UniqueName: \"kubernetes.io/projected/18a84695-492b-42ae-9d72-6e582316ce55-kube-api-access-z2bsk\") pod \"watcher-operator-controller-manager-6d49495bcf-pflss\" (UID: \"18a84695-492b-42ae-9d72-6e582316ce55\") " pod="openstack-operators/watcher-operator-controller-manager-6d49495bcf-pflss" Feb 03 12:26:00 crc kubenswrapper[4820]: I0203 12:26:00.340947 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-22hg8" Feb 03 12:26:00 crc kubenswrapper[4820]: I0203 12:26:00.549567 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-6d49495bcf-pflss" Feb 03 12:26:00 crc kubenswrapper[4820]: I0203 12:26:00.587924 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-8l49q" Feb 03 12:26:00 crc kubenswrapper[4820]: I0203 12:26:00.768637 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5fb775575f-t5mj4"] Feb 03 12:26:00 crc kubenswrapper[4820]: I0203 12:26:00.777538 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7ad36bba-9140-4660-b4ed-e873264c9e22-cert\") pod \"infra-operator-controller-manager-79955696d6-22gr9\" (UID: \"7ad36bba-9140-4660-b4ed-e873264c9e22\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-22gr9" Feb 03 12:26:00 crc kubenswrapper[4820]: I0203 12:26:00.777609 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ffe7d059-602c-4fbc-bd5e-4c092cc6f3db-webhook-certs\") pod \"openstack-operator-controller-manager-855575688d-cl9c5\" (UID: \"ffe7d059-602c-4fbc-bd5e-4c092cc6f3db\") " pod="openstack-operators/openstack-operator-controller-manager-855575688d-cl9c5" Feb 03 12:26:00 crc kubenswrapper[4820]: I0203 12:26:00.777674 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ffe7d059-602c-4fbc-bd5e-4c092cc6f3db-metrics-certs\") pod \"openstack-operator-controller-manager-855575688d-cl9c5\" (UID: \"ffe7d059-602c-4fbc-bd5e-4c092cc6f3db\") " pod="openstack-operators/openstack-operator-controller-manager-855575688d-cl9c5" Feb 03 12:26:00 crc kubenswrapper[4820]: E0203 12:26:00.778002 4820 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 03 12:26:00 crc kubenswrapper[4820]: E0203 12:26:00.778084 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ffe7d059-602c-4fbc-bd5e-4c092cc6f3db-metrics-certs podName:ffe7d059-602c-4fbc-bd5e-4c092cc6f3db nodeName:}" failed. No retries permitted until 2026-02-03 12:26:01.778060753 +0000 UTC m=+1279.301136617 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ffe7d059-602c-4fbc-bd5e-4c092cc6f3db-metrics-certs") pod "openstack-operator-controller-manager-855575688d-cl9c5" (UID: "ffe7d059-602c-4fbc-bd5e-4c092cc6f3db") : secret "metrics-server-cert" not found Feb 03 12:26:00 crc kubenswrapper[4820]: E0203 12:26:00.778627 4820 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 03 12:26:00 crc kubenswrapper[4820]: E0203 12:26:00.778675 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7ad36bba-9140-4660-b4ed-e873264c9e22-cert podName:7ad36bba-9140-4660-b4ed-e873264c9e22 nodeName:}" failed. No retries permitted until 2026-02-03 12:26:02.778658869 +0000 UTC m=+1280.301734733 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/7ad36bba-9140-4660-b4ed-e873264c9e22-cert") pod "infra-operator-controller-manager-79955696d6-22gr9" (UID: "7ad36bba-9140-4660-b4ed-e873264c9e22") : secret "infra-operator-webhook-server-cert" not found Feb 03 12:26:00 crc kubenswrapper[4820]: E0203 12:26:00.778749 4820 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 03 12:26:00 crc kubenswrapper[4820]: E0203 12:26:00.778800 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ffe7d059-602c-4fbc-bd5e-4c092cc6f3db-webhook-certs podName:ffe7d059-602c-4fbc-bd5e-4c092cc6f3db nodeName:}" failed. No retries permitted until 2026-02-03 12:26:01.778779782 +0000 UTC m=+1279.301855656 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/ffe7d059-602c-4fbc-bd5e-4c092cc6f3db-webhook-certs") pod "openstack-operator-controller-manager-855575688d-cl9c5" (UID: "ffe7d059-602c-4fbc-bd5e-4c092cc6f3db") : secret "webhook-server-cert" not found Feb 03 12:26:00 crc kubenswrapper[4820]: I0203 12:26:00.858942 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-qnh2k" event={"ID":"cde1eaee-12a0-47f7-b88a-b1b97d0ed74b","Type":"ContainerStarted","Data":"29186eee78239ad908084bddfd6fc9b063b7159b72b3a1fa3d9bb2f9c7b533c6"} Feb 03 12:26:01 crc kubenswrapper[4820]: W0203 12:26:01.078626 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod101ca31b_ff08_4a49_9cc1_f48fd8679116.slice/crio-7803707a64d020ba2b9d7a3b9ffe6190c509210b359f0babc93ae85e3443b8d4 WatchSource:0}: Error finding container 7803707a64d020ba2b9d7a3b9ffe6190c509210b359f0babc93ae85e3443b8d4: Status 404 returned error can't find the container with id 7803707a64d020ba2b9d7a3b9ffe6190c509210b359f0babc93ae85e3443b8d4 Feb 03 12:26:01 crc kubenswrapper[4820]: I0203 12:26:01.232389 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/591b67aa-03c7-4cf7-8918-17e2f7a428b0-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dqz6p9\" (UID: \"591b67aa-03c7-4cf7-8918-17e2f7a428b0\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dqz6p9" Feb 03 12:26:01 crc kubenswrapper[4820]: E0203 12:26:01.234113 4820 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 03 12:26:01 crc kubenswrapper[4820]: E0203 12:26:01.234182 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/591b67aa-03c7-4cf7-8918-17e2f7a428b0-cert podName:591b67aa-03c7-4cf7-8918-17e2f7a428b0 nodeName:}" failed. No retries permitted until 2026-02-03 12:26:03.234160111 +0000 UTC m=+1280.757235985 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/591b67aa-03c7-4cf7-8918-17e2f7a428b0-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dqz6p9" (UID: "591b67aa-03c7-4cf7-8918-17e2f7a428b0") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 03 12:26:01 crc kubenswrapper[4820]: I0203 12:26:01.549797 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-xkj2j"] Feb 03 12:26:01 crc kubenswrapper[4820]: I0203 12:26:01.875838 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ffe7d059-602c-4fbc-bd5e-4c092cc6f3db-webhook-certs\") pod \"openstack-operator-controller-manager-855575688d-cl9c5\" (UID: \"ffe7d059-602c-4fbc-bd5e-4c092cc6f3db\") " pod="openstack-operators/openstack-operator-controller-manager-855575688d-cl9c5" Feb 03 12:26:01 crc kubenswrapper[4820]: I0203 12:26:01.875923 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ffe7d059-602c-4fbc-bd5e-4c092cc6f3db-metrics-certs\") pod \"openstack-operator-controller-manager-855575688d-cl9c5\" (UID: \"ffe7d059-602c-4fbc-bd5e-4c092cc6f3db\") " pod="openstack-operators/openstack-operator-controller-manager-855575688d-cl9c5" Feb 03 12:26:01 crc kubenswrapper[4820]: E0203 12:26:01.876214 4820 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 03 12:26:01 crc kubenswrapper[4820]: E0203 12:26:01.876268 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ffe7d059-602c-4fbc-bd5e-4c092cc6f3db-metrics-certs podName:ffe7d059-602c-4fbc-bd5e-4c092cc6f3db nodeName:}" failed. No retries permitted until 2026-02-03 12:26:03.876251933 +0000 UTC m=+1281.399327797 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ffe7d059-602c-4fbc-bd5e-4c092cc6f3db-metrics-certs") pod "openstack-operator-controller-manager-855575688d-cl9c5" (UID: "ffe7d059-602c-4fbc-bd5e-4c092cc6f3db") : secret "metrics-server-cert" not found Feb 03 12:26:01 crc kubenswrapper[4820]: E0203 12:26:01.876479 4820 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 03 12:26:01 crc kubenswrapper[4820]: E0203 12:26:01.876557 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ffe7d059-602c-4fbc-bd5e-4c092cc6f3db-webhook-certs podName:ffe7d059-602c-4fbc-bd5e-4c092cc6f3db nodeName:}" failed. No retries permitted until 2026-02-03 12:26:03.876535781 +0000 UTC m=+1281.399611645 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/ffe7d059-602c-4fbc-bd5e-4c092cc6f3db-webhook-certs") pod "openstack-operator-controller-manager-855575688d-cl9c5" (UID: "ffe7d059-602c-4fbc-bd5e-4c092cc6f3db") : secret "webhook-server-cert" not found Feb 03 12:26:01 crc kubenswrapper[4820]: I0203 12:26:01.904444 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-xkj2j" event={"ID":"29dd9257-532e-48a4-9500-adfc5584ebe0","Type":"ContainerStarted","Data":"67e7d806ca0c77cf66a6a5296347b2ab06ebcd6fd50c9f26b668b1b40c6e78f5"} Feb 03 12:26:01 crc kubenswrapper[4820]: I0203 12:26:01.923671 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-t5mj4" event={"ID":"101ca31b-ff08-4a49-9cc1-f48fd8679116","Type":"ContainerStarted","Data":"7803707a64d020ba2b9d7a3b9ffe6190c509210b359f0babc93ae85e3443b8d4"} Feb 03 12:26:01 crc kubenswrapper[4820]: I0203 12:26:01.944797 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d9697b7f4-wsb7r"] Feb 03 12:26:01 crc kubenswrapper[4820]: I0203 12:26:01.998199 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69d6db494d-6fw2d"] Feb 03 12:26:02 crc kubenswrapper[4820]: W0203 12:26:02.085564 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod51c967b2_8f1a_4d0d_a3f9_745e72863b84.slice/crio-f1e25e447255dc9975d72a23eeff2a0eca71847e097f98e44871f51148e73f2d WatchSource:0}: Error finding container f1e25e447255dc9975d72a23eeff2a0eca71847e097f98e44871f51148e73f2d: Status 404 returned error can't find the container with id f1e25e447255dc9975d72a23eeff2a0eca71847e097f98e44871f51148e73f2d Feb 03 12:26:02 crc kubenswrapper[4820]: I0203 12:26:02.464543 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-6687f8d877-brrn4"] Feb 03 12:26:02 crc kubenswrapper[4820]: W0203 12:26:02.480919 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod56b6c45e_8879_4a30_8b7c_d6c7df8ac6ae.slice/crio-2acee2ad51d82472877e5a5ea4e7e7f092e8911473f97a5bda2c0096dc89b52a WatchSource:0}: Error finding container 2acee2ad51d82472877e5a5ea4e7e7f092e8911473f97a5bda2c0096dc89b52a: Status 404 returned error can't find the container with id 2acee2ad51d82472877e5a5ea4e7e7f092e8911473f97a5bda2c0096dc89b52a Feb 03 12:26:02 crc kubenswrapper[4820]: I0203 12:26:02.500594 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-585dbc889-4tmqm"] Feb 03 12:26:02 crc kubenswrapper[4820]: W0203 12:26:02.509985 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod81450158_204d_45f5_a1bc_de63e889445d.slice/crio-582d59b904a343a49df10b79cca4607a673e766f10b356b1766d65f5c02e0407 WatchSource:0}: Error finding container 582d59b904a343a49df10b79cca4607a673e766f10b356b1766d65f5c02e0407: Status 404 returned error can't find the container with id 582d59b904a343a49df10b79cca4607a673e766f10b356b1766d65f5c02e0407 Feb 03 12:26:02 crc kubenswrapper[4820]: I0203 12:26:02.524658 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/mariadb-operator-controller-manager-67bf948998-qnmp6"] Feb 03 12:26:02 crc kubenswrapper[4820]: W0203 12:26:02.534983 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod40fd2238_8148_4aa3_8f4e_54ffc1de0805.slice/crio-0c6c220d3627d7c1c9fbdd5b19218d9f12038fb177ee7276e491028619a6d80a WatchSource:0}: Error finding container 0c6c220d3627d7c1c9fbdd5b19218d9f12038fb177ee7276e491028619a6d80a: Status 404 returned error can't find the container with id 0c6c220d3627d7c1c9fbdd5b19218d9f12038fb177ee7276e491028619a6d80a Feb 03 12:26:02 crc kubenswrapper[4820]: I0203 12:26:02.801861 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7ad36bba-9140-4660-b4ed-e873264c9e22-cert\") pod \"infra-operator-controller-manager-79955696d6-22gr9\" (UID: \"7ad36bba-9140-4660-b4ed-e873264c9e22\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-22gr9" Feb 03 12:26:02 crc kubenswrapper[4820]: E0203 12:26:02.802075 4820 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 03 12:26:02 crc kubenswrapper[4820]: E0203 12:26:02.803144 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7ad36bba-9140-4660-b4ed-e873264c9e22-cert podName:7ad36bba-9140-4660-b4ed-e873264c9e22 nodeName:}" failed. No retries permitted until 2026-02-03 12:26:06.80311897 +0000 UTC m=+1284.326194834 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/7ad36bba-9140-4660-b4ed-e873264c9e22-cert") pod "infra-operator-controller-manager-79955696d6-22gr9" (UID: "7ad36bba-9140-4660-b4ed-e873264c9e22") : secret "infra-operator-webhook-server-cert" not found Feb 03 12:26:02 crc kubenswrapper[4820]: I0203 12:26:02.856467 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-84f48565d4-9rprq"] Feb 03 12:26:02 crc kubenswrapper[4820]: I0203 12:26:02.863277 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-8886f4c47-5dmwb"] Feb 03 12:26:02 crc kubenswrapper[4820]: I0203 12:26:02.894584 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-8d874c8fc-z8jk7"] Feb 03 12:26:02 crc kubenswrapper[4820]: W0203 12:26:02.904824 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7a515408_dc44_4fba_bbe9_8b5f36fbc1d0.slice/crio-f66baaab23a87c18c50e935cc8bdecea082ef12140633e40863df77cac37e12f WatchSource:0}: Error finding container f66baaab23a87c18c50e935cc8bdecea082ef12140633e40863df77cac37e12f: Status 404 returned error can't find the container with id f66baaab23a87c18c50e935cc8bdecea082ef12140633e40863df77cac37e12f Feb 03 12:26:02 crc kubenswrapper[4820]: I0203 12:26:02.917938 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-8l49q"] Feb 03 12:26:02 crc kubenswrapper[4820]: W0203 12:26:02.918451 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod88eb8fcd_4721_45c2_bb00_23b1dc962283.slice/crio-1602b5ae2074191df02b54b3fe3cf2c25869f679c37639751e575702ba99e6d6 
WatchSource:0}: Error finding container 1602b5ae2074191df02b54b3fe3cf2c25869f679c37639751e575702ba99e6d6: Status 404 returned error can't find the container with id 1602b5ae2074191df02b54b3fe3cf2c25869f679c37639751e575702ba99e6d6 Feb 03 12:26:02 crc kubenswrapper[4820]: I0203 12:26:02.932020 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68fc8c869-dr7hd"] Feb 03 12:26:02 crc kubenswrapper[4820]: I0203 12:26:02.959409 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-6d49495bcf-pflss" event={"ID":"18a84695-492b-42ae-9d72-6e582316ce55","Type":"ContainerStarted","Data":"ef341db432c98b57063360f5e81a5159398de0727be9583877d41ffb28471063"} Feb 03 12:26:02 crc kubenswrapper[4820]: I0203 12:26:02.965406 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-brrn4" event={"ID":"56b6c45e-8879-4a30-8b7c-d6c7df8ac6ae","Type":"ContainerStarted","Data":"2acee2ad51d82472877e5a5ea4e7e7f092e8911473f97a5bda2c0096dc89b52a"} Feb 03 12:26:02 crc kubenswrapper[4820]: I0203 12:26:02.968016 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-4tmqm" event={"ID":"81450158-204d-45f5-a1bc-de63e889445d","Type":"ContainerStarted","Data":"582d59b904a343a49df10b79cca4607a673e766f10b356b1766d65f5c02e0407"} Feb 03 12:26:02 crc kubenswrapper[4820]: E0203 12:26:02.968221 4820 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:f9bf288cd0c13912404027a58ea3b90d4092b641e8265adc5c88644ea7fe901a,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-849t9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
Feb 03 12:26:02 crc kubenswrapper[4820]: E0203 12:26:02.969351 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-xw4mq" podUID="96838cc3-1b9b-41b3-b20e-476319c65436"
Feb 03 12:26:02 crc kubenswrapper[4820]: I0203 12:26:02.973723 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-4fgnl" event={"ID":"7a515408-dc44-4fba-bbe9-8b5f36fbc1d0","Type":"ContainerStarted","Data":"f66baaab23a87c18c50e935cc8bdecea082ef12140633e40863df77cac37e12f"}
Feb 03 12:26:02 crc kubenswrapper[4820]: I0203 12:26:02.974130 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-788c46999f-5lxrd"]
Feb 03 12:26:02 crc kubenswrapper[4820]: I0203 12:26:02.976655 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-wsb7r" event={"ID":"51c967b2-8f1a-4d0d-a3f9-745e72863b84","Type":"ContainerStarted","Data":"f1e25e447255dc9975d72a23eeff2a0eca71847e097f98e44871f51148e73f2d"}
Feb 03 12:26:02 crc kubenswrapper[4820]: I0203 12:26:02.979144 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-9rprq" event={"ID":"614c5412-875d-40b1-ad5f-445a941285af","Type":"ContainerStarted","Data":"77599c963db23be9cc7e7e6316d80769a8e63ecf4971e48f71e9aa847e10f805"}
Feb 03 12:26:02 crc kubenswrapper[4820]: W0203 12:26:02.980315 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1058185d_f11d_4a87_9fe6_005f60186329.slice/crio-011c56f7a84bdbadf7a4a039ffca4321b0d89b632c7321cd2183ef4ce1dd45c6 WatchSource:0}: Error finding container 011c56f7a84bdbadf7a4a039ffca4321b0d89b632c7321cd2183ef4ce1dd45c6: Status 404 returned error can't find the container with id 011c56f7a84bdbadf7a4a039ffca4321b0d89b632c7321cd2183ef4ce1dd45c6
Feb 03 12:26:02 crc kubenswrapper[4820]: E0203 12:26:02.986491 4820 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-bt24r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-56f8bfcd9f-22hg8_openstack-operators(1058185d-f11d-4a87-9fe6-005f60186329): ErrImagePull: pull QPS exceeded" logger="UnhandledError"
Feb 03 12:26:02 crc kubenswrapper[4820]: I0203 12:26:02.987371 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-6d49495bcf-pflss"]
Feb 03 12:26:02 crc kubenswrapper[4820]: E0203 12:26:02.987626 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-22hg8" podUID="1058185d-f11d-4a87-9fe6-005f60186329"
Feb 03 12:26:02 crc kubenswrapper[4820]: W0203 12:26:02.987844 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod851ed64f_f147_45d0_a33b_eea29903ec0a.slice/crio-5c0690a41d3251d1d47756b6beb05fcffcecd7cdcd1620ec814cb80bcf99f334 WatchSource:0}: Error finding container 5c0690a41d3251d1d47756b6beb05fcffcecd7cdcd1620ec814cb80bcf99f334: Status 404 returned error can't find the container with id 5c0690a41d3251d1d47756b6beb05fcffcecd7cdcd1620ec814cb80bcf99f334
Feb 03 12:26:02 crc kubenswrapper[4820]: I0203 12:26:02.988091 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-6fw2d" event={"ID":"7f5efd7c-09f4-42b0-ba17-7a7dc609d914","Type":"ContainerStarted","Data":"da8d87ba3bd41c55357ee6392d93aff253fc4608d6a0d3967596f7444dc4eddb"}
event={"ID":"7f5efd7c-09f4-42b0-ba17-7a7dc609d914","Type":"ContainerStarted","Data":"da8d87ba3bd41c55357ee6392d93aff253fc4608d6a0d3967596f7444dc4eddb"} Feb 03 12:26:03 crc kubenswrapper[4820]: E0203 12:26:03.003303 4820 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:e0824d5d461ada59715eb3048ed9394c80abba09c45503f8f90ee3b34e525488,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-62hv9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-5b964cf4cd-x567r_openstack-operators(851ed64f-f147-45d0-a33b-eea29903ec0a): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 03 12:26:03 crc kubenswrapper[4820]: I0203 12:26:03.003546 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-qnmp6" event={"ID":"40fd2238-8148-4aa3-8f4e-54ffc1de0805","Type":"ContainerStarted","Data":"0c6c220d3627d7c1c9fbdd5b19218d9f12038fb177ee7276e491028619a6d80a"} Feb 03 12:26:03 crc kubenswrapper[4820]: E0203 12:26:03.004877 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-x567r" podUID="851ed64f-f147-45d0-a33b-eea29903ec0a" Feb 03 12:26:03 crc kubenswrapper[4820]: I0203 12:26:03.009186 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/nova-operator-controller-manager-55bff696bd-4fgnl"] Feb 03 12:26:03 crc kubenswrapper[4820]: I0203 12:26:03.025457 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-64b5b76f97-xw4mq"] Feb 03 12:26:03 crc kubenswrapper[4820]: I0203 12:26:03.037196 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-7dd968899f-rdbrk"] Feb 03 12:26:03 crc kubenswrapper[4820]: I0203 12:26:03.044558 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-56f8bfcd9f-22hg8"] Feb 03 12:26:03 crc kubenswrapper[4820]: I0203 12:26:03.060390 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b964cf4cd-x567r"] Feb 03 12:26:03 crc kubenswrapper[4820]: I0203 12:26:03.313195 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/591b67aa-03c7-4cf7-8918-17e2f7a428b0-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dqz6p9\" (UID: \"591b67aa-03c7-4cf7-8918-17e2f7a428b0\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dqz6p9" Feb 03 12:26:03 crc kubenswrapper[4820]: E0203 12:26:03.313457 4820 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 03 12:26:03 crc kubenswrapper[4820]: E0203 12:26:03.313514 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/591b67aa-03c7-4cf7-8918-17e2f7a428b0-cert podName:591b67aa-03c7-4cf7-8918-17e2f7a428b0 nodeName:}" failed. No retries permitted until 2026-02-03 12:26:07.313495694 +0000 UTC m=+1284.836571558 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/591b67aa-03c7-4cf7-8918-17e2f7a428b0-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dqz6p9" (UID: "591b67aa-03c7-4cf7-8918-17e2f7a428b0") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 03 12:26:03 crc kubenswrapper[4820]: I0203 12:26:03.926614 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ffe7d059-602c-4fbc-bd5e-4c092cc6f3db-webhook-certs\") pod \"openstack-operator-controller-manager-855575688d-cl9c5\" (UID: \"ffe7d059-602c-4fbc-bd5e-4c092cc6f3db\") " pod="openstack-operators/openstack-operator-controller-manager-855575688d-cl9c5" Feb 03 12:26:03 crc kubenswrapper[4820]: I0203 12:26:03.926997 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ffe7d059-602c-4fbc-bd5e-4c092cc6f3db-metrics-certs\") pod \"openstack-operator-controller-manager-855575688d-cl9c5\" (UID: \"ffe7d059-602c-4fbc-bd5e-4c092cc6f3db\") " pod="openstack-operators/openstack-operator-controller-manager-855575688d-cl9c5" Feb 03 12:26:03 crc kubenswrapper[4820]: E0203 12:26:03.926838 4820 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 03 12:26:03 crc kubenswrapper[4820]: E0203 12:26:03.927136 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ffe7d059-602c-4fbc-bd5e-4c092cc6f3db-webhook-certs podName:ffe7d059-602c-4fbc-bd5e-4c092cc6f3db nodeName:}" failed. No retries permitted until 2026-02-03 12:26:07.927111702 +0000 UTC m=+1285.450187596 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/ffe7d059-602c-4fbc-bd5e-4c092cc6f3db-webhook-certs") pod "openstack-operator-controller-manager-855575688d-cl9c5" (UID: "ffe7d059-602c-4fbc-bd5e-4c092cc6f3db") : secret "webhook-server-cert" not found Feb 03 12:26:03 crc kubenswrapper[4820]: E0203 12:26:03.927173 4820 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 03 12:26:03 crc kubenswrapper[4820]: E0203 12:26:03.927225 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ffe7d059-602c-4fbc-bd5e-4c092cc6f3db-metrics-certs podName:ffe7d059-602c-4fbc-bd5e-4c092cc6f3db nodeName:}" failed. No retries permitted until 2026-02-03 12:26:07.927210464 +0000 UTC m=+1285.450286328 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ffe7d059-602c-4fbc-bd5e-4c092cc6f3db-metrics-certs") pod "openstack-operator-controller-manager-855575688d-cl9c5" (UID: "ffe7d059-602c-4fbc-bd5e-4c092cc6f3db") : secret "metrics-server-cert" not found Feb 03 12:26:04 crc kubenswrapper[4820]: I0203 12:26:04.034832 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-8l49q" event={"ID":"8560a157-03d5-4135-a5e1-32acc68b6e4e","Type":"ContainerStarted","Data":"e1ccfcae81a65e7f61d17af339854b04748432aff93cc6708cc1cda5b3b91111"} Feb 03 12:26:04 crc kubenswrapper[4820]: I0203 12:26:04.044124 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-5dmwb" event={"ID":"88eb8fcd-4721-45c2-bb00-23b1dc962283","Type":"ContainerStarted","Data":"1602b5ae2074191df02b54b3fe3cf2c25869f679c37639751e575702ba99e6d6"} Feb 03 12:26:04 crc kubenswrapper[4820]: I0203 12:26:04.052254 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-z8jk7" event={"ID":"4d0ea57e-5eb3-4624-a299-b9a7ad6f2bb0","Type":"ContainerStarted","Data":"436f1da061e096cb1add78ba9ee81b020c9a482d68a47fa985a7a1e1485a7c17"} Feb 03 12:26:04 crc kubenswrapper[4820]: I0203 12:26:04.061158 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-rdbrk" event={"ID":"4ebad58b-3e3b-4bcb-9a80-dedd97e940d0","Type":"ContainerStarted","Data":"a63b9032d636c848050aaca2669509acbf7ae142fdfe3a4260c35d4c865bac2c"} Feb 03 12:26:04 crc kubenswrapper[4820]: I0203 12:26:04.065250 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-xw4mq" event={"ID":"96838cc3-1b9b-41b3-b20e-476319c65436","Type":"ContainerStarted","Data":"0b720af859b13ec023a400ebabb3f03c8e696d5ba12364f55b69c26f9a19153a"} Feb 03 12:26:04 crc kubenswrapper[4820]: E0203 12:26:04.068079 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:f9bf288cd0c13912404027a58ea3b90d4092b641e8265adc5c88644ea7fe901a\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-xw4mq" podUID="96838cc3-1b9b-41b3-b20e-476319c65436" Feb 03 12:26:04 crc kubenswrapper[4820]: I0203 12:26:04.070665 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-5lxrd" event={"ID":"b12dfc88-bdbd-4874-b397-9273a669e57f","Type":"ContainerStarted","Data":"0ec0964858d555a75ce500504571403df1fad2bdfb90783414711487c869d03e"} Feb 03 12:26:04 crc kubenswrapper[4820]: I0203 12:26:04.092645 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-dr7hd" event={"ID":"21f3efdd-0c83-42cb-8b54-b0554534bfb7","Type":"ContainerStarted","Data":"838401f2df6198f3f1797dd220f477044eaadb44e1d43a42b751f3f7817bf9ba"} Feb 03 12:26:04 crc kubenswrapper[4820]: I0203 12:26:04.111662 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-x567r" event={"ID":"851ed64f-f147-45d0-a33b-eea29903ec0a","Type":"ContainerStarted","Data":"5c0690a41d3251d1d47756b6beb05fcffcecd7cdcd1620ec814cb80bcf99f334"} 
Feb 03 12:26:04 crc kubenswrapper[4820]: E0203 12:26:04.113977 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:e0824d5d461ada59715eb3048ed9394c80abba09c45503f8f90ee3b34e525488\\\"\"" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-x567r" podUID="851ed64f-f147-45d0-a33b-eea29903ec0a" Feb 03 12:26:04 crc kubenswrapper[4820]: I0203 12:26:04.115080 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-22hg8" event={"ID":"1058185d-f11d-4a87-9fe6-005f60186329","Type":"ContainerStarted","Data":"011c56f7a84bdbadf7a4a039ffca4321b0d89b632c7321cd2183ef4ce1dd45c6"} Feb 03 12:26:04 crc kubenswrapper[4820]: E0203 12:26:04.116750 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241\\\"\"" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-22hg8" podUID="1058185d-f11d-4a87-9fe6-005f60186329" Feb 03 12:26:05 crc kubenswrapper[4820]: E0203 12:26:05.215714 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:f9bf288cd0c13912404027a58ea3b90d4092b641e8265adc5c88644ea7fe901a\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-xw4mq" podUID="96838cc3-1b9b-41b3-b20e-476319c65436" Feb 03 12:26:05 crc kubenswrapper[4820]: E0203 12:26:05.216181 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241\\\"\"" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-22hg8" podUID="1058185d-f11d-4a87-9fe6-005f60186329" Feb 03 12:26:05 crc kubenswrapper[4820]: E0203 12:26:05.216959 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:e0824d5d461ada59715eb3048ed9394c80abba09c45503f8f90ee3b34e525488\\\"\"" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-x567r" podUID="851ed64f-f147-45d0-a33b-eea29903ec0a" Feb 03 12:26:07 crc kubenswrapper[4820]: I0203 12:26:07.302960 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7ad36bba-9140-4660-b4ed-e873264c9e22-cert\") pod \"infra-operator-controller-manager-79955696d6-22gr9\" (UID: \"7ad36bba-9140-4660-b4ed-e873264c9e22\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-22gr9" Feb 03 12:26:07 crc kubenswrapper[4820]: E0203 12:26:07.303983 4820 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 03 12:26:07 crc kubenswrapper[4820]: E0203 12:26:07.304047 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7ad36bba-9140-4660-b4ed-e873264c9e22-cert 
podName:7ad36bba-9140-4660-b4ed-e873264c9e22 nodeName:}" failed. No retries permitted until 2026-02-03 12:26:15.30402748 +0000 UTC m=+1292.827103344 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/7ad36bba-9140-4660-b4ed-e873264c9e22-cert") pod "infra-operator-controller-manager-79955696d6-22gr9" (UID: "7ad36bba-9140-4660-b4ed-e873264c9e22") : secret "infra-operator-webhook-server-cert" not found Feb 03 12:26:07 crc kubenswrapper[4820]: I0203 12:26:07.518150 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/591b67aa-03c7-4cf7-8918-17e2f7a428b0-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dqz6p9\" (UID: \"591b67aa-03c7-4cf7-8918-17e2f7a428b0\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dqz6p9" Feb 03 12:26:07 crc kubenswrapper[4820]: E0203 12:26:07.518516 4820 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 03 12:26:07 crc kubenswrapper[4820]: E0203 12:26:07.518605 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/591b67aa-03c7-4cf7-8918-17e2f7a428b0-cert podName:591b67aa-03c7-4cf7-8918-17e2f7a428b0 nodeName:}" failed. No retries permitted until 2026-02-03 12:26:15.518587578 +0000 UTC m=+1293.041663442 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/591b67aa-03c7-4cf7-8918-17e2f7a428b0-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dqz6p9" (UID: "591b67aa-03c7-4cf7-8918-17e2f7a428b0") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 03 12:26:08 crc kubenswrapper[4820]: I0203 12:26:08.395938 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ffe7d059-602c-4fbc-bd5e-4c092cc6f3db-webhook-certs\") pod \"openstack-operator-controller-manager-855575688d-cl9c5\" (UID: \"ffe7d059-602c-4fbc-bd5e-4c092cc6f3db\") " pod="openstack-operators/openstack-operator-controller-manager-855575688d-cl9c5" Feb 03 12:26:08 crc kubenswrapper[4820]: I0203 12:26:08.396003 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ffe7d059-602c-4fbc-bd5e-4c092cc6f3db-metrics-certs\") pod \"openstack-operator-controller-manager-855575688d-cl9c5\" (UID: \"ffe7d059-602c-4fbc-bd5e-4c092cc6f3db\") " pod="openstack-operators/openstack-operator-controller-manager-855575688d-cl9c5" Feb 03 12:26:08 crc kubenswrapper[4820]: E0203 12:26:08.396217 4820 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 03 12:26:08 crc kubenswrapper[4820]: E0203 12:26:08.396273 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ffe7d059-602c-4fbc-bd5e-4c092cc6f3db-metrics-certs podName:ffe7d059-602c-4fbc-bd5e-4c092cc6f3db nodeName:}" failed. No retries permitted until 2026-02-03 12:26:16.396253638 +0000 UTC m=+1293.919329502 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ffe7d059-602c-4fbc-bd5e-4c092cc6f3db-metrics-certs") pod "openstack-operator-controller-manager-855575688d-cl9c5" (UID: "ffe7d059-602c-4fbc-bd5e-4c092cc6f3db") : secret "metrics-server-cert" not found Feb 03 12:26:08 crc kubenswrapper[4820]: E0203 12:26:08.397925 4820 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 03 12:26:08 crc kubenswrapper[4820]: E0203 12:26:08.397972 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ffe7d059-602c-4fbc-bd5e-4c092cc6f3db-webhook-certs podName:ffe7d059-602c-4fbc-bd5e-4c092cc6f3db nodeName:}" failed. No retries permitted until 2026-02-03 12:26:16.397957384 +0000 UTC m=+1293.921033248 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/ffe7d059-602c-4fbc-bd5e-4c092cc6f3db-webhook-certs") pod "openstack-operator-controller-manager-855575688d-cl9c5" (UID: "ffe7d059-602c-4fbc-bd5e-4c092cc6f3db") : secret "webhook-server-cert" not found Feb 03 12:26:15 crc kubenswrapper[4820]: I0203 12:26:15.347560 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7ad36bba-9140-4660-b4ed-e873264c9e22-cert\") pod \"infra-operator-controller-manager-79955696d6-22gr9\" (UID: \"7ad36bba-9140-4660-b4ed-e873264c9e22\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-22gr9" Feb 03 12:26:15 crc kubenswrapper[4820]: E0203 12:26:15.348120 4820 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 03 12:26:15 crc kubenswrapper[4820]: E0203 12:26:15.348184 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7ad36bba-9140-4660-b4ed-e873264c9e22-cert podName:7ad36bba-9140-4660-b4ed-e873264c9e22 nodeName:}" failed. No retries permitted until 2026-02-03 12:26:31.348163155 +0000 UTC m=+1308.871239019 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/7ad36bba-9140-4660-b4ed-e873264c9e22-cert") pod "infra-operator-controller-manager-79955696d6-22gr9" (UID: "7ad36bba-9140-4660-b4ed-e873264c9e22") : secret "infra-operator-webhook-server-cert" not found Feb 03 12:26:15 crc kubenswrapper[4820]: I0203 12:26:15.773954 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/591b67aa-03c7-4cf7-8918-17e2f7a428b0-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dqz6p9\" (UID: \"591b67aa-03c7-4cf7-8918-17e2f7a428b0\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dqz6p9" Feb 03 12:26:15 crc kubenswrapper[4820]: E0203 12:26:15.774230 4820 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 03 12:26:15 crc kubenswrapper[4820]: E0203 12:26:15.774303 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/591b67aa-03c7-4cf7-8918-17e2f7a428b0-cert podName:591b67aa-03c7-4cf7-8918-17e2f7a428b0 nodeName:}" failed. No retries permitted until 2026-02-03 12:26:31.774284929 +0000 UTC m=+1309.297360793 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/591b67aa-03c7-4cf7-8918-17e2f7a428b0-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dqz6p9" (UID: "591b67aa-03c7-4cf7-8918-17e2f7a428b0") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 03 12:26:16 crc kubenswrapper[4820]: I0203 12:26:16.486304 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ffe7d059-602c-4fbc-bd5e-4c092cc6f3db-webhook-certs\") pod \"openstack-operator-controller-manager-855575688d-cl9c5\" (UID: \"ffe7d059-602c-4fbc-bd5e-4c092cc6f3db\") " pod="openstack-operators/openstack-operator-controller-manager-855575688d-cl9c5" Feb 03 12:26:16 crc kubenswrapper[4820]: I0203 12:26:16.486387 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ffe7d059-602c-4fbc-bd5e-4c092cc6f3db-metrics-certs\") pod \"openstack-operator-controller-manager-855575688d-cl9c5\" (UID: \"ffe7d059-602c-4fbc-bd5e-4c092cc6f3db\") " pod="openstack-operators/openstack-operator-controller-manager-855575688d-cl9c5" Feb 03 12:26:16 crc kubenswrapper[4820]: I0203 12:26:16.520922 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ffe7d059-602c-4fbc-bd5e-4c092cc6f3db-metrics-certs\") pod \"openstack-operator-controller-manager-855575688d-cl9c5\" (UID: \"ffe7d059-602c-4fbc-bd5e-4c092cc6f3db\") " pod="openstack-operators/openstack-operator-controller-manager-855575688d-cl9c5" Feb 03 12:26:16 crc kubenswrapper[4820]: I0203 12:26:16.533531 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ffe7d059-602c-4fbc-bd5e-4c092cc6f3db-webhook-certs\") pod \"openstack-operator-controller-manager-855575688d-cl9c5\" (UID: \"ffe7d059-602c-4fbc-bd5e-4c092cc6f3db\") " pod="openstack-operators/openstack-operator-controller-manager-855575688d-cl9c5" Feb 03 12:26:16 crc kubenswrapper[4820]: I0203 12:26:16.768122 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-855575688d-cl9c5" Feb 03 12:26:23 crc kubenswrapper[4820]: E0203 12:26:23.969441 4820 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" Feb 03 12:26:23 crc kubenswrapper[4820]: E0203 12:26:23.970107 4820 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-nfwns,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-8l49q_openstack-operators(8560a157-03d5-4135-a5e1-32acc68b6e4e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 03 12:26:23 crc kubenswrapper[4820]: E0203 12:26:23.971348 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-8l49q" podUID="8560a157-03d5-4135-a5e1-32acc68b6e4e" Feb 03 12:26:24 crc kubenswrapper[4820]: E0203 12:26:24.310768 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-8l49q" podUID="8560a157-03d5-4135-a5e1-32acc68b6e4e" Feb 03 12:26:24 crc 
kubenswrapper[4820]: E0203 12:26:24.721656 4820 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ovn-operator@sha256:ea7b72b648a5bde2eebd804c2a5c1608d448a4892176c1b8d000c1eef4bb92b4" Feb 03 12:26:24 crc kubenswrapper[4820]: E0203 12:26:24.721943 4820 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:ea7b72b648a5bde2eebd804c2a5c1608d448a4892176c1b8d000c1eef4bb92b4,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-649jv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-788c46999f-5lxrd_openstack-operators(b12dfc88-bdbd-4874-b397-9273a669e57f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 03 12:26:24 crc kubenswrapper[4820]: E0203 12:26:24.723271 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-5lxrd" podUID="b12dfc88-bdbd-4874-b397-9273a669e57f" Feb 03 12:26:25 crc kubenswrapper[4820]: E0203 12:26:25.320687 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:ea7b72b648a5bde2eebd804c2a5c1608d448a4892176c1b8d000c1eef4bb92b4\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-5lxrd" podUID="b12dfc88-bdbd-4874-b397-9273a669e57f" Feb 03 12:26:25 crc kubenswrapper[4820]: E0203 12:26:25.702376 4820 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/glance-operator@sha256:1f593e8d49d02b6484c89632192ae54771675c54fbd8426e3675b8e20ecfd7c4" Feb 03 12:26:25 crc kubenswrapper[4820]: E0203 12:26:25.702635 4820 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/glance-operator@sha256:1f593e8d49d02b6484c89632192ae54771675c54fbd8426e3675b8e20ecfd7c4,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-q6q4f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-operator-controller-manager-8886f4c47-5dmwb_openstack-operators(88eb8fcd-4721-45c2-bb00-23b1dc962283): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 03 12:26:25 crc kubenswrapper[4820]: E0203 12:26:25.704284 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-5dmwb" podUID="88eb8fcd-4721-45c2-bb00-23b1dc962283" Feb 03 
12:26:26 crc kubenswrapper[4820]: E0203 12:26:26.328572 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/glance-operator@sha256:1f593e8d49d02b6484c89632192ae54771675c54fbd8426e3675b8e20ecfd7c4\\\"\"" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-5dmwb" podUID="88eb8fcd-4721-45c2-bb00-23b1dc962283" Feb 03 12:26:26 crc kubenswrapper[4820]: E0203 12:26:26.435462 4820 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/neutron-operator@sha256:bbb46b8b3b69fdfad7bafc10a7e88f6ea58bcdc3c91e30beb79e24417d52e0f6" Feb 03 12:26:26 crc kubenswrapper[4820]: E0203 12:26:26.435693 4820 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/neutron-operator@sha256:bbb46b8b3b69fdfad7bafc10a7e88f6ea58bcdc3c91e30beb79e24417d52e0f6,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-blsl9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-585dbc889-4tmqm_openstack-operators(81450158-204d-45f5-a1bc-de63e889445d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 03 12:26:26 crc kubenswrapper[4820]: E0203 12:26:26.436929 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with 
ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-4tmqm" podUID="81450158-204d-45f5-a1bc-de63e889445d" Feb 03 12:26:27 crc kubenswrapper[4820]: E0203 12:26:27.335034 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:bbb46b8b3b69fdfad7bafc10a7e88f6ea58bcdc3c91e30beb79e24417d52e0f6\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-4tmqm" podUID="81450158-204d-45f5-a1bc-de63e889445d" Feb 03 12:26:27 crc kubenswrapper[4820]: E0203 12:26:27.336935 4820 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ironic-operator@sha256:bead175f27e5f074f723694f3b66e5aa7238411bf8a27a267b9a2936e4465521" Feb 03 12:26:27 crc kubenswrapper[4820]: E0203 12:26:27.337165 4820 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ironic-operator@sha256:bead175f27e5f074f723694f3b66e5aa7238411bf8a27a267b9a2936e4465521,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xh254,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ironic-operator-controller-manager-5f4b8bd54d-xkj2j_openstack-operators(29dd9257-532e-48a4-9500-adfc5584ebe0): ErrImagePull: rpc error: code = Canceled desc = copying config: 
context canceled" logger="UnhandledError" Feb 03 12:26:27 crc kubenswrapper[4820]: E0203 12:26:27.338839 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-xkj2j" podUID="29dd9257-532e-48a4-9500-adfc5584ebe0" Feb 03 12:26:28 crc kubenswrapper[4820]: E0203 12:26:28.189418 4820 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/manila-operator@sha256:cd911e8d7a7a1104d77691dbaaf54370015cbb82859337746db5a9186d5dc566" Feb 03 12:26:28 crc kubenswrapper[4820]: E0203 12:26:28.189675 4820 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/manila-operator@sha256:cd911e8d7a7a1104d77691dbaaf54370015cbb82859337746db5a9186d5dc566,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-gt9d9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-7dd968899f-rdbrk_openstack-operators(4ebad58b-3e3b-4bcb-9a80-dedd97e940d0): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 03 12:26:28 crc kubenswrapper[4820]: E0203 12:26:28.190948 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = 
Canceled desc = copying config: context canceled\"" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-rdbrk" podUID="4ebad58b-3e3b-4bcb-9a80-dedd97e940d0" Feb 03 12:26:28 crc kubenswrapper[4820]: E0203 12:26:28.344118 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/manila-operator@sha256:cd911e8d7a7a1104d77691dbaaf54370015cbb82859337746db5a9186d5dc566\\\"\"" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-rdbrk" podUID="4ebad58b-3e3b-4bcb-9a80-dedd97e940d0" Feb 03 12:26:28 crc kubenswrapper[4820]: E0203 12:26:28.345440 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ironic-operator@sha256:bead175f27e5f074f723694f3b66e5aa7238411bf8a27a267b9a2936e4465521\\\"\"" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-xkj2j" podUID="29dd9257-532e-48a4-9500-adfc5584ebe0" Feb 03 12:26:28 crc kubenswrapper[4820]: E0203 12:26:28.908618 4820 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/heat-operator@sha256:27d83ada27cf70cda0c5738f97551d81f1ea4068e83a090f3312e22172d72e10" Feb 03 12:26:28 crc kubenswrapper[4820]: E0203 12:26:28.908945 4820 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/heat-operator@sha256:27d83ada27cf70cda0c5738f97551d81f1ea4068e83a090f3312e22172d72e10,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-gv2tm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-operator-controller-manager-69d6db494d-6fw2d_openstack-operators(7f5efd7c-09f4-42b0-ba17-7a7dc609d914): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 03 12:26:28 crc kubenswrapper[4820]: E0203 12:26:28.910221 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-6fw2d" podUID="7f5efd7c-09f4-42b0-ba17-7a7dc609d914" Feb 03 12:26:29 crc kubenswrapper[4820]: E0203 12:26:29.351330 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/heat-operator@sha256:27d83ada27cf70cda0c5738f97551d81f1ea4068e83a090f3312e22172d72e10\\\"\"" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-6fw2d" podUID="7f5efd7c-09f4-42b0-ba17-7a7dc609d914" Feb 03 12:26:31 crc kubenswrapper[4820]: I0203 12:26:31.374211 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7ad36bba-9140-4660-b4ed-e873264c9e22-cert\") pod \"infra-operator-controller-manager-79955696d6-22gr9\" (UID: \"7ad36bba-9140-4660-b4ed-e873264c9e22\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-22gr9" Feb 03 12:26:31 crc kubenswrapper[4820]: I0203 12:26:31.380574 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7ad36bba-9140-4660-b4ed-e873264c9e22-cert\") pod \"infra-operator-controller-manager-79955696d6-22gr9\" (UID: \"7ad36bba-9140-4660-b4ed-e873264c9e22\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-22gr9" Feb 03 12:26:31 crc kubenswrapper[4820]: E0203 12:26:31.387874 4820 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/cinder-operator@sha256:6e21a1dda86ba365817102d23a5d4d2d5dcd1c4d8e5f8d74bd24548aa8c63898" Feb 03 12:26:31 crc kubenswrapper[4820]: E0203 12:26:31.388122 4820 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/cinder-operator@sha256:6e21a1dda86ba365817102d23a5d4d2d5dcd1c4d8e5f8d74bd24548aa8c63898,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5rg7s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-operator-controller-manager-8d874c8fc-z8jk7_openstack-operators(4d0ea57e-5eb3-4624-a299-b9a7ad6f2bb0): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 03 12:26:31 crc kubenswrapper[4820]: E0203 12:26:31.391187 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-z8jk7" podUID="4d0ea57e-5eb3-4624-a299-b9a7ad6f2bb0" Feb 03 12:26:31 crc kubenswrapper[4820]: I0203 12:26:31.618245 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79955696d6-22gr9" Feb 03 12:26:31 crc kubenswrapper[4820]: I0203 12:26:31.798801 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/591b67aa-03c7-4cf7-8918-17e2f7a428b0-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dqz6p9\" (UID: \"591b67aa-03c7-4cf7-8918-17e2f7a428b0\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dqz6p9" Feb 03 12:26:31 crc kubenswrapper[4820]: I0203 12:26:31.804948 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/591b67aa-03c7-4cf7-8918-17e2f7a428b0-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dqz6p9\" (UID: \"591b67aa-03c7-4cf7-8918-17e2f7a428b0\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dqz6p9" Feb 03 12:26:32 crc kubenswrapper[4820]: I0203 12:26:32.047031 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dqz6p9" Feb 03 12:26:32 crc kubenswrapper[4820]: E0203 12:26:32.130110 4820 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/designate-operator@sha256:d9f6f8dc6a6dd9b0d7c96e4c89b3056291fd61f11126a1304256a4d6cacd0382" Feb 03 12:26:32 crc kubenswrapper[4820]: E0203 12:26:32.130411 4820 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/designate-operator@sha256:d9f6f8dc6a6dd9b0d7c96e4c89b3056291fd61f11126a1304256a4d6cacd0382,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2xmhg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod designate-operator-controller-manager-6d9697b7f4-wsb7r_openstack-operators(51c967b2-8f1a-4d0d-a3f9-745e72863b84): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 03 12:26:32 crc kubenswrapper[4820]: E0203 12:26:32.131635 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-wsb7r" podUID="51c967b2-8f1a-4d0d-a3f9-745e72863b84" Feb 03 12:26:32 crc kubenswrapper[4820]: E0203 12:26:32.374188 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/designate-operator@sha256:d9f6f8dc6a6dd9b0d7c96e4c89b3056291fd61f11126a1304256a4d6cacd0382\\\"\"" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-wsb7r" podUID="51c967b2-8f1a-4d0d-a3f9-745e72863b84" Feb 03 12:26:32 crc kubenswrapper[4820]: E0203 12:26:32.374210 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/cinder-operator@sha256:6e21a1dda86ba365817102d23a5d4d2d5dcd1c4d8e5f8d74bd24548aa8c63898\\\"\"" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-z8jk7" podUID="4d0ea57e-5eb3-4624-a299-b9a7ad6f2bb0" Feb 03 12:26:32 crc kubenswrapper[4820]: E0203 12:26:32.852990 4820 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/mariadb-operator@sha256:2d493137559b74e23edb4788b7fbdb38b3e239df0f2d7e6e540e50b2355fc3cf" Feb 03 12:26:32 crc kubenswrapper[4820]: E0203 12:26:32.853204 4820 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/mariadb-operator@sha256:2d493137559b74e23edb4788b7fbdb38b3e239df0f2d7e6e540e50b2355fc3cf,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 
10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5nd5l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mariadb-operator-controller-manager-67bf948998-qnmp6_openstack-operators(40fd2238-8148-4aa3-8f4e-54ffc1de0805): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 03 12:26:32 crc kubenswrapper[4820]: E0203 12:26:32.854341 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-qnmp6" podUID="40fd2238-8148-4aa3-8f4e-54ffc1de0805" Feb 03 12:26:33 crc kubenswrapper[4820]: E0203 12:26:33.384505 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/mariadb-operator@sha256:2d493137559b74e23edb4788b7fbdb38b3e239df0f2d7e6e540e50b2355fc3cf\\\"\"" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-qnmp6" podUID="40fd2238-8148-4aa3-8f4e-54ffc1de0805" Feb 03 12:26:33 crc kubenswrapper[4820]: E0203 12:26:33.556413 4820 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241" Feb 03 12:26:33 crc kubenswrapper[4820]: E0203 12:26:33.556648 4820 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-bt24r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-56f8bfcd9f-22hg8_openstack-operators(1058185d-f11d-4a87-9fe6-005f60186329): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 03 12:26:33 crc kubenswrapper[4820]: E0203 12:26:33.557834 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-22hg8" podUID="1058185d-f11d-4a87-9fe6-005f60186329" Feb 03 12:26:34 crc kubenswrapper[4820]: E0203 12:26:34.966035 4820 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/placement-operator@sha256:e0824d5d461ada59715eb3048ed9394c80abba09c45503f8f90ee3b34e525488" Feb 03 12:26:34 crc kubenswrapper[4820]: E0203 12:26:34.966573 4820 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:e0824d5d461ada59715eb3048ed9394c80abba09c45503f8f90ee3b34e525488,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-62hv9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-5b964cf4cd-x567r_openstack-operators(851ed64f-f147-45d0-a33b-eea29903ec0a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 03 12:26:34 crc kubenswrapper[4820]: E0203 12:26:34.967756 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-x567r" podUID="851ed64f-f147-45d0-a33b-eea29903ec0a" Feb 03 12:26:35 crc kubenswrapper[4820]: E0203 12:26:35.057233 4820 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.50:5001/openstack-k8s-operators/watcher-operator:cf0c613c7e443019101dd3ad5c06c41894220e9d" Feb 03 12:26:35 crc kubenswrapper[4820]: E0203 12:26:35.057299 4820 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.50:5001/openstack-k8s-operators/watcher-operator:cf0c613c7e443019101dd3ad5c06c41894220e9d" Feb 03 12:26:35 crc kubenswrapper[4820]: E0203 12:26:35.057500 4820 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:38.102.83.50:5001/openstack-k8s-operators/watcher-operator:cf0c613c7e443019101dd3ad5c06c41894220e9d,Command:[/manager],Args:[--leader-elect 
--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-z2bsk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-6d49495bcf-pflss_openstack-operators(18a84695-492b-42ae-9d72-6e582316ce55): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 03 12:26:35 crc kubenswrapper[4820]: E0203 12:26:35.058724 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/watcher-operator-controller-manager-6d49495bcf-pflss" podUID="18a84695-492b-42ae-9d72-6e582316ce55" Feb 03 12:26:35 crc kubenswrapper[4820]: E0203 12:26:35.413483 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.50:5001/openstack-k8s-operators/watcher-operator:cf0c613c7e443019101dd3ad5c06c41894220e9d\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-6d49495bcf-pflss" podUID="18a84695-492b-42ae-9d72-6e582316ce55" Feb 03 12:26:35 crc kubenswrapper[4820]: E0203 12:26:35.882495 4820 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/nova-operator@sha256:5340b88039fac393da49ef4e181b2720c809c27a6bb30531a07a49342a1da45e" Feb 03 12:26:35 crc kubenswrapper[4820]: E0203 12:26:35.882806 4820 kuberuntime_manager.go:1274] "Unhandled Error" 
err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:5340b88039fac393da49ef4e181b2720c809c27a6bb30531a07a49342a1da45e,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-srxd6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-55bff696bd-4fgnl_openstack-operators(7a515408-dc44-4fba-bbe9-8b5f36fbc1d0): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 03 12:26:35 crc kubenswrapper[4820]: E0203 12:26:35.884126 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-4fgnl" podUID="7a515408-dc44-4fba-bbe9-8b5f36fbc1d0" Feb 03 12:26:36 crc kubenswrapper[4820]: E0203 12:26:36.423060 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:5340b88039fac393da49ef4e181b2720c809c27a6bb30531a07a49342a1da45e\\\"\"" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-4fgnl" podUID="7a515408-dc44-4fba-bbe9-8b5f36fbc1d0" Feb 03 12:26:36 crc kubenswrapper[4820]: E0203 12:26:36.505402 4820 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="quay.io/openstack-k8s-operators/keystone-operator@sha256:319c969e88f109b26487a9f5a67203682803d7386424703ab7ca0340be99ae17" Feb 03 12:26:36 crc kubenswrapper[4820]: E0203 12:26:36.505651 4820 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:319c969e88f109b26487a9f5a67203682803d7386424703ab7ca0340be99ae17,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4vdbr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-84f48565d4-9rprq_openstack-operators(614c5412-875d-40b1-ad5f-445a941285af): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 03 12:26:36 crc kubenswrapper[4820]: E0203 12:26:36.506877 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-9rprq" podUID="614c5412-875d-40b1-ad5f-445a941285af" Feb 03 12:26:36 crc kubenswrapper[4820]: I0203 12:26:36.902320 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dqz6p9"] Feb 03 12:26:36 crc kubenswrapper[4820]: I0203 12:26:36.951449 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79955696d6-22gr9"] Feb 03 12:26:37 crc 
kubenswrapper[4820]: I0203 12:26:37.218075 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-855575688d-cl9c5"] Feb 03 12:26:37 crc kubenswrapper[4820]: I0203 12:26:37.442800 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-xw4mq" event={"ID":"96838cc3-1b9b-41b3-b20e-476319c65436","Type":"ContainerStarted","Data":"86ff53bd810ed24019e7fb5141faad0bcfc9feee0e195042940b330532b6dec7"} Feb 03 12:26:37 crc kubenswrapper[4820]: I0203 12:26:37.444166 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-xw4mq" Feb 03 12:26:37 crc kubenswrapper[4820]: I0203 12:26:37.446881 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-qnh2k" event={"ID":"cde1eaee-12a0-47f7-b88a-b1b97d0ed74b","Type":"ContainerStarted","Data":"7084c4d49a69235aad0ba148f52c22296ef51ce82104a39d1c45af51be591d41"} Feb 03 12:26:37 crc kubenswrapper[4820]: I0203 12:26:37.447757 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-qnh2k" Feb 03 12:26:37 crc kubenswrapper[4820]: I0203 12:26:37.465727 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-brrn4" event={"ID":"56b6c45e-8879-4a30-8b7c-d6c7df8ac6ae","Type":"ContainerStarted","Data":"caec84bd124cadab1e4ff9ec9b186c8063dceef104c2fe3c83922408981ac434"} Feb 03 12:26:37 crc kubenswrapper[4820]: I0203 12:26:37.466543 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-brrn4" Feb 03 12:26:37 crc kubenswrapper[4820]: I0203 12:26:37.494751 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-xw4mq" podStartSLOduration=4.925074402 podStartE2EDuration="38.494730252s" podCreationTimestamp="2026-02-03 12:25:59 +0000 UTC" firstStartedPulling="2026-02-03 12:26:02.968079001 +0000 UTC m=+1280.491154865" lastFinishedPulling="2026-02-03 12:26:36.537734851 +0000 UTC m=+1314.060810715" observedRunningTime="2026-02-03 12:26:37.48073516 +0000 UTC m=+1315.003811024" watchObservedRunningTime="2026-02-03 12:26:37.494730252 +0000 UTC m=+1315.017806116" Feb 03 12:26:37 crc kubenswrapper[4820]: I0203 12:26:37.495235 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-855575688d-cl9c5" event={"ID":"ffe7d059-602c-4fbc-bd5e-4c092cc6f3db","Type":"ContainerStarted","Data":"8da86752dd1098ccd7a2647cf201be77fa8b2fdf3afb57c94d6b4b57e974e3a3"} Feb 03 12:26:37 crc kubenswrapper[4820]: I0203 12:26:37.497530 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dqz6p9" event={"ID":"591b67aa-03c7-4cf7-8918-17e2f7a428b0","Type":"ContainerStarted","Data":"14bcb6b86cc283fbacff63e603b44772189ae1a9aa4b2b2f270db3451e1175a6"} Feb 03 12:26:37 crc kubenswrapper[4820]: I0203 12:26:37.503672 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-dr7hd" 
event={"ID":"21f3efdd-0c83-42cb-8b54-b0554534bfb7","Type":"ContainerStarted","Data":"589ed56c0084a051e1b465e624b85e838582cfc8f3e9729703dfea7097425328"} Feb 03 12:26:37 crc kubenswrapper[4820]: I0203 12:26:37.504734 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-dr7hd" Feb 03 12:26:37 crc kubenswrapper[4820]: I0203 12:26:37.512199 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79955696d6-22gr9" event={"ID":"7ad36bba-9140-4660-b4ed-e873264c9e22","Type":"ContainerStarted","Data":"6a29f2441290a8d82170aaff7e15bf4bdebaad1afdcf7d62eebbc539cf5f7cc5"} Feb 03 12:26:37 crc kubenswrapper[4820]: I0203 12:26:37.517511 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-brrn4" podStartSLOduration=7.043488944 podStartE2EDuration="39.517491047s" podCreationTimestamp="2026-02-03 12:25:58 +0000 UTC" firstStartedPulling="2026-02-03 12:26:02.48433763 +0000 UTC m=+1280.007413494" lastFinishedPulling="2026-02-03 12:26:34.958339733 +0000 UTC m=+1312.481415597" observedRunningTime="2026-02-03 12:26:37.512735941 +0000 UTC m=+1315.035811815" watchObservedRunningTime="2026-02-03 12:26:37.517491047 +0000 UTC m=+1315.040566911" Feb 03 12:26:37 crc kubenswrapper[4820]: I0203 12:26:37.523982 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-t5mj4" event={"ID":"101ca31b-ff08-4a49-9cc1-f48fd8679116","Type":"ContainerStarted","Data":"74cf13bea0f8306ad73a881fb5ce013c55a5574feaf5f142134ab26b27264e58"} Feb 03 12:26:37 crc kubenswrapper[4820]: I0203 12:26:37.524030 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-t5mj4" Feb 03 12:26:37 crc kubenswrapper[4820]: E0203 12:26:37.525227 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:319c969e88f109b26487a9f5a67203682803d7386424703ab7ca0340be99ae17\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-9rprq" podUID="614c5412-875d-40b1-ad5f-445a941285af" Feb 03 12:26:37 crc kubenswrapper[4820]: I0203 12:26:37.541412 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-qnh2k" podStartSLOduration=5.207105981 podStartE2EDuration="39.541393912s" podCreationTimestamp="2026-02-03 12:25:58 +0000 UTC" firstStartedPulling="2026-02-03 12:26:00.006559796 +0000 UTC m=+1277.529635660" lastFinishedPulling="2026-02-03 12:26:34.340847727 +0000 UTC m=+1311.863923591" observedRunningTime="2026-02-03 12:26:37.532760323 +0000 UTC m=+1315.055836187" watchObservedRunningTime="2026-02-03 12:26:37.541393912 +0000 UTC m=+1315.064469776" Feb 03 12:26:37 crc kubenswrapper[4820]: I0203 12:26:37.599620 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-dr7hd" podStartSLOduration=5.683582163 podStartE2EDuration="38.59958318s" podCreationTimestamp="2026-02-03 12:25:59 +0000 UTC" firstStartedPulling="2026-02-03 12:26:02.942373372 +0000 UTC m=+1280.465449236" lastFinishedPulling="2026-02-03 12:26:35.858374389 +0000 UTC 
m=+1313.381450253" observedRunningTime="2026-02-03 12:26:37.59467677 +0000 UTC m=+1315.117752634" watchObservedRunningTime="2026-02-03 12:26:37.59958318 +0000 UTC m=+1315.122659044" Feb 03 12:26:37 crc kubenswrapper[4820]: I0203 12:26:37.633015 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-t5mj4" podStartSLOduration=5.775886286 podStartE2EDuration="39.632985778s" podCreationTimestamp="2026-02-03 12:25:58 +0000 UTC" firstStartedPulling="2026-02-03 12:26:01.101236371 +0000 UTC m=+1278.624312235" lastFinishedPulling="2026-02-03 12:26:34.958335863 +0000 UTC m=+1312.481411727" observedRunningTime="2026-02-03 12:26:37.617706101 +0000 UTC m=+1315.140781965" watchObservedRunningTime="2026-02-03 12:26:37.632985778 +0000 UTC m=+1315.156061662" Feb 03 12:26:38 crc kubenswrapper[4820]: I0203 12:26:38.530975 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-855575688d-cl9c5" event={"ID":"ffe7d059-602c-4fbc-bd5e-4c092cc6f3db","Type":"ContainerStarted","Data":"00a5438f68e75464314f2a0c165fe3810365ccaf260dbd31e0e3130aae14d39c"} Feb 03 12:26:38 crc kubenswrapper[4820]: I0203 12:26:38.531412 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-855575688d-cl9c5" Feb 03 12:26:38 crc kubenswrapper[4820]: I0203 12:26:38.588778 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-855575688d-cl9c5" podStartSLOduration=39.588755067 podStartE2EDuration="39.588755067s" podCreationTimestamp="2026-02-03 12:25:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:26:38.582663585 +0000 UTC m=+1316.105739469" watchObservedRunningTime="2026-02-03 12:26:38.588755067 +0000 UTC m=+1316.111830931" Feb 03 12:26:39 crc kubenswrapper[4820]: I0203 12:26:39.570621 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-4tmqm" event={"ID":"81450158-204d-45f5-a1bc-de63e889445d","Type":"ContainerStarted","Data":"4ff55d752e7c98fcda5fc8b1c5bd40854ecd96b7182154e5d547b04052a7c2bb"} Feb 03 12:26:39 crc kubenswrapper[4820]: I0203 12:26:39.641761 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-4tmqm" podStartSLOduration=5.572014075 podStartE2EDuration="41.641735051s" podCreationTimestamp="2026-02-03 12:25:58 +0000 UTC" firstStartedPulling="2026-02-03 12:26:02.514314265 +0000 UTC m=+1280.037390129" lastFinishedPulling="2026-02-03 12:26:38.584035241 +0000 UTC m=+1316.107111105" observedRunningTime="2026-02-03 12:26:39.632953386 +0000 UTC m=+1317.156029260" watchObservedRunningTime="2026-02-03 12:26:39.641735051 +0000 UTC m=+1317.164810935" Feb 03 12:26:44 crc kubenswrapper[4820]: E0203 12:26:44.164264 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241\\\"\"" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-22hg8" podUID="1058185d-f11d-4a87-9fe6-005f60186329" Feb 03 12:26:46 crc kubenswrapper[4820]: I0203 12:26:46.418723 
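[editor's note] The `pod_startup_latency_tracker` records encode a fixed relationship: `podStartE2EDuration` is `watchObservedRunningTime - podCreationTimestamp`, and `podStartSLOduration` is that figure minus the image-pull window (`lastFinishedPulling - firstStartedPulling`). A quick check against the horizon-operator record above, with timestamps copied from the log (Python's `datetime` only carries microseconds, so the nanosecond tails are truncated):

```python
from datetime import datetime

def ts(s):
    """Parse the log's 'YYYY-mm-dd HH:MM:SS[.ns] +0000 UTC' stamps, truncating ns to µs."""
    date, clock = s.split()[:2]
    if "." in clock:
        clock = clock[:clock.index(".") + 7]  # keep at most 6 fractional digits
    fmt = "%Y-%m-%d %H:%M:%S.%f" if "." in clock else "%Y-%m-%d %H:%M:%S"
    return datetime.strptime(f"{date} {clock}", fmt)

# Values copied from the horizon-operator-controller-manager record above.
created  = ts("2026-02-03 12:25:58 +0000 UTC")            # podCreationTimestamp
pulled0  = ts("2026-02-03 12:26:01.101236371 +0000 UTC")  # firstStartedPulling
pulled1  = ts("2026-02-03 12:26:34.958335863 +0000 UTC")  # lastFinishedPulling
observed = ts("2026-02-03 12:26:37.632985778 +0000 UTC")  # watchObservedRunningTime

e2e = (observed - created).total_seconds()   # ≈ 39.632985, matching podStartE2EDuration=39.632985778
slo = e2e - (pulled1 - pulled0).total_seconds()
print(e2e, slo)                              # slo ≈ 5.775886, matching podStartSLOduration=5.775886286
```

The same identity holds for the other operators' records; pods whose pull never retried (e.g. `firstStartedPulling="0001-01-01 00:00:00 +0000 UTC"` for openstack-operator-controller-manager) report SLO and E2E durations that coincide.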
4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-6fw2d" event={"ID":"7f5efd7c-09f4-42b0-ba17-7a7dc609d914","Type":"ContainerStarted","Data":"d49d5f6eb9168a97cdf7b48c42e06d8774515375eea869f0201452de99eaa711"} Feb 03 12:26:46 crc kubenswrapper[4820]: I0203 12:26:46.420262 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-6fw2d" Feb 03 12:26:46 crc kubenswrapper[4820]: I0203 12:26:46.425393 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-rdbrk" event={"ID":"4ebad58b-3e3b-4bcb-9a80-dedd97e940d0","Type":"ContainerStarted","Data":"461294b55f2e61e89b278f5b4162f45ab8ecd374a9e610a45ed48e1fe2f141f4"} Feb 03 12:26:46 crc kubenswrapper[4820]: I0203 12:26:46.425670 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-rdbrk" Feb 03 12:26:46 crc kubenswrapper[4820]: I0203 12:26:46.429956 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-5lxrd" event={"ID":"b12dfc88-bdbd-4874-b397-9273a669e57f","Type":"ContainerStarted","Data":"cea96ac33ec20aff29ee0bd0707035a25a50a3567d1935f0223e363ee4120af7"} Feb 03 12:26:46 crc kubenswrapper[4820]: I0203 12:26:46.430285 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-5lxrd" Feb 03 12:26:46 crc kubenswrapper[4820]: I0203 12:26:46.432555 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-8l49q" event={"ID":"8560a157-03d5-4135-a5e1-32acc68b6e4e","Type":"ContainerStarted","Data":"3df31b0cbe44258b77eb3658022c5ea1347ca0620c653d710dc717d66cba7301"} Feb 03 12:26:46 crc kubenswrapper[4820]: I0203 12:26:46.439524 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-5dmwb" event={"ID":"88eb8fcd-4721-45c2-bb00-23b1dc962283","Type":"ContainerStarted","Data":"510c4eb6262078d0b20c6b7bd909331fcfa0ad20e11e2ba5837f43ac625e32b2"} Feb 03 12:26:46 crc kubenswrapper[4820]: I0203 12:26:46.439935 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-5dmwb" Feb 03 12:26:46 crc kubenswrapper[4820]: I0203 12:26:46.441804 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79955696d6-22gr9" event={"ID":"7ad36bba-9140-4660-b4ed-e873264c9e22","Type":"ContainerStarted","Data":"f7e241d3879cd07aa61c5868a26eb1b885358e64165c107c3854d66357f621fb"} Feb 03 12:26:46 crc kubenswrapper[4820]: I0203 12:26:46.442954 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-79955696d6-22gr9" Feb 03 12:26:46 crc kubenswrapper[4820]: I0203 12:26:46.449291 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-z8jk7" event={"ID":"4d0ea57e-5eb3-4624-a299-b9a7ad6f2bb0","Type":"ContainerStarted","Data":"c370ee2afa1155020f0071537350899fe56074f06855a693b8590a462d733c6b"} Feb 03 12:26:46 crc kubenswrapper[4820]: I0203 12:26:46.449826 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-z8jk7" Feb 03 12:26:46 crc kubenswrapper[4820]: I0203 12:26:46.456352 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-xkj2j" event={"ID":"29dd9257-532e-48a4-9500-adfc5584ebe0","Type":"ContainerStarted","Data":"a492967bee4ee41f80a74a27ed6d76f8fbbe4b2522c4acfae8ab3e8c08cff566"} Feb 03 12:26:46 crc kubenswrapper[4820]: I0203 12:26:46.456937 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-xkj2j" Feb 03 12:26:46 crc kubenswrapper[4820]: I0203 12:26:46.464192 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dqz6p9" event={"ID":"591b67aa-03c7-4cf7-8918-17e2f7a428b0","Type":"ContainerStarted","Data":"e7a1e5645513a69de80c53235db7cec65ce304826f0092ab63e655a621971910"} Feb 03 12:26:46 crc kubenswrapper[4820]: I0203 12:26:46.465006 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dqz6p9" Feb 03 12:26:46 crc kubenswrapper[4820]: I0203 12:26:46.466869 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-6fw2d" podStartSLOduration=4.851499489 podStartE2EDuration="48.466796744s" podCreationTimestamp="2026-02-03 12:25:58 +0000 UTC" firstStartedPulling="2026-02-03 12:26:02.100824923 +0000 UTC m=+1279.623900787" lastFinishedPulling="2026-02-03 12:26:45.716122178 +0000 UTC m=+1323.239198042" observedRunningTime="2026-02-03 12:26:46.460793735 +0000 UTC m=+1323.983869609" watchObservedRunningTime="2026-02-03 12:26:46.466796744 +0000 UTC m=+1323.989872608" Feb 03 12:26:46 crc kubenswrapper[4820]: I0203 12:26:46.495665 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-5dmwb" podStartSLOduration=6.342257061 podStartE2EDuration="48.495639161s" podCreationTimestamp="2026-02-03 12:25:58 +0000 UTC" firstStartedPulling="2026-02-03 12:26:02.94447429 +0000 UTC m=+1280.467550154" lastFinishedPulling="2026-02-03 12:26:45.09785639 +0000 UTC m=+1322.620932254" observedRunningTime="2026-02-03 12:26:46.489684742 +0000 UTC m=+1324.012760636" watchObservedRunningTime="2026-02-03 12:26:46.495639161 +0000 UTC m=+1324.018715045" Feb 03 12:26:46 crc kubenswrapper[4820]: I0203 12:26:46.516601 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-xkj2j" podStartSLOduration=5.046545562 podStartE2EDuration="48.516583748s" podCreationTimestamp="2026-02-03 12:25:58 +0000 UTC" firstStartedPulling="2026-02-03 12:26:01.628260477 +0000 UTC m=+1279.151336341" lastFinishedPulling="2026-02-03 12:26:45.098298663 +0000 UTC m=+1322.621374527" observedRunningTime="2026-02-03 12:26:46.515094038 +0000 UTC m=+1324.038169912" watchObservedRunningTime="2026-02-03 12:26:46.516583748 +0000 UTC m=+1324.039659612" Feb 03 12:26:46 crc kubenswrapper[4820]: I0203 12:26:46.534504 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-z8jk7" podStartSLOduration=6.404264862 podStartE2EDuration="48.534486713s" podCreationTimestamp="2026-02-03 12:25:58 +0000 UTC" 
firstStartedPulling="2026-02-03 12:26:02.967705771 +0000 UTC m=+1280.490781635" lastFinishedPulling="2026-02-03 12:26:45.097927622 +0000 UTC m=+1322.621003486" observedRunningTime="2026-02-03 12:26:46.534033251 +0000 UTC m=+1324.057109135" watchObservedRunningTime="2026-02-03 12:26:46.534486713 +0000 UTC m=+1324.057562577" Feb 03 12:26:46 crc kubenswrapper[4820]: I0203 12:26:46.570557 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-rdbrk" podStartSLOduration=6.4203405270000005 podStartE2EDuration="48.570532802s" podCreationTimestamp="2026-02-03 12:25:58 +0000 UTC" firstStartedPulling="2026-02-03 12:26:02.94817068 +0000 UTC m=+1280.471246544" lastFinishedPulling="2026-02-03 12:26:45.098362955 +0000 UTC m=+1322.621438819" observedRunningTime="2026-02-03 12:26:46.562760486 +0000 UTC m=+1324.085836350" watchObservedRunningTime="2026-02-03 12:26:46.570532802 +0000 UTC m=+1324.093608666" Feb 03 12:26:46 crc kubenswrapper[4820]: I0203 12:26:46.760475 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-5lxrd" podStartSLOduration=5.611767318 podStartE2EDuration="47.760448751s" podCreationTimestamp="2026-02-03 12:25:59 +0000 UTC" firstStartedPulling="2026-02-03 12:26:02.949197918 +0000 UTC m=+1280.472273782" lastFinishedPulling="2026-02-03 12:26:45.097879351 +0000 UTC m=+1322.620955215" observedRunningTime="2026-02-03 12:26:46.751869973 +0000 UTC m=+1324.274945847" watchObservedRunningTime="2026-02-03 12:26:46.760448751 +0000 UTC m=+1324.283524615" Feb 03 12:26:46 crc kubenswrapper[4820]: I0203 12:26:46.832585 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-79955696d6-22gr9" podStartSLOduration=40.697572429 podStartE2EDuration="48.832564358s" podCreationTimestamp="2026-02-03 12:25:58 +0000 UTC" firstStartedPulling="2026-02-03 12:26:36.953974836 +0000 UTC m=+1314.477050710" lastFinishedPulling="2026-02-03 12:26:45.088966775 +0000 UTC m=+1322.612042639" observedRunningTime="2026-02-03 12:26:46.825339746 +0000 UTC m=+1324.348415620" watchObservedRunningTime="2026-02-03 12:26:46.832564358 +0000 UTC m=+1324.355640222" Feb 03 12:26:46 crc kubenswrapper[4820]: I0203 12:26:46.846967 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-8l49q" podStartSLOduration=5.677238577 podStartE2EDuration="47.846946641s" podCreationTimestamp="2026-02-03 12:25:59 +0000 UTC" firstStartedPulling="2026-02-03 12:26:02.93012453 +0000 UTC m=+1280.453200394" lastFinishedPulling="2026-02-03 12:26:45.099832594 +0000 UTC m=+1322.622908458" observedRunningTime="2026-02-03 12:26:46.844719341 +0000 UTC m=+1324.367795205" watchObservedRunningTime="2026-02-03 12:26:46.846946641 +0000 UTC m=+1324.370022515" Feb 03 12:26:47 crc kubenswrapper[4820]: I0203 12:26:47.217936 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-855575688d-cl9c5" Feb 03 12:26:47 crc kubenswrapper[4820]: I0203 12:26:47.225561 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dqz6p9" podStartSLOduration=40.053607884 podStartE2EDuration="48.225541075s" podCreationTimestamp="2026-02-03 12:25:59 +0000 UTC" 
firstStartedPulling="2026-02-03 12:26:36.912197755 +0000 UTC m=+1314.435273619" lastFinishedPulling="2026-02-03 12:26:45.084130946 +0000 UTC m=+1322.607206810" observedRunningTime="2026-02-03 12:26:47.220938063 +0000 UTC m=+1324.744013927" watchObservedRunningTime="2026-02-03 12:26:47.225541075 +0000 UTC m=+1324.748616939" Feb 03 12:26:49 crc kubenswrapper[4820]: I0203 12:26:49.425152 4820 patch_prober.go:28] interesting pod/oauth-openshift-66b89c787d-85mk9 container/oauth-openshift namespace/openshift-authentication: Liveness probe status=failure output="Get \"https://10.217.0.69:6443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 03 12:26:49 crc kubenswrapper[4820]: I0203 12:26:49.425481 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication/oauth-openshift-66b89c787d-85mk9" podUID="7d558b47-1809-4483-bb1b-8b82036ebda8" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.69:6443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 03 12:26:49 crc kubenswrapper[4820]: I0203 12:26:49.425705 4820 patch_prober.go:28] interesting pod/oauth-openshift-66b89c787d-85mk9 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.69:6443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 03 12:26:49 crc kubenswrapper[4820]: I0203 12:26:49.425722 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-66b89c787d-85mk9" podUID="7d558b47-1809-4483-bb1b-8b82036ebda8" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.69:6443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 03 12:26:49 crc kubenswrapper[4820]: E0203 12:26:49.623745 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:e0824d5d461ada59715eb3048ed9394c80abba09c45503f8f90ee3b34e525488\\\"\"" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-x567r" podUID="851ed64f-f147-45d0-a33b-eea29903ec0a" Feb 03 12:26:49 crc kubenswrapper[4820]: I0203 12:26:49.682937 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-6d49495bcf-pflss" event={"ID":"18a84695-492b-42ae-9d72-6e582316ce55","Type":"ContainerStarted","Data":"85ad289bbd63b20a80693996bb6355c8cb3add74654055a132eec9f8694f7abc"} Feb 03 12:26:49 crc kubenswrapper[4820]: I0203 12:26:49.683000 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-4tmqm" Feb 03 12:26:49 crc kubenswrapper[4820]: I0203 12:26:49.684123 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-6d49495bcf-pflss" Feb 03 12:26:49 crc kubenswrapper[4820]: I0203 12:26:49.685558 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-wsb7r" event={"ID":"51c967b2-8f1a-4d0d-a3f9-745e72863b84","Type":"ContainerStarted","Data":"83382121776e3649d02a99303df08d39ef19a405efe69128ae77699dd827a66f"} Feb 03 12:26:49 crc kubenswrapper[4820]: I0203 12:26:49.685939 4820 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-wsb7r" Feb 03 12:26:49 crc kubenswrapper[4820]: I0203 12:26:49.746129 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-4tmqm" Feb 03 12:26:49 crc kubenswrapper[4820]: I0203 12:26:49.759870 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-brrn4" Feb 03 12:26:49 crc kubenswrapper[4820]: I0203 12:26:49.769427 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-t5mj4" Feb 03 12:26:49 crc kubenswrapper[4820]: I0203 12:26:49.960505 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-6d49495bcf-pflss" podStartSLOduration=6.561682575 podStartE2EDuration="50.960482283s" podCreationTimestamp="2026-02-03 12:25:59 +0000 UTC" firstStartedPulling="2026-02-03 12:26:02.923200692 +0000 UTC m=+1280.446276566" lastFinishedPulling="2026-02-03 12:26:47.32200041 +0000 UTC m=+1324.845076274" observedRunningTime="2026-02-03 12:26:49.951288499 +0000 UTC m=+1327.474364383" watchObservedRunningTime="2026-02-03 12:26:49.960482283 +0000 UTC m=+1327.483558147" Feb 03 12:26:49 crc kubenswrapper[4820]: I0203 12:26:49.981401 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-wsb7r" podStartSLOduration=6.752785221 podStartE2EDuration="51.981383239s" podCreationTimestamp="2026-02-03 12:25:58 +0000 UTC" firstStartedPulling="2026-02-03 12:26:02.092467476 +0000 UTC m=+1279.615543340" lastFinishedPulling="2026-02-03 12:26:47.321065494 +0000 UTC m=+1324.844141358" observedRunningTime="2026-02-03 12:26:49.979223832 +0000 UTC m=+1327.502299706" watchObservedRunningTime="2026-02-03 12:26:49.981383239 +0000 UTC m=+1327.504459103" Feb 03 12:26:50 crc kubenswrapper[4820]: I0203 12:26:50.115820 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-dr7hd" Feb 03 12:26:50 crc kubenswrapper[4820]: I0203 12:26:50.159923 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-xw4mq" Feb 03 12:26:50 crc kubenswrapper[4820]: I0203 12:26:50.466508 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-qnh2k" podUID="cde1eaee-12a0-47f7-b88a-b1b97d0ed74b" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.64:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 03 12:26:52 crc kubenswrapper[4820]: I0203 12:26:52.189445 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dqz6p9" Feb 03 12:26:52 crc kubenswrapper[4820]: I0203 12:26:52.690163 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/infra-operator-controller-manager-79955696d6-22gr9" podUID="7ad36bba-9140-4660-b4ed-e873264c9e22" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.82:8081/readyz\": context deadline exceeded 
(Client.Timeout exceeded while awaiting headers)" Feb 03 12:26:58 crc kubenswrapper[4820]: I0203 12:26:58.960229 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-qnh2k" Feb 03 12:26:59 crc kubenswrapper[4820]: I0203 12:26:59.254137 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-xkj2j" Feb 03 12:26:59 crc kubenswrapper[4820]: I0203 12:26:59.269872 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-z8jk7" Feb 03 12:26:59 crc kubenswrapper[4820]: I0203 12:26:59.276476 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-wsb7r" Feb 03 12:26:59 crc kubenswrapper[4820]: I0203 12:26:59.313027 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-5dmwb" Feb 03 12:26:59 crc kubenswrapper[4820]: I0203 12:26:59.383323 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-6fw2d" Feb 03 12:26:59 crc kubenswrapper[4820]: I0203 12:26:59.457006 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-rdbrk" Feb 03 12:26:59 crc kubenswrapper[4820]: I0203 12:26:59.879293 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-5lxrd" Feb 03 12:27:00 crc kubenswrapper[4820]: I0203 12:27:00.553634 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-6d49495bcf-pflss" Feb 03 12:27:01 crc kubenswrapper[4820]: I0203 12:27:01.497799 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qj7xr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 12:27:01 crc kubenswrapper[4820]: I0203 12:27:01.497857 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 12:27:01 crc kubenswrapper[4820]: I0203 12:27:01.915278 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-4fgnl" event={"ID":"7a515408-dc44-4fba-bbe9-8b5f36fbc1d0","Type":"ContainerStarted","Data":"30685b0d82d2e6f3da409d08fdbe1d3a1dfb86a13f48d5a672f5c4335fc39f74"} Feb 03 12:27:01 crc kubenswrapper[4820]: I0203 12:27:01.915938 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-4fgnl" Feb 03 12:27:01 crc kubenswrapper[4820]: I0203 12:27:01.925793 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-9rprq" 
event={"ID":"614c5412-875d-40b1-ad5f-445a941285af","Type":"ContainerStarted","Data":"f7d22c97518388c52f969c22561860d2db3e26aac5cddfec5e1f83feb74e6dcb"} Feb 03 12:27:01 crc kubenswrapper[4820]: I0203 12:27:01.926101 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-9rprq" Feb 03 12:27:01 crc kubenswrapper[4820]: I0203 12:27:01.933605 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-qnmp6" event={"ID":"40fd2238-8148-4aa3-8f4e-54ffc1de0805","Type":"ContainerStarted","Data":"7de6da9baa80582b6d9356771867ad14341cdc1540e1f7587c2e6aac10ad2065"} Feb 03 12:27:01 crc kubenswrapper[4820]: I0203 12:27:01.933851 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-qnmp6" Feb 03 12:27:01 crc kubenswrapper[4820]: I0203 12:27:01.950557 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-4fgnl" podStartSLOduration=5.344819173 podStartE2EDuration="1m3.950538199s" podCreationTimestamp="2026-02-03 12:25:58 +0000 UTC" firstStartedPulling="2026-02-03 12:26:02.911497333 +0000 UTC m=+1280.434573197" lastFinishedPulling="2026-02-03 12:27:01.517216359 +0000 UTC m=+1339.040292223" observedRunningTime="2026-02-03 12:27:01.944080067 +0000 UTC m=+1339.467155931" watchObservedRunningTime="2026-02-03 12:27:01.950538199 +0000 UTC m=+1339.473614063" Feb 03 12:27:01 crc kubenswrapper[4820]: I0203 12:27:01.995721 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-9rprq" podStartSLOduration=5.390082238 podStartE2EDuration="1m3.995693759s" podCreationTimestamp="2026-02-03 12:25:58 +0000 UTC" firstStartedPulling="2026-02-03 12:26:02.911773292 +0000 UTC m=+1280.434849156" lastFinishedPulling="2026-02-03 12:27:01.517384703 +0000 UTC m=+1339.040460677" observedRunningTime="2026-02-03 12:27:01.982371176 +0000 UTC m=+1339.505447040" watchObservedRunningTime="2026-02-03 12:27:01.995693759 +0000 UTC m=+1339.518769623" Feb 03 12:27:02 crc kubenswrapper[4820]: I0203 12:27:02.014119 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-qnmp6" podStartSLOduration=5.03194815 podStartE2EDuration="1m4.014085788s" podCreationTimestamp="2026-02-03 12:25:58 +0000 UTC" firstStartedPulling="2026-02-03 12:26:02.538949614 +0000 UTC m=+1280.062025468" lastFinishedPulling="2026-02-03 12:27:01.521087242 +0000 UTC m=+1339.044163106" observedRunningTime="2026-02-03 12:27:02.005512 +0000 UTC m=+1339.528587884" watchObservedRunningTime="2026-02-03 12:27:02.014085788 +0000 UTC m=+1339.537161662" Feb 03 12:27:02 crc kubenswrapper[4820]: I0203 12:27:02.692162 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/infra-operator-controller-manager-79955696d6-22gr9" podUID="7ad36bba-9140-4660-b4ed-e873264c9e22" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.82:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 03 12:27:02 crc kubenswrapper[4820]: I0203 12:27:02.943012 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-22hg8" 
event={"ID":"1058185d-f11d-4a87-9fe6-005f60186329","Type":"ContainerStarted","Data":"43e36f38ac4050a8995326e13e34f90adc1ff82a2dbe1023a869fe27939ab4e1"} Feb 03 12:27:03 crc kubenswrapper[4820]: I0203 12:27:03.212303 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-22hg8" podStartSLOduration=5.455617148 podStartE2EDuration="1m4.212266122s" podCreationTimestamp="2026-02-03 12:25:59 +0000 UTC" firstStartedPulling="2026-02-03 12:26:02.986327957 +0000 UTC m=+1280.509403821" lastFinishedPulling="2026-02-03 12:27:01.742976931 +0000 UTC m=+1339.266052795" observedRunningTime="2026-02-03 12:27:03.200384116 +0000 UTC m=+1340.723459980" watchObservedRunningTime="2026-02-03 12:27:03.212266122 +0000 UTC m=+1340.735341986" Feb 03 12:27:05 crc kubenswrapper[4820]: I0203 12:27:05.123081 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-x567r" event={"ID":"851ed64f-f147-45d0-a33b-eea29903ec0a","Type":"ContainerStarted","Data":"1345ade9fd5e05a6203021657cec6eef073609de1a0e6428b61eb1bf761aa497"} Feb 03 12:27:05 crc kubenswrapper[4820]: I0203 12:27:05.125408 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-x567r" Feb 03 12:27:05 crc kubenswrapper[4820]: I0203 12:27:05.257971 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-x567r" podStartSLOduration=5.138890556 podStartE2EDuration="1m6.257956206s" podCreationTimestamp="2026-02-03 12:25:59 +0000 UTC" firstStartedPulling="2026-02-03 12:26:03.003141993 +0000 UTC m=+1280.526217857" lastFinishedPulling="2026-02-03 12:27:04.122207643 +0000 UTC m=+1341.645283507" observedRunningTime="2026-02-03 12:27:05.255350127 +0000 UTC m=+1342.778425991" watchObservedRunningTime="2026-02-03 12:27:05.257956206 +0000 UTC m=+1342.781032070" Feb 03 12:27:09 crc kubenswrapper[4820]: I0203 12:27:09.422315 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-9rprq" Feb 03 12:27:09 crc kubenswrapper[4820]: I0203 12:27:09.506301 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-qnmp6" Feb 03 12:27:09 crc kubenswrapper[4820]: I0203 12:27:09.563867 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-4fgnl" Feb 03 12:27:09 crc kubenswrapper[4820]: I0203 12:27:09.856181 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-x567r" Feb 03 12:27:10 crc kubenswrapper[4820]: I0203 12:27:10.341689 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-22hg8" Feb 03 12:27:10 crc kubenswrapper[4820]: I0203 12:27:10.345141 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-22hg8" Feb 03 12:27:11 crc kubenswrapper[4820]: I0203 12:27:11.657815 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-79955696d6-22gr9" Feb 03 12:27:29 
crc kubenswrapper[4820]: I0203 12:27:29.291741 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-hx9l9"]
Feb 03 12:27:29 crc kubenswrapper[4820]: I0203 12:27:29.296913 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-hx9l9"
Feb 03 12:27:29 crc kubenswrapper[4820]: I0203 12:27:29.303942 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt"
Feb 03 12:27:29 crc kubenswrapper[4820]: I0203 12:27:29.304251 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt"
Feb 03 12:27:29 crc kubenswrapper[4820]: I0203 12:27:29.304417 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-fwtmw"
Feb 03 12:27:29 crc kubenswrapper[4820]: I0203 12:27:29.304589 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns"
Feb 03 12:27:29 crc kubenswrapper[4820]: I0203 12:27:29.307617 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-hx9l9"]
Feb 03 12:27:29 crc kubenswrapper[4820]: I0203 12:27:29.345864 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hj6vn\" (UniqueName: \"kubernetes.io/projected/48a59ce7-2c6f-4461-9391-9dca3f2bc630-kube-api-access-hj6vn\") pod \"dnsmasq-dns-675f4bcbfc-hx9l9\" (UID: \"48a59ce7-2c6f-4461-9391-9dca3f2bc630\") " pod="openstack/dnsmasq-dns-675f4bcbfc-hx9l9"
Feb 03 12:27:29 crc kubenswrapper[4820]: I0203 12:27:29.346118 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/48a59ce7-2c6f-4461-9391-9dca3f2bc630-config\") pod \"dnsmasq-dns-675f4bcbfc-hx9l9\" (UID: \"48a59ce7-2c6f-4461-9391-9dca3f2bc630\") " pod="openstack/dnsmasq-dns-675f4bcbfc-hx9l9"
Feb 03 12:27:29 crc kubenswrapper[4820]: I0203 12:27:29.409007 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-mbwbv"]
Feb 03 12:27:29 crc kubenswrapper[4820]: I0203 12:27:29.410873 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-mbwbv"
Feb 03 12:27:29 crc kubenswrapper[4820]: I0203 12:27:29.413203 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc"
Feb 03 12:27:29 crc kubenswrapper[4820]: I0203 12:27:29.448245 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/48a59ce7-2c6f-4461-9391-9dca3f2bc630-config\") pod \"dnsmasq-dns-675f4bcbfc-hx9l9\" (UID: \"48a59ce7-2c6f-4461-9391-9dca3f2bc630\") " pod="openstack/dnsmasq-dns-675f4bcbfc-hx9l9"
Feb 03 12:27:29 crc kubenswrapper[4820]: I0203 12:27:29.448443 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hj6vn\" (UniqueName: \"kubernetes.io/projected/48a59ce7-2c6f-4461-9391-9dca3f2bc630-kube-api-access-hj6vn\") pod \"dnsmasq-dns-675f4bcbfc-hx9l9\" (UID: \"48a59ce7-2c6f-4461-9391-9dca3f2bc630\") " pod="openstack/dnsmasq-dns-675f4bcbfc-hx9l9"
Feb 03 12:27:29 crc kubenswrapper[4820]: I0203 12:27:29.450958 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/48a59ce7-2c6f-4461-9391-9dca3f2bc630-config\") pod \"dnsmasq-dns-675f4bcbfc-hx9l9\" (UID: \"48a59ce7-2c6f-4461-9391-9dca3f2bc630\") " pod="openstack/dnsmasq-dns-675f4bcbfc-hx9l9"
Feb 03 12:27:29 crc kubenswrapper[4820]: I0203 12:27:29.552065 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e7ee7286-968c-48cc-a42e-0f7675b7cbc7-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-mbwbv\" (UID: \"e7ee7286-968c-48cc-a42e-0f7675b7cbc7\") " pod="openstack/dnsmasq-dns-78dd6ddcc-mbwbv"
Feb 03 12:27:29 crc kubenswrapper[4820]: I0203 12:27:29.552718 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7ee7286-968c-48cc-a42e-0f7675b7cbc7-config\") pod \"dnsmasq-dns-78dd6ddcc-mbwbv\" (UID: \"e7ee7286-968c-48cc-a42e-0f7675b7cbc7\") " pod="openstack/dnsmasq-dns-78dd6ddcc-mbwbv"
Feb 03 12:27:29 crc kubenswrapper[4820]: I0203 12:27:29.553063 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dbm2l\" (UniqueName: \"kubernetes.io/projected/e7ee7286-968c-48cc-a42e-0f7675b7cbc7-kube-api-access-dbm2l\") pod \"dnsmasq-dns-78dd6ddcc-mbwbv\" (UID: \"e7ee7286-968c-48cc-a42e-0f7675b7cbc7\") " pod="openstack/dnsmasq-dns-78dd6ddcc-mbwbv"
Feb 03 12:27:29 crc kubenswrapper[4820]: I0203 12:27:29.654690 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dbm2l\" (UniqueName: \"kubernetes.io/projected/e7ee7286-968c-48cc-a42e-0f7675b7cbc7-kube-api-access-dbm2l\") pod \"dnsmasq-dns-78dd6ddcc-mbwbv\" (UID: \"e7ee7286-968c-48cc-a42e-0f7675b7cbc7\") " pod="openstack/dnsmasq-dns-78dd6ddcc-mbwbv"
Feb 03 12:27:29 crc kubenswrapper[4820]: I0203 12:27:29.654802 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e7ee7286-968c-48cc-a42e-0f7675b7cbc7-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-mbwbv\" (UID: \"e7ee7286-968c-48cc-a42e-0f7675b7cbc7\") " pod="openstack/dnsmasq-dns-78dd6ddcc-mbwbv"
Feb 03 12:27:29 crc kubenswrapper[4820]: I0203 12:27:29.654871 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7ee7286-968c-48cc-a42e-0f7675b7cbc7-config\") pod \"dnsmasq-dns-78dd6ddcc-mbwbv\" (UID: \"e7ee7286-968c-48cc-a42e-0f7675b7cbc7\") " pod="openstack/dnsmasq-dns-78dd6ddcc-mbwbv"
Feb 03 12:27:29 crc kubenswrapper[4820]: I0203 12:27:29.656271 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7ee7286-968c-48cc-a42e-0f7675b7cbc7-config\") pod \"dnsmasq-dns-78dd6ddcc-mbwbv\" (UID: \"e7ee7286-968c-48cc-a42e-0f7675b7cbc7\") " pod="openstack/dnsmasq-dns-78dd6ddcc-mbwbv"
Feb 03 12:27:29 crc kubenswrapper[4820]: I0203 12:27:29.657810 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e7ee7286-968c-48cc-a42e-0f7675b7cbc7-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-mbwbv\" (UID: \"e7ee7286-968c-48cc-a42e-0f7675b7cbc7\") " pod="openstack/dnsmasq-dns-78dd6ddcc-mbwbv"
Feb 03 12:27:29 crc kubenswrapper[4820]: I0203 12:27:29.679098 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-mbwbv"]
Feb 03 12:27:29 crc kubenswrapper[4820]: I0203 12:27:29.709944 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hj6vn\" (UniqueName: \"kubernetes.io/projected/48a59ce7-2c6f-4461-9391-9dca3f2bc630-kube-api-access-hj6vn\") pod \"dnsmasq-dns-675f4bcbfc-hx9l9\" (UID: \"48a59ce7-2c6f-4461-9391-9dca3f2bc630\") " pod="openstack/dnsmasq-dns-675f4bcbfc-hx9l9"
Feb 03 12:27:29 crc kubenswrapper[4820]: I0203 12:27:29.711347 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dbm2l\" (UniqueName: \"kubernetes.io/projected/e7ee7286-968c-48cc-a42e-0f7675b7cbc7-kube-api-access-dbm2l\") pod \"dnsmasq-dns-78dd6ddcc-mbwbv\" (UID: \"e7ee7286-968c-48cc-a42e-0f7675b7cbc7\") " pod="openstack/dnsmasq-dns-78dd6ddcc-mbwbv"
Feb 03 12:27:29 crc kubenswrapper[4820]: I0203 12:27:29.730284 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-mbwbv"
Feb 03 12:27:29 crc kubenswrapper[4820]: I0203 12:27:29.927774 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-hx9l9"
Feb 03 12:27:30 crc kubenswrapper[4820]: I0203 12:27:30.941706 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-mbwbv"]
Feb 03 12:27:31 crc kubenswrapper[4820]: I0203 12:27:31.239478 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-hx9l9"]
Feb 03 12:27:31 crc kubenswrapper[4820]: W0203 12:27:31.246368 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod48a59ce7_2c6f_4461_9391_9dca3f2bc630.slice/crio-d5ca5c4a3517cce4a5205dac7d9849e3613af1aa107d7e6c7b717474eec8d7e3 WatchSource:0}: Error finding container d5ca5c4a3517cce4a5205dac7d9849e3613af1aa107d7e6c7b717474eec8d7e3: Status 404 returned error can't find the container with id d5ca5c4a3517cce4a5205dac7d9849e3613af1aa107d7e6c7b717474eec8d7e3
Feb 03 12:27:31 crc kubenswrapper[4820]: I0203 12:27:31.366359 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qj7xr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 03 12:27:31 crc kubenswrapper[4820]: I0203 12:27:31.403372 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 03 12:27:32 crc kubenswrapper[4820]: I0203 12:27:32.010080 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-mbwbv" event={"ID":"e7ee7286-968c-48cc-a42e-0f7675b7cbc7","Type":"ContainerStarted","Data":"70cd428753674f9f22198e368076f35586e2ec5902cb2b9b4a19ed79b9968bbf"}
Feb 03 12:27:32 crc kubenswrapper[4820]: I0203 12:27:32.017709 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-hx9l9" event={"ID":"48a59ce7-2c6f-4461-9391-9dca3f2bc630","Type":"ContainerStarted","Data":"d5ca5c4a3517cce4a5205dac7d9849e3613af1aa107d7e6c7b717474eec8d7e3"}
Feb 03 12:27:32 crc kubenswrapper[4820]: I0203 12:27:32.215418 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-hx9l9"]
Feb 03 12:27:32 crc kubenswrapper[4820]: I0203 12:27:32.256127 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-xj7bd"]
Feb 03 12:27:32 crc kubenswrapper[4820]: I0203 12:27:32.258160 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-xj7bd"
Feb 03 12:27:32 crc kubenswrapper[4820]: I0203 12:27:32.307427 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-xj7bd"]
Feb 03 12:27:32 crc kubenswrapper[4820]: I0203 12:27:32.320265 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/584c5d07-7ab7-44c4-8ba9-9edf834d4912-config\") pod \"dnsmasq-dns-666b6646f7-xj7bd\" (UID: \"584c5d07-7ab7-44c4-8ba9-9edf834d4912\") " pod="openstack/dnsmasq-dns-666b6646f7-xj7bd"
Feb 03 12:27:32 crc kubenswrapper[4820]: I0203 12:27:32.320327 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/584c5d07-7ab7-44c4-8ba9-9edf834d4912-dns-svc\") pod \"dnsmasq-dns-666b6646f7-xj7bd\" (UID: \"584c5d07-7ab7-44c4-8ba9-9edf834d4912\") " pod="openstack/dnsmasq-dns-666b6646f7-xj7bd"
Feb 03 12:27:32 crc kubenswrapper[4820]: I0203 12:27:32.320419 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g7tgm\" (UniqueName: \"kubernetes.io/projected/584c5d07-7ab7-44c4-8ba9-9edf834d4912-kube-api-access-g7tgm\") pod \"dnsmasq-dns-666b6646f7-xj7bd\" (UID: \"584c5d07-7ab7-44c4-8ba9-9edf834d4912\") " pod="openstack/dnsmasq-dns-666b6646f7-xj7bd"
Feb 03 12:27:32 crc kubenswrapper[4820]: I0203 12:27:32.423684 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/584c5d07-7ab7-44c4-8ba9-9edf834d4912-config\") pod \"dnsmasq-dns-666b6646f7-xj7bd\" (UID: \"584c5d07-7ab7-44c4-8ba9-9edf834d4912\") " pod="openstack/dnsmasq-dns-666b6646f7-xj7bd"
Feb 03 12:27:32 crc kubenswrapper[4820]: I0203 12:27:32.423744 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/584c5d07-7ab7-44c4-8ba9-9edf834d4912-dns-svc\") pod \"dnsmasq-dns-666b6646f7-xj7bd\" (UID: \"584c5d07-7ab7-44c4-8ba9-9edf834d4912\") " pod="openstack/dnsmasq-dns-666b6646f7-xj7bd"
Feb 03 12:27:32 crc kubenswrapper[4820]: I0203 12:27:32.423904 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g7tgm\" (UniqueName: \"kubernetes.io/projected/584c5d07-7ab7-44c4-8ba9-9edf834d4912-kube-api-access-g7tgm\") pod \"dnsmasq-dns-666b6646f7-xj7bd\" (UID: \"584c5d07-7ab7-44c4-8ba9-9edf834d4912\") " pod="openstack/dnsmasq-dns-666b6646f7-xj7bd"
Feb 03 12:27:32 crc kubenswrapper[4820]: I0203 12:27:32.424974 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/584c5d07-7ab7-44c4-8ba9-9edf834d4912-dns-svc\") pod \"dnsmasq-dns-666b6646f7-xj7bd\" (UID: \"584c5d07-7ab7-44c4-8ba9-9edf834d4912\") " pod="openstack/dnsmasq-dns-666b6646f7-xj7bd"
Feb 03 12:27:32 crc kubenswrapper[4820]: I0203 12:27:32.425013 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/584c5d07-7ab7-44c4-8ba9-9edf834d4912-config\") pod \"dnsmasq-dns-666b6646f7-xj7bd\" (UID: \"584c5d07-7ab7-44c4-8ba9-9edf834d4912\") " pod="openstack/dnsmasq-dns-666b6646f7-xj7bd"
Feb 03 12:27:32 crc kubenswrapper[4820]: I0203 12:27:32.445792 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g7tgm\" (UniqueName: \"kubernetes.io/projected/584c5d07-7ab7-44c4-8ba9-9edf834d4912-kube-api-access-g7tgm\") pod \"dnsmasq-dns-666b6646f7-xj7bd\" (UID: \"584c5d07-7ab7-44c4-8ba9-9edf834d4912\") " pod="openstack/dnsmasq-dns-666b6646f7-xj7bd"
Feb 03 12:27:32 crc kubenswrapper[4820]: I0203 12:27:32.587636 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-xj7bd"
Feb 03 12:27:33 crc kubenswrapper[4820]: I0203 12:27:33.012915 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-mbwbv"]
Feb 03 12:27:33 crc kubenswrapper[4820]: I0203 12:27:33.038816 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-gthq5"]
Feb 03 12:27:33 crc kubenswrapper[4820]: I0203 12:27:33.046567 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-gthq5"
Feb 03 12:27:33 crc kubenswrapper[4820]: I0203 12:27:33.076218 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-gthq5"]
Feb 03 12:27:33 crc kubenswrapper[4820]: I0203 12:27:33.085314 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5deb308f-ef2d-477f-a5ac-04055ea9b76f-config\") pod \"dnsmasq-dns-57d769cc4f-gthq5\" (UID: \"5deb308f-ef2d-477f-a5ac-04055ea9b76f\") " pod="openstack/dnsmasq-dns-57d769cc4f-gthq5"
Feb 03 12:27:33 crc kubenswrapper[4820]: I0203 12:27:33.085360 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5deb308f-ef2d-477f-a5ac-04055ea9b76f-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-gthq5\" (UID: \"5deb308f-ef2d-477f-a5ac-04055ea9b76f\") " pod="openstack/dnsmasq-dns-57d769cc4f-gthq5"
Feb 03 12:27:33 crc kubenswrapper[4820]: I0203 12:27:33.085423 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h5f6w\" (UniqueName: \"kubernetes.io/projected/5deb308f-ef2d-477f-a5ac-04055ea9b76f-kube-api-access-h5f6w\") pod \"dnsmasq-dns-57d769cc4f-gthq5\" (UID: \"5deb308f-ef2d-477f-a5ac-04055ea9b76f\") " pod="openstack/dnsmasq-dns-57d769cc4f-gthq5"
Feb 03 12:27:33 crc kubenswrapper[4820]: I0203 12:27:33.188655 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h5f6w\" (UniqueName: \"kubernetes.io/projected/5deb308f-ef2d-477f-a5ac-04055ea9b76f-kube-api-access-h5f6w\") pod \"dnsmasq-dns-57d769cc4f-gthq5\" (UID: \"5deb308f-ef2d-477f-a5ac-04055ea9b76f\") " pod="openstack/dnsmasq-dns-57d769cc4f-gthq5"
Feb 03 12:27:33 crc kubenswrapper[4820]: I0203 12:27:33.189194 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5deb308f-ef2d-477f-a5ac-04055ea9b76f-config\") pod \"dnsmasq-dns-57d769cc4f-gthq5\" (UID: \"5deb308f-ef2d-477f-a5ac-04055ea9b76f\") " pod="openstack/dnsmasq-dns-57d769cc4f-gthq5"
Feb 03 12:27:33 crc kubenswrapper[4820]: I0203 12:27:33.189230 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5deb308f-ef2d-477f-a5ac-04055ea9b76f-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-gthq5\" (UID: \"5deb308f-ef2d-477f-a5ac-04055ea9b76f\") " pod="openstack/dnsmasq-dns-57d769cc4f-gthq5"
Feb 03 12:27:33 crc kubenswrapper[4820]: I0203 12:27:33.190747 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5deb308f-ef2d-477f-a5ac-04055ea9b76f-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-gthq5\" (UID: \"5deb308f-ef2d-477f-a5ac-04055ea9b76f\") " pod="openstack/dnsmasq-dns-57d769cc4f-gthq5"
Feb 03 12:27:33 crc kubenswrapper[4820]: I0203 12:27:33.190992 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5deb308f-ef2d-477f-a5ac-04055ea9b76f-config\") pod \"dnsmasq-dns-57d769cc4f-gthq5\" (UID: \"5deb308f-ef2d-477f-a5ac-04055ea9b76f\") " pod="openstack/dnsmasq-dns-57d769cc4f-gthq5"
Feb 03 12:27:33 crc kubenswrapper[4820]: I0203 12:27:33.236001 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h5f6w\" (UniqueName: \"kubernetes.io/projected/5deb308f-ef2d-477f-a5ac-04055ea9b76f-kube-api-access-h5f6w\") pod \"dnsmasq-dns-57d769cc4f-gthq5\" (UID: \"5deb308f-ef2d-477f-a5ac-04055ea9b76f\") " pod="openstack/dnsmasq-dns-57d769cc4f-gthq5"
Feb 03 12:27:33 crc kubenswrapper[4820]: I0203 12:27:33.403493 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-gthq5"
Feb 03 12:27:33 crc kubenswrapper[4820]: I0203 12:27:33.632983 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"]
Feb 03 12:27:33 crc kubenswrapper[4820]: I0203 12:27:33.634996 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Feb 03 12:27:33 crc kubenswrapper[4820]: I0203 12:27:33.641662 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user"
Feb 03 12:27:33 crc kubenswrapper[4820]: I0203 12:27:33.642031 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf"
Feb 03 12:27:33 crc kubenswrapper[4820]: I0203 12:27:33.642101 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf"
Feb 03 12:27:33 crc kubenswrapper[4820]: I0203 12:27:33.642218 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc"
Feb 03 12:27:33 crc kubenswrapper[4820]: I0203 12:27:33.649282 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-6q5vv"
Feb 03 12:27:33 crc kubenswrapper[4820]: I0203 12:27:33.649668 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data"
Feb 03 12:27:33 crc kubenswrapper[4820]: I0203 12:27:33.680096 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie"
Feb 03 12:27:33 crc kubenswrapper[4820]: I0203 12:27:33.681605 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"]
Feb 03 12:27:33 crc kubenswrapper[4820]: I0203 12:27:33.794151 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"rabbitmq-server-0\" (UID: \"18ae976d-57fb-4c6e-8f3d-af9748d3058a\") " pod="openstack/rabbitmq-server-0"
Feb 03 12:27:33 crc kubenswrapper[4820]: I0203 12:27:33.798129 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/18ae976d-57fb-4c6e-8f3d-af9748d3058a-config-data\") pod \"rabbitmq-server-0\" (UID: \"18ae976d-57fb-4c6e-8f3d-af9748d3058a\") " pod="openstack/rabbitmq-server-0"
Feb 03 12:27:33 crc kubenswrapper[4820]: I0203 12:27:33.798205 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/18ae976d-57fb-4c6e-8f3d-af9748d3058a-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"18ae976d-57fb-4c6e-8f3d-af9748d3058a\") " pod="openstack/rabbitmq-server-0"
Feb 03 12:27:33 crc kubenswrapper[4820]: I0203 12:27:33.798247 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/18ae976d-57fb-4c6e-8f3d-af9748d3058a-server-conf\") pod \"rabbitmq-server-0\" (UID: \"18ae976d-57fb-4c6e-8f3d-af9748d3058a\") " pod="openstack/rabbitmq-server-0"
Feb 03 12:27:33 crc kubenswrapper[4820]: I0203 12:27:33.798282 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/18ae976d-57fb-4c6e-8f3d-af9748d3058a-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"18ae976d-57fb-4c6e-8f3d-af9748d3058a\") " pod="openstack/rabbitmq-server-0"
Feb 03 12:27:33 crc kubenswrapper[4820]: I0203 12:27:33.798366 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6wpg2\" (UniqueName: \"kubernetes.io/projected/18ae976d-57fb-4c6e-8f3d-af9748d3058a-kube-api-access-6wpg2\") pod \"rabbitmq-server-0\" (UID: \"18ae976d-57fb-4c6e-8f3d-af9748d3058a\") " pod="openstack/rabbitmq-server-0"
Feb 03 12:27:33 crc kubenswrapper[4820]: I0203 12:27:33.798414 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/18ae976d-57fb-4c6e-8f3d-af9748d3058a-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"18ae976d-57fb-4c6e-8f3d-af9748d3058a\") " pod="openstack/rabbitmq-server-0"
Feb 03 12:27:33 crc kubenswrapper[4820]: I0203 12:27:33.798474 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/18ae976d-57fb-4c6e-8f3d-af9748d3058a-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"18ae976d-57fb-4c6e-8f3d-af9748d3058a\") " pod="openstack/rabbitmq-server-0"
Feb 03 12:27:33 crc kubenswrapper[4820]: I0203 12:27:33.798512 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/18ae976d-57fb-4c6e-8f3d-af9748d3058a-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"18ae976d-57fb-4c6e-8f3d-af9748d3058a\") " pod="openstack/rabbitmq-server-0"
Feb 03 12:27:33 crc kubenswrapper[4820]: I0203 12:27:33.798528 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/18ae976d-57fb-4c6e-8f3d-af9748d3058a-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"18ae976d-57fb-4c6e-8f3d-af9748d3058a\") " pod="openstack/rabbitmq-server-0"
Feb 03 12:27:33 crc kubenswrapper[4820]: I0203 12:27:33.798574 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/18ae976d-57fb-4c6e-8f3d-af9748d3058a-pod-info\") pod \"rabbitmq-server-0\" (UID: \"18ae976d-57fb-4c6e-8f3d-af9748d3058a\") " pod="openstack/rabbitmq-server-0"
Feb 03 12:27:33 crc kubenswrapper[4820]: I0203 12:27:33.884665 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-xj7bd"]
Feb 03 12:27:33 crc kubenswrapper[4820]: I0203 12:27:33.900637 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/18ae976d-57fb-4c6e-8f3d-af9748d3058a-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"18ae976d-57fb-4c6e-8f3d-af9748d3058a\") " pod="openstack/rabbitmq-server-0"
Feb 03 12:27:33 crc kubenswrapper[4820]: I0203 12:27:33.900744 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/18ae976d-57fb-4c6e-8f3d-af9748d3058a-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"18ae976d-57fb-4c6e-8f3d-af9748d3058a\") " pod="openstack/rabbitmq-server-0"
Feb 03 12:27:33 crc kubenswrapper[4820]: I0203 12:27:33.900778 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/18ae976d-57fb-4c6e-8f3d-af9748d3058a-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"18ae976d-57fb-4c6e-8f3d-af9748d3058a\") " pod="openstack/rabbitmq-server-0"
Feb 03 12:27:33 crc kubenswrapper[4820]: I0203 12:27:33.900798 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/18ae976d-57fb-4c6e-8f3d-af9748d3058a-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"18ae976d-57fb-4c6e-8f3d-af9748d3058a\") " pod="openstack/rabbitmq-server-0"
Feb 03 12:27:33 crc kubenswrapper[4820]: I0203 12:27:33.900840 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/18ae976d-57fb-4c6e-8f3d-af9748d3058a-pod-info\") pod \"rabbitmq-server-0\" (UID: \"18ae976d-57fb-4c6e-8f3d-af9748d3058a\") " pod="openstack/rabbitmq-server-0"
Feb 03 12:27:33 crc kubenswrapper[4820]: I0203 12:27:33.900873 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"rabbitmq-server-0\" (UID: \"18ae976d-57fb-4c6e-8f3d-af9748d3058a\") " pod="openstack/rabbitmq-server-0"
Feb 03 12:27:33 crc kubenswrapper[4820]: I0203 12:27:33.900914 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/18ae976d-57fb-4c6e-8f3d-af9748d3058a-config-data\") pod \"rabbitmq-server-0\" (UID: \"18ae976d-57fb-4c6e-8f3d-af9748d3058a\") " pod="openstack/rabbitmq-server-0"
Feb 03 12:27:33 crc kubenswrapper[4820]: I0203 12:27:33.900957 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/18ae976d-57fb-4c6e-8f3d-af9748d3058a-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"18ae976d-57fb-4c6e-8f3d-af9748d3058a\") " pod="openstack/rabbitmq-server-0"
Feb 03 12:27:33 crc kubenswrapper[4820]: I0203 12:27:33.900988 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/18ae976d-57fb-4c6e-8f3d-af9748d3058a-server-conf\") pod \"rabbitmq-server-0\" (UID: \"18ae976d-57fb-4c6e-8f3d-af9748d3058a\") " pod="openstack/rabbitmq-server-0"
Feb 03 12:27:33 crc kubenswrapper[4820]: I0203 12:27:33.901017 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/18ae976d-57fb-4c6e-8f3d-af9748d3058a-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"18ae976d-57fb-4c6e-8f3d-af9748d3058a\") " pod="openstack/rabbitmq-server-0"
Feb 03 12:27:33 crc kubenswrapper[4820]: I0203 12:27:33.901066 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6wpg2\" (UniqueName: \"kubernetes.io/projected/18ae976d-57fb-4c6e-8f3d-af9748d3058a-kube-api-access-6wpg2\") pod \"rabbitmq-server-0\" (UID: \"18ae976d-57fb-4c6e-8f3d-af9748d3058a\") " pod="openstack/rabbitmq-server-0"
Feb 03 12:27:33 crc kubenswrapper[4820]: I0203 12:27:33.901998 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/18ae976d-57fb-4c6e-8f3d-af9748d3058a-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"18ae976d-57fb-4c6e-8f3d-af9748d3058a\") " pod="openstack/rabbitmq-server-0"
Feb 03 12:27:33 crc kubenswrapper[4820]: I0203 12:27:33.902389 4820 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"rabbitmq-server-0\" (UID: \"18ae976d-57fb-4c6e-8f3d-af9748d3058a\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/rabbitmq-server-0"
Feb 03 12:27:33 crc kubenswrapper[4820]: I0203 12:27:33.902800 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/18ae976d-57fb-4c6e-8f3d-af9748d3058a-config-data\") pod \"rabbitmq-server-0\" (UID: \"18ae976d-57fb-4c6e-8f3d-af9748d3058a\") " pod="openstack/rabbitmq-server-0"
Feb 03 12:27:33 crc kubenswrapper[4820]: I0203 12:27:33.920296 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/18ae976d-57fb-4c6e-8f3d-af9748d3058a-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"18ae976d-57fb-4c6e-8f3d-af9748d3058a\") " pod="openstack/rabbitmq-server-0"
Feb 03 12:27:33 crc kubenswrapper[4820]: I0203 12:27:33.929230 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/18ae976d-57fb-4c6e-8f3d-af9748d3058a-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"18ae976d-57fb-4c6e-8f3d-af9748d3058a\") " pod="openstack/rabbitmq-server-0"
Feb 03 12:27:33 crc kubenswrapper[4820]: I0203 12:27:33.929571 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/18ae976d-57fb-4c6e-8f3d-af9748d3058a-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"18ae976d-57fb-4c6e-8f3d-af9748d3058a\") " pod="openstack/rabbitmq-server-0"
Feb 03 12:27:33 crc kubenswrapper[4820]: I0203 12:27:33.931432 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/18ae976d-57fb-4c6e-8f3d-af9748d3058a-server-conf\") pod \"rabbitmq-server-0\" (UID: \"18ae976d-57fb-4c6e-8f3d-af9748d3058a\") " pod="openstack/rabbitmq-server-0"
Feb 03 12:27:33 crc kubenswrapper[4820]: I0203 12:27:33.939704 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/18ae976d-57fb-4c6e-8f3d-af9748d3058a-pod-info\") pod \"rabbitmq-server-0\" (UID: \"18ae976d-57fb-4c6e-8f3d-af9748d3058a\") " pod="openstack/rabbitmq-server-0"
Feb 03 12:27:33 crc kubenswrapper[4820]: I0203 12:27:33.944179 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/18ae976d-57fb-4c6e-8f3d-af9748d3058a-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"18ae976d-57fb-4c6e-8f3d-af9748d3058a\") " pod="openstack/rabbitmq-server-0"
Feb 03 12:27:33 crc kubenswrapper[4820]: I0203 12:27:33.949611 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/18ae976d-57fb-4c6e-8f3d-af9748d3058a-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"18ae976d-57fb-4c6e-8f3d-af9748d3058a\") " pod="openstack/rabbitmq-server-0"
Feb 03 12:27:33 crc kubenswrapper[4820]: I0203 12:27:33.974238 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6wpg2\" (UniqueName: \"kubernetes.io/projected/18ae976d-57fb-4c6e-8f3d-af9748d3058a-kube-api-access-6wpg2\") pod \"rabbitmq-server-0\" (UID: \"18ae976d-57fb-4c6e-8f3d-af9748d3058a\") " pod="openstack/rabbitmq-server-0"
Feb 03 12:27:33 crc kubenswrapper[4820]: I0203 12:27:33.998445 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"rabbitmq-server-0\" (UID: \"18ae976d-57fb-4c6e-8f3d-af9748d3058a\") " pod="openstack/rabbitmq-server-0"
Feb 03 12:27:34 crc kubenswrapper[4820]: I0203 12:27:34.119729 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Feb 03 12:27:34 crc kubenswrapper[4820]: I0203 12:27:34.263031 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-xj7bd" event={"ID":"584c5d07-7ab7-44c4-8ba9-9edf834d4912","Type":"ContainerStarted","Data":"4e3445c846843aa96c1ea7967c594db586a51ed56232ab408f37df2a860ff5c3"}
Feb 03 12:27:34 crc kubenswrapper[4820]: I0203 12:27:34.784486 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-gthq5"]
Feb 03 12:27:35 crc kubenswrapper[4820]: I0203 12:27:35.003641 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"]
Feb 03 12:27:35 crc kubenswrapper[4820]: I0203 12:27:35.005521 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0"
Feb 03 12:27:35 crc kubenswrapper[4820]: I0203 12:27:35.011510 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts"
Feb 03 12:27:35 crc kubenswrapper[4820]: I0203 12:27:35.012461 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc"
Feb 03 12:27:35 crc kubenswrapper[4820]: I0203 12:27:35.012582 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data"
Feb 03 12:27:35 crc kubenswrapper[4820]: I0203 12:27:35.013140 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-cw5lr"
Feb 03 12:27:35 crc kubenswrapper[4820]: I0203 12:27:35.023854 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle"
Feb 03 12:27:35 crc kubenswrapper[4820]: I0203 12:27:35.044808 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"]
Feb 03 12:27:35 crc kubenswrapper[4820]: I0203 12:27:35.095688 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hplm6\" (UniqueName: \"kubernetes.io/projected/e8e46f8a-5de0-457f-b8eb-f76e8902e8ab-kube-api-access-hplm6\") pod \"openstack-galera-0\" (UID: \"e8e46f8a-5de0-457f-b8eb-f76e8902e8ab\") " pod="openstack/openstack-galera-0"
Feb 03 12:27:35 crc kubenswrapper[4820]: I0203 12:27:35.095839 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/e8e46f8a-5de0-457f-b8eb-f76e8902e8ab-kolla-config\") pod \"openstack-galera-0\" (UID: \"e8e46f8a-5de0-457f-b8eb-f76e8902e8ab\") " pod="openstack/openstack-galera-0"
Feb 03 12:27:35 crc kubenswrapper[4820]: I0203 12:27:35.095928 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8e46f8a-5de0-457f-b8eb-f76e8902e8ab-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"e8e46f8a-5de0-457f-b8eb-f76e8902e8ab\") " pod="openstack/openstack-galera-0"
Feb 03 12:27:35 crc kubenswrapper[4820]: I0203 12:27:35.096002 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/e8e46f8a-5de0-457f-b8eb-f76e8902e8ab-config-data-generated\") pod \"openstack-galera-0\" (UID: \"e8e46f8a-5de0-457f-b8eb-f76e8902e8ab\") " pod="openstack/openstack-galera-0"
Feb 03 12:27:35 crc kubenswrapper[4820]: I0203 12:27:35.096026 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-galera-0\" (UID: \"e8e46f8a-5de0-457f-b8eb-f76e8902e8ab\") " pod="openstack/openstack-galera-0"
Feb 03 12:27:35 crc kubenswrapper[4820]: I0203 12:27:35.096046 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e8e46f8a-5de0-457f-b8eb-f76e8902e8ab-operator-scripts\") pod \"openstack-galera-0\" (UID: \"e8e46f8a-5de0-457f-b8eb-f76e8902e8ab\") " pod="openstack/openstack-galera-0"
Feb 03 12:27:35 crc kubenswrapper[4820]: I0203 12:27:35.096135 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/e8e46f8a-5de0-457f-b8eb-f76e8902e8ab-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"e8e46f8a-5de0-457f-b8eb-f76e8902e8ab\") " pod="openstack/openstack-galera-0"
Feb 03 12:27:35 crc kubenswrapper[4820]: I0203 12:27:35.096200 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/e8e46f8a-5de0-457f-b8eb-f76e8902e8ab-config-data-default\") pod \"openstack-galera-0\" (UID: \"e8e46f8a-5de0-457f-b8eb-f76e8902e8ab\") " pod="openstack/openstack-galera-0"
Feb 03 12:27:35 crc kubenswrapper[4820]: I0203 12:27:35.197965 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hplm6\" (UniqueName: \"kubernetes.io/projected/e8e46f8a-5de0-457f-b8eb-f76e8902e8ab-kube-api-access-hplm6\") pod \"openstack-galera-0\" (UID: \"e8e46f8a-5de0-457f-b8eb-f76e8902e8ab\") " pod="openstack/openstack-galera-0"
Feb 03 12:27:35 crc kubenswrapper[4820]: I0203 12:27:35.198026 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/e8e46f8a-5de0-457f-b8eb-f76e8902e8ab-kolla-config\") pod \"openstack-galera-0\" (UID: \"e8e46f8a-5de0-457f-b8eb-f76e8902e8ab\") " pod="openstack/openstack-galera-0"
Feb 03 12:27:35 crc kubenswrapper[4820]: I0203 12:27:35.198063 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8e46f8a-5de0-457f-b8eb-f76e8902e8ab-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"e8e46f8a-5de0-457f-b8eb-f76e8902e8ab\") " pod="openstack/openstack-galera-0"
Feb 03 12:27:35 crc kubenswrapper[4820]: I0203 12:27:35.198112 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/e8e46f8a-5de0-457f-b8eb-f76e8902e8ab-config-data-generated\") pod \"openstack-galera-0\" (UID: \"e8e46f8a-5de0-457f-b8eb-f76e8902e8ab\") " pod="openstack/openstack-galera-0"
Feb 03 12:27:35 crc kubenswrapper[4820]: I0203 12:27:35.198160 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-galera-0\" (UID: \"e8e46f8a-5de0-457f-b8eb-f76e8902e8ab\") " pod="openstack/openstack-galera-0"
Feb 03 12:27:35 crc kubenswrapper[4820]: I0203 12:27:35.198188 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e8e46f8a-5de0-457f-b8eb-f76e8902e8ab-operator-scripts\") pod \"openstack-galera-0\" (UID: \"e8e46f8a-5de0-457f-b8eb-f76e8902e8ab\") " pod="openstack/openstack-galera-0"
Feb 03 12:27:35 crc kubenswrapper[4820]: I0203 12:27:35.198242 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/e8e46f8a-5de0-457f-b8eb-f76e8902e8ab-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"e8e46f8a-5de0-457f-b8eb-f76e8902e8ab\") " pod="openstack/openstack-galera-0"
Feb 03 12:27:35 crc kubenswrapper[4820]: I0203 12:27:35.198278 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/e8e46f8a-5de0-457f-b8eb-f76e8902e8ab-config-data-default\") pod \"openstack-galera-0\" (UID: \"e8e46f8a-5de0-457f-b8eb-f76e8902e8ab\") " pod="openstack/openstack-galera-0"
Feb 03 12:27:35 crc kubenswrapper[4820]: I0203 12:27:35.200016 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/e8e46f8a-5de0-457f-b8eb-f76e8902e8ab-config-data-generated\") pod \"openstack-galera-0\" (UID: \"e8e46f8a-5de0-457f-b8eb-f76e8902e8ab\") " pod="openstack/openstack-galera-0"
Feb 03 12:27:35 crc kubenswrapper[4820]: I0203 12:27:35.201289 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/e8e46f8a-5de0-457f-b8eb-f76e8902e8ab-kolla-config\") pod \"openstack-galera-0\" (UID: \"e8e46f8a-5de0-457f-b8eb-f76e8902e8ab\") " pod="openstack/openstack-galera-0"
Feb 03 12:27:35 crc kubenswrapper[4820]: I0203 12:27:35.205373 4820 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-galera-0\" (UID: \"e8e46f8a-5de0-457f-b8eb-f76e8902e8ab\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/openstack-galera-0"
Feb 03 12:27:35 crc kubenswrapper[4820]: I0203 12:27:35.214189 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/e8e46f8a-5de0-457f-b8eb-f76e8902e8ab-config-data-default\") pod \"openstack-galera-0\" (UID: \"e8e46f8a-5de0-457f-b8eb-f76e8902e8ab\") " pod="openstack/openstack-galera-0"
Feb 03 12:27:35 crc kubenswrapper[4820]: I0203 12:27:35.220740 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8e46f8a-5de0-457f-b8eb-f76e8902e8ab-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"e8e46f8a-5de0-457f-b8eb-f76e8902e8ab\") " pod="openstack/openstack-galera-0"
Feb 03 12:27:35 crc kubenswrapper[4820]: I0203 12:27:35.221122 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e8e46f8a-5de0-457f-b8eb-f76e8902e8ab-operator-scripts\") pod \"openstack-galera-0\" (UID: \"e8e46f8a-5de0-457f-b8eb-f76e8902e8ab\") " pod="openstack/openstack-galera-0"
Feb 03 12:27:35 crc kubenswrapper[4820]: I0203 12:27:35.225527 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/e8e46f8a-5de0-457f-b8eb-f76e8902e8ab-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"e8e46f8a-5de0-457f-b8eb-f76e8902e8ab\") " pod="openstack/openstack-galera-0"
Feb 03 12:27:35 crc kubenswrapper[4820]: I0203 12:27:35.233219 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hplm6\" (UniqueName: \"kubernetes.io/projected/e8e46f8a-5de0-457f-b8eb-f76e8902e8ab-kube-api-access-hplm6\") pod \"openstack-galera-0\" (UID: \"e8e46f8a-5de0-457f-b8eb-f76e8902e8ab\") " pod="openstack/openstack-galera-0"
Feb 03 12:27:35 crc kubenswrapper[4820]: I0203 12:27:35.236776 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"]
Feb 03 12:27:35 crc kubenswrapper[4820]: I0203 12:27:35.245441 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-galera-0\" (UID: \"e8e46f8a-5de0-457f-b8eb-f76e8902e8ab\") " pod="openstack/openstack-galera-0"
Feb 03 12:27:35 crc kubenswrapper[4820]: I0203 12:27:35.264852 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-gthq5" event={"ID":"5deb308f-ef2d-477f-a5ac-04055ea9b76f","Type":"ContainerStarted","Data":"736d9430272ec9b86c239268d47eb9a2e0c201a2d775f1e25b47dc907b1129ec"}
Feb 03 12:27:35 crc kubenswrapper[4820]: I0203 12:27:35.267879 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"18ae976d-57fb-4c6e-8f3d-af9748d3058a","Type":"ContainerStarted","Data":"0ec9aa4fbf3266740919ca7ff7726b09f4b2eed9b434942733dc3eee4f3cc140"}
Feb 03 12:27:35 crc kubenswrapper[4820]: I0203 12:27:35.348371 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0"
Feb 03 12:27:35 crc kubenswrapper[4820]: I0203 12:27:35.402080 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Feb 03 12:27:35 crc kubenswrapper[4820]: I0203 12:27:35.405324 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Feb 03 12:27:35 crc kubenswrapper[4820]: I0203 12:27:35.417359 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-jmczl"
Feb 03 12:27:35 crc kubenswrapper[4820]: I0203 12:27:35.419204 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user"
Feb 03 12:27:35 crc kubenswrapper[4820]: I0203 12:27:35.419464 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data"
Feb 03 12:27:35 crc kubenswrapper[4820]: I0203 12:27:35.419754 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf"
Feb 03 12:27:35 crc kubenswrapper[4820]: I0203 12:27:35.420081 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie"
Feb 03 12:27:35 crc kubenswrapper[4820]: I0203 12:27:35.420395 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc"
Feb 03 12:27:35 crc kubenswrapper[4820]: I0203 12:27:35.435191 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf"
Feb 03 12:27:35 crc kubenswrapper[4820]: I0203 12:27:35.527539 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Feb 03 12:27:35 crc kubenswrapper[4820]: I0203 12:27:35.541401 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/62eb6ec6-669b-476d-929f-919b7f533a5a-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"62eb6ec6-669b-476d-929f-919b7f533a5a\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 03 12:27:35 crc kubenswrapper[4820]: I0203 12:27:35.541520 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/62eb6ec6-669b-476d-929f-919b7f533a5a-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"62eb6ec6-669b-476d-929f-919b7f533a5a\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 03 12:27:35 crc kubenswrapper[4820]: I0203 12:27:35.541566 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"62eb6ec6-669b-476d-929f-919b7f533a5a\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 03 12:27:35 crc kubenswrapper[4820]: I0203 12:27:35.541613 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9bzp\" (UniqueName: \"kubernetes.io/projected/62eb6ec6-669b-476d-929f-919b7f533a5a-kube-api-access-k9bzp\") pod \"rabbitmq-cell1-server-0\" (UID: \"62eb6ec6-669b-476d-929f-919b7f533a5a\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 03 12:27:35 crc kubenswrapper[4820]: I0203 12:27:35.541703 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/62eb6ec6-669b-476d-929f-919b7f533a5a-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"62eb6ec6-669b-476d-929f-919b7f533a5a\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 03 12:27:35 crc kubenswrapper[4820]: I0203 12:27:35.541735 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/62eb6ec6-669b-476d-929f-919b7f533a5a-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"62eb6ec6-669b-476d-929f-919b7f533a5a\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 03 12:27:35 crc kubenswrapper[4820]: I0203 12:27:35.541763 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/62eb6ec6-669b-476d-929f-919b7f533a5a-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"62eb6ec6-669b-476d-929f-919b7f533a5a\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 03 12:27:35 crc kubenswrapper[4820]: I0203 12:27:35.541791 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/62eb6ec6-669b-476d-929f-919b7f533a5a-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"62eb6ec6-669b-476d-929f-919b7f533a5a\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 03 12:27:35 crc kubenswrapper[4820]: I0203 12:27:35.541813 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/62eb6ec6-669b-476d-929f-919b7f533a5a-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"62eb6ec6-669b-476d-929f-919b7f533a5a\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 03 12:27:35 crc kubenswrapper[4820]: I0203 12:27:35.541833 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/62eb6ec6-669b-476d-929f-919b7f533a5a-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"62eb6ec6-669b-476d-929f-919b7f533a5a\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 03 12:27:35 crc kubenswrapper[4820]: I0203 12:27:35.541899 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/62eb6ec6-669b-476d-929f-919b7f533a5a-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"62eb6ec6-669b-476d-929f-919b7f533a5a\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 03 12:27:35 crc kubenswrapper[4820]: I0203 12:27:35.643926 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/62eb6ec6-669b-476d-929f-919b7f533a5a-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"62eb6ec6-669b-476d-929f-919b7f533a5a\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 03 12:27:35 crc kubenswrapper[4820]: I0203 12:27:35.644078 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/62eb6ec6-669b-476d-929f-919b7f533a5a-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"62eb6ec6-669b-476d-929f-919b7f533a5a\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 03 12:27:35 crc kubenswrapper[4820]: I0203 12:27:35.644107 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"62eb6ec6-669b-476d-929f-919b7f533a5a\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 03 12:27:35 crc kubenswrapper[4820]: I0203 12:27:35.644420 4820 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"62eb6ec6-669b-476d-929f-919b7f533a5a\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/rabbitmq-cell1-server-0"
Feb 03 12:27:35 crc kubenswrapper[4820]: I0203 12:27:35.645221 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/62eb6ec6-669b-476d-929f-919b7f533a5a-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"62eb6ec6-669b-476d-929f-919b7f533a5a\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 03 12:27:35 crc kubenswrapper[4820]: I0203 12:27:35.648111 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k9bzp\" (UniqueName: \"kubernetes.io/projected/62eb6ec6-669b-476d-929f-919b7f533a5a-kube-api-access-k9bzp\") pod \"rabbitmq-cell1-server-0\" (UID: \"62eb6ec6-669b-476d-929f-919b7f533a5a\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 03 12:27:35 crc kubenswrapper[4820]: I0203 12:27:35.648175 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/62eb6ec6-669b-476d-929f-919b7f533a5a-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"62eb6ec6-669b-476d-929f-919b7f533a5a\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 03 12:27:35 crc kubenswrapper[4820]: I0203 12:27:35.649115 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/62eb6ec6-669b-476d-929f-919b7f533a5a-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"62eb6ec6-669b-476d-929f-919b7f533a5a\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 03 12:27:35 crc kubenswrapper[4820]: I0203 12:27:35.650476 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/62eb6ec6-669b-476d-929f-919b7f533a5a-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"62eb6ec6-669b-476d-929f-919b7f533a5a\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 03 12:27:35 crc kubenswrapper[4820]: I0203 12:27:35.652245 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/62eb6ec6-669b-476d-929f-919b7f533a5a-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"62eb6ec6-669b-476d-929f-919b7f533a5a\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 03 12:27:35 crc kubenswrapper[4820]: I0203 12:27:35.652328 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/62eb6ec6-669b-476d-929f-919b7f533a5a-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"62eb6ec6-669b-476d-929f-919b7f533a5a\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 03 12:27:35 crc kubenswrapper[4820]: I0203 12:27:35.652375 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/62eb6ec6-669b-476d-929f-919b7f533a5a-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"62eb6ec6-669b-476d-929f-919b7f533a5a\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 03 12:27:35 crc kubenswrapper[4820]: I0203 12:27:35.653330 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/62eb6ec6-669b-476d-929f-919b7f533a5a-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"62eb6ec6-669b-476d-929f-919b7f533a5a\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 03 12:27:35 crc kubenswrapper[4820]: I0203 12:27:35.653714 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/62eb6ec6-669b-476d-929f-919b7f533a5a-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"62eb6ec6-669b-476d-929f-919b7f533a5a\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 03 12:27:35 crc kubenswrapper[4820]: I0203 12:27:35.654956 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/62eb6ec6-669b-476d-929f-919b7f533a5a-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"62eb6ec6-669b-476d-929f-919b7f533a5a\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 03 12:27:35 crc kubenswrapper[4820]: I0203 12:27:35.652393 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/62eb6ec6-669b-476d-929f-919b7f533a5a-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"62eb6ec6-669b-476d-929f-919b7f533a5a\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 03 12:27:35 crc kubenswrapper[4820]: I0203 12:27:35.660880 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/62eb6ec6-669b-476d-929f-919b7f533a5a-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"62eb6ec6-669b-476d-929f-919b7f533a5a\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 03 12:27:35 crc kubenswrapper[4820]: I0203 12:27:35.660930 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/62eb6ec6-669b-476d-929f-919b7f533a5a-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"62eb6ec6-669b-476d-929f-919b7f533a5a\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 03 12:27:35 crc kubenswrapper[4820]: I0203 12:27:35.663240 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/62eb6ec6-669b-476d-929f-919b7f533a5a-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"62eb6ec6-669b-476d-929f-919b7f533a5a\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 03 12:27:35 crc kubenswrapper[4820]: I0203 12:27:35.671835 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k9bzp\" (UniqueName: \"kubernetes.io/projected/62eb6ec6-669b-476d-929f-919b7f533a5a-kube-api-access-k9bzp\") pod \"rabbitmq-cell1-server-0\" (UID: \"62eb6ec6-669b-476d-929f-919b7f533a5a\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 03 12:27:35 crc kubenswrapper[4820]: I0203 12:27:35.672051 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/62eb6ec6-669b-476d-929f-919b7f533a5a-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"62eb6ec6-669b-476d-929f-919b7f533a5a\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 03 12:27:35 crc kubenswrapper[4820]: I0203 12:27:35.672785 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/62eb6ec6-669b-476d-929f-919b7f533a5a-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"62eb6ec6-669b-476d-929f-919b7f533a5a\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 03 12:27:35 crc kubenswrapper[4820]: I0203 12:27:35.685875 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"62eb6ec6-669b-476d-929f-919b7f533a5a\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 03 12:27:35 crc kubenswrapper[4820]: I0203 12:27:35.748086 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Feb 03 12:27:36 crc kubenswrapper[4820]: I0203 12:27:36.652610 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"]
Feb 03 12:27:36 crc kubenswrapper[4820]: I0203 12:27:36.675881 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0"
Feb 03 12:27:36 crc kubenswrapper[4820]: I0203 12:27:36.689219 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts"
Feb 03 12:27:36 crc kubenswrapper[4820]: I0203 12:27:36.690905 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data"
Feb 03 12:27:36 crc kubenswrapper[4820]: I0203 12:27:36.691850 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"]
Feb 03 12:27:36 crc kubenswrapper[4820]: I0203 12:27:36.715179 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-vrn9q"
Feb 03 12:27:36 crc kubenswrapper[4820]: I0203 12:27:36.750739 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc"
Feb 03 12:27:36 crc kubenswrapper[4820]: I0203 12:27:36.806869 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"]
Feb 03 12:27:36 crc kubenswrapper[4820]: I0203 12:27:36.808386 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0"
Feb 03 12:27:36 crc kubenswrapper[4820]: I0203 12:27:36.825475 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-9s6xb"
Feb 03 12:27:36 crc kubenswrapper[4820]: I0203 12:27:36.825759 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data"
Feb 03 12:27:36 crc kubenswrapper[4820]: I0203 12:27:36.826124 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc"
Feb 03 12:27:36 crc kubenswrapper[4820]: I0203 12:27:36.867561 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"]
Feb 03 12:27:36 crc kubenswrapper[4820]: I0203 12:27:36.882642 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/1e865214-494f-4a49-a2e6-2b7316f30a92-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"1e865214-494f-4a49-a2e6-2b7316f30a92\") " pod="openstack/openstack-cell1-galera-0"
Feb 03 12:27:36 crc kubenswrapper[4820]: I0203 12:27:36.882703 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/1e865214-494f-4a49-a2e6-2b7316f30a92-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"1e865214-494f-4a49-a2e6-2b7316f30a92\") " pod="openstack/openstack-cell1-galera-0"
Feb 03 12:27:36 crc kubenswrapper[4820]: I0203 12:27:36.882792 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/1e865214-494f-4a49-a2e6-2b7316f30a92-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"1e865214-494f-4a49-a2e6-2b7316f30a92\") " pod="openstack/openstack-cell1-galera-0"
Feb 03 12:27:36 crc kubenswrapper[4820]: I0203 12:27:36.882839 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k8h7s\" (UniqueName: \"kubernetes.io/projected/1e865214-494f-4a49-a2e6-2b7316f30a92-kube-api-access-k8h7s\") pod \"openstack-cell1-galera-0\" (UID: \"1e865214-494f-4a49-a2e6-2b7316f30a92\") " pod="openstack/openstack-cell1-galera-0"
Feb 03 12:27:36 crc kubenswrapper[4820]: I0203 12:27:36.882914 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"openstack-cell1-galera-0\" (UID: \"1e865214-494f-4a49-a2e6-2b7316f30a92\") " pod="openstack/openstack-cell1-galera-0"
Feb 03 12:27:36 crc kubenswrapper[4820]: I0203 12:27:36.882985 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/1e865214-494f-4a49-a2e6-2b7316f30a92-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"1e865214-494f-4a49-a2e6-2b7316f30a92\") " pod="openstack/openstack-cell1-galera-0"
Feb 03 12:27:36 crc kubenswrapper[4820]: I0203 12:27:36.883367 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1e865214-494f-4a49-a2e6-2b7316f30a92-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"1e865214-494f-4a49-a2e6-2b7316f30a92\") " pod="openstack/openstack-cell1-galera-0"
Feb 03 12:27:36 crc kubenswrapper[4820]: I0203 12:27:36.883449 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e865214-494f-4a49-a2e6-2b7316f30a92-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"1e865214-494f-4a49-a2e6-2b7316f30a92\") " pod="openstack/openstack-cell1-galera-0"
Feb 03 12:27:36 crc kubenswrapper[4820]: I0203 12:27:36.986031 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/ace9a08e-e106-4d85-ae21-3d7d6ea60dff-kolla-config\") pod \"memcached-0\" (UID: \"ace9a08e-e106-4d85-ae21-3d7d6ea60dff\") " pod="openstack/memcached-0"
Feb 03 12:27:36 crc kubenswrapper[4820]: I0203 12:27:36.986105 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/1e865214-494f-4a49-a2e6-2b7316f30a92-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"1e865214-494f-4a49-a2e6-2b7316f30a92\") " pod="openstack/openstack-cell1-galera-0"
Feb 03 12:27:36 crc kubenswrapper[4820]: I0203 12:27:36.986162 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ace9a08e-e106-4d85-ae21-3d7d6ea60dff-combined-ca-bundle\") pod \"memcached-0\" (UID: \"ace9a08e-e106-4d85-ae21-3d7d6ea60dff\") " pod="openstack/memcached-0"
Feb 03 12:27:36 crc kubenswrapper[4820]: I0203 12:27:36.986202 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1e865214-494f-4a49-a2e6-2b7316f30a92-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"1e865214-494f-4a49-a2e6-2b7316f30a92\") " pod="openstack/openstack-cell1-galera-0"
Feb 03 12:27:36 crc kubenswrapper[4820]: I0203 12:27:36.986235 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ace9a08e-e106-4d85-ae21-3d7d6ea60dff-config-data\") pod \"memcached-0\" (UID: \"ace9a08e-e106-4d85-ae21-3d7d6ea60dff\") " pod="openstack/memcached-0"
Feb 03 12:27:36 crc kubenswrapper[4820]: I0203 12:27:36.986265 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e865214-494f-4a49-a2e6-2b7316f30a92-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"1e865214-494f-4a49-a2e6-2b7316f30a92\") " pod="openstack/openstack-cell1-galera-0"
Feb 03 12:27:36 crc kubenswrapper[4820]: I0203 12:27:36.986295 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/ace9a08e-e106-4d85-ae21-3d7d6ea60dff-memcached-tls-certs\") pod \"memcached-0\" (UID: \"ace9a08e-e106-4d85-ae21-3d7d6ea60dff\") " pod="openstack/memcached-0"
Feb 03 12:27:36 crc kubenswrapper[4820]: I0203 12:27:36.986334 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/1e865214-494f-4a49-a2e6-2b7316f30a92-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"1e865214-494f-4a49-a2e6-2b7316f30a92\") " pod="openstack/openstack-cell1-galera-0"
Feb 03 12:27:36 crc kubenswrapper[4820]: I0203 12:27:36.986362 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/1e865214-494f-4a49-a2e6-2b7316f30a92-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"1e865214-494f-4a49-a2e6-2b7316f30a92\") " pod="openstack/openstack-cell1-galera-0"
Feb 03 12:27:36 crc kubenswrapper[4820]: I0203 12:27:36.986385 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mvnzb\" (UniqueName: \"kubernetes.io/projected/ace9a08e-e106-4d85-ae21-3d7d6ea60dff-kube-api-access-mvnzb\") pod \"memcached-0\" (UID: \"ace9a08e-e106-4d85-ae21-3d7d6ea60dff\") " pod="openstack/memcached-0"
Feb 03 12:27:36 crc kubenswrapper[4820]: I0203 12:27:36.986438 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/1e865214-494f-4a49-a2e6-2b7316f30a92-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"1e865214-494f-4a49-a2e6-2b7316f30a92\") " pod="openstack/openstack-cell1-galera-0"
Feb 03 12:27:36 crc kubenswrapper[4820]: I0203 12:27:36.986471 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k8h7s\" (UniqueName: \"kubernetes.io/projected/1e865214-494f-4a49-a2e6-2b7316f30a92-kube-api-access-k8h7s\") pod \"openstack-cell1-galera-0\" (UID: \"1e865214-494f-4a49-a2e6-2b7316f30a92\") " pod="openstack/openstack-cell1-galera-0"
Feb 03 12:27:36 crc kubenswrapper[4820]: I0203 12:27:36.986517 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"openstack-cell1-galera-0\" (UID: \"1e865214-494f-4a49-a2e6-2b7316f30a92\") " pod="openstack/openstack-cell1-galera-0"
Feb 03 12:27:36 crc kubenswrapper[4820]: I0203 12:27:36.986789 4820 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"openstack-cell1-galera-0\" (UID: \"1e865214-494f-4a49-a2e6-2b7316f30a92\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/openstack-cell1-galera-0"
Feb 03 12:27:36 crc kubenswrapper[4820]: I0203 12:27:36.987248 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/1e865214-494f-4a49-a2e6-2b7316f30a92-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"1e865214-494f-4a49-a2e6-2b7316f30a92\") " pod="openstack/openstack-cell1-galera-0"
Feb 03 12:27:37 crc kubenswrapper[4820]: I0203 12:27:37.012105 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/1e865214-494f-4a49-a2e6-2b7316f30a92-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"1e865214-494f-4a49-a2e6-2b7316f30a92\") " pod="openstack/openstack-cell1-galera-0"
Feb 03 12:27:37 crc kubenswrapper[4820]: I0203 12:27:37.023849 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/1e865214-494f-4a49-a2e6-2b7316f30a92-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"1e865214-494f-4a49-a2e6-2b7316f30a92\") " pod="openstack/openstack-cell1-galera-0"
Feb 03 12:27:37 crc kubenswrapper[4820]: I0203 12:27:37.030997 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/1e865214-494f-4a49-a2e6-2b7316f30a92-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"1e865214-494f-4a49-a2e6-2b7316f30a92\") " pod="openstack/openstack-cell1-galera-0" Feb 03 12:27:37 crc kubenswrapper[4820]: I0203 12:27:37.073443 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/1e865214-494f-4a49-a2e6-2b7316f30a92-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"1e865214-494f-4a49-a2e6-2b7316f30a92\") " pod="openstack/openstack-cell1-galera-0" Feb 03 12:27:37 crc kubenswrapper[4820]: I0203 12:27:37.145754 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1e865214-494f-4a49-a2e6-2b7316f30a92-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"1e865214-494f-4a49-a2e6-2b7316f30a92\") " pod="openstack/openstack-cell1-galera-0" Feb 03 12:27:37 crc kubenswrapper[4820]: I0203 12:27:37.146168 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/ace9a08e-e106-4d85-ae21-3d7d6ea60dff-kolla-config\") pod \"memcached-0\" (UID: \"ace9a08e-e106-4d85-ae21-3d7d6ea60dff\") " pod="openstack/memcached-0" Feb 03 12:27:37 crc kubenswrapper[4820]: I0203 12:27:37.146285 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ace9a08e-e106-4d85-ae21-3d7d6ea60dff-combined-ca-bundle\") pod \"memcached-0\" (UID: \"ace9a08e-e106-4d85-ae21-3d7d6ea60dff\") " pod="openstack/memcached-0" Feb 03 12:27:37 crc kubenswrapper[4820]: I0203 12:27:37.146360 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ace9a08e-e106-4d85-ae21-3d7d6ea60dff-config-data\") pod \"memcached-0\" (UID: \"ace9a08e-e106-4d85-ae21-3d7d6ea60dff\") " pod="openstack/memcached-0" Feb 03 12:27:37 crc kubenswrapper[4820]: I0203 12:27:37.146408 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/ace9a08e-e106-4d85-ae21-3d7d6ea60dff-memcached-tls-certs\") pod \"memcached-0\" (UID: \"ace9a08e-e106-4d85-ae21-3d7d6ea60dff\") " pod="openstack/memcached-0" Feb 03 12:27:37 crc kubenswrapper[4820]: I0203 12:27:37.146481 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mvnzb\" (UniqueName: \"kubernetes.io/projected/ace9a08e-e106-4d85-ae21-3d7d6ea60dff-kube-api-access-mvnzb\") pod \"memcached-0\" (UID: \"ace9a08e-e106-4d85-ae21-3d7d6ea60dff\") " pod="openstack/memcached-0" Feb 03 12:27:37 crc kubenswrapper[4820]: I0203 12:27:37.154126 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/ace9a08e-e106-4d85-ae21-3d7d6ea60dff-kolla-config\") pod \"memcached-0\" (UID: \"ace9a08e-e106-4d85-ae21-3d7d6ea60dff\") " pod="openstack/memcached-0" Feb 03 12:27:37 crc kubenswrapper[4820]: I0203 12:27:37.216089 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ace9a08e-e106-4d85-ae21-3d7d6ea60dff-config-data\") pod \"memcached-0\" (UID: \"ace9a08e-e106-4d85-ae21-3d7d6ea60dff\") " pod="openstack/memcached-0" Feb 03 12:27:37 crc kubenswrapper[4820]: I0203 12:27:37.217521 4820 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-k8h7s\" (UniqueName: \"kubernetes.io/projected/1e865214-494f-4a49-a2e6-2b7316f30a92-kube-api-access-k8h7s\") pod \"openstack-cell1-galera-0\" (UID: \"1e865214-494f-4a49-a2e6-2b7316f30a92\") " pod="openstack/openstack-cell1-galera-0" Feb 03 12:27:37 crc kubenswrapper[4820]: I0203 12:27:37.221724 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ace9a08e-e106-4d85-ae21-3d7d6ea60dff-combined-ca-bundle\") pod \"memcached-0\" (UID: \"ace9a08e-e106-4d85-ae21-3d7d6ea60dff\") " pod="openstack/memcached-0" Feb 03 12:27:37 crc kubenswrapper[4820]: I0203 12:27:37.230852 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/ace9a08e-e106-4d85-ae21-3d7d6ea60dff-memcached-tls-certs\") pod \"memcached-0\" (UID: \"ace9a08e-e106-4d85-ae21-3d7d6ea60dff\") " pod="openstack/memcached-0" Feb 03 12:27:37 crc kubenswrapper[4820]: I0203 12:27:37.242760 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"openstack-cell1-galera-0\" (UID: \"1e865214-494f-4a49-a2e6-2b7316f30a92\") " pod="openstack/openstack-cell1-galera-0" Feb 03 12:27:37 crc kubenswrapper[4820]: I0203 12:27:37.245327 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mvnzb\" (UniqueName: \"kubernetes.io/projected/ace9a08e-e106-4d85-ae21-3d7d6ea60dff-kube-api-access-mvnzb\") pod \"memcached-0\" (UID: \"ace9a08e-e106-4d85-ae21-3d7d6ea60dff\") " pod="openstack/memcached-0" Feb 03 12:27:37 crc kubenswrapper[4820]: I0203 12:27:37.270216 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Feb 03 12:27:37 crc kubenswrapper[4820]: I0203 12:27:37.330426 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Feb 03 12:27:37 crc kubenswrapper[4820]: I0203 12:27:37.444154 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 03 12:27:37 crc kubenswrapper[4820]: I0203 12:27:37.461844 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Feb 03 12:27:37 crc kubenswrapper[4820]: I0203 12:27:37.925444 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"62eb6ec6-669b-476d-929f-919b7f533a5a","Type":"ContainerStarted","Data":"60cb45ce8b0ac8b7a5df2126023b00d100c70dae307674c4acd2bc7c0b89995c"} Feb 03 12:27:37 crc kubenswrapper[4820]: I0203 12:27:37.980079 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"e8e46f8a-5de0-457f-b8eb-f76e8902e8ab","Type":"ContainerStarted","Data":"50341b8861596fb77107f6027a46a9c1a1c4c8664af6c98d6c014fbc59a1a359"} Feb 03 12:27:38 crc kubenswrapper[4820]: I0203 12:27:38.111041 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 03 12:27:38 crc kubenswrapper[4820]: I0203 12:27:38.301587 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Feb 03 12:27:39 crc kubenswrapper[4820]: I0203 12:27:39.003399 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"ace9a08e-e106-4d85-ae21-3d7d6ea60dff","Type":"ContainerStarted","Data":"86a3d0bf6a3c8bc195144fb5a94165121b9fb0a9dc3a50b5f8d2da0e29d01e10"} Feb 03 12:27:39 crc kubenswrapper[4820]: I0203 12:27:39.010664 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"1e865214-494f-4a49-a2e6-2b7316f30a92","Type":"ContainerStarted","Data":"a2119ab1e6823a294fe72316ec1f98563f3ef8b8b78a8acbbd265976e0d3a0f6"} Feb 03 12:27:39 crc kubenswrapper[4820]: I0203 12:27:39.771829 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Feb 03 12:27:39 crc kubenswrapper[4820]: I0203 12:27:39.774176 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 03 12:27:39 crc kubenswrapper[4820]: I0203 12:27:39.779720 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-2sqjf" Feb 03 12:27:39 crc kubenswrapper[4820]: I0203 12:27:39.809223 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 03 12:27:39 crc kubenswrapper[4820]: I0203 12:27:39.855812 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xdsm6\" (UniqueName: \"kubernetes.io/projected/2ae1a10e-b84f-4533-940c-0688f69fae7c-kube-api-access-xdsm6\") pod \"kube-state-metrics-0\" (UID: \"2ae1a10e-b84f-4533-940c-0688f69fae7c\") " pod="openstack/kube-state-metrics-0" Feb 03 12:27:39 crc kubenswrapper[4820]: I0203 12:27:39.957466 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xdsm6\" (UniqueName: \"kubernetes.io/projected/2ae1a10e-b84f-4533-940c-0688f69fae7c-kube-api-access-xdsm6\") pod \"kube-state-metrics-0\" (UID: \"2ae1a10e-b84f-4533-940c-0688f69fae7c\") " pod="openstack/kube-state-metrics-0" Feb 03 12:27:40 crc kubenswrapper[4820]: I0203 12:27:40.032219 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xdsm6\" (UniqueName: \"kubernetes.io/projected/2ae1a10e-b84f-4533-940c-0688f69fae7c-kube-api-access-xdsm6\") pod \"kube-state-metrics-0\" (UID: \"2ae1a10e-b84f-4533-940c-0688f69fae7c\") " pod="openstack/kube-state-metrics-0" Feb 03 12:27:40 crc kubenswrapper[4820]: I0203 12:27:40.111277 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 03 12:27:40 crc kubenswrapper[4820]: I0203 12:27:40.983227 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-96p5d"] Feb 03 12:27:40 crc kubenswrapper[4820]: I0203 12:27:40.987986 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-96p5d" Feb 03 12:27:41 crc kubenswrapper[4820]: I0203 12:27:41.010867 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Feb 03 12:27:41 crc kubenswrapper[4820]: I0203 12:27:41.012791 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-xcmv2" Feb 03 12:27:41 crc kubenswrapper[4820]: I0203 12:27:41.013110 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Feb 03 12:27:41 crc kubenswrapper[4820]: I0203 12:27:41.091856 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-kk5zn"] Feb 03 12:27:41 crc kubenswrapper[4820]: I0203 12:27:41.130365 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-kk5zn" Feb 03 12:27:41 crc kubenswrapper[4820]: I0203 12:27:41.189400 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/b3b01895-53e1-4391-8d1e-8f2458d4f2e0-var-run-ovn\") pod \"ovn-controller-96p5d\" (UID: \"b3b01895-53e1-4391-8d1e-8f2458d4f2e0\") " pod="openstack/ovn-controller-96p5d" Feb 03 12:27:41 crc kubenswrapper[4820]: I0203 12:27:41.189465 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b3b01895-53e1-4391-8d1e-8f2458d4f2e0-scripts\") pod \"ovn-controller-96p5d\" (UID: \"b3b01895-53e1-4391-8d1e-8f2458d4f2e0\") " pod="openstack/ovn-controller-96p5d" Feb 03 12:27:41 crc kubenswrapper[4820]: I0203 12:27:41.189507 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3b01895-53e1-4391-8d1e-8f2458d4f2e0-ovn-controller-tls-certs\") pod \"ovn-controller-96p5d\" (UID: \"b3b01895-53e1-4391-8d1e-8f2458d4f2e0\") " pod="openstack/ovn-controller-96p5d" Feb 03 12:27:41 crc kubenswrapper[4820]: I0203 12:27:41.189536 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t8mt8\" (UniqueName: \"kubernetes.io/projected/b3b01895-53e1-4391-8d1e-8f2458d4f2e0-kube-api-access-t8mt8\") pod \"ovn-controller-96p5d\" (UID: \"b3b01895-53e1-4391-8d1e-8f2458d4f2e0\") " pod="openstack/ovn-controller-96p5d" Feb 03 12:27:41 crc kubenswrapper[4820]: I0203 12:27:41.189567 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3b01895-53e1-4391-8d1e-8f2458d4f2e0-combined-ca-bundle\") pod \"ovn-controller-96p5d\" (UID: \"b3b01895-53e1-4391-8d1e-8f2458d4f2e0\") " pod="openstack/ovn-controller-96p5d" Feb 03 12:27:41 crc kubenswrapper[4820]: I0203 12:27:41.189617 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/b3b01895-53e1-4391-8d1e-8f2458d4f2e0-var-log-ovn\") pod \"ovn-controller-96p5d\" (UID: 
\"b3b01895-53e1-4391-8d1e-8f2458d4f2e0\") " pod="openstack/ovn-controller-96p5d" Feb 03 12:27:41 crc kubenswrapper[4820]: I0203 12:27:41.189648 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/b3b01895-53e1-4391-8d1e-8f2458d4f2e0-var-run\") pod \"ovn-controller-96p5d\" (UID: \"b3b01895-53e1-4391-8d1e-8f2458d4f2e0\") " pod="openstack/ovn-controller-96p5d" Feb 03 12:27:41 crc kubenswrapper[4820]: I0203 12:27:41.217814 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-96p5d"] Feb 03 12:27:41 crc kubenswrapper[4820]: I0203 12:27:41.217862 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-kk5zn"] Feb 03 12:27:41 crc kubenswrapper[4820]: I0203 12:27:41.220513 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 03 12:27:41 crc kubenswrapper[4820]: I0203 12:27:41.296346 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/7fd50209-6464-4ba1-a7f9-ff9a38317ff2-var-log\") pod \"ovn-controller-ovs-kk5zn\" (UID: \"7fd50209-6464-4ba1-a7f9-ff9a38317ff2\") " pod="openstack/ovn-controller-ovs-kk5zn" Feb 03 12:27:41 crc kubenswrapper[4820]: I0203 12:27:41.296752 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/7fd50209-6464-4ba1-a7f9-ff9a38317ff2-etc-ovs\") pod \"ovn-controller-ovs-kk5zn\" (UID: \"7fd50209-6464-4ba1-a7f9-ff9a38317ff2\") " pod="openstack/ovn-controller-ovs-kk5zn" Feb 03 12:27:41 crc kubenswrapper[4820]: I0203 12:27:41.296781 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lr2gb\" (UniqueName: \"kubernetes.io/projected/7fd50209-6464-4ba1-a7f9-ff9a38317ff2-kube-api-access-lr2gb\") pod \"ovn-controller-ovs-kk5zn\" (UID: \"7fd50209-6464-4ba1-a7f9-ff9a38317ff2\") " pod="openstack/ovn-controller-ovs-kk5zn" Feb 03 12:27:41 crc kubenswrapper[4820]: I0203 12:27:41.296842 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/b3b01895-53e1-4391-8d1e-8f2458d4f2e0-var-run-ovn\") pod \"ovn-controller-96p5d\" (UID: \"b3b01895-53e1-4391-8d1e-8f2458d4f2e0\") " pod="openstack/ovn-controller-96p5d" Feb 03 12:27:41 crc kubenswrapper[4820]: I0203 12:27:41.296907 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b3b01895-53e1-4391-8d1e-8f2458d4f2e0-scripts\") pod \"ovn-controller-96p5d\" (UID: \"b3b01895-53e1-4391-8d1e-8f2458d4f2e0\") " pod="openstack/ovn-controller-96p5d" Feb 03 12:27:41 crc kubenswrapper[4820]: I0203 12:27:41.296948 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3b01895-53e1-4391-8d1e-8f2458d4f2e0-ovn-controller-tls-certs\") pod \"ovn-controller-96p5d\" (UID: \"b3b01895-53e1-4391-8d1e-8f2458d4f2e0\") " pod="openstack/ovn-controller-96p5d" Feb 03 12:27:41 crc kubenswrapper[4820]: I0203 12:27:41.296971 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t8mt8\" (UniqueName: \"kubernetes.io/projected/b3b01895-53e1-4391-8d1e-8f2458d4f2e0-kube-api-access-t8mt8\") pod \"ovn-controller-96p5d\" (UID: 
\"b3b01895-53e1-4391-8d1e-8f2458d4f2e0\") " pod="openstack/ovn-controller-96p5d" Feb 03 12:27:41 crc kubenswrapper[4820]: I0203 12:27:41.297015 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/7fd50209-6464-4ba1-a7f9-ff9a38317ff2-var-lib\") pod \"ovn-controller-ovs-kk5zn\" (UID: \"7fd50209-6464-4ba1-a7f9-ff9a38317ff2\") " pod="openstack/ovn-controller-ovs-kk5zn" Feb 03 12:27:41 crc kubenswrapper[4820]: I0203 12:27:41.297050 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3b01895-53e1-4391-8d1e-8f2458d4f2e0-combined-ca-bundle\") pod \"ovn-controller-96p5d\" (UID: \"b3b01895-53e1-4391-8d1e-8f2458d4f2e0\") " pod="openstack/ovn-controller-96p5d" Feb 03 12:27:41 crc kubenswrapper[4820]: I0203 12:27:41.297146 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/7fd50209-6464-4ba1-a7f9-ff9a38317ff2-var-run\") pod \"ovn-controller-ovs-kk5zn\" (UID: \"7fd50209-6464-4ba1-a7f9-ff9a38317ff2\") " pod="openstack/ovn-controller-ovs-kk5zn" Feb 03 12:27:41 crc kubenswrapper[4820]: I0203 12:27:41.297182 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7fd50209-6464-4ba1-a7f9-ff9a38317ff2-scripts\") pod \"ovn-controller-ovs-kk5zn\" (UID: \"7fd50209-6464-4ba1-a7f9-ff9a38317ff2\") " pod="openstack/ovn-controller-ovs-kk5zn" Feb 03 12:27:41 crc kubenswrapper[4820]: I0203 12:27:41.301491 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/b3b01895-53e1-4391-8d1e-8f2458d4f2e0-var-run-ovn\") pod \"ovn-controller-96p5d\" (UID: \"b3b01895-53e1-4391-8d1e-8f2458d4f2e0\") " pod="openstack/ovn-controller-96p5d" Feb 03 12:27:41 crc kubenswrapper[4820]: I0203 12:27:41.303349 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/b3b01895-53e1-4391-8d1e-8f2458d4f2e0-var-log-ovn\") pod \"ovn-controller-96p5d\" (UID: \"b3b01895-53e1-4391-8d1e-8f2458d4f2e0\") " pod="openstack/ovn-controller-96p5d" Feb 03 12:27:41 crc kubenswrapper[4820]: I0203 12:27:41.303448 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/b3b01895-53e1-4391-8d1e-8f2458d4f2e0-var-run\") pod \"ovn-controller-96p5d\" (UID: \"b3b01895-53e1-4391-8d1e-8f2458d4f2e0\") " pod="openstack/ovn-controller-96p5d" Feb 03 12:27:41 crc kubenswrapper[4820]: I0203 12:27:41.304597 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/b3b01895-53e1-4391-8d1e-8f2458d4f2e0-var-log-ovn\") pod \"ovn-controller-96p5d\" (UID: \"b3b01895-53e1-4391-8d1e-8f2458d4f2e0\") " pod="openstack/ovn-controller-96p5d" Feb 03 12:27:41 crc kubenswrapper[4820]: I0203 12:27:41.308366 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b3b01895-53e1-4391-8d1e-8f2458d4f2e0-scripts\") pod \"ovn-controller-96p5d\" (UID: \"b3b01895-53e1-4391-8d1e-8f2458d4f2e0\") " pod="openstack/ovn-controller-96p5d" Feb 03 12:27:41 crc kubenswrapper[4820]: I0203 12:27:41.308873 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"var-run\" (UniqueName: \"kubernetes.io/host-path/b3b01895-53e1-4391-8d1e-8f2458d4f2e0-var-run\") pod \"ovn-controller-96p5d\" (UID: \"b3b01895-53e1-4391-8d1e-8f2458d4f2e0\") " pod="openstack/ovn-controller-96p5d" Feb 03 12:27:41 crc kubenswrapper[4820]: I0203 12:27:41.309036 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 03 12:27:41 crc kubenswrapper[4820]: I0203 12:27:41.314185 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 03 12:27:41 crc kubenswrapper[4820]: I0203 12:27:41.322863 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-7hkds" Feb 03 12:27:41 crc kubenswrapper[4820]: I0203 12:27:41.323369 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Feb 03 12:27:41 crc kubenswrapper[4820]: I0203 12:27:41.334196 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3b01895-53e1-4391-8d1e-8f2458d4f2e0-combined-ca-bundle\") pod \"ovn-controller-96p5d\" (UID: \"b3b01895-53e1-4391-8d1e-8f2458d4f2e0\") " pod="openstack/ovn-controller-96p5d" Feb 03 12:27:41 crc kubenswrapper[4820]: I0203 12:27:41.336618 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Feb 03 12:27:41 crc kubenswrapper[4820]: I0203 12:27:41.338589 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Feb 03 12:27:41 crc kubenswrapper[4820]: I0203 12:27:41.338775 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Feb 03 12:27:41 crc kubenswrapper[4820]: I0203 12:27:41.338928 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Feb 03 12:27:41 crc kubenswrapper[4820]: I0203 12:27:41.340704 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Feb 03 12:27:41 crc kubenswrapper[4820]: I0203 12:27:41.341100 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Feb 03 12:27:41 crc kubenswrapper[4820]: I0203 12:27:41.345245 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3b01895-53e1-4391-8d1e-8f2458d4f2e0-ovn-controller-tls-certs\") pod \"ovn-controller-96p5d\" (UID: \"b3b01895-53e1-4391-8d1e-8f2458d4f2e0\") " pod="openstack/ovn-controller-96p5d" Feb 03 12:27:41 crc kubenswrapper[4820]: I0203 12:27:41.345419 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t8mt8\" (UniqueName: \"kubernetes.io/projected/b3b01895-53e1-4391-8d1e-8f2458d4f2e0-kube-api-access-t8mt8\") pod \"ovn-controller-96p5d\" (UID: \"b3b01895-53e1-4391-8d1e-8f2458d4f2e0\") " pod="openstack/ovn-controller-96p5d" Feb 03 12:27:41 crc kubenswrapper[4820]: I0203 12:27:41.351838 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 03 12:27:41 crc kubenswrapper[4820]: I0203 12:27:41.367939 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-96p5d" Feb 03 12:27:41 crc kubenswrapper[4820]: I0203 12:27:41.405496 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/7fd50209-6464-4ba1-a7f9-ff9a38317ff2-var-log\") pod \"ovn-controller-ovs-kk5zn\" (UID: \"7fd50209-6464-4ba1-a7f9-ff9a38317ff2\") " pod="openstack/ovn-controller-ovs-kk5zn" Feb 03 12:27:41 crc kubenswrapper[4820]: I0203 12:27:41.405547 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/7fd50209-6464-4ba1-a7f9-ff9a38317ff2-etc-ovs\") pod \"ovn-controller-ovs-kk5zn\" (UID: \"7fd50209-6464-4ba1-a7f9-ff9a38317ff2\") " pod="openstack/ovn-controller-ovs-kk5zn" Feb 03 12:27:41 crc kubenswrapper[4820]: I0203 12:27:41.405588 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lr2gb\" (UniqueName: \"kubernetes.io/projected/7fd50209-6464-4ba1-a7f9-ff9a38317ff2-kube-api-access-lr2gb\") pod \"ovn-controller-ovs-kk5zn\" (UID: \"7fd50209-6464-4ba1-a7f9-ff9a38317ff2\") " pod="openstack/ovn-controller-ovs-kk5zn" Feb 03 12:27:41 crc kubenswrapper[4820]: I0203 12:27:41.405631 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/7fd50209-6464-4ba1-a7f9-ff9a38317ff2-var-lib\") pod \"ovn-controller-ovs-kk5zn\" (UID: \"7fd50209-6464-4ba1-a7f9-ff9a38317ff2\") " pod="openstack/ovn-controller-ovs-kk5zn" Feb 03 12:27:41 crc kubenswrapper[4820]: I0203 12:27:41.405670 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/7fd50209-6464-4ba1-a7f9-ff9a38317ff2-var-run\") pod \"ovn-controller-ovs-kk5zn\" (UID: \"7fd50209-6464-4ba1-a7f9-ff9a38317ff2\") " pod="openstack/ovn-controller-ovs-kk5zn" Feb 03 12:27:41 crc kubenswrapper[4820]: I0203 12:27:41.405691 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7fd50209-6464-4ba1-a7f9-ff9a38317ff2-scripts\") pod \"ovn-controller-ovs-kk5zn\" (UID: \"7fd50209-6464-4ba1-a7f9-ff9a38317ff2\") " pod="openstack/ovn-controller-ovs-kk5zn" Feb 03 12:27:41 crc kubenswrapper[4820]: I0203 12:27:41.406580 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/7fd50209-6464-4ba1-a7f9-ff9a38317ff2-var-log\") pod \"ovn-controller-ovs-kk5zn\" (UID: \"7fd50209-6464-4ba1-a7f9-ff9a38317ff2\") " pod="openstack/ovn-controller-ovs-kk5zn" Feb 03 12:27:41 crc kubenswrapper[4820]: I0203 12:27:41.406792 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/7fd50209-6464-4ba1-a7f9-ff9a38317ff2-var-lib\") pod \"ovn-controller-ovs-kk5zn\" (UID: \"7fd50209-6464-4ba1-a7f9-ff9a38317ff2\") " pod="openstack/ovn-controller-ovs-kk5zn" Feb 03 12:27:41 crc kubenswrapper[4820]: I0203 12:27:41.406856 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/7fd50209-6464-4ba1-a7f9-ff9a38317ff2-var-run\") pod \"ovn-controller-ovs-kk5zn\" (UID: \"7fd50209-6464-4ba1-a7f9-ff9a38317ff2\") " pod="openstack/ovn-controller-ovs-kk5zn" Feb 03 12:27:41 crc kubenswrapper[4820]: I0203 12:27:41.407046 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: 
\"kubernetes.io/host-path/7fd50209-6464-4ba1-a7f9-ff9a38317ff2-etc-ovs\") pod \"ovn-controller-ovs-kk5zn\" (UID: \"7fd50209-6464-4ba1-a7f9-ff9a38317ff2\") " pod="openstack/ovn-controller-ovs-kk5zn" Feb 03 12:27:41 crc kubenswrapper[4820]: I0203 12:27:41.408383 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7fd50209-6464-4ba1-a7f9-ff9a38317ff2-scripts\") pod \"ovn-controller-ovs-kk5zn\" (UID: \"7fd50209-6464-4ba1-a7f9-ff9a38317ff2\") " pod="openstack/ovn-controller-ovs-kk5zn" Feb 03 12:27:41 crc kubenswrapper[4820]: I0203 12:27:41.445916 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lr2gb\" (UniqueName: \"kubernetes.io/projected/7fd50209-6464-4ba1-a7f9-ff9a38317ff2-kube-api-access-lr2gb\") pod \"ovn-controller-ovs-kk5zn\" (UID: \"7fd50209-6464-4ba1-a7f9-ff9a38317ff2\") " pod="openstack/ovn-controller-ovs-kk5zn" Feb 03 12:27:41 crc kubenswrapper[4820]: I0203 12:27:41.508434 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-9d241bdd-b4a8-44a7-af98-0d864047887a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9d241bdd-b4a8-44a7-af98-0d864047887a\") pod \"prometheus-metric-storage-0\" (UID: \"7b4c739c-d87f-478c-aec7-07a49da53d46\") " pod="openstack/prometheus-metric-storage-0" Feb 03 12:27:41 crc kubenswrapper[4820]: I0203 12:27:41.510236 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/7b4c739c-d87f-478c-aec7-07a49da53d46-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"7b4c739c-d87f-478c-aec7-07a49da53d46\") " pod="openstack/prometheus-metric-storage-0" Feb 03 12:27:41 crc kubenswrapper[4820]: I0203 12:27:41.510469 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/7b4c739c-d87f-478c-aec7-07a49da53d46-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"7b4c739c-d87f-478c-aec7-07a49da53d46\") " pod="openstack/prometheus-metric-storage-0" Feb 03 12:27:41 crc kubenswrapper[4820]: I0203 12:27:41.510581 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5cnmd\" (UniqueName: \"kubernetes.io/projected/7b4c739c-d87f-478c-aec7-07a49da53d46-kube-api-access-5cnmd\") pod \"prometheus-metric-storage-0\" (UID: \"7b4c739c-d87f-478c-aec7-07a49da53d46\") " pod="openstack/prometheus-metric-storage-0" Feb 03 12:27:41 crc kubenswrapper[4820]: I0203 12:27:41.510731 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/7b4c739c-d87f-478c-aec7-07a49da53d46-config\") pod \"prometheus-metric-storage-0\" (UID: \"7b4c739c-d87f-478c-aec7-07a49da53d46\") " pod="openstack/prometheus-metric-storage-0" Feb 03 12:27:41 crc kubenswrapper[4820]: I0203 12:27:41.510833 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/7b4c739c-d87f-478c-aec7-07a49da53d46-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"7b4c739c-d87f-478c-aec7-07a49da53d46\") " pod="openstack/prometheus-metric-storage-0" Feb 03 12:27:41 crc kubenswrapper[4820]: I0203 12:27:41.510960 
4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/7b4c739c-d87f-478c-aec7-07a49da53d46-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"7b4c739c-d87f-478c-aec7-07a49da53d46\") " pod="openstack/prometheus-metric-storage-0" Feb 03 12:27:41 crc kubenswrapper[4820]: I0203 12:27:41.511113 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/7b4c739c-d87f-478c-aec7-07a49da53d46-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"7b4c739c-d87f-478c-aec7-07a49da53d46\") " pod="openstack/prometheus-metric-storage-0" Feb 03 12:27:41 crc kubenswrapper[4820]: I0203 12:27:41.511258 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/7b4c739c-d87f-478c-aec7-07a49da53d46-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"7b4c739c-d87f-478c-aec7-07a49da53d46\") " pod="openstack/prometheus-metric-storage-0" Feb 03 12:27:41 crc kubenswrapper[4820]: I0203 12:27:41.511449 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/7b4c739c-d87f-478c-aec7-07a49da53d46-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"7b4c739c-d87f-478c-aec7-07a49da53d46\") " pod="openstack/prometheus-metric-storage-0" Feb 03 12:27:41 crc kubenswrapper[4820]: I0203 12:27:41.527078 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-kk5zn" Feb 03 12:27:41 crc kubenswrapper[4820]: I0203 12:27:41.619679 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/7b4c739c-d87f-478c-aec7-07a49da53d46-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"7b4c739c-d87f-478c-aec7-07a49da53d46\") " pod="openstack/prometheus-metric-storage-0" Feb 03 12:27:41 crc kubenswrapper[4820]: I0203 12:27:41.619786 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/7b4c739c-d87f-478c-aec7-07a49da53d46-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"7b4c739c-d87f-478c-aec7-07a49da53d46\") " pod="openstack/prometheus-metric-storage-0" Feb 03 12:27:41 crc kubenswrapper[4820]: I0203 12:27:41.619827 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5cnmd\" (UniqueName: \"kubernetes.io/projected/7b4c739c-d87f-478c-aec7-07a49da53d46-kube-api-access-5cnmd\") pod \"prometheus-metric-storage-0\" (UID: \"7b4c739c-d87f-478c-aec7-07a49da53d46\") " pod="openstack/prometheus-metric-storage-0" Feb 03 12:27:41 crc kubenswrapper[4820]: I0203 12:27:41.619858 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/7b4c739c-d87f-478c-aec7-07a49da53d46-config\") pod \"prometheus-metric-storage-0\" (UID: \"7b4c739c-d87f-478c-aec7-07a49da53d46\") " pod="openstack/prometheus-metric-storage-0" Feb 03 12:27:41 crc kubenswrapper[4820]: I0203 12:27:41.619919 4820 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/7b4c739c-d87f-478c-aec7-07a49da53d46-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"7b4c739c-d87f-478c-aec7-07a49da53d46\") " pod="openstack/prometheus-metric-storage-0" Feb 03 12:27:41 crc kubenswrapper[4820]: I0203 12:27:41.619955 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/7b4c739c-d87f-478c-aec7-07a49da53d46-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"7b4c739c-d87f-478c-aec7-07a49da53d46\") " pod="openstack/prometheus-metric-storage-0" Feb 03 12:27:41 crc kubenswrapper[4820]: I0203 12:27:41.620414 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/7b4c739c-d87f-478c-aec7-07a49da53d46-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"7b4c739c-d87f-478c-aec7-07a49da53d46\") " pod="openstack/prometheus-metric-storage-0" Feb 03 12:27:41 crc kubenswrapper[4820]: I0203 12:27:41.620486 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/7b4c739c-d87f-478c-aec7-07a49da53d46-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"7b4c739c-d87f-478c-aec7-07a49da53d46\") " pod="openstack/prometheus-metric-storage-0" Feb 03 12:27:41 crc kubenswrapper[4820]: I0203 12:27:41.620537 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/7b4c739c-d87f-478c-aec7-07a49da53d46-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"7b4c739c-d87f-478c-aec7-07a49da53d46\") " pod="openstack/prometheus-metric-storage-0" Feb 03 12:27:41 crc kubenswrapper[4820]: I0203 12:27:41.620582 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-9d241bdd-b4a8-44a7-af98-0d864047887a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9d241bdd-b4a8-44a7-af98-0d864047887a\") pod \"prometheus-metric-storage-0\" (UID: \"7b4c739c-d87f-478c-aec7-07a49da53d46\") " pod="openstack/prometheus-metric-storage-0" Feb 03 12:27:41 crc kubenswrapper[4820]: I0203 12:27:41.625463 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/7b4c739c-d87f-478c-aec7-07a49da53d46-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"7b4c739c-d87f-478c-aec7-07a49da53d46\") " pod="openstack/prometheus-metric-storage-0" Feb 03 12:27:41 crc kubenswrapper[4820]: I0203 12:27:41.631372 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/7b4c739c-d87f-478c-aec7-07a49da53d46-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"7b4c739c-d87f-478c-aec7-07a49da53d46\") " pod="openstack/prometheus-metric-storage-0" Feb 03 12:27:41 crc kubenswrapper[4820]: I0203 12:27:41.638529 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/7b4c739c-d87f-478c-aec7-07a49da53d46-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"7b4c739c-d87f-478c-aec7-07a49da53d46\") 
" pod="openstack/prometheus-metric-storage-0" Feb 03 12:27:41 crc kubenswrapper[4820]: I0203 12:27:41.640978 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/7b4c739c-d87f-478c-aec7-07a49da53d46-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"7b4c739c-d87f-478c-aec7-07a49da53d46\") " pod="openstack/prometheus-metric-storage-0" Feb 03 12:27:41 crc kubenswrapper[4820]: I0203 12:27:41.642772 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/7b4c739c-d87f-478c-aec7-07a49da53d46-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"7b4c739c-d87f-478c-aec7-07a49da53d46\") " pod="openstack/prometheus-metric-storage-0" Feb 03 12:27:41 crc kubenswrapper[4820]: I0203 12:27:41.643290 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/7b4c739c-d87f-478c-aec7-07a49da53d46-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"7b4c739c-d87f-478c-aec7-07a49da53d46\") " pod="openstack/prometheus-metric-storage-0" Feb 03 12:27:41 crc kubenswrapper[4820]: I0203 12:27:41.645360 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/7b4c739c-d87f-478c-aec7-07a49da53d46-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"7b4c739c-d87f-478c-aec7-07a49da53d46\") " pod="openstack/prometheus-metric-storage-0" Feb 03 12:27:41 crc kubenswrapper[4820]: I0203 12:27:41.645819 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/7b4c739c-d87f-478c-aec7-07a49da53d46-config\") pod \"prometheus-metric-storage-0\" (UID: \"7b4c739c-d87f-478c-aec7-07a49da53d46\") " pod="openstack/prometheus-metric-storage-0" Feb 03 12:27:41 crc kubenswrapper[4820]: I0203 12:27:41.660858 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5cnmd\" (UniqueName: \"kubernetes.io/projected/7b4c739c-d87f-478c-aec7-07a49da53d46-kube-api-access-5cnmd\") pod \"prometheus-metric-storage-0\" (UID: \"7b4c739c-d87f-478c-aec7-07a49da53d46\") " pod="openstack/prometheus-metric-storage-0" Feb 03 12:27:41 crc kubenswrapper[4820]: I0203 12:27:41.670902 4820 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 03 12:27:41 crc kubenswrapper[4820]: I0203 12:27:41.671015 4820 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-9d241bdd-b4a8-44a7-af98-0d864047887a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9d241bdd-b4a8-44a7-af98-0d864047887a\") pod \"prometheus-metric-storage-0\" (UID: \"7b4c739c-d87f-478c-aec7-07a49da53d46\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/f3f5a1a6665956e69b060824525b6e14f682a7b73f5e11dfb7e9e70ac872e663/globalmount\"" pod="openstack/prometheus-metric-storage-0" Feb 03 12:27:41 crc kubenswrapper[4820]: I0203 12:27:41.766407 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-9d241bdd-b4a8-44a7-af98-0d864047887a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9d241bdd-b4a8-44a7-af98-0d864047887a\") pod \"prometheus-metric-storage-0\" (UID: \"7b4c739c-d87f-478c-aec7-07a49da53d46\") " pod="openstack/prometheus-metric-storage-0" Feb 03 12:27:42 crc kubenswrapper[4820]: I0203 12:27:42.042043 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 03 12:27:42 crc kubenswrapper[4820]: I0203 12:27:42.244286 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"2ae1a10e-b84f-4533-940c-0688f69fae7c","Type":"ContainerStarted","Data":"dbf87e01d472aecc62ac1d3e5903bbb025c5c5a06ccc72c52295f86e933a2fb9"} Feb 03 12:27:42 crc kubenswrapper[4820]: I0203 12:27:42.303598 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-96p5d"] Feb 03 12:27:42 crc kubenswrapper[4820]: I0203 12:27:42.792308 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 03 12:27:42 crc kubenswrapper[4820]: I0203 12:27:42.794097 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Feb 03 12:27:42 crc kubenswrapper[4820]: I0203 12:27:42.803853 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Feb 03 12:27:42 crc kubenswrapper[4820]: I0203 12:27:42.804734 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Feb 03 12:27:42 crc kubenswrapper[4820]: I0203 12:27:42.804947 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-4lvlq" Feb 03 12:27:42 crc kubenswrapper[4820]: I0203 12:27:42.804978 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Feb 03 12:27:42 crc kubenswrapper[4820]: I0203 12:27:42.808474 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Feb 03 12:27:42 crc kubenswrapper[4820]: I0203 12:27:42.840535 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 03 12:27:42 crc kubenswrapper[4820]: I0203 12:27:42.854732 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f936af63-a86d-4dc6-aa17-59e2e2b69f5b-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"f936af63-a86d-4dc6-aa17-59e2e2b69f5b\") " pod="openstack/ovsdbserver-nb-0" Feb 03 12:27:42 crc kubenswrapper[4820]: I0203 12:27:42.854796 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f936af63-a86d-4dc6-aa17-59e2e2b69f5b-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"f936af63-a86d-4dc6-aa17-59e2e2b69f5b\") " pod="openstack/ovsdbserver-nb-0" Feb 03 12:27:42 crc kubenswrapper[4820]: I0203 12:27:42.854827 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f936af63-a86d-4dc6-aa17-59e2e2b69f5b-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"f936af63-a86d-4dc6-aa17-59e2e2b69f5b\") " pod="openstack/ovsdbserver-nb-0" Feb 03 12:27:42 crc kubenswrapper[4820]: I0203 12:27:42.855400 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ovsdbserver-nb-0\" (UID: \"f936af63-a86d-4dc6-aa17-59e2e2b69f5b\") " pod="openstack/ovsdbserver-nb-0" Feb 03 12:27:42 crc kubenswrapper[4820]: I0203 12:27:42.855450 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/f936af63-a86d-4dc6-aa17-59e2e2b69f5b-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"f936af63-a86d-4dc6-aa17-59e2e2b69f5b\") " pod="openstack/ovsdbserver-nb-0" Feb 03 12:27:42 crc kubenswrapper[4820]: I0203 12:27:42.855510 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lnfhh\" (UniqueName: \"kubernetes.io/projected/f936af63-a86d-4dc6-aa17-59e2e2b69f5b-kube-api-access-lnfhh\") pod \"ovsdbserver-nb-0\" (UID: \"f936af63-a86d-4dc6-aa17-59e2e2b69f5b\") " pod="openstack/ovsdbserver-nb-0" Feb 03 12:27:42 crc kubenswrapper[4820]: I0203 12:27:42.855587 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f936af63-a86d-4dc6-aa17-59e2e2b69f5b-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"f936af63-a86d-4dc6-aa17-59e2e2b69f5b\") " pod="openstack/ovsdbserver-nb-0" Feb 03 12:27:42 crc kubenswrapper[4820]: I0203 12:27:42.855630 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f936af63-a86d-4dc6-aa17-59e2e2b69f5b-config\") pod \"ovsdbserver-nb-0\" (UID: \"f936af63-a86d-4dc6-aa17-59e2e2b69f5b\") " pod="openstack/ovsdbserver-nb-0" Feb 03 12:27:42 crc kubenswrapper[4820]: I0203 12:27:42.940878 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-kk5zn"] Feb 03 12:27:42 crc kubenswrapper[4820]: I0203 12:27:42.961301 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f936af63-a86d-4dc6-aa17-59e2e2b69f5b-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"f936af63-a86d-4dc6-aa17-59e2e2b69f5b\") " pod="openstack/ovsdbserver-nb-0" Feb 03 12:27:42 crc kubenswrapper[4820]: I0203 12:27:42.962540 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ovsdbserver-nb-0\" (UID: \"f936af63-a86d-4dc6-aa17-59e2e2b69f5b\") " pod="openstack/ovsdbserver-nb-0" Feb 03 12:27:42 crc kubenswrapper[4820]: I0203 12:27:42.962577 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/f936af63-a86d-4dc6-aa17-59e2e2b69f5b-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"f936af63-a86d-4dc6-aa17-59e2e2b69f5b\") " pod="openstack/ovsdbserver-nb-0" Feb 03 12:27:42 crc kubenswrapper[4820]: I0203 12:27:42.962655 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lnfhh\" (UniqueName: \"kubernetes.io/projected/f936af63-a86d-4dc6-aa17-59e2e2b69f5b-kube-api-access-lnfhh\") pod \"ovsdbserver-nb-0\" (UID: \"f936af63-a86d-4dc6-aa17-59e2e2b69f5b\") " pod="openstack/ovsdbserver-nb-0" Feb 03 12:27:42 crc kubenswrapper[4820]: I0203 12:27:42.962722 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f936af63-a86d-4dc6-aa17-59e2e2b69f5b-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"f936af63-a86d-4dc6-aa17-59e2e2b69f5b\") " pod="openstack/ovsdbserver-nb-0" Feb 03 12:27:42 crc kubenswrapper[4820]: I0203 12:27:42.962752 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f936af63-a86d-4dc6-aa17-59e2e2b69f5b-config\") pod \"ovsdbserver-nb-0\" (UID: \"f936af63-a86d-4dc6-aa17-59e2e2b69f5b\") " pod="openstack/ovsdbserver-nb-0" Feb 03 12:27:42 crc kubenswrapper[4820]: I0203 12:27:42.962945 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f936af63-a86d-4dc6-aa17-59e2e2b69f5b-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"f936af63-a86d-4dc6-aa17-59e2e2b69f5b\") " pod="openstack/ovsdbserver-nb-0" Feb 03 12:27:42 crc kubenswrapper[4820]: I0203 12:27:42.962986 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/f936af63-a86d-4dc6-aa17-59e2e2b69f5b-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"f936af63-a86d-4dc6-aa17-59e2e2b69f5b\") " pod="openstack/ovsdbserver-nb-0" Feb 03 12:27:42 crc kubenswrapper[4820]: I0203 12:27:42.964310 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f936af63-a86d-4dc6-aa17-59e2e2b69f5b-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"f936af63-a86d-4dc6-aa17-59e2e2b69f5b\") " pod="openstack/ovsdbserver-nb-0" Feb 03 12:27:42 crc kubenswrapper[4820]: I0203 12:27:42.964701 4820 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ovsdbserver-nb-0\" (UID: \"f936af63-a86d-4dc6-aa17-59e2e2b69f5b\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/ovsdbserver-nb-0" Feb 03 12:27:42 crc kubenswrapper[4820]: I0203 12:27:42.970276 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f936af63-a86d-4dc6-aa17-59e2e2b69f5b-config\") pod \"ovsdbserver-nb-0\" (UID: \"f936af63-a86d-4dc6-aa17-59e2e2b69f5b\") " pod="openstack/ovsdbserver-nb-0" Feb 03 12:27:42 crc kubenswrapper[4820]: I0203 12:27:42.971966 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/f936af63-a86d-4dc6-aa17-59e2e2b69f5b-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"f936af63-a86d-4dc6-aa17-59e2e2b69f5b\") " pod="openstack/ovsdbserver-nb-0" Feb 03 12:27:42 crc kubenswrapper[4820]: I0203 12:27:42.989975 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f936af63-a86d-4dc6-aa17-59e2e2b69f5b-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"f936af63-a86d-4dc6-aa17-59e2e2b69f5b\") " pod="openstack/ovsdbserver-nb-0" Feb 03 12:27:42 crc kubenswrapper[4820]: I0203 12:27:42.991014 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f936af63-a86d-4dc6-aa17-59e2e2b69f5b-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"f936af63-a86d-4dc6-aa17-59e2e2b69f5b\") " pod="openstack/ovsdbserver-nb-0" Feb 03 12:27:42 crc kubenswrapper[4820]: I0203 12:27:42.992012 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f936af63-a86d-4dc6-aa17-59e2e2b69f5b-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"f936af63-a86d-4dc6-aa17-59e2e2b69f5b\") " pod="openstack/ovsdbserver-nb-0" Feb 03 12:27:42 crc kubenswrapper[4820]: I0203 12:27:42.992149 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lnfhh\" (UniqueName: \"kubernetes.io/projected/f936af63-a86d-4dc6-aa17-59e2e2b69f5b-kube-api-access-lnfhh\") pod \"ovsdbserver-nb-0\" (UID: \"f936af63-a86d-4dc6-aa17-59e2e2b69f5b\") " pod="openstack/ovsdbserver-nb-0" Feb 03 12:27:43 crc kubenswrapper[4820]: I0203 12:27:43.013843 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 03 12:27:43 crc kubenswrapper[4820]: I0203 12:27:43.030670 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ovsdbserver-nb-0\" (UID: \"f936af63-a86d-4dc6-aa17-59e2e2b69f5b\") " 
pod="openstack/ovsdbserver-nb-0" Feb 03 12:27:43 crc kubenswrapper[4820]: I0203 12:27:43.154978 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-4lvlq" Feb 03 12:27:43 crc kubenswrapper[4820]: I0203 12:27:43.160525 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Feb 03 12:27:43 crc kubenswrapper[4820]: W0203 12:27:43.290471 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7b4c739c_d87f_478c_aec7_07a49da53d46.slice/crio-5a62058bb2d742ff3950a86fbea0cfa431b4e4363dce8504c20e35118d6654b3 WatchSource:0}: Error finding container 5a62058bb2d742ff3950a86fbea0cfa431b4e4363dce8504c20e35118d6654b3: Status 404 returned error can't find the container with id 5a62058bb2d742ff3950a86fbea0cfa431b4e4363dce8504c20e35118d6654b3 Feb 03 12:27:43 crc kubenswrapper[4820]: W0203 12:27:43.291222 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7fd50209_6464_4ba1_a7f9_ff9a38317ff2.slice/crio-4483317b8f10209e7aefa7fd07468a87aac80a72285b273a28f261f8545e1b18 WatchSource:0}: Error finding container 4483317b8f10209e7aefa7fd07468a87aac80a72285b273a28f261f8545e1b18: Status 404 returned error can't find the container with id 4483317b8f10209e7aefa7fd07468a87aac80a72285b273a28f261f8545e1b18 Feb 03 12:27:43 crc kubenswrapper[4820]: I0203 12:27:43.322744 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-96p5d" event={"ID":"b3b01895-53e1-4391-8d1e-8f2458d4f2e0","Type":"ContainerStarted","Data":"9f7ca11daf0577c2f58f0d711be6f138f7897fd6ec1ddfd6fa62954b401b8a45"} Feb 03 12:27:43 crc kubenswrapper[4820]: E0203 12:27:43.337030 4820 manager.go:1116] Failed to create existing container: /kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7fd50209_6464_4ba1_a7f9_ff9a38317ff2.slice/crio-4483317b8f10209e7aefa7fd07468a87aac80a72285b273a28f261f8545e1b18: Error finding container 4483317b8f10209e7aefa7fd07468a87aac80a72285b273a28f261f8545e1b18: Status 404 returned error can't find the container with id 4483317b8f10209e7aefa7fd07468a87aac80a72285b273a28f261f8545e1b18 Feb 03 12:27:44 crc kubenswrapper[4820]: I0203 12:27:44.401668 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-kk5zn" event={"ID":"7fd50209-6464-4ba1-a7f9-ff9a38317ff2","Type":"ContainerStarted","Data":"4483317b8f10209e7aefa7fd07468a87aac80a72285b273a28f261f8545e1b18"} Feb 03 12:27:44 crc kubenswrapper[4820]: I0203 12:27:44.406004 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"7b4c739c-d87f-478c-aec7-07a49da53d46","Type":"ContainerStarted","Data":"5a62058bb2d742ff3950a86fbea0cfa431b4e4363dce8504c20e35118d6654b3"} Feb 03 12:27:44 crc kubenswrapper[4820]: I0203 12:27:44.602167 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-lrcd2"] Feb 03 12:27:44 crc kubenswrapper[4820]: I0203 12:27:44.603677 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-lrcd2" Feb 03 12:27:44 crc kubenswrapper[4820]: I0203 12:27:44.606863 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Feb 03 12:27:44 crc kubenswrapper[4820]: I0203 12:27:44.636078 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-lrcd2"] Feb 03 12:27:44 crc kubenswrapper[4820]: I0203 12:27:44.744172 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/1a16d012-2c9a-452a-9a18-8d016793a7f6-ovs-rundir\") pod \"ovn-controller-metrics-lrcd2\" (UID: \"1a16d012-2c9a-452a-9a18-8d016793a7f6\") " pod="openstack/ovn-controller-metrics-lrcd2" Feb 03 12:27:44 crc kubenswrapper[4820]: I0203 12:27:44.744263 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a16d012-2c9a-452a-9a18-8d016793a7f6-config\") pod \"ovn-controller-metrics-lrcd2\" (UID: \"1a16d012-2c9a-452a-9a18-8d016793a7f6\") " pod="openstack/ovn-controller-metrics-lrcd2" Feb 03 12:27:44 crc kubenswrapper[4820]: I0203 12:27:44.744292 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a16d012-2c9a-452a-9a18-8d016793a7f6-combined-ca-bundle\") pod \"ovn-controller-metrics-lrcd2\" (UID: \"1a16d012-2c9a-452a-9a18-8d016793a7f6\") " pod="openstack/ovn-controller-metrics-lrcd2" Feb 03 12:27:44 crc kubenswrapper[4820]: I0203 12:27:44.744314 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5wd5p\" (UniqueName: \"kubernetes.io/projected/1a16d012-2c9a-452a-9a18-8d016793a7f6-kube-api-access-5wd5p\") pod \"ovn-controller-metrics-lrcd2\" (UID: \"1a16d012-2c9a-452a-9a18-8d016793a7f6\") " pod="openstack/ovn-controller-metrics-lrcd2" Feb 03 12:27:44 crc kubenswrapper[4820]: I0203 12:27:44.744346 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/1a16d012-2c9a-452a-9a18-8d016793a7f6-ovn-rundir\") pod \"ovn-controller-metrics-lrcd2\" (UID: \"1a16d012-2c9a-452a-9a18-8d016793a7f6\") " pod="openstack/ovn-controller-metrics-lrcd2" Feb 03 12:27:44 crc kubenswrapper[4820]: I0203 12:27:44.744570 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/1a16d012-2c9a-452a-9a18-8d016793a7f6-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-lrcd2\" (UID: \"1a16d012-2c9a-452a-9a18-8d016793a7f6\") " pod="openstack/ovn-controller-metrics-lrcd2" Feb 03 12:27:44 crc kubenswrapper[4820]: I0203 12:27:44.847254 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a16d012-2c9a-452a-9a18-8d016793a7f6-combined-ca-bundle\") pod \"ovn-controller-metrics-lrcd2\" (UID: \"1a16d012-2c9a-452a-9a18-8d016793a7f6\") " pod="openstack/ovn-controller-metrics-lrcd2" Feb 03 12:27:44 crc kubenswrapper[4820]: I0203 12:27:44.847605 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5wd5p\" (UniqueName: \"kubernetes.io/projected/1a16d012-2c9a-452a-9a18-8d016793a7f6-kube-api-access-5wd5p\") pod 
\"ovn-controller-metrics-lrcd2\" (UID: \"1a16d012-2c9a-452a-9a18-8d016793a7f6\") " pod="openstack/ovn-controller-metrics-lrcd2" Feb 03 12:27:44 crc kubenswrapper[4820]: I0203 12:27:44.847652 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/1a16d012-2c9a-452a-9a18-8d016793a7f6-ovn-rundir\") pod \"ovn-controller-metrics-lrcd2\" (UID: \"1a16d012-2c9a-452a-9a18-8d016793a7f6\") " pod="openstack/ovn-controller-metrics-lrcd2" Feb 03 12:27:44 crc kubenswrapper[4820]: I0203 12:27:44.847755 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/1a16d012-2c9a-452a-9a18-8d016793a7f6-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-lrcd2\" (UID: \"1a16d012-2c9a-452a-9a18-8d016793a7f6\") " pod="openstack/ovn-controller-metrics-lrcd2" Feb 03 12:27:44 crc kubenswrapper[4820]: I0203 12:27:44.847797 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/1a16d012-2c9a-452a-9a18-8d016793a7f6-ovs-rundir\") pod \"ovn-controller-metrics-lrcd2\" (UID: \"1a16d012-2c9a-452a-9a18-8d016793a7f6\") " pod="openstack/ovn-controller-metrics-lrcd2" Feb 03 12:27:44 crc kubenswrapper[4820]: I0203 12:27:44.847860 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a16d012-2c9a-452a-9a18-8d016793a7f6-config\") pod \"ovn-controller-metrics-lrcd2\" (UID: \"1a16d012-2c9a-452a-9a18-8d016793a7f6\") " pod="openstack/ovn-controller-metrics-lrcd2" Feb 03 12:27:44 crc kubenswrapper[4820]: I0203 12:27:44.848543 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a16d012-2c9a-452a-9a18-8d016793a7f6-config\") pod \"ovn-controller-metrics-lrcd2\" (UID: \"1a16d012-2c9a-452a-9a18-8d016793a7f6\") " pod="openstack/ovn-controller-metrics-lrcd2" Feb 03 12:27:44 crc kubenswrapper[4820]: I0203 12:27:44.849018 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/1a16d012-2c9a-452a-9a18-8d016793a7f6-ovs-rundir\") pod \"ovn-controller-metrics-lrcd2\" (UID: \"1a16d012-2c9a-452a-9a18-8d016793a7f6\") " pod="openstack/ovn-controller-metrics-lrcd2" Feb 03 12:27:44 crc kubenswrapper[4820]: I0203 12:27:44.849066 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/1a16d012-2c9a-452a-9a18-8d016793a7f6-ovn-rundir\") pod \"ovn-controller-metrics-lrcd2\" (UID: \"1a16d012-2c9a-452a-9a18-8d016793a7f6\") " pod="openstack/ovn-controller-metrics-lrcd2" Feb 03 12:27:44 crc kubenswrapper[4820]: I0203 12:27:44.855054 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/1a16d012-2c9a-452a-9a18-8d016793a7f6-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-lrcd2\" (UID: \"1a16d012-2c9a-452a-9a18-8d016793a7f6\") " pod="openstack/ovn-controller-metrics-lrcd2" Feb 03 12:27:44 crc kubenswrapper[4820]: I0203 12:27:44.871992 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5wd5p\" (UniqueName: \"kubernetes.io/projected/1a16d012-2c9a-452a-9a18-8d016793a7f6-kube-api-access-5wd5p\") pod \"ovn-controller-metrics-lrcd2\" (UID: \"1a16d012-2c9a-452a-9a18-8d016793a7f6\") " 
pod="openstack/ovn-controller-metrics-lrcd2" Feb 03 12:27:44 crc kubenswrapper[4820]: I0203 12:27:44.877743 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a16d012-2c9a-452a-9a18-8d016793a7f6-combined-ca-bundle\") pod \"ovn-controller-metrics-lrcd2\" (UID: \"1a16d012-2c9a-452a-9a18-8d016793a7f6\") " pod="openstack/ovn-controller-metrics-lrcd2" Feb 03 12:27:44 crc kubenswrapper[4820]: I0203 12:27:44.967071 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-lrcd2" Feb 03 12:27:45 crc kubenswrapper[4820]: I0203 12:27:45.769258 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 03 12:27:45 crc kubenswrapper[4820]: I0203 12:27:45.774866 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Feb 03 12:27:45 crc kubenswrapper[4820]: I0203 12:27:45.805318 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Feb 03 12:27:45 crc kubenswrapper[4820]: I0203 12:27:45.805733 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-bfdnl" Feb 03 12:27:45 crc kubenswrapper[4820]: I0203 12:27:45.806407 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Feb 03 12:27:45 crc kubenswrapper[4820]: I0203 12:27:45.806528 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Feb 03 12:27:45 crc kubenswrapper[4820]: I0203 12:27:45.820024 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 03 12:27:45 crc kubenswrapper[4820]: I0203 12:27:45.894136 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c9c7327b-374e-4a6f-a5c7-23136aea36b8-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"c9c7327b-374e-4a6f-a5c7-23136aea36b8\") " pod="openstack/ovsdbserver-sb-0" Feb 03 12:27:45 crc kubenswrapper[4820]: I0203 12:27:45.894199 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9c7327b-374e-4a6f-a5c7-23136aea36b8-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"c9c7327b-374e-4a6f-a5c7-23136aea36b8\") " pod="openstack/ovsdbserver-sb-0" Feb 03 12:27:45 crc kubenswrapper[4820]: I0203 12:27:45.894222 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xbbqq\" (UniqueName: \"kubernetes.io/projected/c9c7327b-374e-4a6f-a5c7-23136aea36b8-kube-api-access-xbbqq\") pod \"ovsdbserver-sb-0\" (UID: \"c9c7327b-374e-4a6f-a5c7-23136aea36b8\") " pod="openstack/ovsdbserver-sb-0" Feb 03 12:27:45 crc kubenswrapper[4820]: I0203 12:27:45.894249 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c9c7327b-374e-4a6f-a5c7-23136aea36b8-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"c9c7327b-374e-4a6f-a5c7-23136aea36b8\") " pod="openstack/ovsdbserver-sb-0" Feb 03 12:27:45 crc kubenswrapper[4820]: I0203 12:27:45.894311 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/c9c7327b-374e-4a6f-a5c7-23136aea36b8-config\") pod \"ovsdbserver-sb-0\" (UID: \"c9c7327b-374e-4a6f-a5c7-23136aea36b8\") " pod="openstack/ovsdbserver-sb-0" Feb 03 12:27:45 crc kubenswrapper[4820]: I0203 12:27:45.894337 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c9c7327b-374e-4a6f-a5c7-23136aea36b8-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"c9c7327b-374e-4a6f-a5c7-23136aea36b8\") " pod="openstack/ovsdbserver-sb-0" Feb 03 12:27:45 crc kubenswrapper[4820]: I0203 12:27:45.894398 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c9c7327b-374e-4a6f-a5c7-23136aea36b8-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"c9c7327b-374e-4a6f-a5c7-23136aea36b8\") " pod="openstack/ovsdbserver-sb-0" Feb 03 12:27:45 crc kubenswrapper[4820]: I0203 12:27:45.894429 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"ovsdbserver-sb-0\" (UID: \"c9c7327b-374e-4a6f-a5c7-23136aea36b8\") " pod="openstack/ovsdbserver-sb-0" Feb 03 12:27:45 crc kubenswrapper[4820]: I0203 12:27:45.999555 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c9c7327b-374e-4a6f-a5c7-23136aea36b8-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"c9c7327b-374e-4a6f-a5c7-23136aea36b8\") " pod="openstack/ovsdbserver-sb-0" Feb 03 12:27:46 crc kubenswrapper[4820]: I0203 12:27:45.999955 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"ovsdbserver-sb-0\" (UID: \"c9c7327b-374e-4a6f-a5c7-23136aea36b8\") " pod="openstack/ovsdbserver-sb-0" Feb 03 12:27:46 crc kubenswrapper[4820]: I0203 12:27:46.000055 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c9c7327b-374e-4a6f-a5c7-23136aea36b8-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"c9c7327b-374e-4a6f-a5c7-23136aea36b8\") " pod="openstack/ovsdbserver-sb-0" Feb 03 12:27:46 crc kubenswrapper[4820]: I0203 12:27:46.000133 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xbbqq\" (UniqueName: \"kubernetes.io/projected/c9c7327b-374e-4a6f-a5c7-23136aea36b8-kube-api-access-xbbqq\") pod \"ovsdbserver-sb-0\" (UID: \"c9c7327b-374e-4a6f-a5c7-23136aea36b8\") " pod="openstack/ovsdbserver-sb-0" Feb 03 12:27:46 crc kubenswrapper[4820]: I0203 12:27:46.000159 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9c7327b-374e-4a6f-a5c7-23136aea36b8-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"c9c7327b-374e-4a6f-a5c7-23136aea36b8\") " pod="openstack/ovsdbserver-sb-0" Feb 03 12:27:46 crc kubenswrapper[4820]: I0203 12:27:46.000799 4820 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"ovsdbserver-sb-0\" (UID: \"c9c7327b-374e-4a6f-a5c7-23136aea36b8\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/ovsdbserver-sb-0" 
Feb 03 12:27:46 crc kubenswrapper[4820]: I0203 12:27:46.001489 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c9c7327b-374e-4a6f-a5c7-23136aea36b8-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"c9c7327b-374e-4a6f-a5c7-23136aea36b8\") " pod="openstack/ovsdbserver-sb-0"
Feb 03 12:27:46 crc kubenswrapper[4820]: I0203 12:27:46.005242 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c9c7327b-374e-4a6f-a5c7-23136aea36b8-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"c9c7327b-374e-4a6f-a5c7-23136aea36b8\") " pod="openstack/ovsdbserver-sb-0"
Feb 03 12:27:46 crc kubenswrapper[4820]: I0203 12:27:46.006664 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c9c7327b-374e-4a6f-a5c7-23136aea36b8-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"c9c7327b-374e-4a6f-a5c7-23136aea36b8\") " pod="openstack/ovsdbserver-sb-0"
Feb 03 12:27:46 crc kubenswrapper[4820]: I0203 12:27:46.006690 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9c7327b-374e-4a6f-a5c7-23136aea36b8-config\") pod \"ovsdbserver-sb-0\" (UID: \"c9c7327b-374e-4a6f-a5c7-23136aea36b8\") " pod="openstack/ovsdbserver-sb-0"
Feb 03 12:27:46 crc kubenswrapper[4820]: I0203 12:27:46.006974 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c9c7327b-374e-4a6f-a5c7-23136aea36b8-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"c9c7327b-374e-4a6f-a5c7-23136aea36b8\") " pod="openstack/ovsdbserver-sb-0"
Feb 03 12:27:46 crc kubenswrapper[4820]: I0203 12:27:46.007276 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c9c7327b-374e-4a6f-a5c7-23136aea36b8-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"c9c7327b-374e-4a6f-a5c7-23136aea36b8\") " pod="openstack/ovsdbserver-sb-0"
Feb 03 12:27:46 crc kubenswrapper[4820]: I0203 12:27:46.007565 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9c7327b-374e-4a6f-a5c7-23136aea36b8-config\") pod \"ovsdbserver-sb-0\" (UID: \"c9c7327b-374e-4a6f-a5c7-23136aea36b8\") " pod="openstack/ovsdbserver-sb-0"
Feb 03 12:27:46 crc kubenswrapper[4820]: I0203 12:27:46.009944 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9c7327b-374e-4a6f-a5c7-23136aea36b8-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"c9c7327b-374e-4a6f-a5c7-23136aea36b8\") " pod="openstack/ovsdbserver-sb-0"
Feb 03 12:27:46 crc kubenswrapper[4820]: I0203 12:27:46.012451 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c9c7327b-374e-4a6f-a5c7-23136aea36b8-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"c9c7327b-374e-4a6f-a5c7-23136aea36b8\") " pod="openstack/ovsdbserver-sb-0"
Feb 03 12:27:46 crc kubenswrapper[4820]: I0203 12:27:46.027007 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xbbqq\" (UniqueName: \"kubernetes.io/projected/c9c7327b-374e-4a6f-a5c7-23136aea36b8-kube-api-access-xbbqq\") pod \"ovsdbserver-sb-0\" (UID: \"c9c7327b-374e-4a6f-a5c7-23136aea36b8\") " pod="openstack/ovsdbserver-sb-0"
Feb 03 12:27:46 crc kubenswrapper[4820]: I0203 12:27:46.040181 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"ovsdbserver-sb-0\" (UID: \"c9c7327b-374e-4a6f-a5c7-23136aea36b8\") " pod="openstack/ovsdbserver-sb-0"
Feb 03 12:27:46 crc kubenswrapper[4820]: I0203 12:27:46.114158 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0"
Feb 03 12:27:47 crc kubenswrapper[4820]: I0203 12:27:47.677246 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"]
Feb 03 12:27:54 crc kubenswrapper[4820]: I0203 12:27:54.792235 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"f936af63-a86d-4dc6-aa17-59e2e2b69f5b","Type":"ContainerStarted","Data":"d60f55068be659b94f23cd072ecbd749dfa9ba0ecd289002c7782d437e3ead9d"}
Feb 03 12:27:59 crc kubenswrapper[4820]: E0203 12:27:59.275204 4820 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified"
Feb 03 12:27:59 crc kubenswrapper[4820]: E0203 12:27:59.276353 4820 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-k9bzp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cell1-server-0_openstack(62eb6ec6-669b-476d-929f-919b7f533a5a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Feb 03 12:27:59 crc kubenswrapper[4820]: E0203 12:27:59.282023 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-cell1-server-0" podUID="62eb6ec6-669b-476d-929f-919b7f533a5a"
Feb 03 12:28:00 crc kubenswrapper[4820]: E0203 12:28:00.154702 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-cell1-server-0" podUID="62eb6ec6-669b-476d-929f-919b7f533a5a"
Feb 03 12:28:01 crc kubenswrapper[4820]: I0203 12:28:01.384209 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qj7xr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 03 12:28:01 crc kubenswrapper[4820]: I0203 12:28:01.385181 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 03 12:28:01 crc kubenswrapper[4820]: I0203 12:28:01.385326 4820 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr"
Feb 03 12:28:01 crc kubenswrapper[4820]: I0203 12:28:01.388309 4820 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f5b6fb38e3a772864bd8a30bd0acd2c8340ca496b3ae218013d45718e5286b56"} pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 03 12:28:01 crc kubenswrapper[4820]: I0203 12:28:01.388706 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" containerName="machine-config-daemon" containerID="cri-o://f5b6fb38e3a772864bd8a30bd0acd2c8340ca496b3ae218013d45718e5286b56" gracePeriod=600
Feb 03 12:28:02 crc kubenswrapper[4820]: I0203 12:28:02.173580 4820 generic.go:334] "Generic (PLEG): container finished" podID="2c02def6-29f2-448e-80ec-0c8ee290f053" containerID="f5b6fb38e3a772864bd8a30bd0acd2c8340ca496b3ae218013d45718e5286b56" exitCode=0
Feb 03 12:28:02 crc kubenswrapper[4820]: I0203 12:28:02.173961 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" event={"ID":"2c02def6-29f2-448e-80ec-0c8ee290f053","Type":"ContainerDied","Data":"f5b6fb38e3a772864bd8a30bd0acd2c8340ca496b3ae218013d45718e5286b56"}
Feb 03 12:28:02 crc kubenswrapper[4820]: I0203 12:28:02.173996 4820 scope.go:117] "RemoveContainer" containerID="18df95791bb9a7f437d7d4ad2b5b03a9b5d2686ac3fa57d763f146b5d1397b25"
Feb 03 12:28:12 crc kubenswrapper[4820]: E0203 12:28:12.261553 4820 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-mariadb:current-podified"
Feb 03 12:28:12 crc kubenswrapper[4820]: E0203 12:28:12.262343 4820 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[bash /var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-k8h7s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-cell1-galera-0_openstack(1e865214-494f-4a49-a2e6-2b7316f30a92): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Feb 03 12:28:12 crc kubenswrapper[4820]: E0203 12:28:12.263942 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-cell1-galera-0" podUID="1e865214-494f-4a49-a2e6-2b7316f30a92"
Feb 03 12:28:12 crc kubenswrapper[4820]: E0203 12:28:12.322734 4820 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-mariadb:current-podified"
Feb 03 12:28:12 crc kubenswrapper[4820]: E0203 12:28:12.323002 4820 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[bash /var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hplm6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-galera-0_openstack(e8e46f8a-5de0-457f-b8eb-f76e8902e8ab): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Feb 03 12:28:12 crc kubenswrapper[4820]: E0203 12:28:12.324200 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-galera-0" podUID="e8e46f8a-5de0-457f-b8eb-f76e8902e8ab"
Feb 03 12:28:12 crc kubenswrapper[4820]: E0203 12:28:12.765398 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-mariadb:current-podified\\\"\"" pod="openstack/openstack-galera-0" podUID="e8e46f8a-5de0-457f-b8eb-f76e8902e8ab"
Feb 03 12:28:12 crc kubenswrapper[4820]: E0203 12:28:12.765663 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-mariadb:current-podified\\\"\"" pod="openstack/openstack-cell1-galera-0" podUID="1e865214-494f-4a49-a2e6-2b7316f30a92"
Feb 03 12:28:13 crc kubenswrapper[4820]: I0203 12:28:13.108979 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"]
Feb 03 12:28:13 crc kubenswrapper[4820]: E0203 12:28:13.538543 4820 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-memcached:current-podified"
Feb 03 12:28:13 crc kubenswrapper[4820]: E0203 12:28:13.539224 4820 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:memcached,Image:quay.io/podified-antelope-centos9/openstack-memcached:current-podified,Command:[/usr/bin/dumb-init -- /usr/local/bin/kolla_start],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:memcached,HostPort:0,ContainerPort:11211,Protocol:TCP,HostIP:,},ContainerPort{Name:memcached-tls,HostPort:0,ContainerPort:11212,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:POD_IPS,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIPs,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:CONFIG_HASH,Value:n5d8h585h67h99h7bh56bhb4hcfh84h5f6h576h696h5c5h5f4h7bhch686h55h564h59dhd8h9dh55h546h6bh5bdh5dch79h5c8h9ch9ch67bq,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/src,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:memcached-tls-certs,ReadOnly:true,MountPath:/var/lib/config-data/tls/certs/memcached.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:memcached-tls-certs,ReadOnly:true,MountPath:/var/lib/config-data/tls/private/memcached.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mvnzb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 11211 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 11211 },Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42457,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42457,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod memcached-0_openstack(ace9a08e-e106-4d85-ae21-3d7d6ea60dff): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Feb 03 12:28:13 crc kubenswrapper[4820]: E0203 12:28:13.540502 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"memcached\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/memcached-0" podUID="ace9a08e-e106-4d85-ae21-3d7d6ea60dff"
Feb 03 12:28:13 crc kubenswrapper[4820]: E0203 12:28:13.848963 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"memcached\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-memcached:current-podified\\\"\"" pod="openstack/memcached-0" podUID="ace9a08e-e106-4d85-ae21-3d7d6ea60dff"
Feb 03 12:28:14 crc kubenswrapper[4820]: I0203 12:28:14.206549 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-lrcd2"]
Feb 03 12:28:19 crc kubenswrapper[4820]: W0203 12:28:19.653912 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc9c7327b_374e_4a6f_a5c7_23136aea36b8.slice/crio-5a416adb6a3b6249a21aae0b443330e21c3e38871bbb96d1e5667ce58b7cb537 WatchSource:0}: Error finding container 5a416adb6a3b6249a21aae0b443330e21c3e38871bbb96d1e5667ce58b7cb537: Status 404 returned error can't find the container with id 5a416adb6a3b6249a21aae0b443330e21c3e38871bbb96d1e5667ce58b7cb537
Feb 03 12:28:19 crc kubenswrapper[4820]: E0203 12:28:19.677308 4820 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified"
Feb 03 12:28:19 crc kubenswrapper[4820]: E0203 12:28:19.677657 4820 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ovn-controller,Image:quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified,Command:[ovn-controller --pidfile unix:/run/openvswitch/db.sock --certificate=/etc/pki/tls/certs/ovndb.crt --private-key=/etc/pki/tls/private/ovndb.key --ca-cert=/etc/pki/tls/certs/ovndbca.crt],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nch54bh67dh655h97h596hf4h56bh657h5d5hb5h9dh58bh647h64chd7h54dh57bh9fh66dh57h574hfch56dhffh5bch67h8fh5d8h54ch645h59bq,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:var-run,ReadOnly:false,MountPath:/var/run/openvswitch,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-run-ovn,ReadOnly:false,MountPath:/var/run/ovn,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-log-ovn,ReadOnly:false,MountPath:/var/log/ovn,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovn-controller-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndb.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovn-controller-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/private/ovndb.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovn-controller-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndbca.crt,SubPath:ca.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-t8mt8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/local/bin/container-scripts/ovn_controller_liveness.sh],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:30,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/local/bin/container-scripts/ovn_controller_readiness.sh],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:30,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:&Lifecycle{PostStart:nil,PreStop:&LifecycleHandler{Exec:&ExecAction{Command:[/usr/share/ovn/scripts/ovn-ctl stop_controller],},HTTPGet:nil,TCPSocket:nil,Sleep:nil,},},TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[NET_ADMIN SYS_ADMIN SYS_NICE],Drop:[],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-controller-96p5d_openstack(b3b01895-53e1-4391-8d1e-8f2458d4f2e0): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Feb 03 12:28:19 crc kubenswrapper[4820]: E0203 12:28:19.678935 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovn-controller\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ovn-controller-96p5d" podUID="b3b01895-53e1-4391-8d1e-8f2458d4f2e0"
Feb 03 12:28:20 crc kubenswrapper[4820]: I0203 12:28:20.202652 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"c9c7327b-374e-4a6f-a5c7-23136aea36b8","Type":"ContainerStarted","Data":"5a416adb6a3b6249a21aae0b443330e21c3e38871bbb96d1e5667ce58b7cb537"}
Feb 03 12:28:20 crc kubenswrapper[4820]: I0203 12:28:20.203835 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-lrcd2" event={"ID":"1a16d012-2c9a-452a-9a18-8d016793a7f6","Type":"ContainerStarted","Data":"34e19cd4643c102703ada8c0e6bb56165e06ab54c5137e495c0c7156c7594623"}
Feb 03 12:28:20 crc kubenswrapper[4820]: E0203 12:28:20.206040 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovn-controller\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified\\\"\"" pod="openstack/ovn-controller-96p5d" podUID="b3b01895-53e1-4391-8d1e-8f2458d4f2e0"
Feb 03 12:28:20 crc kubenswrapper[4820]: E0203 12:28:20.718162 4820 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified"
Feb 03 12:28:20 crc kubenswrapper[4820]: E0203 12:28:20.718446 4820 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dbm2l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-mbwbv_openstack(e7ee7286-968c-48cc-a42e-0f7675b7cbc7): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Feb 03 12:28:20 crc kubenswrapper[4820]: E0203 12:28:20.720195 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-mbwbv" podUID="e7ee7286-968c-48cc-a42e-0f7675b7cbc7"
Feb 03 12:28:20 crc kubenswrapper[4820]: E0203 12:28:20.722009 4820 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified"
Feb 03 12:28:20 crc kubenswrapper[4820]: E0203 12:28:20.722118 4820 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-g7tgm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-666b6646f7-xj7bd_openstack(584c5d07-7ab7-44c4-8ba9-9edf834d4912): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Feb 03 12:28:20 crc kubenswrapper[4820]: E0203 12:28:20.723995 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-666b6646f7-xj7bd" podUID="584c5d07-7ab7-44c4-8ba9-9edf834d4912"
Feb 03 12:28:20 crc kubenswrapper[4820]: E0203 12:28:20.727785 4820 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified"
Feb 03 12:28:20 crc kubenswrapper[4820]: E0203 12:28:20.728476 4820 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hj6vn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-hx9l9_openstack(48a59ce7-2c6f-4461-9391-9dca3f2bc630): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Feb 03 12:28:20 crc kubenswrapper[4820]: E0203 12:28:20.729684 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-675f4bcbfc-hx9l9" podUID="48a59ce7-2c6f-4461-9391-9dca3f2bc630"
Feb 03 12:28:20 crc kubenswrapper[4820]: E0203 12:28:20.851423 4820 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified"
Feb 03 12:28:20 crc kubenswrapper[4820]: E0203 12:28:20.851662 4820 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-h5f6w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-57d769cc4f-gthq5_openstack(5deb308f-ef2d-477f-a5ac-04055ea9b76f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Feb 03 12:28:20 crc kubenswrapper[4820]: E0203 12:28:20.853023 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-57d769cc4f-gthq5" podUID="5deb308f-ef2d-477f-a5ac-04055ea9b76f"
Feb 03 12:28:21 crc kubenswrapper[4820]: E0203 12:28:21.226085 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-666b6646f7-xj7bd" podUID="584c5d07-7ab7-44c4-8ba9-9edf834d4912"
Feb 03 12:28:21 crc kubenswrapper[4820]: E0203 12:28:21.226258 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-57d769cc4f-gthq5" podUID="5deb308f-ef2d-477f-a5ac-04055ea9b76f"
Feb 03 12:28:22 crc kubenswrapper[4820]: I0203 12:28:22.306172 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" event={"ID":"2c02def6-29f2-448e-80ec-0c8ee290f053","Type":"ContainerStarted","Data":"3cd22393026f08f47789b9666f306edb2325c4bd0fbfd405ee1636bfdfa661d3"}
Feb 03 12:28:22 crc kubenswrapper[4820]: E0203 12:28:22.547432 4820 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0"
Feb 03 12:28:22 crc kubenswrapper[4820]: E0203 12:28:22.547504 4820 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0"
Feb 03 12:28:22 crc kubenswrapper[4820]: E0203 12:28:22.547683 4820 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-state-metrics,Image:registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0,Command:[],Args:[--resources=pods --namespaces=openstack],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:http-metrics,HostPort:0,ContainerPort:8080,Protocol:TCP,HostIP:,},ContainerPort{Name:telemetry,HostPort:0,ContainerPort:8081,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xdsm6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{0 8080 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-state-metrics-0_openstack(2ae1a10e-b84f-4533-940c-0688f69fae7c): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Feb 03 12:28:22 crc kubenswrapper[4820]: E0203 12:28:22.549341 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openstack/kube-state-metrics-0" podUID="2ae1a10e-b84f-4533-940c-0688f69fae7c"
Feb 03 12:28:22 crc kubenswrapper[4820]: I0203 12:28:22.626933 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-hx9l9"
Feb 03 12:28:22 crc kubenswrapper[4820]: I0203 12:28:22.636882 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-mbwbv"
Feb 03 12:28:22 crc kubenswrapper[4820]: I0203 12:28:22.705440 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/48a59ce7-2c6f-4461-9391-9dca3f2bc630-config\") pod \"48a59ce7-2c6f-4461-9391-9dca3f2bc630\" (UID: \"48a59ce7-2c6f-4461-9391-9dca3f2bc630\") "
Feb 03 12:28:22 crc kubenswrapper[4820]: I0203 12:28:22.705592 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbm2l\" (UniqueName: \"kubernetes.io/projected/e7ee7286-968c-48cc-a42e-0f7675b7cbc7-kube-api-access-dbm2l\") pod \"e7ee7286-968c-48cc-a42e-0f7675b7cbc7\" (UID: \"e7ee7286-968c-48cc-a42e-0f7675b7cbc7\") "
Feb 03 12:28:22 crc kubenswrapper[4820]: I0203 12:28:22.705669 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hj6vn\" (UniqueName: \"kubernetes.io/projected/48a59ce7-2c6f-4461-9391-9dca3f2bc630-kube-api-access-hj6vn\") pod \"48a59ce7-2c6f-4461-9391-9dca3f2bc630\" (UID: \"48a59ce7-2c6f-4461-9391-9dca3f2bc630\") "
Feb 03 12:28:22 crc kubenswrapper[4820]: I0203 12:28:22.705740 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e7ee7286-968c-48cc-a42e-0f7675b7cbc7-dns-svc\") pod \"e7ee7286-968c-48cc-a42e-0f7675b7cbc7\" (UID: \"e7ee7286-968c-48cc-a42e-0f7675b7cbc7\") "
Feb 03 12:28:22 crc kubenswrapper[4820]: I0203 12:28:22.705801 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7ee7286-968c-48cc-a42e-0f7675b7cbc7-config\") pod \"e7ee7286-968c-48cc-a42e-0f7675b7cbc7\" (UID: \"e7ee7286-968c-48cc-a42e-0f7675b7cbc7\") "
Feb 03 12:28:22 crc kubenswrapper[4820]: I0203 12:28:22.706063 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/48a59ce7-2c6f-4461-9391-9dca3f2bc630-config" (OuterVolumeSpecName: "config") pod "48a59ce7-2c6f-4461-9391-9dca3f2bc630" (UID: "48a59ce7-2c6f-4461-9391-9dca3f2bc630"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 03 12:28:22 crc kubenswrapper[4820]: I0203 12:28:22.706386 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7ee7286-968c-48cc-a42e-0f7675b7cbc7-config" (OuterVolumeSpecName: "config") pod "e7ee7286-968c-48cc-a42e-0f7675b7cbc7" (UID: "e7ee7286-968c-48cc-a42e-0f7675b7cbc7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 03 12:28:22 crc kubenswrapper[4820]: I0203 12:28:22.706418 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7ee7286-968c-48cc-a42e-0f7675b7cbc7-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e7ee7286-968c-48cc-a42e-0f7675b7cbc7" (UID: "e7ee7286-968c-48cc-a42e-0f7675b7cbc7"). InnerVolumeSpecName "dns-svc".
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:28:22 crc kubenswrapper[4820]: I0203 12:28:22.706853 4820 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e7ee7286-968c-48cc-a42e-0f7675b7cbc7-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 03 12:28:22 crc kubenswrapper[4820]: I0203 12:28:22.706904 4820 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7ee7286-968c-48cc-a42e-0f7675b7cbc7-config\") on node \"crc\" DevicePath \"\"" Feb 03 12:28:22 crc kubenswrapper[4820]: I0203 12:28:22.706921 4820 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/48a59ce7-2c6f-4461-9391-9dca3f2bc630-config\") on node \"crc\" DevicePath \"\"" Feb 03 12:28:22 crc kubenswrapper[4820]: I0203 12:28:22.710271 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48a59ce7-2c6f-4461-9391-9dca3f2bc630-kube-api-access-hj6vn" (OuterVolumeSpecName: "kube-api-access-hj6vn") pod "48a59ce7-2c6f-4461-9391-9dca3f2bc630" (UID: "48a59ce7-2c6f-4461-9391-9dca3f2bc630"). InnerVolumeSpecName "kube-api-access-hj6vn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:28:22 crc kubenswrapper[4820]: I0203 12:28:22.710820 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7ee7286-968c-48cc-a42e-0f7675b7cbc7-kube-api-access-dbm2l" (OuterVolumeSpecName: "kube-api-access-dbm2l") pod "e7ee7286-968c-48cc-a42e-0f7675b7cbc7" (UID: "e7ee7286-968c-48cc-a42e-0f7675b7cbc7"). InnerVolumeSpecName "kube-api-access-dbm2l". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:28:22 crc kubenswrapper[4820]: I0203 12:28:22.808128 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbm2l\" (UniqueName: \"kubernetes.io/projected/e7ee7286-968c-48cc-a42e-0f7675b7cbc7-kube-api-access-dbm2l\") on node \"crc\" DevicePath \"\"" Feb 03 12:28:22 crc kubenswrapper[4820]: I0203 12:28:22.808158 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hj6vn\" (UniqueName: \"kubernetes.io/projected/48a59ce7-2c6f-4461-9391-9dca3f2bc630-kube-api-access-hj6vn\") on node \"crc\" DevicePath \"\"" Feb 03 12:28:23 crc kubenswrapper[4820]: I0203 12:28:23.325537 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-mbwbv" event={"ID":"e7ee7286-968c-48cc-a42e-0f7675b7cbc7","Type":"ContainerDied","Data":"70cd428753674f9f22198e368076f35586e2ec5902cb2b9b4a19ed79b9968bbf"} Feb 03 12:28:23 crc kubenswrapper[4820]: I0203 12:28:23.325670 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-mbwbv" Feb 03 12:28:23 crc kubenswrapper[4820]: I0203 12:28:23.331014 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-hx9l9" event={"ID":"48a59ce7-2c6f-4461-9391-9dca3f2bc630","Type":"ContainerDied","Data":"d5ca5c4a3517cce4a5205dac7d9849e3613af1aa107d7e6c7b717474eec8d7e3"} Feb 03 12:28:23 crc kubenswrapper[4820]: I0203 12:28:23.331063 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-hx9l9" Feb 03 12:28:23 crc kubenswrapper[4820]: E0203 12:28:23.332112 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0\\\"\"" pod="openstack/kube-state-metrics-0" podUID="2ae1a10e-b84f-4533-940c-0688f69fae7c" Feb 03 12:28:23 crc kubenswrapper[4820]: I0203 12:28:23.375995 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-mbwbv"] Feb 03 12:28:23 crc kubenswrapper[4820]: I0203 12:28:23.388342 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-mbwbv"] Feb 03 12:28:23 crc kubenswrapper[4820]: I0203 12:28:23.434784 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-hx9l9"] Feb 03 12:28:23 crc kubenswrapper[4820]: I0203 12:28:23.451486 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-hx9l9"] Feb 03 12:28:24 crc kubenswrapper[4820]: I0203 12:28:24.342338 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"f936af63-a86d-4dc6-aa17-59e2e2b69f5b","Type":"ContainerStarted","Data":"640abf305e70bc9f4ea26eed900f3c6397b48cbf776e01665a31b015652ea7f1"} Feb 03 12:28:24 crc kubenswrapper[4820]: I0203 12:28:24.342977 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"f936af63-a86d-4dc6-aa17-59e2e2b69f5b","Type":"ContainerStarted","Data":"66dacc37aa0eac77e2f830576974fe3367bdd5fff3ec61e78555170f69bfd6a6"} Feb 03 12:28:24 crc kubenswrapper[4820]: I0203 12:28:24.344027 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-lrcd2" event={"ID":"1a16d012-2c9a-452a-9a18-8d016793a7f6","Type":"ContainerStarted","Data":"b5d969b89f2420ba70d97c7d6739c10bc7421f560086c8db93f2f82fea9d1f65"} Feb 03 12:28:24 crc kubenswrapper[4820]: I0203 12:28:24.346144 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"c9c7327b-374e-4a6f-a5c7-23136aea36b8","Type":"ContainerStarted","Data":"a2c9a7c83079d2af083ce49d30b37c23495ff45f526570541ec241d7eeeed65e"} Feb 03 12:28:24 crc kubenswrapper[4820]: I0203 12:28:24.346194 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"c9c7327b-374e-4a6f-a5c7-23136aea36b8","Type":"ContainerStarted","Data":"a3438e2e4a3e7fc1e889a8208fa3bd961cc3f41b5b81bd2cf0358a82f158a73b"} Feb 03 12:28:24 crc kubenswrapper[4820]: I0203 12:28:24.348076 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-kk5zn" event={"ID":"7fd50209-6464-4ba1-a7f9-ff9a38317ff2","Type":"ContainerStarted","Data":"66fc4110b7a94c2e7da3181c3ac21e006bd83e6fc77900daa20878df63daafc4"} Feb 03 12:28:24 crc kubenswrapper[4820]: I0203 12:28:24.369490 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=15.253426323 podStartE2EDuration="43.369473763s" podCreationTimestamp="2026-02-03 12:27:41 +0000 UTC" firstStartedPulling="2026-02-03 12:27:54.424416979 +0000 UTC m=+1391.947492843" lastFinishedPulling="2026-02-03 12:28:22.540464419 +0000 UTC m=+1420.063540283" observedRunningTime="2026-02-03 12:28:24.367695736 +0000 UTC m=+1421.890771600" watchObservedRunningTime="2026-02-03 12:28:24.369473763 +0000 UTC 
m=+1421.892549627" Feb 03 12:28:24 crc kubenswrapper[4820]: I0203 12:28:24.442407 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=36.350686664 podStartE2EDuration="40.442356671s" podCreationTimestamp="2026-02-03 12:27:44 +0000 UTC" firstStartedPulling="2026-02-03 12:28:19.661566784 +0000 UTC m=+1417.184642648" lastFinishedPulling="2026-02-03 12:28:23.753236781 +0000 UTC m=+1421.276312655" observedRunningTime="2026-02-03 12:28:24.437138513 +0000 UTC m=+1421.960214397" watchObservedRunningTime="2026-02-03 12:28:24.442356671 +0000 UTC m=+1421.965432525" Feb 03 12:28:24 crc kubenswrapper[4820]: I0203 12:28:24.488055 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-lrcd2" podStartSLOduration=36.635762333 podStartE2EDuration="40.488034415s" podCreationTimestamp="2026-02-03 12:27:44 +0000 UTC" firstStartedPulling="2026-02-03 12:28:19.854830142 +0000 UTC m=+1417.377906006" lastFinishedPulling="2026-02-03 12:28:23.707102224 +0000 UTC m=+1421.230178088" observedRunningTime="2026-02-03 12:28:24.48408876 +0000 UTC m=+1422.007164624" watchObservedRunningTime="2026-02-03 12:28:24.488034415 +0000 UTC m=+1422.011110279" Feb 03 12:28:24 crc kubenswrapper[4820]: I0203 12:28:24.904405 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-gthq5"] Feb 03 12:28:24 crc kubenswrapper[4820]: I0203 12:28:24.984050 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-rckn6"] Feb 03 12:28:24 crc kubenswrapper[4820]: I0203 12:28:24.986062 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-rckn6" Feb 03 12:28:24 crc kubenswrapper[4820]: I0203 12:28:24.994290 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Feb 03 12:28:25 crc kubenswrapper[4820]: I0203 12:28:25.003645 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-rckn6"] Feb 03 12:28:25 crc kubenswrapper[4820]: I0203 12:28:25.115763 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Feb 03 12:28:25 crc kubenswrapper[4820]: I0203 12:28:25.141833 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-xj7bd"] Feb 03 12:28:25 crc kubenswrapper[4820]: I0203 12:28:25.143670 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f067d36c-378a-4c3d-8a14-a9a468ff746c-config\") pod \"dnsmasq-dns-7fd796d7df-rckn6\" (UID: \"f067d36c-378a-4c3d-8a14-a9a468ff746c\") " pod="openstack/dnsmasq-dns-7fd796d7df-rckn6" Feb 03 12:28:25 crc kubenswrapper[4820]: I0203 12:28:25.143723 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f067d36c-378a-4c3d-8a14-a9a468ff746c-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-rckn6\" (UID: \"f067d36c-378a-4c3d-8a14-a9a468ff746c\") " pod="openstack/dnsmasq-dns-7fd796d7df-rckn6" Feb 03 12:28:25 crc kubenswrapper[4820]: I0203 12:28:25.143762 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f067d36c-378a-4c3d-8a14-a9a468ff746c-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-rckn6\" (UID: 
\"f067d36c-378a-4c3d-8a14-a9a468ff746c\") " pod="openstack/dnsmasq-dns-7fd796d7df-rckn6" Feb 03 12:28:25 crc kubenswrapper[4820]: I0203 12:28:25.143820 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sflxg\" (UniqueName: \"kubernetes.io/projected/f067d36c-378a-4c3d-8a14-a9a468ff746c-kube-api-access-sflxg\") pod \"dnsmasq-dns-7fd796d7df-rckn6\" (UID: \"f067d36c-378a-4c3d-8a14-a9a468ff746c\") " pod="openstack/dnsmasq-dns-7fd796d7df-rckn6" Feb 03 12:28:25 crc kubenswrapper[4820]: I0203 12:28:25.159439 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="48a59ce7-2c6f-4461-9391-9dca3f2bc630" path="/var/lib/kubelet/pods/48a59ce7-2c6f-4461-9391-9dca3f2bc630/volumes" Feb 03 12:28:25 crc kubenswrapper[4820]: I0203 12:28:25.160113 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7ee7286-968c-48cc-a42e-0f7675b7cbc7" path="/var/lib/kubelet/pods/e7ee7286-968c-48cc-a42e-0f7675b7cbc7/volumes" Feb 03 12:28:25 crc kubenswrapper[4820]: I0203 12:28:25.162468 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Feb 03 12:28:25 crc kubenswrapper[4820]: I0203 12:28:25.247193 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sflxg\" (UniqueName: \"kubernetes.io/projected/f067d36c-378a-4c3d-8a14-a9a468ff746c-kube-api-access-sflxg\") pod \"dnsmasq-dns-7fd796d7df-rckn6\" (UID: \"f067d36c-378a-4c3d-8a14-a9a468ff746c\") " pod="openstack/dnsmasq-dns-7fd796d7df-rckn6" Feb 03 12:28:25 crc kubenswrapper[4820]: I0203 12:28:25.247493 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f067d36c-378a-4c3d-8a14-a9a468ff746c-config\") pod \"dnsmasq-dns-7fd796d7df-rckn6\" (UID: \"f067d36c-378a-4c3d-8a14-a9a468ff746c\") " pod="openstack/dnsmasq-dns-7fd796d7df-rckn6" Feb 03 12:28:25 crc kubenswrapper[4820]: I0203 12:28:25.247534 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f067d36c-378a-4c3d-8a14-a9a468ff746c-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-rckn6\" (UID: \"f067d36c-378a-4c3d-8a14-a9a468ff746c\") " pod="openstack/dnsmasq-dns-7fd796d7df-rckn6" Feb 03 12:28:25 crc kubenswrapper[4820]: I0203 12:28:25.247580 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f067d36c-378a-4c3d-8a14-a9a468ff746c-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-rckn6\" (UID: \"f067d36c-378a-4c3d-8a14-a9a468ff746c\") " pod="openstack/dnsmasq-dns-7fd796d7df-rckn6" Feb 03 12:28:25 crc kubenswrapper[4820]: I0203 12:28:25.254102 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f067d36c-378a-4c3d-8a14-a9a468ff746c-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-rckn6\" (UID: \"f067d36c-378a-4c3d-8a14-a9a468ff746c\") " pod="openstack/dnsmasq-dns-7fd796d7df-rckn6" Feb 03 12:28:25 crc kubenswrapper[4820]: I0203 12:28:25.280194 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f067d36c-378a-4c3d-8a14-a9a468ff746c-config\") pod \"dnsmasq-dns-7fd796d7df-rckn6\" (UID: \"f067d36c-378a-4c3d-8a14-a9a468ff746c\") " pod="openstack/dnsmasq-dns-7fd796d7df-rckn6" Feb 03 12:28:25 crc kubenswrapper[4820]: I0203 12:28:25.281040 4820 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f067d36c-378a-4c3d-8a14-a9a468ff746c-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-rckn6\" (UID: \"f067d36c-378a-4c3d-8a14-a9a468ff746c\") " pod="openstack/dnsmasq-dns-7fd796d7df-rckn6" Feb 03 12:28:25 crc kubenswrapper[4820]: I0203 12:28:25.433106 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sflxg\" (UniqueName: \"kubernetes.io/projected/f067d36c-378a-4c3d-8a14-a9a468ff746c-kube-api-access-sflxg\") pod \"dnsmasq-dns-7fd796d7df-rckn6\" (UID: \"f067d36c-378a-4c3d-8a14-a9a468ff746c\") " pod="openstack/dnsmasq-dns-7fd796d7df-rckn6" Feb 03 12:28:25 crc kubenswrapper[4820]: I0203 12:28:25.438062 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"62eb6ec6-669b-476d-929f-919b7f533a5a","Type":"ContainerStarted","Data":"16baf9e9f5f87ab6f2f078df976f2bf016e5daf6fdba848fbad1d934eb79f9e2"} Feb 03 12:28:25 crc kubenswrapper[4820]: I0203 12:28:25.482095 4820 generic.go:334] "Generic (PLEG): container finished" podID="7fd50209-6464-4ba1-a7f9-ff9a38317ff2" containerID="66fc4110b7a94c2e7da3181c3ac21e006bd83e6fc77900daa20878df63daafc4" exitCode=0 Feb 03 12:28:25 crc kubenswrapper[4820]: I0203 12:28:25.482167 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-kk5zn" event={"ID":"7fd50209-6464-4ba1-a7f9-ff9a38317ff2","Type":"ContainerDied","Data":"66fc4110b7a94c2e7da3181c3ac21e006bd83e6fc77900daa20878df63daafc4"} Feb 03 12:28:25 crc kubenswrapper[4820]: I0203 12:28:25.507121 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-r25hq"] Feb 03 12:28:25 crc kubenswrapper[4820]: I0203 12:28:25.509161 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-r25hq" Feb 03 12:28:25 crc kubenswrapper[4820]: I0203 12:28:25.512767 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Feb 03 12:28:25 crc kubenswrapper[4820]: I0203 12:28:25.528366 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-r25hq"] Feb 03 12:28:25 crc kubenswrapper[4820]: I0203 12:28:25.627608 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-rckn6" Feb 03 12:28:25 crc kubenswrapper[4820]: I0203 12:28:25.659464 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-gthq5" Feb 03 12:28:25 crc kubenswrapper[4820]: I0203 12:28:25.700868 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4f272940-99d0-44a5-b16c-73b2b4f17bba-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-r25hq\" (UID: \"4f272940-99d0-44a5-b16c-73b2b4f17bba\") " pod="openstack/dnsmasq-dns-86db49b7ff-r25hq" Feb 03 12:28:25 crc kubenswrapper[4820]: I0203 12:28:25.700931 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f272940-99d0-44a5-b16c-73b2b4f17bba-config\") pod \"dnsmasq-dns-86db49b7ff-r25hq\" (UID: \"4f272940-99d0-44a5-b16c-73b2b4f17bba\") " pod="openstack/dnsmasq-dns-86db49b7ff-r25hq" Feb 03 12:28:25 crc kubenswrapper[4820]: I0203 12:28:25.700977 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bnqht\" (UniqueName: \"kubernetes.io/projected/4f272940-99d0-44a5-b16c-73b2b4f17bba-kube-api-access-bnqht\") pod \"dnsmasq-dns-86db49b7ff-r25hq\" (UID: \"4f272940-99d0-44a5-b16c-73b2b4f17bba\") " pod="openstack/dnsmasq-dns-86db49b7ff-r25hq" Feb 03 12:28:25 crc kubenswrapper[4820]: I0203 12:28:25.701010 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4f272940-99d0-44a5-b16c-73b2b4f17bba-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-r25hq\" (UID: \"4f272940-99d0-44a5-b16c-73b2b4f17bba\") " pod="openstack/dnsmasq-dns-86db49b7ff-r25hq" Feb 03 12:28:25 crc kubenswrapper[4820]: I0203 12:28:25.701033 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4f272940-99d0-44a5-b16c-73b2b4f17bba-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-r25hq\" (UID: \"4f272940-99d0-44a5-b16c-73b2b4f17bba\") " pod="openstack/dnsmasq-dns-86db49b7ff-r25hq" Feb 03 12:28:25 crc kubenswrapper[4820]: I0203 12:28:25.802409 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h5f6w\" (UniqueName: \"kubernetes.io/projected/5deb308f-ef2d-477f-a5ac-04055ea9b76f-kube-api-access-h5f6w\") pod \"5deb308f-ef2d-477f-a5ac-04055ea9b76f\" (UID: \"5deb308f-ef2d-477f-a5ac-04055ea9b76f\") " Feb 03 12:28:25 crc kubenswrapper[4820]: I0203 12:28:25.802520 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5deb308f-ef2d-477f-a5ac-04055ea9b76f-config\") pod \"5deb308f-ef2d-477f-a5ac-04055ea9b76f\" (UID: \"5deb308f-ef2d-477f-a5ac-04055ea9b76f\") " Feb 03 12:28:25 crc kubenswrapper[4820]: I0203 12:28:25.802615 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5deb308f-ef2d-477f-a5ac-04055ea9b76f-dns-svc\") pod \"5deb308f-ef2d-477f-a5ac-04055ea9b76f\" (UID: \"5deb308f-ef2d-477f-a5ac-04055ea9b76f\") " Feb 03 12:28:25 crc kubenswrapper[4820]: I0203 12:28:25.802848 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4f272940-99d0-44a5-b16c-73b2b4f17bba-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-r25hq\" (UID: \"4f272940-99d0-44a5-b16c-73b2b4f17bba\") " pod="openstack/dnsmasq-dns-86db49b7ff-r25hq" Feb 03 12:28:25 crc 
kubenswrapper[4820]: I0203 12:28:25.803003 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4f272940-99d0-44a5-b16c-73b2b4f17bba-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-r25hq\" (UID: \"4f272940-99d0-44a5-b16c-73b2b4f17bba\") " pod="openstack/dnsmasq-dns-86db49b7ff-r25hq" Feb 03 12:28:25 crc kubenswrapper[4820]: I0203 12:28:25.803045 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f272940-99d0-44a5-b16c-73b2b4f17bba-config\") pod \"dnsmasq-dns-86db49b7ff-r25hq\" (UID: \"4f272940-99d0-44a5-b16c-73b2b4f17bba\") " pod="openstack/dnsmasq-dns-86db49b7ff-r25hq" Feb 03 12:28:25 crc kubenswrapper[4820]: I0203 12:28:25.803102 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bnqht\" (UniqueName: \"kubernetes.io/projected/4f272940-99d0-44a5-b16c-73b2b4f17bba-kube-api-access-bnqht\") pod \"dnsmasq-dns-86db49b7ff-r25hq\" (UID: \"4f272940-99d0-44a5-b16c-73b2b4f17bba\") " pod="openstack/dnsmasq-dns-86db49b7ff-r25hq" Feb 03 12:28:25 crc kubenswrapper[4820]: I0203 12:28:25.803138 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4f272940-99d0-44a5-b16c-73b2b4f17bba-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-r25hq\" (UID: \"4f272940-99d0-44a5-b16c-73b2b4f17bba\") " pod="openstack/dnsmasq-dns-86db49b7ff-r25hq" Feb 03 12:28:25 crc kubenswrapper[4820]: I0203 12:28:25.804169 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4f272940-99d0-44a5-b16c-73b2b4f17bba-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-r25hq\" (UID: \"4f272940-99d0-44a5-b16c-73b2b4f17bba\") " pod="openstack/dnsmasq-dns-86db49b7ff-r25hq" Feb 03 12:28:25 crc kubenswrapper[4820]: I0203 12:28:25.806537 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4f272940-99d0-44a5-b16c-73b2b4f17bba-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-r25hq\" (UID: \"4f272940-99d0-44a5-b16c-73b2b4f17bba\") " pod="openstack/dnsmasq-dns-86db49b7ff-r25hq" Feb 03 12:28:25 crc kubenswrapper[4820]: I0203 12:28:25.809792 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5deb308f-ef2d-477f-a5ac-04055ea9b76f-config" (OuterVolumeSpecName: "config") pod "5deb308f-ef2d-477f-a5ac-04055ea9b76f" (UID: "5deb308f-ef2d-477f-a5ac-04055ea9b76f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:28:25 crc kubenswrapper[4820]: I0203 12:28:25.809994 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4f272940-99d0-44a5-b16c-73b2b4f17bba-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-r25hq\" (UID: \"4f272940-99d0-44a5-b16c-73b2b4f17bba\") " pod="openstack/dnsmasq-dns-86db49b7ff-r25hq" Feb 03 12:28:25 crc kubenswrapper[4820]: I0203 12:28:25.810413 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5deb308f-ef2d-477f-a5ac-04055ea9b76f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "5deb308f-ef2d-477f-a5ac-04055ea9b76f" (UID: "5deb308f-ef2d-477f-a5ac-04055ea9b76f"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:28:25 crc kubenswrapper[4820]: I0203 12:28:25.812133 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-xj7bd" Feb 03 12:28:25 crc kubenswrapper[4820]: I0203 12:28:25.813391 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5deb308f-ef2d-477f-a5ac-04055ea9b76f-kube-api-access-h5f6w" (OuterVolumeSpecName: "kube-api-access-h5f6w") pod "5deb308f-ef2d-477f-a5ac-04055ea9b76f" (UID: "5deb308f-ef2d-477f-a5ac-04055ea9b76f"). InnerVolumeSpecName "kube-api-access-h5f6w". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:28:25 crc kubenswrapper[4820]: I0203 12:28:25.824201 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f272940-99d0-44a5-b16c-73b2b4f17bba-config\") pod \"dnsmasq-dns-86db49b7ff-r25hq\" (UID: \"4f272940-99d0-44a5-b16c-73b2b4f17bba\") " pod="openstack/dnsmasq-dns-86db49b7ff-r25hq" Feb 03 12:28:25 crc kubenswrapper[4820]: I0203 12:28:25.904759 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h5f6w\" (UniqueName: \"kubernetes.io/projected/5deb308f-ef2d-477f-a5ac-04055ea9b76f-kube-api-access-h5f6w\") on node \"crc\" DevicePath \"\"" Feb 03 12:28:25 crc kubenswrapper[4820]: I0203 12:28:25.904790 4820 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5deb308f-ef2d-477f-a5ac-04055ea9b76f-config\") on node \"crc\" DevicePath \"\"" Feb 03 12:28:25 crc kubenswrapper[4820]: I0203 12:28:25.904811 4820 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5deb308f-ef2d-477f-a5ac-04055ea9b76f-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 03 12:28:25 crc kubenswrapper[4820]: I0203 12:28:25.919806 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bnqht\" (UniqueName: \"kubernetes.io/projected/4f272940-99d0-44a5-b16c-73b2b4f17bba-kube-api-access-bnqht\") pod \"dnsmasq-dns-86db49b7ff-r25hq\" (UID: \"4f272940-99d0-44a5-b16c-73b2b4f17bba\") " pod="openstack/dnsmasq-dns-86db49b7ff-r25hq" Feb 03 12:28:26 crc kubenswrapper[4820]: I0203 12:28:26.008404 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/584c5d07-7ab7-44c4-8ba9-9edf834d4912-dns-svc\") pod \"584c5d07-7ab7-44c4-8ba9-9edf834d4912\" (UID: \"584c5d07-7ab7-44c4-8ba9-9edf834d4912\") " Feb 03 12:28:26 crc kubenswrapper[4820]: I0203 12:28:26.008475 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/584c5d07-7ab7-44c4-8ba9-9edf834d4912-config\") pod \"584c5d07-7ab7-44c4-8ba9-9edf834d4912\" (UID: \"584c5d07-7ab7-44c4-8ba9-9edf834d4912\") " Feb 03 12:28:26 crc kubenswrapper[4820]: I0203 12:28:26.008545 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g7tgm\" (UniqueName: \"kubernetes.io/projected/584c5d07-7ab7-44c4-8ba9-9edf834d4912-kube-api-access-g7tgm\") pod \"584c5d07-7ab7-44c4-8ba9-9edf834d4912\" (UID: \"584c5d07-7ab7-44c4-8ba9-9edf834d4912\") " Feb 03 12:28:26 crc kubenswrapper[4820]: I0203 12:28:26.009254 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/584c5d07-7ab7-44c4-8ba9-9edf834d4912-dns-svc" (OuterVolumeSpecName: "dns-svc") pod 
"584c5d07-7ab7-44c4-8ba9-9edf834d4912" (UID: "584c5d07-7ab7-44c4-8ba9-9edf834d4912"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:28:26 crc kubenswrapper[4820]: I0203 12:28:26.009834 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/584c5d07-7ab7-44c4-8ba9-9edf834d4912-config" (OuterVolumeSpecName: "config") pod "584c5d07-7ab7-44c4-8ba9-9edf834d4912" (UID: "584c5d07-7ab7-44c4-8ba9-9edf834d4912"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:28:26 crc kubenswrapper[4820]: I0203 12:28:26.012699 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/584c5d07-7ab7-44c4-8ba9-9edf834d4912-kube-api-access-g7tgm" (OuterVolumeSpecName: "kube-api-access-g7tgm") pod "584c5d07-7ab7-44c4-8ba9-9edf834d4912" (UID: "584c5d07-7ab7-44c4-8ba9-9edf834d4912"). InnerVolumeSpecName "kube-api-access-g7tgm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:28:26 crc kubenswrapper[4820]: I0203 12:28:26.134846 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Feb 03 12:28:26 crc kubenswrapper[4820]: I0203 12:28:26.135801 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g7tgm\" (UniqueName: \"kubernetes.io/projected/584c5d07-7ab7-44c4-8ba9-9edf834d4912-kube-api-access-g7tgm\") on node \"crc\" DevicePath \"\"" Feb 03 12:28:26 crc kubenswrapper[4820]: I0203 12:28:26.135842 4820 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/584c5d07-7ab7-44c4-8ba9-9edf834d4912-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 03 12:28:26 crc kubenswrapper[4820]: I0203 12:28:26.135869 4820 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/584c5d07-7ab7-44c4-8ba9-9edf834d4912-config\") on node \"crc\" DevicePath \"\"" Feb 03 12:28:26 crc kubenswrapper[4820]: I0203 12:28:26.145487 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-r25hq" Feb 03 12:28:26 crc kubenswrapper[4820]: I0203 12:28:26.186877 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-rckn6"] Feb 03 12:28:26 crc kubenswrapper[4820]: W0203 12:28:26.192980 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf067d36c_378a_4c3d_8a14_a9a468ff746c.slice/crio-c40757cd6f08530a407aa985de6b6316ed7546d6acd1a377ee46e7c486cc0f63 WatchSource:0}: Error finding container c40757cd6f08530a407aa985de6b6316ed7546d6acd1a377ee46e7c486cc0f63: Status 404 returned error can't find the container with id c40757cd6f08530a407aa985de6b6316ed7546d6acd1a377ee46e7c486cc0f63 Feb 03 12:28:26 crc kubenswrapper[4820]: I0203 12:28:26.496668 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-rckn6" event={"ID":"f067d36c-378a-4c3d-8a14-a9a468ff746c","Type":"ContainerStarted","Data":"c40757cd6f08530a407aa985de6b6316ed7546d6acd1a377ee46e7c486cc0f63"} Feb 03 12:28:26 crc kubenswrapper[4820]: I0203 12:28:26.500465 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-xj7bd" Feb 03 12:28:26 crc kubenswrapper[4820]: I0203 12:28:26.500446 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-xj7bd" event={"ID":"584c5d07-7ab7-44c4-8ba9-9edf834d4912","Type":"ContainerDied","Data":"4e3445c846843aa96c1ea7967c594db586a51ed56232ab408f37df2a860ff5c3"} Feb 03 12:28:26 crc kubenswrapper[4820]: I0203 12:28:26.503306 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"18ae976d-57fb-4c6e-8f3d-af9748d3058a","Type":"ContainerStarted","Data":"c10ede440eb95f68092eed228fe7a5cbe1cfc99cc437c5af3a0964e3f2b6c398"} Feb 03 12:28:26 crc kubenswrapper[4820]: I0203 12:28:26.506573 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"e8e46f8a-5de0-457f-b8eb-f76e8902e8ab","Type":"ContainerStarted","Data":"c59ce139ca9e7ec23efcdff3ee542602e54fb92c3991ccf185c77cc3f134d77a"} Feb 03 12:28:26 crc kubenswrapper[4820]: I0203 12:28:26.508818 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-gthq5" Feb 03 12:28:26 crc kubenswrapper[4820]: I0203 12:28:26.509176 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-gthq5" event={"ID":"5deb308f-ef2d-477f-a5ac-04055ea9b76f","Type":"ContainerDied","Data":"736d9430272ec9b86c239268d47eb9a2e0c201a2d775f1e25b47dc907b1129ec"} Feb 03 12:28:26 crc kubenswrapper[4820]: I0203 12:28:26.612959 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-xj7bd"] Feb 03 12:28:26 crc kubenswrapper[4820]: I0203 12:28:26.625986 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-xj7bd"] Feb 03 12:28:26 crc kubenswrapper[4820]: I0203 12:28:26.866246 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-gthq5"] Feb 03 12:28:26 crc kubenswrapper[4820]: I0203 12:28:26.878113 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-gthq5"] Feb 03 12:28:26 crc kubenswrapper[4820]: I0203 12:28:26.965714 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-r25hq"] Feb 03 12:28:27 crc kubenswrapper[4820]: I0203 12:28:27.160529 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="584c5d07-7ab7-44c4-8ba9-9edf834d4912" path="/var/lib/kubelet/pods/584c5d07-7ab7-44c4-8ba9-9edf834d4912/volumes" Feb 03 12:28:27 crc kubenswrapper[4820]: I0203 12:28:27.161066 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5deb308f-ef2d-477f-a5ac-04055ea9b76f" path="/var/lib/kubelet/pods/5deb308f-ef2d-477f-a5ac-04055ea9b76f/volumes" Feb 03 12:28:27 crc kubenswrapper[4820]: I0203 12:28:27.519413 4820 generic.go:334] "Generic (PLEG): container finished" podID="f067d36c-378a-4c3d-8a14-a9a468ff746c" containerID="f5532add3eb32631c31a9ccdca53d273af892a3dad0b950860f28fa4737fff9d" exitCode=0 Feb 03 12:28:27 crc kubenswrapper[4820]: I0203 12:28:27.519471 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-rckn6" event={"ID":"f067d36c-378a-4c3d-8a14-a9a468ff746c","Type":"ContainerDied","Data":"f5532add3eb32631c31a9ccdca53d273af892a3dad0b950860f28fa4737fff9d"} Feb 03 12:28:27 crc kubenswrapper[4820]: I0203 12:28:27.527036 4820 generic.go:334] "Generic (PLEG): container finished" podID="4f272940-99d0-44a5-b16c-73b2b4f17bba" 
containerID="16969cb76071279f5c431eae7ce9d428b441545817c6ab7b04fc9306bfb48d30" exitCode=0 Feb 03 12:28:27 crc kubenswrapper[4820]: I0203 12:28:27.527471 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-r25hq" event={"ID":"4f272940-99d0-44a5-b16c-73b2b4f17bba","Type":"ContainerDied","Data":"16969cb76071279f5c431eae7ce9d428b441545817c6ab7b04fc9306bfb48d30"} Feb 03 12:28:27 crc kubenswrapper[4820]: I0203 12:28:27.527549 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-r25hq" event={"ID":"4f272940-99d0-44a5-b16c-73b2b4f17bba","Type":"ContainerStarted","Data":"bd6342753f672c2365ee01a2bca696cf9cc221410684669169cd63e2a8ae546e"} Feb 03 12:28:27 crc kubenswrapper[4820]: I0203 12:28:27.532668 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-kk5zn" event={"ID":"7fd50209-6464-4ba1-a7f9-ff9a38317ff2","Type":"ContainerStarted","Data":"3d32f61456e9b7f998d751eae668a7d8477b1032023e611d41d6eacfb8f69a10"} Feb 03 12:28:27 crc kubenswrapper[4820]: I0203 12:28:27.532722 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-kk5zn" event={"ID":"7fd50209-6464-4ba1-a7f9-ff9a38317ff2","Type":"ContainerStarted","Data":"ec734e4fa61fb1d4daafd1fa8a714aaf8c4fcb5e44f865d4eef5ab7ebcabc579"} Feb 03 12:28:27 crc kubenswrapper[4820]: I0203 12:28:27.533176 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-kk5zn" Feb 03 12:28:27 crc kubenswrapper[4820]: I0203 12:28:27.533541 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-kk5zn" Feb 03 12:28:27 crc kubenswrapper[4820]: I0203 12:28:27.535415 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"7b4c739c-d87f-478c-aec7-07a49da53d46","Type":"ContainerStarted","Data":"a08002938c4907859ccd30a635dec360508222e259a56a5925c02b92bb6c2d7e"} Feb 03 12:28:27 crc kubenswrapper[4820]: I0203 12:28:27.740689 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-kk5zn" podStartSLOduration=10.463835122 podStartE2EDuration="47.740668216s" podCreationTimestamp="2026-02-03 12:27:40 +0000 UTC" firstStartedPulling="2026-02-03 12:27:43.526780942 +0000 UTC m=+1381.049856806" lastFinishedPulling="2026-02-03 12:28:20.803614026 +0000 UTC m=+1418.326689900" observedRunningTime="2026-02-03 12:28:27.604676501 +0000 UTC m=+1425.127752365" watchObservedRunningTime="2026-02-03 12:28:27.740668216 +0000 UTC m=+1425.263744080" Feb 03 12:28:28 crc kubenswrapper[4820]: I0203 12:28:28.164177 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Feb 03 12:28:28 crc kubenswrapper[4820]: I0203 12:28:28.287451 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Feb 03 12:28:28 crc kubenswrapper[4820]: I0203 12:28:28.314724 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Feb 03 12:28:28 crc kubenswrapper[4820]: I0203 12:28:28.546257 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"1e865214-494f-4a49-a2e6-2b7316f30a92","Type":"ContainerStarted","Data":"d26c6fef47d1f1618da5645536eb4d82066e2697fa07bc562561421aba7d728f"} Feb 03 12:28:28 crc kubenswrapper[4820]: I0203 12:28:28.547935 4820 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/memcached-0" event={"ID":"ace9a08e-e106-4d85-ae21-3d7d6ea60dff","Type":"ContainerStarted","Data":"b42d7550a2c34e60059d0877397c55b72964f659f3ab19cb0dacdfa7ef80f32b"} Feb 03 12:28:28 crc kubenswrapper[4820]: I0203 12:28:28.548143 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Feb 03 12:28:28 crc kubenswrapper[4820]: I0203 12:28:28.549841 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-rckn6" event={"ID":"f067d36c-378a-4c3d-8a14-a9a468ff746c","Type":"ContainerStarted","Data":"7d05d40a77fc8030e1b912c8626ead74e3c2a739ba852907d998813ba833ba15"} Feb 03 12:28:28 crc kubenswrapper[4820]: I0203 12:28:28.550107 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7fd796d7df-rckn6" Feb 03 12:28:28 crc kubenswrapper[4820]: I0203 12:28:28.551610 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-r25hq" event={"ID":"4f272940-99d0-44a5-b16c-73b2b4f17bba","Type":"ContainerStarted","Data":"79f1610b2a1f0aff89134774b059753aa69330068731a2fb3f06b79c922fa21d"} Feb 03 12:28:28 crc kubenswrapper[4820]: I0203 12:28:28.610856 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7fd796d7df-rckn6" podStartSLOduration=3.8205190399999998 podStartE2EDuration="4.61083507s" podCreationTimestamp="2026-02-03 12:28:24 +0000 UTC" firstStartedPulling="2026-02-03 12:28:26.195678923 +0000 UTC m=+1423.718754787" lastFinishedPulling="2026-02-03 12:28:26.985994953 +0000 UTC m=+1424.509070817" observedRunningTime="2026-02-03 12:28:28.595999655 +0000 UTC m=+1426.119075529" watchObservedRunningTime="2026-02-03 12:28:28.61083507 +0000 UTC m=+1426.133910934" Feb 03 12:28:28 crc kubenswrapper[4820]: I0203 12:28:28.627314 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-86db49b7ff-r25hq" podStartSLOduration=3.627293217 podStartE2EDuration="3.627293217s" podCreationTimestamp="2026-02-03 12:28:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:28:28.621945966 +0000 UTC m=+1426.145021840" watchObservedRunningTime="2026-02-03 12:28:28.627293217 +0000 UTC m=+1426.150369081" Feb 03 12:28:28 crc kubenswrapper[4820]: I0203 12:28:28.647441 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=3.252207763 podStartE2EDuration="52.647422033s" podCreationTimestamp="2026-02-03 12:27:36 +0000 UTC" firstStartedPulling="2026-02-03 12:27:38.324744476 +0000 UTC m=+1375.847820340" lastFinishedPulling="2026-02-03 12:28:27.719958746 +0000 UTC m=+1425.243034610" observedRunningTime="2026-02-03 12:28:28.644391612 +0000 UTC m=+1426.167467496" watchObservedRunningTime="2026-02-03 12:28:28.647422033 +0000 UTC m=+1426.170497897" Feb 03 12:28:29 crc kubenswrapper[4820]: I0203 12:28:29.560523 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-86db49b7ff-r25hq" Feb 03 12:28:30 crc kubenswrapper[4820]: I0203 12:28:30.569446 4820 generic.go:334] "Generic (PLEG): container finished" podID="e8e46f8a-5de0-457f-b8eb-f76e8902e8ab" containerID="c59ce139ca9e7ec23efcdff3ee542602e54fb92c3991ccf185c77cc3f134d77a" exitCode=0 Feb 03 12:28:30 crc kubenswrapper[4820]: I0203 12:28:30.569532 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" 
event={"ID":"e8e46f8a-5de0-457f-b8eb-f76e8902e8ab","Type":"ContainerDied","Data":"c59ce139ca9e7ec23efcdff3ee542602e54fb92c3991ccf185c77cc3f134d77a"} Feb 03 12:28:31 crc kubenswrapper[4820]: I0203 12:28:31.156683 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Feb 03 12:28:31 crc kubenswrapper[4820]: I0203 12:28:31.580285 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"e8e46f8a-5de0-457f-b8eb-f76e8902e8ab","Type":"ContainerStarted","Data":"ef40e7654c0e84871035f35596e67b131b368a726079c5a20d84d6700f4008cd"} Feb 03 12:28:31 crc kubenswrapper[4820]: I0203 12:28:31.611054 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=10.115516338 podStartE2EDuration="58.611029179s" podCreationTimestamp="2026-02-03 12:27:33 +0000 UTC" firstStartedPulling="2026-02-03 12:27:37.427962115 +0000 UTC m=+1374.951037979" lastFinishedPulling="2026-02-03 12:28:25.923474956 +0000 UTC m=+1423.446550820" observedRunningTime="2026-02-03 12:28:31.603565511 +0000 UTC m=+1429.126641385" watchObservedRunningTime="2026-02-03 12:28:31.611029179 +0000 UTC m=+1429.134105043" Feb 03 12:28:32 crc kubenswrapper[4820]: I0203 12:28:32.462847 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Feb 03 12:28:32 crc kubenswrapper[4820]: I0203 12:28:32.592624 4820 generic.go:334] "Generic (PLEG): container finished" podID="1e865214-494f-4a49-a2e6-2b7316f30a92" containerID="d26c6fef47d1f1618da5645536eb4d82066e2697fa07bc562561421aba7d728f" exitCode=0 Feb 03 12:28:32 crc kubenswrapper[4820]: I0203 12:28:32.592678 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"1e865214-494f-4a49-a2e6-2b7316f30a92","Type":"ContainerDied","Data":"d26c6fef47d1f1618da5645536eb4d82066e2697fa07bc562561421aba7d728f"} Feb 03 12:28:33 crc kubenswrapper[4820]: I0203 12:28:33.201809 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Feb 03 12:28:33 crc kubenswrapper[4820]: I0203 12:28:33.623212 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"1e865214-494f-4a49-a2e6-2b7316f30a92","Type":"ContainerStarted","Data":"ed4c51acfd11c55cb2b249fb13d2ce0ed2fa6bc7ddd29fe58a2dd857da40adcb"} Feb 03 12:28:33 crc kubenswrapper[4820]: I0203 12:28:33.672641 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=-9223371977.182156 podStartE2EDuration="59.672619855s" podCreationTimestamp="2026-02-03 12:27:34 +0000 UTC" firstStartedPulling="2026-02-03 12:27:38.175239012 +0000 UTC m=+1375.698314876" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:28:33.668434062 +0000 UTC m=+1431.191509956" watchObservedRunningTime="2026-02-03 12:28:33.672619855 +0000 UTC m=+1431.195695719" Feb 03 12:28:33 crc kubenswrapper[4820]: I0203 12:28:33.797339 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Feb 03 12:28:33 crc kubenswrapper[4820]: I0203 12:28:33.799044 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Feb 03 12:28:33 crc kubenswrapper[4820]: I0203 12:28:33.801864 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-zbhp5" Feb 03 12:28:33 crc kubenswrapper[4820]: I0203 12:28:33.802045 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Feb 03 12:28:33 crc kubenswrapper[4820]: I0203 12:28:33.803874 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Feb 03 12:28:33 crc kubenswrapper[4820]: I0203 12:28:33.806162 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Feb 03 12:28:33 crc kubenswrapper[4820]: I0203 12:28:33.817970 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Feb 03 12:28:33 crc kubenswrapper[4820]: I0203 12:28:33.983298 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/d248d6d6-d6ff-415a-9ea6-d65cde5ad964-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"d248d6d6-d6ff-415a-9ea6-d65cde5ad964\") " pod="openstack/ovn-northd-0" Feb 03 12:28:33 crc kubenswrapper[4820]: I0203 12:28:33.983347 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d248d6d6-d6ff-415a-9ea6-d65cde5ad964-config\") pod \"ovn-northd-0\" (UID: \"d248d6d6-d6ff-415a-9ea6-d65cde5ad964\") " pod="openstack/ovn-northd-0" Feb 03 12:28:33 crc kubenswrapper[4820]: I0203 12:28:33.983549 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d248d6d6-d6ff-415a-9ea6-d65cde5ad964-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"d248d6d6-d6ff-415a-9ea6-d65cde5ad964\") " pod="openstack/ovn-northd-0" Feb 03 12:28:33 crc kubenswrapper[4820]: I0203 12:28:33.983634 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/d248d6d6-d6ff-415a-9ea6-d65cde5ad964-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"d248d6d6-d6ff-415a-9ea6-d65cde5ad964\") " pod="openstack/ovn-northd-0" Feb 03 12:28:33 crc kubenswrapper[4820]: I0203 12:28:33.983726 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/d248d6d6-d6ff-415a-9ea6-d65cde5ad964-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"d248d6d6-d6ff-415a-9ea6-d65cde5ad964\") " pod="openstack/ovn-northd-0" Feb 03 12:28:33 crc kubenswrapper[4820]: I0203 12:28:33.983816 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d248d6d6-d6ff-415a-9ea6-d65cde5ad964-scripts\") pod \"ovn-northd-0\" (UID: \"d248d6d6-d6ff-415a-9ea6-d65cde5ad964\") " pod="openstack/ovn-northd-0" Feb 03 12:28:33 crc kubenswrapper[4820]: I0203 12:28:33.983905 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-phb2q\" (UniqueName: \"kubernetes.io/projected/d248d6d6-d6ff-415a-9ea6-d65cde5ad964-kube-api-access-phb2q\") pod \"ovn-northd-0\" (UID: \"d248d6d6-d6ff-415a-9ea6-d65cde5ad964\") " pod="openstack/ovn-northd-0" Feb 03 12:28:34 crc kubenswrapper[4820]: 
I0203 12:28:34.094746 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/d248d6d6-d6ff-415a-9ea6-d65cde5ad964-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"d248d6d6-d6ff-415a-9ea6-d65cde5ad964\") " pod="openstack/ovn-northd-0" Feb 03 12:28:34 crc kubenswrapper[4820]: I0203 12:28:34.094819 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d248d6d6-d6ff-415a-9ea6-d65cde5ad964-scripts\") pod \"ovn-northd-0\" (UID: \"d248d6d6-d6ff-415a-9ea6-d65cde5ad964\") " pod="openstack/ovn-northd-0" Feb 03 12:28:34 crc kubenswrapper[4820]: I0203 12:28:34.094852 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-phb2q\" (UniqueName: \"kubernetes.io/projected/d248d6d6-d6ff-415a-9ea6-d65cde5ad964-kube-api-access-phb2q\") pod \"ovn-northd-0\" (UID: \"d248d6d6-d6ff-415a-9ea6-d65cde5ad964\") " pod="openstack/ovn-northd-0" Feb 03 12:28:34 crc kubenswrapper[4820]: I0203 12:28:34.094973 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/d248d6d6-d6ff-415a-9ea6-d65cde5ad964-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"d248d6d6-d6ff-415a-9ea6-d65cde5ad964\") " pod="openstack/ovn-northd-0" Feb 03 12:28:34 crc kubenswrapper[4820]: I0203 12:28:34.095000 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d248d6d6-d6ff-415a-9ea6-d65cde5ad964-config\") pod \"ovn-northd-0\" (UID: \"d248d6d6-d6ff-415a-9ea6-d65cde5ad964\") " pod="openstack/ovn-northd-0" Feb 03 12:28:34 crc kubenswrapper[4820]: I0203 12:28:34.095068 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d248d6d6-d6ff-415a-9ea6-d65cde5ad964-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"d248d6d6-d6ff-415a-9ea6-d65cde5ad964\") " pod="openstack/ovn-northd-0" Feb 03 12:28:34 crc kubenswrapper[4820]: I0203 12:28:34.095103 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/d248d6d6-d6ff-415a-9ea6-d65cde5ad964-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"d248d6d6-d6ff-415a-9ea6-d65cde5ad964\") " pod="openstack/ovn-northd-0" Feb 03 12:28:34 crc kubenswrapper[4820]: I0203 12:28:34.095781 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/d248d6d6-d6ff-415a-9ea6-d65cde5ad964-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"d248d6d6-d6ff-415a-9ea6-d65cde5ad964\") " pod="openstack/ovn-northd-0" Feb 03 12:28:34 crc kubenswrapper[4820]: I0203 12:28:34.096407 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d248d6d6-d6ff-415a-9ea6-d65cde5ad964-config\") pod \"ovn-northd-0\" (UID: \"d248d6d6-d6ff-415a-9ea6-d65cde5ad964\") " pod="openstack/ovn-northd-0" Feb 03 12:28:34 crc kubenswrapper[4820]: I0203 12:28:34.096675 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d248d6d6-d6ff-415a-9ea6-d65cde5ad964-scripts\") pod \"ovn-northd-0\" (UID: \"d248d6d6-d6ff-415a-9ea6-d65cde5ad964\") " pod="openstack/ovn-northd-0" Feb 03 12:28:34 crc kubenswrapper[4820]: I0203 12:28:34.103934 4820 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d248d6d6-d6ff-415a-9ea6-d65cde5ad964-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"d248d6d6-d6ff-415a-9ea6-d65cde5ad964\") " pod="openstack/ovn-northd-0" Feb 03 12:28:34 crc kubenswrapper[4820]: I0203 12:28:34.104218 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/d248d6d6-d6ff-415a-9ea6-d65cde5ad964-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"d248d6d6-d6ff-415a-9ea6-d65cde5ad964\") " pod="openstack/ovn-northd-0" Feb 03 12:28:34 crc kubenswrapper[4820]: I0203 12:28:34.107804 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/d248d6d6-d6ff-415a-9ea6-d65cde5ad964-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"d248d6d6-d6ff-415a-9ea6-d65cde5ad964\") " pod="openstack/ovn-northd-0" Feb 03 12:28:34 crc kubenswrapper[4820]: I0203 12:28:34.116159 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-phb2q\" (UniqueName: \"kubernetes.io/projected/d248d6d6-d6ff-415a-9ea6-d65cde5ad964-kube-api-access-phb2q\") pod \"ovn-northd-0\" (UID: \"d248d6d6-d6ff-415a-9ea6-d65cde5ad964\") " pod="openstack/ovn-northd-0" Feb 03 12:28:34 crc kubenswrapper[4820]: I0203 12:28:34.118438 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Feb 03 12:28:34 crc kubenswrapper[4820]: I0203 12:28:34.633717 4820 generic.go:334] "Generic (PLEG): container finished" podID="7b4c739c-d87f-478c-aec7-07a49da53d46" containerID="a08002938c4907859ccd30a635dec360508222e259a56a5925c02b92bb6c2d7e" exitCode=0 Feb 03 12:28:34 crc kubenswrapper[4820]: I0203 12:28:34.634030 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"7b4c739c-d87f-478c-aec7-07a49da53d46","Type":"ContainerDied","Data":"a08002938c4907859ccd30a635dec360508222e259a56a5925c02b92bb6c2d7e"} Feb 03 12:28:34 crc kubenswrapper[4820]: W0203 12:28:34.649553 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd248d6d6_d6ff_415a_9ea6_d65cde5ad964.slice/crio-2b6dbf494cb760f9e76c1bf19c1ce90cac489bfe90d0933be5feba11cbbf60d6 WatchSource:0}: Error finding container 2b6dbf494cb760f9e76c1bf19c1ce90cac489bfe90d0933be5feba11cbbf60d6: Status 404 returned error can't find the container with id 2b6dbf494cb760f9e76c1bf19c1ce90cac489bfe90d0933be5feba11cbbf60d6 Feb 03 12:28:34 crc kubenswrapper[4820]: I0203 12:28:34.665422 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Feb 03 12:28:35 crc kubenswrapper[4820]: I0203 12:28:35.349378 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Feb 03 12:28:35 crc kubenswrapper[4820]: I0203 12:28:35.350439 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Feb 03 12:28:35 crc kubenswrapper[4820]: I0203 12:28:35.635424 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7fd796d7df-rckn6" Feb 03 12:28:35 crc kubenswrapper[4820]: I0203 12:28:35.674724 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" 
event={"ID":"d248d6d6-d6ff-415a-9ea6-d65cde5ad964","Type":"ContainerStarted","Data":"2b6dbf494cb760f9e76c1bf19c1ce90cac489bfe90d0933be5feba11cbbf60d6"} Feb 03 12:28:35 crc kubenswrapper[4820]: I0203 12:28:35.889155 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-96p5d" event={"ID":"b3b01895-53e1-4391-8d1e-8f2458d4f2e0","Type":"ContainerStarted","Data":"dca677cfc316c22dec81ca0f390f2bb780140e10e455a7c7c5383e4fa0324695"} Feb 03 12:28:35 crc kubenswrapper[4820]: I0203 12:28:35.889394 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-96p5d" Feb 03 12:28:35 crc kubenswrapper[4820]: I0203 12:28:35.928464 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-96p5d" podStartSLOduration=3.575833662 podStartE2EDuration="55.928437851s" podCreationTimestamp="2026-02-03 12:27:40 +0000 UTC" firstStartedPulling="2026-02-03 12:27:42.368490269 +0000 UTC m=+1379.891566133" lastFinishedPulling="2026-02-03 12:28:34.721094458 +0000 UTC m=+1432.244170322" observedRunningTime="2026-02-03 12:28:35.918130013 +0000 UTC m=+1433.441205877" watchObservedRunningTime="2026-02-03 12:28:35.928437851 +0000 UTC m=+1433.451513715" Feb 03 12:28:36 crc kubenswrapper[4820]: I0203 12:28:36.147080 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-86db49b7ff-r25hq" Feb 03 12:28:36 crc kubenswrapper[4820]: I0203 12:28:36.203028 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-rckn6"] Feb 03 12:28:36 crc kubenswrapper[4820]: I0203 12:28:36.982273 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"d248d6d6-d6ff-415a-9ea6-d65cde5ad964","Type":"ContainerStarted","Data":"a051b647650aebcfe0cdb1e5d2537aee29cb6c02714760af11794bad542347e9"} Feb 03 12:28:36 crc kubenswrapper[4820]: I0203 12:28:36.982854 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"d248d6d6-d6ff-415a-9ea6-d65cde5ad964","Type":"ContainerStarted","Data":"9fd90e2482c30259ab2166a661df496c99004e10af70e88fdafd5183c29570f3"} Feb 03 12:28:36 crc kubenswrapper[4820]: I0203 12:28:36.982876 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Feb 03 12:28:36 crc kubenswrapper[4820]: I0203 12:28:36.982648 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7fd796d7df-rckn6" podUID="f067d36c-378a-4c3d-8a14-a9a468ff746c" containerName="dnsmasq-dns" containerID="cri-o://7d05d40a77fc8030e1b912c8626ead74e3c2a739ba852907d998813ba833ba15" gracePeriod=10 Feb 03 12:28:37 crc kubenswrapper[4820]: I0203 12:28:37.006174 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.454083103 podStartE2EDuration="4.006150944s" podCreationTimestamp="2026-02-03 12:28:33 +0000 UTC" firstStartedPulling="2026-02-03 12:28:34.656503904 +0000 UTC m=+1432.179579768" lastFinishedPulling="2026-02-03 12:28:36.208571745 +0000 UTC m=+1433.731647609" observedRunningTime="2026-02-03 12:28:37.002252118 +0000 UTC m=+1434.525328002" watchObservedRunningTime="2026-02-03 12:28:37.006150944 +0000 UTC m=+1434.529226808" Feb 03 12:28:37 crc kubenswrapper[4820]: I0203 12:28:37.332340 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Feb 03 12:28:37 crc kubenswrapper[4820]: I0203 
12:28:37.332445 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Feb 03 12:28:37 crc kubenswrapper[4820]: I0203 12:28:37.915184 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-rckn6" Feb 03 12:28:37 crc kubenswrapper[4820]: I0203 12:28:37.993548 4820 generic.go:334] "Generic (PLEG): container finished" podID="f067d36c-378a-4c3d-8a14-a9a468ff746c" containerID="7d05d40a77fc8030e1b912c8626ead74e3c2a739ba852907d998813ba833ba15" exitCode=0 Feb 03 12:28:37 crc kubenswrapper[4820]: I0203 12:28:37.993625 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-rckn6" Feb 03 12:28:37 crc kubenswrapper[4820]: I0203 12:28:37.995472 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-rckn6" event={"ID":"f067d36c-378a-4c3d-8a14-a9a468ff746c","Type":"ContainerDied","Data":"7d05d40a77fc8030e1b912c8626ead74e3c2a739ba852907d998813ba833ba15"} Feb 03 12:28:37 crc kubenswrapper[4820]: I0203 12:28:37.995603 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-rckn6" event={"ID":"f067d36c-378a-4c3d-8a14-a9a468ff746c","Type":"ContainerDied","Data":"c40757cd6f08530a407aa985de6b6316ed7546d6acd1a377ee46e7c486cc0f63"} Feb 03 12:28:37 crc kubenswrapper[4820]: I0203 12:28:37.995763 4820 scope.go:117] "RemoveContainer" containerID="7d05d40a77fc8030e1b912c8626ead74e3c2a739ba852907d998813ba833ba15" Feb 03 12:28:38 crc kubenswrapper[4820]: I0203 12:28:38.019221 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f067d36c-378a-4c3d-8a14-a9a468ff746c-config\") pod \"f067d36c-378a-4c3d-8a14-a9a468ff746c\" (UID: \"f067d36c-378a-4c3d-8a14-a9a468ff746c\") " Feb 03 12:28:38 crc kubenswrapper[4820]: I0203 12:28:38.019498 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sflxg\" (UniqueName: \"kubernetes.io/projected/f067d36c-378a-4c3d-8a14-a9a468ff746c-kube-api-access-sflxg\") pod \"f067d36c-378a-4c3d-8a14-a9a468ff746c\" (UID: \"f067d36c-378a-4c3d-8a14-a9a468ff746c\") " Feb 03 12:28:38 crc kubenswrapper[4820]: I0203 12:28:38.019525 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f067d36c-378a-4c3d-8a14-a9a468ff746c-dns-svc\") pod \"f067d36c-378a-4c3d-8a14-a9a468ff746c\" (UID: \"f067d36c-378a-4c3d-8a14-a9a468ff746c\") " Feb 03 12:28:38 crc kubenswrapper[4820]: I0203 12:28:38.019579 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f067d36c-378a-4c3d-8a14-a9a468ff746c-ovsdbserver-nb\") pod \"f067d36c-378a-4c3d-8a14-a9a468ff746c\" (UID: \"f067d36c-378a-4c3d-8a14-a9a468ff746c\") " Feb 03 12:28:38 crc kubenswrapper[4820]: I0203 12:28:38.025420 4820 scope.go:117] "RemoveContainer" containerID="f5532add3eb32631c31a9ccdca53d273af892a3dad0b950860f28fa4737fff9d" Feb 03 12:28:38 crc kubenswrapper[4820]: I0203 12:28:38.047065 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f067d36c-378a-4c3d-8a14-a9a468ff746c-kube-api-access-sflxg" (OuterVolumeSpecName: "kube-api-access-sflxg") pod "f067d36c-378a-4c3d-8a14-a9a468ff746c" (UID: "f067d36c-378a-4c3d-8a14-a9a468ff746c"). InnerVolumeSpecName "kube-api-access-sflxg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:28:38 crc kubenswrapper[4820]: I0203 12:28:38.062287 4820 scope.go:117] "RemoveContainer" containerID="7d05d40a77fc8030e1b912c8626ead74e3c2a739ba852907d998813ba833ba15" Feb 03 12:28:38 crc kubenswrapper[4820]: E0203 12:28:38.062792 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7d05d40a77fc8030e1b912c8626ead74e3c2a739ba852907d998813ba833ba15\": container with ID starting with 7d05d40a77fc8030e1b912c8626ead74e3c2a739ba852907d998813ba833ba15 not found: ID does not exist" containerID="7d05d40a77fc8030e1b912c8626ead74e3c2a739ba852907d998813ba833ba15" Feb 03 12:28:38 crc kubenswrapper[4820]: I0203 12:28:38.062848 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d05d40a77fc8030e1b912c8626ead74e3c2a739ba852907d998813ba833ba15"} err="failed to get container status \"7d05d40a77fc8030e1b912c8626ead74e3c2a739ba852907d998813ba833ba15\": rpc error: code = NotFound desc = could not find container \"7d05d40a77fc8030e1b912c8626ead74e3c2a739ba852907d998813ba833ba15\": container with ID starting with 7d05d40a77fc8030e1b912c8626ead74e3c2a739ba852907d998813ba833ba15 not found: ID does not exist" Feb 03 12:28:38 crc kubenswrapper[4820]: I0203 12:28:38.062878 4820 scope.go:117] "RemoveContainer" containerID="f5532add3eb32631c31a9ccdca53d273af892a3dad0b950860f28fa4737fff9d" Feb 03 12:28:38 crc kubenswrapper[4820]: E0203 12:28:38.063301 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f5532add3eb32631c31a9ccdca53d273af892a3dad0b950860f28fa4737fff9d\": container with ID starting with f5532add3eb32631c31a9ccdca53d273af892a3dad0b950860f28fa4737fff9d not found: ID does not exist" containerID="f5532add3eb32631c31a9ccdca53d273af892a3dad0b950860f28fa4737fff9d" Feb 03 12:28:38 crc kubenswrapper[4820]: I0203 12:28:38.063591 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f5532add3eb32631c31a9ccdca53d273af892a3dad0b950860f28fa4737fff9d"} err="failed to get container status \"f5532add3eb32631c31a9ccdca53d273af892a3dad0b950860f28fa4737fff9d\": rpc error: code = NotFound desc = could not find container \"f5532add3eb32631c31a9ccdca53d273af892a3dad0b950860f28fa4737fff9d\": container with ID starting with f5532add3eb32631c31a9ccdca53d273af892a3dad0b950860f28fa4737fff9d not found: ID does not exist" Feb 03 12:28:38 crc kubenswrapper[4820]: I0203 12:28:38.073782 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f067d36c-378a-4c3d-8a14-a9a468ff746c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f067d36c-378a-4c3d-8a14-a9a468ff746c" (UID: "f067d36c-378a-4c3d-8a14-a9a468ff746c"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:28:38 crc kubenswrapper[4820]: I0203 12:28:38.077809 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f067d36c-378a-4c3d-8a14-a9a468ff746c-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f067d36c-378a-4c3d-8a14-a9a468ff746c" (UID: "f067d36c-378a-4c3d-8a14-a9a468ff746c"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:28:38 crc kubenswrapper[4820]: I0203 12:28:38.098016 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f067d36c-378a-4c3d-8a14-a9a468ff746c-config" (OuterVolumeSpecName: "config") pod "f067d36c-378a-4c3d-8a14-a9a468ff746c" (UID: "f067d36c-378a-4c3d-8a14-a9a468ff746c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:28:38 crc kubenswrapper[4820]: I0203 12:28:38.124183 4820 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f067d36c-378a-4c3d-8a14-a9a468ff746c-config\") on node \"crc\" DevicePath \"\"" Feb 03 12:28:38 crc kubenswrapper[4820]: I0203 12:28:38.124228 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sflxg\" (UniqueName: \"kubernetes.io/projected/f067d36c-378a-4c3d-8a14-a9a468ff746c-kube-api-access-sflxg\") on node \"crc\" DevicePath \"\"" Feb 03 12:28:38 crc kubenswrapper[4820]: I0203 12:28:38.124241 4820 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f067d36c-378a-4c3d-8a14-a9a468ff746c-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 03 12:28:38 crc kubenswrapper[4820]: I0203 12:28:38.124254 4820 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f067d36c-378a-4c3d-8a14-a9a468ff746c-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 03 12:28:38 crc kubenswrapper[4820]: I0203 12:28:38.651669 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-rckn6"] Feb 03 12:28:38 crc kubenswrapper[4820]: I0203 12:28:38.659433 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-rckn6"] Feb 03 12:28:39 crc kubenswrapper[4820]: I0203 12:28:39.194978 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f067d36c-378a-4c3d-8a14-a9a468ff746c" path="/var/lib/kubelet/pods/f067d36c-378a-4c3d-8a14-a9a468ff746c/volumes" Feb 03 12:28:39 crc kubenswrapper[4820]: I0203 12:28:39.787317 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Feb 03 12:28:39 crc kubenswrapper[4820]: I0203 12:28:39.909652 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Feb 03 12:28:40 crc kubenswrapper[4820]: I0203 12:28:40.246431 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-698758b865-zbjrj"] Feb 03 12:28:40 crc kubenswrapper[4820]: E0203 12:28:40.247441 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f067d36c-378a-4c3d-8a14-a9a468ff746c" containerName="init" Feb 03 12:28:40 crc kubenswrapper[4820]: I0203 12:28:40.247468 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="f067d36c-378a-4c3d-8a14-a9a468ff746c" containerName="init" Feb 03 12:28:40 crc kubenswrapper[4820]: E0203 12:28:40.247487 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f067d36c-378a-4c3d-8a14-a9a468ff746c" containerName="dnsmasq-dns" Feb 03 12:28:40 crc kubenswrapper[4820]: I0203 12:28:40.247494 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="f067d36c-378a-4c3d-8a14-a9a468ff746c" containerName="dnsmasq-dns" Feb 03 12:28:40 crc kubenswrapper[4820]: I0203 12:28:40.247690 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="f067d36c-378a-4c3d-8a14-a9a468ff746c" 
containerName="dnsmasq-dns" Feb 03 12:28:40 crc kubenswrapper[4820]: I0203 12:28:40.248726 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-zbjrj" Feb 03 12:28:40 crc kubenswrapper[4820]: I0203 12:28:40.265628 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-zbjrj"] Feb 03 12:28:40 crc kubenswrapper[4820]: I0203 12:28:40.328411 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-db-create-6fs49"] Feb 03 12:28:40 crc kubenswrapper[4820]: I0203 12:28:40.364498 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-create-6fs49" Feb 03 12:28:40 crc kubenswrapper[4820]: I0203 12:28:40.365105 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-49a9-account-create-update-fpqm2"] Feb 03 12:28:40 crc kubenswrapper[4820]: I0203 12:28:40.366695 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-49a9-account-create-update-fpqm2" Feb 03 12:28:40 crc kubenswrapper[4820]: I0203 12:28:40.369012 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-db-secret" Feb 03 12:28:40 crc kubenswrapper[4820]: I0203 12:28:40.388745 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-db-create-6fs49"] Feb 03 12:28:40 crc kubenswrapper[4820]: I0203 12:28:40.396454 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/60e67a4a-d840-4bc2-9f74-4a5fbb36a829-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-zbjrj\" (UID: \"60e67a4a-d840-4bc2-9f74-4a5fbb36a829\") " pod="openstack/dnsmasq-dns-698758b865-zbjrj" Feb 03 12:28:40 crc kubenswrapper[4820]: I0203 12:28:40.396827 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/60e67a4a-d840-4bc2-9f74-4a5fbb36a829-config\") pod \"dnsmasq-dns-698758b865-zbjrj\" (UID: \"60e67a4a-d840-4bc2-9f74-4a5fbb36a829\") " pod="openstack/dnsmasq-dns-698758b865-zbjrj" Feb 03 12:28:40 crc kubenswrapper[4820]: I0203 12:28:40.396984 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/60e67a4a-d840-4bc2-9f74-4a5fbb36a829-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-zbjrj\" (UID: \"60e67a4a-d840-4bc2-9f74-4a5fbb36a829\") " pod="openstack/dnsmasq-dns-698758b865-zbjrj" Feb 03 12:28:40 crc kubenswrapper[4820]: I0203 12:28:40.397189 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/60e67a4a-d840-4bc2-9f74-4a5fbb36a829-dns-svc\") pod \"dnsmasq-dns-698758b865-zbjrj\" (UID: \"60e67a4a-d840-4bc2-9f74-4a5fbb36a829\") " pod="openstack/dnsmasq-dns-698758b865-zbjrj" Feb 03 12:28:40 crc kubenswrapper[4820]: I0203 12:28:40.397326 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x8hx8\" (UniqueName: \"kubernetes.io/projected/60e67a4a-d840-4bc2-9f74-4a5fbb36a829-kube-api-access-x8hx8\") pod \"dnsmasq-dns-698758b865-zbjrj\" (UID: \"60e67a4a-d840-4bc2-9f74-4a5fbb36a829\") " pod="openstack/dnsmasq-dns-698758b865-zbjrj" Feb 03 12:28:40 crc kubenswrapper[4820]: I0203 12:28:40.402318 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/watcher-49a9-account-create-update-fpqm2"] Feb 03 12:28:40 crc kubenswrapper[4820]: I0203 12:28:40.499364 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/60e67a4a-d840-4bc2-9f74-4a5fbb36a829-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-zbjrj\" (UID: \"60e67a4a-d840-4bc2-9f74-4a5fbb36a829\") " pod="openstack/dnsmasq-dns-698758b865-zbjrj" Feb 03 12:28:40 crc kubenswrapper[4820]: I0203 12:28:40.499432 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/60e67a4a-d840-4bc2-9f74-4a5fbb36a829-config\") pod \"dnsmasq-dns-698758b865-zbjrj\" (UID: \"60e67a4a-d840-4bc2-9f74-4a5fbb36a829\") " pod="openstack/dnsmasq-dns-698758b865-zbjrj" Feb 03 12:28:40 crc kubenswrapper[4820]: I0203 12:28:40.499474 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/60e67a4a-d840-4bc2-9f74-4a5fbb36a829-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-zbjrj\" (UID: \"60e67a4a-d840-4bc2-9f74-4a5fbb36a829\") " pod="openstack/dnsmasq-dns-698758b865-zbjrj" Feb 03 12:28:40 crc kubenswrapper[4820]: I0203 12:28:40.499502 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-msqn5\" (UniqueName: \"kubernetes.io/projected/e928b295-8806-4a21-aaf7-d59749562244-kube-api-access-msqn5\") pod \"watcher-db-create-6fs49\" (UID: \"e928b295-8806-4a21-aaf7-d59749562244\") " pod="openstack/watcher-db-create-6fs49" Feb 03 12:28:40 crc kubenswrapper[4820]: I0203 12:28:40.499566 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/60e67a4a-d840-4bc2-9f74-4a5fbb36a829-dns-svc\") pod \"dnsmasq-dns-698758b865-zbjrj\" (UID: \"60e67a4a-d840-4bc2-9f74-4a5fbb36a829\") " pod="openstack/dnsmasq-dns-698758b865-zbjrj" Feb 03 12:28:40 crc kubenswrapper[4820]: I0203 12:28:40.499599 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x8hx8\" (UniqueName: \"kubernetes.io/projected/60e67a4a-d840-4bc2-9f74-4a5fbb36a829-kube-api-access-x8hx8\") pod \"dnsmasq-dns-698758b865-zbjrj\" (UID: \"60e67a4a-d840-4bc2-9f74-4a5fbb36a829\") " pod="openstack/dnsmasq-dns-698758b865-zbjrj" Feb 03 12:28:40 crc kubenswrapper[4820]: I0203 12:28:40.499638 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e928b295-8806-4a21-aaf7-d59749562244-operator-scripts\") pod \"watcher-db-create-6fs49\" (UID: \"e928b295-8806-4a21-aaf7-d59749562244\") " pod="openstack/watcher-db-create-6fs49" Feb 03 12:28:40 crc kubenswrapper[4820]: I0203 12:28:40.499702 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bjgrs\" (UniqueName: \"kubernetes.io/projected/eb6cc3c6-65e8-4def-9490-6d1a4a5f13eb-kube-api-access-bjgrs\") pod \"watcher-49a9-account-create-update-fpqm2\" (UID: \"eb6cc3c6-65e8-4def-9490-6d1a4a5f13eb\") " pod="openstack/watcher-49a9-account-create-update-fpqm2" Feb 03 12:28:40 crc kubenswrapper[4820]: I0203 12:28:40.499749 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eb6cc3c6-65e8-4def-9490-6d1a4a5f13eb-operator-scripts\") pod 
\"watcher-49a9-account-create-update-fpqm2\" (UID: \"eb6cc3c6-65e8-4def-9490-6d1a4a5f13eb\") " pod="openstack/watcher-49a9-account-create-update-fpqm2" Feb 03 12:28:40 crc kubenswrapper[4820]: I0203 12:28:40.500873 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/60e67a4a-d840-4bc2-9f74-4a5fbb36a829-dns-svc\") pod \"dnsmasq-dns-698758b865-zbjrj\" (UID: \"60e67a4a-d840-4bc2-9f74-4a5fbb36a829\") " pod="openstack/dnsmasq-dns-698758b865-zbjrj" Feb 03 12:28:40 crc kubenswrapper[4820]: I0203 12:28:40.500878 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/60e67a4a-d840-4bc2-9f74-4a5fbb36a829-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-zbjrj\" (UID: \"60e67a4a-d840-4bc2-9f74-4a5fbb36a829\") " pod="openstack/dnsmasq-dns-698758b865-zbjrj" Feb 03 12:28:40 crc kubenswrapper[4820]: I0203 12:28:40.501558 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/60e67a4a-d840-4bc2-9f74-4a5fbb36a829-config\") pod \"dnsmasq-dns-698758b865-zbjrj\" (UID: \"60e67a4a-d840-4bc2-9f74-4a5fbb36a829\") " pod="openstack/dnsmasq-dns-698758b865-zbjrj" Feb 03 12:28:40 crc kubenswrapper[4820]: I0203 12:28:40.501838 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/60e67a4a-d840-4bc2-9f74-4a5fbb36a829-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-zbjrj\" (UID: \"60e67a4a-d840-4bc2-9f74-4a5fbb36a829\") " pod="openstack/dnsmasq-dns-698758b865-zbjrj" Feb 03 12:28:40 crc kubenswrapper[4820]: I0203 12:28:40.533731 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x8hx8\" (UniqueName: \"kubernetes.io/projected/60e67a4a-d840-4bc2-9f74-4a5fbb36a829-kube-api-access-x8hx8\") pod \"dnsmasq-dns-698758b865-zbjrj\" (UID: \"60e67a4a-d840-4bc2-9f74-4a5fbb36a829\") " pod="openstack/dnsmasq-dns-698758b865-zbjrj" Feb 03 12:28:40 crc kubenswrapper[4820]: I0203 12:28:40.605326 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e928b295-8806-4a21-aaf7-d59749562244-operator-scripts\") pod \"watcher-db-create-6fs49\" (UID: \"e928b295-8806-4a21-aaf7-d59749562244\") " pod="openstack/watcher-db-create-6fs49" Feb 03 12:28:40 crc kubenswrapper[4820]: I0203 12:28:40.605452 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bjgrs\" (UniqueName: \"kubernetes.io/projected/eb6cc3c6-65e8-4def-9490-6d1a4a5f13eb-kube-api-access-bjgrs\") pod \"watcher-49a9-account-create-update-fpqm2\" (UID: \"eb6cc3c6-65e8-4def-9490-6d1a4a5f13eb\") " pod="openstack/watcher-49a9-account-create-update-fpqm2" Feb 03 12:28:40 crc kubenswrapper[4820]: I0203 12:28:40.605522 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eb6cc3c6-65e8-4def-9490-6d1a4a5f13eb-operator-scripts\") pod \"watcher-49a9-account-create-update-fpqm2\" (UID: \"eb6cc3c6-65e8-4def-9490-6d1a4a5f13eb\") " pod="openstack/watcher-49a9-account-create-update-fpqm2" Feb 03 12:28:40 crc kubenswrapper[4820]: I0203 12:28:40.605660 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-msqn5\" (UniqueName: \"kubernetes.io/projected/e928b295-8806-4a21-aaf7-d59749562244-kube-api-access-msqn5\") pod 
\"watcher-db-create-6fs49\" (UID: \"e928b295-8806-4a21-aaf7-d59749562244\") " pod="openstack/watcher-db-create-6fs49" Feb 03 12:28:40 crc kubenswrapper[4820]: I0203 12:28:40.751995 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e928b295-8806-4a21-aaf7-d59749562244-operator-scripts\") pod \"watcher-db-create-6fs49\" (UID: \"e928b295-8806-4a21-aaf7-d59749562244\") " pod="openstack/watcher-db-create-6fs49" Feb 03 12:28:40 crc kubenswrapper[4820]: I0203 12:28:40.752181 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eb6cc3c6-65e8-4def-9490-6d1a4a5f13eb-operator-scripts\") pod \"watcher-49a9-account-create-update-fpqm2\" (UID: \"eb6cc3c6-65e8-4def-9490-6d1a4a5f13eb\") " pod="openstack/watcher-49a9-account-create-update-fpqm2" Feb 03 12:28:40 crc kubenswrapper[4820]: I0203 12:28:40.752425 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-zbjrj" Feb 03 12:28:40 crc kubenswrapper[4820]: I0203 12:28:40.781414 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-msqn5\" (UniqueName: \"kubernetes.io/projected/e928b295-8806-4a21-aaf7-d59749562244-kube-api-access-msqn5\") pod \"watcher-db-create-6fs49\" (UID: \"e928b295-8806-4a21-aaf7-d59749562244\") " pod="openstack/watcher-db-create-6fs49" Feb 03 12:28:40 crc kubenswrapper[4820]: I0203 12:28:40.782586 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bjgrs\" (UniqueName: \"kubernetes.io/projected/eb6cc3c6-65e8-4def-9490-6d1a4a5f13eb-kube-api-access-bjgrs\") pod \"watcher-49a9-account-create-update-fpqm2\" (UID: \"eb6cc3c6-65e8-4def-9490-6d1a4a5f13eb\") " pod="openstack/watcher-49a9-account-create-update-fpqm2" Feb 03 12:28:41 crc kubenswrapper[4820]: I0203 12:28:41.017707 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-create-6fs49" Feb 03 12:28:41 crc kubenswrapper[4820]: I0203 12:28:41.021988 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-49a9-account-create-update-fpqm2" Feb 03 12:28:41 crc kubenswrapper[4820]: I0203 12:28:41.496840 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Feb 03 12:28:41 crc kubenswrapper[4820]: I0203 12:28:41.562672 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Feb 03 12:28:41 crc kubenswrapper[4820]: I0203 12:28:41.576788 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Feb 03 12:28:41 crc kubenswrapper[4820]: I0203 12:28:41.581785 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Feb 03 12:28:41 crc kubenswrapper[4820]: I0203 12:28:41.581900 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-4xg59" Feb 03 12:28:41 crc kubenswrapper[4820]: I0203 12:28:41.582191 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Feb 03 12:28:41 crc kubenswrapper[4820]: I0203 12:28:41.582345 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Feb 03 12:28:41 crc kubenswrapper[4820]: I0203 12:28:41.589939 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Feb 03 12:28:41 crc kubenswrapper[4820]: I0203 12:28:41.632502 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="1e865214-494f-4a49-a2e6-2b7316f30a92" containerName="galera" probeResult="failure" output=< Feb 03 12:28:41 crc kubenswrapper[4820]: wsrep_local_state_comment (Joined) differs from Synced Feb 03 12:28:41 crc kubenswrapper[4820]: > Feb 03 12:28:41 crc kubenswrapper[4820]: I0203 12:28:41.691592 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/d4eb10ed-a945-4b23-8fb3-62022a90e09f-cache\") pod \"swift-storage-0\" (UID: \"d4eb10ed-a945-4b23-8fb3-62022a90e09f\") " pod="openstack/swift-storage-0" Feb 03 12:28:41 crc kubenswrapper[4820]: I0203 12:28:41.691706 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/d4eb10ed-a945-4b23-8fb3-62022a90e09f-etc-swift\") pod \"swift-storage-0\" (UID: \"d4eb10ed-a945-4b23-8fb3-62022a90e09f\") " pod="openstack/swift-storage-0" Feb 03 12:28:41 crc kubenswrapper[4820]: I0203 12:28:41.691805 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"swift-storage-0\" (UID: \"d4eb10ed-a945-4b23-8fb3-62022a90e09f\") " pod="openstack/swift-storage-0" Feb 03 12:28:41 crc kubenswrapper[4820]: I0203 12:28:41.691913 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xskgf\" (UniqueName: \"kubernetes.io/projected/d4eb10ed-a945-4b23-8fb3-62022a90e09f-kube-api-access-xskgf\") pod \"swift-storage-0\" (UID: \"d4eb10ed-a945-4b23-8fb3-62022a90e09f\") " pod="openstack/swift-storage-0" Feb 03 12:28:41 crc kubenswrapper[4820]: I0203 12:28:41.691943 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/d4eb10ed-a945-4b23-8fb3-62022a90e09f-lock\") pod \"swift-storage-0\" (UID: \"d4eb10ed-a945-4b23-8fb3-62022a90e09f\") " pod="openstack/swift-storage-0" Feb 03 12:28:41 crc kubenswrapper[4820]: I0203 12:28:41.692036 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4eb10ed-a945-4b23-8fb3-62022a90e09f-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"d4eb10ed-a945-4b23-8fb3-62022a90e09f\") " pod="openstack/swift-storage-0" Feb 03 12:28:41 crc kubenswrapper[4820]: 
I0203 12:28:41.793696 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xskgf\" (UniqueName: \"kubernetes.io/projected/d4eb10ed-a945-4b23-8fb3-62022a90e09f-kube-api-access-xskgf\") pod \"swift-storage-0\" (UID: \"d4eb10ed-a945-4b23-8fb3-62022a90e09f\") " pod="openstack/swift-storage-0" Feb 03 12:28:41 crc kubenswrapper[4820]: I0203 12:28:41.794062 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/d4eb10ed-a945-4b23-8fb3-62022a90e09f-lock\") pod \"swift-storage-0\" (UID: \"d4eb10ed-a945-4b23-8fb3-62022a90e09f\") " pod="openstack/swift-storage-0" Feb 03 12:28:41 crc kubenswrapper[4820]: I0203 12:28:41.794151 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4eb10ed-a945-4b23-8fb3-62022a90e09f-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"d4eb10ed-a945-4b23-8fb3-62022a90e09f\") " pod="openstack/swift-storage-0" Feb 03 12:28:41 crc kubenswrapper[4820]: I0203 12:28:41.794788 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/d4eb10ed-a945-4b23-8fb3-62022a90e09f-lock\") pod \"swift-storage-0\" (UID: \"d4eb10ed-a945-4b23-8fb3-62022a90e09f\") " pod="openstack/swift-storage-0" Feb 03 12:28:41 crc kubenswrapper[4820]: I0203 12:28:41.795909 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/d4eb10ed-a945-4b23-8fb3-62022a90e09f-cache\") pod \"swift-storage-0\" (UID: \"d4eb10ed-a945-4b23-8fb3-62022a90e09f\") " pod="openstack/swift-storage-0" Feb 03 12:28:41 crc kubenswrapper[4820]: I0203 12:28:41.796064 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/d4eb10ed-a945-4b23-8fb3-62022a90e09f-etc-swift\") pod \"swift-storage-0\" (UID: \"d4eb10ed-a945-4b23-8fb3-62022a90e09f\") " pod="openstack/swift-storage-0" Feb 03 12:28:41 crc kubenswrapper[4820]: E0203 12:28:41.796204 4820 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 03 12:28:41 crc kubenswrapper[4820]: E0203 12:28:41.796243 4820 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 03 12:28:41 crc kubenswrapper[4820]: E0203 12:28:41.796294 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d4eb10ed-a945-4b23-8fb3-62022a90e09f-etc-swift podName:d4eb10ed-a945-4b23-8fb3-62022a90e09f nodeName:}" failed. No retries permitted until 2026-02-03 12:28:42.296278436 +0000 UTC m=+1439.819354290 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/d4eb10ed-a945-4b23-8fb3-62022a90e09f-etc-swift") pod "swift-storage-0" (UID: "d4eb10ed-a945-4b23-8fb3-62022a90e09f") : configmap "swift-ring-files" not found Feb 03 12:28:41 crc kubenswrapper[4820]: I0203 12:28:41.796212 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"swift-storage-0\" (UID: \"d4eb10ed-a945-4b23-8fb3-62022a90e09f\") " pod="openstack/swift-storage-0" Feb 03 12:28:41 crc kubenswrapper[4820]: I0203 12:28:41.796381 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/d4eb10ed-a945-4b23-8fb3-62022a90e09f-cache\") pod \"swift-storage-0\" (UID: \"d4eb10ed-a945-4b23-8fb3-62022a90e09f\") " pod="openstack/swift-storage-0" Feb 03 12:28:41 crc kubenswrapper[4820]: I0203 12:28:41.987103 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4eb10ed-a945-4b23-8fb3-62022a90e09f-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"d4eb10ed-a945-4b23-8fb3-62022a90e09f\") " pod="openstack/swift-storage-0" Feb 03 12:28:42 crc kubenswrapper[4820]: I0203 12:28:42.009552 4820 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"swift-storage-0\" (UID: \"d4eb10ed-a945-4b23-8fb3-62022a90e09f\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/swift-storage-0" Feb 03 12:28:42 crc kubenswrapper[4820]: I0203 12:28:42.023628 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-zbjrj"] Feb 03 12:28:42 crc kubenswrapper[4820]: I0203 12:28:42.030487 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xskgf\" (UniqueName: \"kubernetes.io/projected/d4eb10ed-a945-4b23-8fb3-62022a90e09f-kube-api-access-xskgf\") pod \"swift-storage-0\" (UID: \"d4eb10ed-a945-4b23-8fb3-62022a90e09f\") " pod="openstack/swift-storage-0" Feb 03 12:28:42 crc kubenswrapper[4820]: I0203 12:28:42.084919 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"2ae1a10e-b84f-4533-940c-0688f69fae7c","Type":"ContainerStarted","Data":"81d31e72f91a59cefb8639c6016ebb5627711f6180857a366d25f0dddf77f758"} Feb 03 12:28:42 crc kubenswrapper[4820]: I0203 12:28:42.085187 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Feb 03 12:28:42 crc kubenswrapper[4820]: I0203 12:28:42.107331 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"swift-storage-0\" (UID: \"d4eb10ed-a945-4b23-8fb3-62022a90e09f\") " pod="openstack/swift-storage-0" Feb 03 12:28:42 crc kubenswrapper[4820]: I0203 12:28:42.116412 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=4.445976655 podStartE2EDuration="1m3.11638603s" podCreationTimestamp="2026-02-03 12:27:39 +0000 UTC" firstStartedPulling="2026-02-03 12:27:41.223667434 +0000 UTC m=+1378.746743298" lastFinishedPulling="2026-02-03 12:28:39.894076809 +0000 UTC m=+1437.417152673" observedRunningTime="2026-02-03 12:28:42.111849738 +0000 UTC m=+1439.634925612" 
watchObservedRunningTime="2026-02-03 12:28:42.11638603 +0000 UTC m=+1439.639461904" Feb 03 12:28:42 crc kubenswrapper[4820]: I0203 12:28:42.215514 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-685tt"] Feb 03 12:28:42 crc kubenswrapper[4820]: I0203 12:28:42.217395 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-685tt" Feb 03 12:28:42 crc kubenswrapper[4820]: I0203 12:28:42.228626 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Feb 03 12:28:42 crc kubenswrapper[4820]: I0203 12:28:42.228860 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Feb 03 12:28:42 crc kubenswrapper[4820]: I0203 12:28:42.229034 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Feb 03 12:28:42 crc kubenswrapper[4820]: I0203 12:28:42.249642 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-685tt"] Feb 03 12:28:42 crc kubenswrapper[4820]: I0203 12:28:42.269578 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-685tt"] Feb 03 12:28:42 crc kubenswrapper[4820]: E0203 12:28:42.270672 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[combined-ca-bundle dispersionconf etc-swift kube-api-access-zfr4s ring-data-devices scripts swiftconf], unattached volumes=[], failed to process volumes=[combined-ca-bundle dispersionconf etc-swift kube-api-access-zfr4s ring-data-devices scripts swiftconf]: context canceled" pod="openstack/swift-ring-rebalance-685tt" podUID="58891d63-85ed-47b4-be86-ad007e0f1a15" Feb 03 12:28:42 crc kubenswrapper[4820]: I0203 12:28:42.285041 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-pslmr"] Feb 03 12:28:42 crc kubenswrapper[4820]: I0203 12:28:42.286767 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-pslmr" Feb 03 12:28:42 crc kubenswrapper[4820]: I0203 12:28:42.320208 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/94423319-f57f-47dd-80db-db41374dcb25-scripts\") pod \"swift-ring-rebalance-pslmr\" (UID: \"94423319-f57f-47dd-80db-db41374dcb25\") " pod="openstack/swift-ring-rebalance-pslmr" Feb 03 12:28:42 crc kubenswrapper[4820]: I0203 12:28:42.320278 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lj4qf\" (UniqueName: \"kubernetes.io/projected/94423319-f57f-47dd-80db-db41374dcb25-kube-api-access-lj4qf\") pod \"swift-ring-rebalance-pslmr\" (UID: \"94423319-f57f-47dd-80db-db41374dcb25\") " pod="openstack/swift-ring-rebalance-pslmr" Feb 03 12:28:42 crc kubenswrapper[4820]: I0203 12:28:42.320306 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/58891d63-85ed-47b4-be86-ad007e0f1a15-scripts\") pod \"swift-ring-rebalance-685tt\" (UID: \"58891d63-85ed-47b4-be86-ad007e0f1a15\") " pod="openstack/swift-ring-rebalance-685tt" Feb 03 12:28:42 crc kubenswrapper[4820]: I0203 12:28:42.320523 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/94423319-f57f-47dd-80db-db41374dcb25-etc-swift\") pod \"swift-ring-rebalance-pslmr\" (UID: \"94423319-f57f-47dd-80db-db41374dcb25\") " pod="openstack/swift-ring-rebalance-pslmr" Feb 03 12:28:42 crc kubenswrapper[4820]: I0203 12:28:42.320598 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/d4eb10ed-a945-4b23-8fb3-62022a90e09f-etc-swift\") pod \"swift-storage-0\" (UID: \"d4eb10ed-a945-4b23-8fb3-62022a90e09f\") " pod="openstack/swift-storage-0" Feb 03 12:28:42 crc kubenswrapper[4820]: I0203 12:28:42.320679 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zfr4s\" (UniqueName: \"kubernetes.io/projected/58891d63-85ed-47b4-be86-ad007e0f1a15-kube-api-access-zfr4s\") pod \"swift-ring-rebalance-685tt\" (UID: \"58891d63-85ed-47b4-be86-ad007e0f1a15\") " pod="openstack/swift-ring-rebalance-685tt" Feb 03 12:28:42 crc kubenswrapper[4820]: I0203 12:28:42.320717 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/94423319-f57f-47dd-80db-db41374dcb25-dispersionconf\") pod \"swift-ring-rebalance-pslmr\" (UID: \"94423319-f57f-47dd-80db-db41374dcb25\") " pod="openstack/swift-ring-rebalance-pslmr" Feb 03 12:28:42 crc kubenswrapper[4820]: I0203 12:28:42.320785 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/58891d63-85ed-47b4-be86-ad007e0f1a15-swiftconf\") pod \"swift-ring-rebalance-685tt\" (UID: \"58891d63-85ed-47b4-be86-ad007e0f1a15\") " pod="openstack/swift-ring-rebalance-685tt" Feb 03 12:28:42 crc kubenswrapper[4820]: I0203 12:28:42.320843 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94423319-f57f-47dd-80db-db41374dcb25-combined-ca-bundle\") pod \"swift-ring-rebalance-pslmr\" (UID: 
\"94423319-f57f-47dd-80db-db41374dcb25\") " pod="openstack/swift-ring-rebalance-pslmr" Feb 03 12:28:42 crc kubenswrapper[4820]: I0203 12:28:42.321023 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/58891d63-85ed-47b4-be86-ad007e0f1a15-dispersionconf\") pod \"swift-ring-rebalance-685tt\" (UID: \"58891d63-85ed-47b4-be86-ad007e0f1a15\") " pod="openstack/swift-ring-rebalance-685tt" Feb 03 12:28:42 crc kubenswrapper[4820]: I0203 12:28:42.321092 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58891d63-85ed-47b4-be86-ad007e0f1a15-combined-ca-bundle\") pod \"swift-ring-rebalance-685tt\" (UID: \"58891d63-85ed-47b4-be86-ad007e0f1a15\") " pod="openstack/swift-ring-rebalance-685tt" Feb 03 12:28:42 crc kubenswrapper[4820]: I0203 12:28:42.321245 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/58891d63-85ed-47b4-be86-ad007e0f1a15-ring-data-devices\") pod \"swift-ring-rebalance-685tt\" (UID: \"58891d63-85ed-47b4-be86-ad007e0f1a15\") " pod="openstack/swift-ring-rebalance-685tt" Feb 03 12:28:42 crc kubenswrapper[4820]: I0203 12:28:42.321341 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/94423319-f57f-47dd-80db-db41374dcb25-swiftconf\") pod \"swift-ring-rebalance-pslmr\" (UID: \"94423319-f57f-47dd-80db-db41374dcb25\") " pod="openstack/swift-ring-rebalance-pslmr" Feb 03 12:28:42 crc kubenswrapper[4820]: E0203 12:28:42.321646 4820 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 03 12:28:42 crc kubenswrapper[4820]: E0203 12:28:42.321666 4820 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 03 12:28:42 crc kubenswrapper[4820]: E0203 12:28:42.321717 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d4eb10ed-a945-4b23-8fb3-62022a90e09f-etc-swift podName:d4eb10ed-a945-4b23-8fb3-62022a90e09f nodeName:}" failed. No retries permitted until 2026-02-03 12:28:43.321699405 +0000 UTC m=+1440.844775329 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/d4eb10ed-a945-4b23-8fb3-62022a90e09f-etc-swift") pod "swift-storage-0" (UID: "d4eb10ed-a945-4b23-8fb3-62022a90e09f") : configmap "swift-ring-files" not found Feb 03 12:28:42 crc kubenswrapper[4820]: I0203 12:28:42.321745 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/58891d63-85ed-47b4-be86-ad007e0f1a15-etc-swift\") pod \"swift-ring-rebalance-685tt\" (UID: \"58891d63-85ed-47b4-be86-ad007e0f1a15\") " pod="openstack/swift-ring-rebalance-685tt" Feb 03 12:28:42 crc kubenswrapper[4820]: I0203 12:28:42.321795 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/94423319-f57f-47dd-80db-db41374dcb25-ring-data-devices\") pod \"swift-ring-rebalance-pslmr\" (UID: \"94423319-f57f-47dd-80db-db41374dcb25\") " pod="openstack/swift-ring-rebalance-pslmr" Feb 03 12:28:42 crc kubenswrapper[4820]: I0203 12:28:42.339765 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-pslmr"] Feb 03 12:28:42 crc kubenswrapper[4820]: W0203 12:28:42.371702 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeb6cc3c6_65e8_4def_9490_6d1a4a5f13eb.slice/crio-e9edeac9f1ea44c67b79ef28e7ebe811b9fb474505df316168eebedcd2c754a0 WatchSource:0}: Error finding container e9edeac9f1ea44c67b79ef28e7ebe811b9fb474505df316168eebedcd2c754a0: Status 404 returned error can't find the container with id e9edeac9f1ea44c67b79ef28e7ebe811b9fb474505df316168eebedcd2c754a0 Feb 03 12:28:42 crc kubenswrapper[4820]: I0203 12:28:42.373925 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-49a9-account-create-update-fpqm2"] Feb 03 12:28:42 crc kubenswrapper[4820]: W0203 12:28:42.395290 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode928b295_8806_4a21_aaf7_d59749562244.slice/crio-8de3264001ff0adc48904512f0f72e57507daf4441de47dae90c3497a82d898d WatchSource:0}: Error finding container 8de3264001ff0adc48904512f0f72e57507daf4441de47dae90c3497a82d898d: Status 404 returned error can't find the container with id 8de3264001ff0adc48904512f0f72e57507daf4441de47dae90c3497a82d898d Feb 03 12:28:42 crc kubenswrapper[4820]: I0203 12:28:42.398711 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-db-create-6fs49"] Feb 03 12:28:42 crc kubenswrapper[4820]: I0203 12:28:42.545450 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/94423319-f57f-47dd-80db-db41374dcb25-swiftconf\") pod \"swift-ring-rebalance-pslmr\" (UID: \"94423319-f57f-47dd-80db-db41374dcb25\") " pod="openstack/swift-ring-rebalance-pslmr" Feb 03 12:28:42 crc kubenswrapper[4820]: I0203 12:28:42.545556 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/58891d63-85ed-47b4-be86-ad007e0f1a15-etc-swift\") pod \"swift-ring-rebalance-685tt\" (UID: \"58891d63-85ed-47b4-be86-ad007e0f1a15\") " pod="openstack/swift-ring-rebalance-685tt" Feb 03 12:28:42 crc kubenswrapper[4820]: I0203 12:28:42.545607 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" 
(UniqueName: \"kubernetes.io/configmap/94423319-f57f-47dd-80db-db41374dcb25-ring-data-devices\") pod \"swift-ring-rebalance-pslmr\" (UID: \"94423319-f57f-47dd-80db-db41374dcb25\") " pod="openstack/swift-ring-rebalance-pslmr" Feb 03 12:28:42 crc kubenswrapper[4820]: I0203 12:28:42.545654 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/94423319-f57f-47dd-80db-db41374dcb25-scripts\") pod \"swift-ring-rebalance-pslmr\" (UID: \"94423319-f57f-47dd-80db-db41374dcb25\") " pod="openstack/swift-ring-rebalance-pslmr" Feb 03 12:28:42 crc kubenswrapper[4820]: I0203 12:28:42.545701 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lj4qf\" (UniqueName: \"kubernetes.io/projected/94423319-f57f-47dd-80db-db41374dcb25-kube-api-access-lj4qf\") pod \"swift-ring-rebalance-pslmr\" (UID: \"94423319-f57f-47dd-80db-db41374dcb25\") " pod="openstack/swift-ring-rebalance-pslmr" Feb 03 12:28:42 crc kubenswrapper[4820]: I0203 12:28:42.545721 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/58891d63-85ed-47b4-be86-ad007e0f1a15-scripts\") pod \"swift-ring-rebalance-685tt\" (UID: \"58891d63-85ed-47b4-be86-ad007e0f1a15\") " pod="openstack/swift-ring-rebalance-685tt" Feb 03 12:28:42 crc kubenswrapper[4820]: I0203 12:28:42.545776 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/94423319-f57f-47dd-80db-db41374dcb25-etc-swift\") pod \"swift-ring-rebalance-pslmr\" (UID: \"94423319-f57f-47dd-80db-db41374dcb25\") " pod="openstack/swift-ring-rebalance-pslmr" Feb 03 12:28:42 crc kubenswrapper[4820]: I0203 12:28:42.545878 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zfr4s\" (UniqueName: \"kubernetes.io/projected/58891d63-85ed-47b4-be86-ad007e0f1a15-kube-api-access-zfr4s\") pod \"swift-ring-rebalance-685tt\" (UID: \"58891d63-85ed-47b4-be86-ad007e0f1a15\") " pod="openstack/swift-ring-rebalance-685tt" Feb 03 12:28:42 crc kubenswrapper[4820]: I0203 12:28:42.546009 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/94423319-f57f-47dd-80db-db41374dcb25-dispersionconf\") pod \"swift-ring-rebalance-pslmr\" (UID: \"94423319-f57f-47dd-80db-db41374dcb25\") " pod="openstack/swift-ring-rebalance-pslmr" Feb 03 12:28:42 crc kubenswrapper[4820]: I0203 12:28:42.546205 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/58891d63-85ed-47b4-be86-ad007e0f1a15-etc-swift\") pod \"swift-ring-rebalance-685tt\" (UID: \"58891d63-85ed-47b4-be86-ad007e0f1a15\") " pod="openstack/swift-ring-rebalance-685tt" Feb 03 12:28:42 crc kubenswrapper[4820]: I0203 12:28:42.546331 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/58891d63-85ed-47b4-be86-ad007e0f1a15-swiftconf\") pod \"swift-ring-rebalance-685tt\" (UID: \"58891d63-85ed-47b4-be86-ad007e0f1a15\") " pod="openstack/swift-ring-rebalance-685tt" Feb 03 12:28:42 crc kubenswrapper[4820]: I0203 12:28:42.546361 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94423319-f57f-47dd-80db-db41374dcb25-combined-ca-bundle\") pod 
\"swift-ring-rebalance-pslmr\" (UID: \"94423319-f57f-47dd-80db-db41374dcb25\") " pod="openstack/swift-ring-rebalance-pslmr" Feb 03 12:28:42 crc kubenswrapper[4820]: I0203 12:28:42.546417 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/58891d63-85ed-47b4-be86-ad007e0f1a15-dispersionconf\") pod \"swift-ring-rebalance-685tt\" (UID: \"58891d63-85ed-47b4-be86-ad007e0f1a15\") " pod="openstack/swift-ring-rebalance-685tt" Feb 03 12:28:42 crc kubenswrapper[4820]: I0203 12:28:42.546864 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58891d63-85ed-47b4-be86-ad007e0f1a15-combined-ca-bundle\") pod \"swift-ring-rebalance-685tt\" (UID: \"58891d63-85ed-47b4-be86-ad007e0f1a15\") " pod="openstack/swift-ring-rebalance-685tt" Feb 03 12:28:42 crc kubenswrapper[4820]: I0203 12:28:42.546931 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/58891d63-85ed-47b4-be86-ad007e0f1a15-ring-data-devices\") pod \"swift-ring-rebalance-685tt\" (UID: \"58891d63-85ed-47b4-be86-ad007e0f1a15\") " pod="openstack/swift-ring-rebalance-685tt" Feb 03 12:28:42 crc kubenswrapper[4820]: I0203 12:28:42.546864 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/94423319-f57f-47dd-80db-db41374dcb25-ring-data-devices\") pod \"swift-ring-rebalance-pslmr\" (UID: \"94423319-f57f-47dd-80db-db41374dcb25\") " pod="openstack/swift-ring-rebalance-pslmr" Feb 03 12:28:42 crc kubenswrapper[4820]: I0203 12:28:42.546690 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/94423319-f57f-47dd-80db-db41374dcb25-etc-swift\") pod \"swift-ring-rebalance-pslmr\" (UID: \"94423319-f57f-47dd-80db-db41374dcb25\") " pod="openstack/swift-ring-rebalance-pslmr" Feb 03 12:28:42 crc kubenswrapper[4820]: I0203 12:28:42.547231 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/94423319-f57f-47dd-80db-db41374dcb25-scripts\") pod \"swift-ring-rebalance-pslmr\" (UID: \"94423319-f57f-47dd-80db-db41374dcb25\") " pod="openstack/swift-ring-rebalance-pslmr" Feb 03 12:28:42 crc kubenswrapper[4820]: I0203 12:28:42.546682 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/58891d63-85ed-47b4-be86-ad007e0f1a15-scripts\") pod \"swift-ring-rebalance-685tt\" (UID: \"58891d63-85ed-47b4-be86-ad007e0f1a15\") " pod="openstack/swift-ring-rebalance-685tt" Feb 03 12:28:42 crc kubenswrapper[4820]: I0203 12:28:42.547782 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/58891d63-85ed-47b4-be86-ad007e0f1a15-ring-data-devices\") pod \"swift-ring-rebalance-685tt\" (UID: \"58891d63-85ed-47b4-be86-ad007e0f1a15\") " pod="openstack/swift-ring-rebalance-685tt" Feb 03 12:28:42 crc kubenswrapper[4820]: I0203 12:28:42.552569 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/58891d63-85ed-47b4-be86-ad007e0f1a15-swiftconf\") pod \"swift-ring-rebalance-685tt\" (UID: \"58891d63-85ed-47b4-be86-ad007e0f1a15\") " pod="openstack/swift-ring-rebalance-685tt" Feb 03 12:28:42 crc kubenswrapper[4820]: 
I0203 12:28:42.553053 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58891d63-85ed-47b4-be86-ad007e0f1a15-combined-ca-bundle\") pod \"swift-ring-rebalance-685tt\" (UID: \"58891d63-85ed-47b4-be86-ad007e0f1a15\") " pod="openstack/swift-ring-rebalance-685tt" Feb 03 12:28:42 crc kubenswrapper[4820]: I0203 12:28:42.555376 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/58891d63-85ed-47b4-be86-ad007e0f1a15-dispersionconf\") pod \"swift-ring-rebalance-685tt\" (UID: \"58891d63-85ed-47b4-be86-ad007e0f1a15\") " pod="openstack/swift-ring-rebalance-685tt" Feb 03 12:28:42 crc kubenswrapper[4820]: I0203 12:28:42.560116 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/94423319-f57f-47dd-80db-db41374dcb25-dispersionconf\") pod \"swift-ring-rebalance-pslmr\" (UID: \"94423319-f57f-47dd-80db-db41374dcb25\") " pod="openstack/swift-ring-rebalance-pslmr" Feb 03 12:28:42 crc kubenswrapper[4820]: I0203 12:28:42.560184 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94423319-f57f-47dd-80db-db41374dcb25-combined-ca-bundle\") pod \"swift-ring-rebalance-pslmr\" (UID: \"94423319-f57f-47dd-80db-db41374dcb25\") " pod="openstack/swift-ring-rebalance-pslmr" Feb 03 12:28:42 crc kubenswrapper[4820]: I0203 12:28:42.564385 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/94423319-f57f-47dd-80db-db41374dcb25-swiftconf\") pod \"swift-ring-rebalance-pslmr\" (UID: \"94423319-f57f-47dd-80db-db41374dcb25\") " pod="openstack/swift-ring-rebalance-pslmr" Feb 03 12:28:42 crc kubenswrapper[4820]: I0203 12:28:42.566166 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zfr4s\" (UniqueName: \"kubernetes.io/projected/58891d63-85ed-47b4-be86-ad007e0f1a15-kube-api-access-zfr4s\") pod \"swift-ring-rebalance-685tt\" (UID: \"58891d63-85ed-47b4-be86-ad007e0f1a15\") " pod="openstack/swift-ring-rebalance-685tt" Feb 03 12:28:42 crc kubenswrapper[4820]: I0203 12:28:42.578269 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lj4qf\" (UniqueName: \"kubernetes.io/projected/94423319-f57f-47dd-80db-db41374dcb25-kube-api-access-lj4qf\") pod \"swift-ring-rebalance-pslmr\" (UID: \"94423319-f57f-47dd-80db-db41374dcb25\") " pod="openstack/swift-ring-rebalance-pslmr" Feb 03 12:28:42 crc kubenswrapper[4820]: I0203 12:28:42.667658 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-pslmr" Feb 03 12:28:42 crc kubenswrapper[4820]: I0203 12:28:42.697302 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-6g6lc"] Feb 03 12:28:42 crc kubenswrapper[4820]: I0203 12:28:42.699145 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-6g6lc" Feb 03 12:28:42 crc kubenswrapper[4820]: I0203 12:28:42.707194 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-6g6lc"] Feb 03 12:28:42 crc kubenswrapper[4820]: I0203 12:28:42.749405 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ss8rr\" (UniqueName: \"kubernetes.io/projected/1336b61c-ed56-40f3-b2cd-1d476b33459b-kube-api-access-ss8rr\") pod \"glance-db-create-6g6lc\" (UID: \"1336b61c-ed56-40f3-b2cd-1d476b33459b\") " pod="openstack/glance-db-create-6g6lc" Feb 03 12:28:42 crc kubenswrapper[4820]: I0203 12:28:42.749861 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1336b61c-ed56-40f3-b2cd-1d476b33459b-operator-scripts\") pod \"glance-db-create-6g6lc\" (UID: \"1336b61c-ed56-40f3-b2cd-1d476b33459b\") " pod="openstack/glance-db-create-6g6lc" Feb 03 12:28:42 crc kubenswrapper[4820]: I0203 12:28:42.852955 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ss8rr\" (UniqueName: \"kubernetes.io/projected/1336b61c-ed56-40f3-b2cd-1d476b33459b-kube-api-access-ss8rr\") pod \"glance-db-create-6g6lc\" (UID: \"1336b61c-ed56-40f3-b2cd-1d476b33459b\") " pod="openstack/glance-db-create-6g6lc" Feb 03 12:28:42 crc kubenswrapper[4820]: I0203 12:28:42.853188 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1336b61c-ed56-40f3-b2cd-1d476b33459b-operator-scripts\") pod \"glance-db-create-6g6lc\" (UID: \"1336b61c-ed56-40f3-b2cd-1d476b33459b\") " pod="openstack/glance-db-create-6g6lc" Feb 03 12:28:42 crc kubenswrapper[4820]: I0203 12:28:42.855125 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1336b61c-ed56-40f3-b2cd-1d476b33459b-operator-scripts\") pod \"glance-db-create-6g6lc\" (UID: \"1336b61c-ed56-40f3-b2cd-1d476b33459b\") " pod="openstack/glance-db-create-6g6lc" Feb 03 12:28:42 crc kubenswrapper[4820]: I0203 12:28:42.879104 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ss8rr\" (UniqueName: \"kubernetes.io/projected/1336b61c-ed56-40f3-b2cd-1d476b33459b-kube-api-access-ss8rr\") pod \"glance-db-create-6g6lc\" (UID: \"1336b61c-ed56-40f3-b2cd-1d476b33459b\") " pod="openstack/glance-db-create-6g6lc" Feb 03 12:28:43 crc kubenswrapper[4820]: I0203 12:28:43.006134 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-863d-account-create-update-sj894"] Feb 03 12:28:43 crc kubenswrapper[4820]: I0203 12:28:43.008106 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-863d-account-create-update-sj894" Feb 03 12:28:43 crc kubenswrapper[4820]: I0203 12:28:43.011786 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Feb 03 12:28:43 crc kubenswrapper[4820]: I0203 12:28:43.032023 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-863d-account-create-update-sj894"] Feb 03 12:28:43 crc kubenswrapper[4820]: I0203 12:28:43.032428 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-6g6lc" Feb 03 12:28:43 crc kubenswrapper[4820]: I0203 12:28:43.057537 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bbdce215-5dd4-4a45-a099-ac2b51edf843-operator-scripts\") pod \"glance-863d-account-create-update-sj894\" (UID: \"bbdce215-5dd4-4a45-a099-ac2b51edf843\") " pod="openstack/glance-863d-account-create-update-sj894" Feb 03 12:28:43 crc kubenswrapper[4820]: I0203 12:28:43.057600 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hmrr8\" (UniqueName: \"kubernetes.io/projected/bbdce215-5dd4-4a45-a099-ac2b51edf843-kube-api-access-hmrr8\") pod \"glance-863d-account-create-update-sj894\" (UID: \"bbdce215-5dd4-4a45-a099-ac2b51edf843\") " pod="openstack/glance-863d-account-create-update-sj894" Feb 03 12:28:43 crc kubenswrapper[4820]: I0203 12:28:43.107302 4820 generic.go:334] "Generic (PLEG): container finished" podID="60e67a4a-d840-4bc2-9f74-4a5fbb36a829" containerID="247e3c9da2b66de7933df2e610563093cec9ab654304f0bbf0826f3a6039c4e8" exitCode=0 Feb 03 12:28:43 crc kubenswrapper[4820]: I0203 12:28:43.107414 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-zbjrj" event={"ID":"60e67a4a-d840-4bc2-9f74-4a5fbb36a829","Type":"ContainerDied","Data":"247e3c9da2b66de7933df2e610563093cec9ab654304f0bbf0826f3a6039c4e8"} Feb 03 12:28:43 crc kubenswrapper[4820]: I0203 12:28:43.107464 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-zbjrj" event={"ID":"60e67a4a-d840-4bc2-9f74-4a5fbb36a829","Type":"ContainerStarted","Data":"cf8c6890c857e094df065f12558cdcbf428549b66f887e8b566ee0b472b3cc06"} Feb 03 12:28:43 crc kubenswrapper[4820]: I0203 12:28:43.122667 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-create-6fs49" event={"ID":"e928b295-8806-4a21-aaf7-d59749562244","Type":"ContainerStarted","Data":"43e1b45809d7f422866afa961cd6e49b33b85adb0eb6b43b3637b1ea4e3a0d81"} Feb 03 12:28:43 crc kubenswrapper[4820]: I0203 12:28:43.122740 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-create-6fs49" event={"ID":"e928b295-8806-4a21-aaf7-d59749562244","Type":"ContainerStarted","Data":"8de3264001ff0adc48904512f0f72e57507daf4441de47dae90c3497a82d898d"} Feb 03 12:28:43 crc kubenswrapper[4820]: I0203 12:28:43.125728 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-685tt" Feb 03 12:28:43 crc kubenswrapper[4820]: I0203 12:28:43.126555 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-49a9-account-create-update-fpqm2" event={"ID":"eb6cc3c6-65e8-4def-9490-6d1a4a5f13eb","Type":"ContainerStarted","Data":"d65ca91f700c095370cf5118669f990b2805250b56afc7b52597433b31db82d6"} Feb 03 12:28:43 crc kubenswrapper[4820]: I0203 12:28:43.126588 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-49a9-account-create-update-fpqm2" event={"ID":"eb6cc3c6-65e8-4def-9490-6d1a4a5f13eb","Type":"ContainerStarted","Data":"e9edeac9f1ea44c67b79ef28e7ebe811b9fb474505df316168eebedcd2c754a0"} Feb 03 12:28:43 crc kubenswrapper[4820]: I0203 12:28:43.168355 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bbdce215-5dd4-4a45-a099-ac2b51edf843-operator-scripts\") pod \"glance-863d-account-create-update-sj894\" (UID: \"bbdce215-5dd4-4a45-a099-ac2b51edf843\") " pod="openstack/glance-863d-account-create-update-sj894" Feb 03 12:28:43 crc kubenswrapper[4820]: I0203 12:28:43.168413 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hmrr8\" (UniqueName: \"kubernetes.io/projected/bbdce215-5dd4-4a45-a099-ac2b51edf843-kube-api-access-hmrr8\") pod \"glance-863d-account-create-update-sj894\" (UID: \"bbdce215-5dd4-4a45-a099-ac2b51edf843\") " pod="openstack/glance-863d-account-create-update-sj894" Feb 03 12:28:43 crc kubenswrapper[4820]: I0203 12:28:43.176841 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bbdce215-5dd4-4a45-a099-ac2b51edf843-operator-scripts\") pod \"glance-863d-account-create-update-sj894\" (UID: \"bbdce215-5dd4-4a45-a099-ac2b51edf843\") " pod="openstack/glance-863d-account-create-update-sj894" Feb 03 12:28:43 crc kubenswrapper[4820]: I0203 12:28:43.396398 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/d4eb10ed-a945-4b23-8fb3-62022a90e09f-etc-swift\") pod \"swift-storage-0\" (UID: \"d4eb10ed-a945-4b23-8fb3-62022a90e09f\") " pod="openstack/swift-storage-0" Feb 03 12:28:43 crc kubenswrapper[4820]: E0203 12:28:43.397680 4820 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 03 12:28:43 crc kubenswrapper[4820]: E0203 12:28:43.432596 4820 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 03 12:28:43 crc kubenswrapper[4820]: E0203 12:28:43.432692 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d4eb10ed-a945-4b23-8fb3-62022a90e09f-etc-swift podName:d4eb10ed-a945-4b23-8fb3-62022a90e09f nodeName:}" failed. No retries permitted until 2026-02-03 12:28:45.432667806 +0000 UTC m=+1442.955743670 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/d4eb10ed-a945-4b23-8fb3-62022a90e09f-etc-swift") pod "swift-storage-0" (UID: "d4eb10ed-a945-4b23-8fb3-62022a90e09f") : configmap "swift-ring-files" not found Feb 03 12:28:43 crc kubenswrapper[4820]: I0203 12:28:43.420616 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hmrr8\" (UniqueName: \"kubernetes.io/projected/bbdce215-5dd4-4a45-a099-ac2b51edf843-kube-api-access-hmrr8\") pod \"glance-863d-account-create-update-sj894\" (UID: \"bbdce215-5dd4-4a45-a099-ac2b51edf843\") " pod="openstack/glance-863d-account-create-update-sj894" Feb 03 12:28:43 crc kubenswrapper[4820]: I0203 12:28:43.464715 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-db-create-6fs49" podStartSLOduration=3.46468781 podStartE2EDuration="3.46468781s" podCreationTimestamp="2026-02-03 12:28:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:28:43.365405609 +0000 UTC m=+1440.888481473" watchObservedRunningTime="2026-02-03 12:28:43.46468781 +0000 UTC m=+1440.987763674" Feb 03 12:28:43 crc kubenswrapper[4820]: I0203 12:28:43.505781 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-pslmr"] Feb 03 12:28:43 crc kubenswrapper[4820]: I0203 12:28:43.507479 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-49a9-account-create-update-fpqm2" podStartSLOduration=3.507452035 podStartE2EDuration="3.507452035s" podCreationTimestamp="2026-02-03 12:28:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:28:43.442681766 +0000 UTC m=+1440.965757630" watchObservedRunningTime="2026-02-03 12:28:43.507452035 +0000 UTC m=+1441.030527899" Feb 03 12:28:43 crc kubenswrapper[4820]: I0203 12:28:43.607487 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-685tt" Feb 03 12:28:43 crc kubenswrapper[4820]: I0203 12:28:43.629925 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-863d-account-create-update-sj894" Feb 03 12:28:44 crc kubenswrapper[4820]: I0203 12:28:44.081278 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zfr4s\" (UniqueName: \"kubernetes.io/projected/58891d63-85ed-47b4-be86-ad007e0f1a15-kube-api-access-zfr4s\") pod \"58891d63-85ed-47b4-be86-ad007e0f1a15\" (UID: \"58891d63-85ed-47b4-be86-ad007e0f1a15\") " Feb 03 12:28:44 crc kubenswrapper[4820]: I0203 12:28:44.081366 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58891d63-85ed-47b4-be86-ad007e0f1a15-combined-ca-bundle\") pod \"58891d63-85ed-47b4-be86-ad007e0f1a15\" (UID: \"58891d63-85ed-47b4-be86-ad007e0f1a15\") " Feb 03 12:28:44 crc kubenswrapper[4820]: I0203 12:28:44.081475 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/58891d63-85ed-47b4-be86-ad007e0f1a15-dispersionconf\") pod \"58891d63-85ed-47b4-be86-ad007e0f1a15\" (UID: \"58891d63-85ed-47b4-be86-ad007e0f1a15\") " Feb 03 12:28:44 crc kubenswrapper[4820]: I0203 12:28:44.081529 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/58891d63-85ed-47b4-be86-ad007e0f1a15-etc-swift\") pod \"58891d63-85ed-47b4-be86-ad007e0f1a15\" (UID: \"58891d63-85ed-47b4-be86-ad007e0f1a15\") " Feb 03 12:28:44 crc kubenswrapper[4820]: I0203 12:28:44.081573 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/58891d63-85ed-47b4-be86-ad007e0f1a15-ring-data-devices\") pod \"58891d63-85ed-47b4-be86-ad007e0f1a15\" (UID: \"58891d63-85ed-47b4-be86-ad007e0f1a15\") " Feb 03 12:28:44 crc kubenswrapper[4820]: I0203 12:28:44.081619 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/58891d63-85ed-47b4-be86-ad007e0f1a15-scripts\") pod \"58891d63-85ed-47b4-be86-ad007e0f1a15\" (UID: \"58891d63-85ed-47b4-be86-ad007e0f1a15\") " Feb 03 12:28:44 crc kubenswrapper[4820]: I0203 12:28:44.081648 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/58891d63-85ed-47b4-be86-ad007e0f1a15-swiftconf\") pod \"58891d63-85ed-47b4-be86-ad007e0f1a15\" (UID: \"58891d63-85ed-47b4-be86-ad007e0f1a15\") " Feb 03 12:28:44 crc kubenswrapper[4820]: I0203 12:28:44.086219 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/58891d63-85ed-47b4-be86-ad007e0f1a15-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "58891d63-85ed-47b4-be86-ad007e0f1a15" (UID: "58891d63-85ed-47b4-be86-ad007e0f1a15"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:28:44 crc kubenswrapper[4820]: I0203 12:28:44.086611 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/58891d63-85ed-47b4-be86-ad007e0f1a15-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "58891d63-85ed-47b4-be86-ad007e0f1a15" (UID: "58891d63-85ed-47b4-be86-ad007e0f1a15"). InnerVolumeSpecName "etc-swift". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 12:28:44 crc kubenswrapper[4820]: I0203 12:28:44.086705 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/58891d63-85ed-47b4-be86-ad007e0f1a15-scripts" (OuterVolumeSpecName: "scripts") pod "58891d63-85ed-47b4-be86-ad007e0f1a15" (UID: "58891d63-85ed-47b4-be86-ad007e0f1a15"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:28:44 crc kubenswrapper[4820]: I0203 12:28:44.091772 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58891d63-85ed-47b4-be86-ad007e0f1a15-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "58891d63-85ed-47b4-be86-ad007e0f1a15" (UID: "58891d63-85ed-47b4-be86-ad007e0f1a15"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:28:44 crc kubenswrapper[4820]: I0203 12:28:44.094697 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58891d63-85ed-47b4-be86-ad007e0f1a15-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "58891d63-85ed-47b4-be86-ad007e0f1a15" (UID: "58891d63-85ed-47b4-be86-ad007e0f1a15"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:28:44 crc kubenswrapper[4820]: I0203 12:28:44.095205 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/58891d63-85ed-47b4-be86-ad007e0f1a15-kube-api-access-zfr4s" (OuterVolumeSpecName: "kube-api-access-zfr4s") pod "58891d63-85ed-47b4-be86-ad007e0f1a15" (UID: "58891d63-85ed-47b4-be86-ad007e0f1a15"). InnerVolumeSpecName "kube-api-access-zfr4s". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:28:44 crc kubenswrapper[4820]: I0203 12:28:44.095938 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58891d63-85ed-47b4-be86-ad007e0f1a15-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "58891d63-85ed-47b4-be86-ad007e0f1a15" (UID: "58891d63-85ed-47b4-be86-ad007e0f1a15"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:28:44 crc kubenswrapper[4820]: I0203 12:28:44.117398 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-6g6lc"] Feb 03 12:28:44 crc kubenswrapper[4820]: I0203 12:28:44.184610 4820 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/58891d63-85ed-47b4-be86-ad007e0f1a15-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 12:28:44 crc kubenswrapper[4820]: I0203 12:28:44.189310 4820 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/58891d63-85ed-47b4-be86-ad007e0f1a15-swiftconf\") on node \"crc\" DevicePath \"\"" Feb 03 12:28:44 crc kubenswrapper[4820]: I0203 12:28:44.189333 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zfr4s\" (UniqueName: \"kubernetes.io/projected/58891d63-85ed-47b4-be86-ad007e0f1a15-kube-api-access-zfr4s\") on node \"crc\" DevicePath \"\"" Feb 03 12:28:44 crc kubenswrapper[4820]: I0203 12:28:44.189350 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58891d63-85ed-47b4-be86-ad007e0f1a15-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 12:28:44 crc kubenswrapper[4820]: I0203 12:28:44.189365 4820 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/58891d63-85ed-47b4-be86-ad007e0f1a15-dispersionconf\") on node \"crc\" DevicePath \"\"" Feb 03 12:28:44 crc kubenswrapper[4820]: I0203 12:28:44.189395 4820 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/58891d63-85ed-47b4-be86-ad007e0f1a15-etc-swift\") on node \"crc\" DevicePath \"\"" Feb 03 12:28:44 crc kubenswrapper[4820]: I0203 12:28:44.189406 4820 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/58891d63-85ed-47b4-be86-ad007e0f1a15-ring-data-devices\") on node \"crc\" DevicePath \"\"" Feb 03 12:28:44 crc kubenswrapper[4820]: I0203 12:28:44.209860 4820 generic.go:334] "Generic (PLEG): container finished" podID="e928b295-8806-4a21-aaf7-d59749562244" containerID="43e1b45809d7f422866afa961cd6e49b33b85adb0eb6b43b3637b1ea4e3a0d81" exitCode=0 Feb 03 12:28:44 crc kubenswrapper[4820]: I0203 12:28:44.209975 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-create-6fs49" event={"ID":"e928b295-8806-4a21-aaf7-d59749562244","Type":"ContainerDied","Data":"43e1b45809d7f422866afa961cd6e49b33b85adb0eb6b43b3637b1ea4e3a0d81"} Feb 03 12:28:44 crc kubenswrapper[4820]: I0203 12:28:44.229878 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-ptbp8"] Feb 03 12:28:44 crc kubenswrapper[4820]: I0203 12:28:44.231451 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-ptbp8" Feb 03 12:28:44 crc kubenswrapper[4820]: I0203 12:28:44.237850 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Feb 03 12:28:44 crc kubenswrapper[4820]: I0203 12:28:44.251148 4820 generic.go:334] "Generic (PLEG): container finished" podID="eb6cc3c6-65e8-4def-9490-6d1a4a5f13eb" containerID="d65ca91f700c095370cf5118669f990b2805250b56afc7b52597433b31db82d6" exitCode=0 Feb 03 12:28:44 crc kubenswrapper[4820]: I0203 12:28:44.251371 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-49a9-account-create-update-fpqm2" event={"ID":"eb6cc3c6-65e8-4def-9490-6d1a4a5f13eb","Type":"ContainerDied","Data":"d65ca91f700c095370cf5118669f990b2805250b56afc7b52597433b31db82d6"} Feb 03 12:28:44 crc kubenswrapper[4820]: I0203 12:28:44.260108 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-685tt" Feb 03 12:28:44 crc kubenswrapper[4820]: I0203 12:28:44.262730 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-pslmr" event={"ID":"94423319-f57f-47dd-80db-db41374dcb25","Type":"ContainerStarted","Data":"7997fd2c5eeed02bd5d71fb5f47d4345b705ee009ae143e78e599dc69db46a5a"} Feb 03 12:28:44 crc kubenswrapper[4820]: I0203 12:28:44.264306 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-ptbp8"] Feb 03 12:28:44 crc kubenswrapper[4820]: I0203 12:28:44.295328 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/091fcf4f-9f71-4b4a-92ef-b856a2df672a-operator-scripts\") pod \"root-account-create-update-ptbp8\" (UID: \"091fcf4f-9f71-4b4a-92ef-b856a2df672a\") " pod="openstack/root-account-create-update-ptbp8" Feb 03 12:28:44 crc kubenswrapper[4820]: I0203 12:28:44.295599 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qb7br\" (UniqueName: \"kubernetes.io/projected/091fcf4f-9f71-4b4a-92ef-b856a2df672a-kube-api-access-qb7br\") pod \"root-account-create-update-ptbp8\" (UID: \"091fcf4f-9f71-4b4a-92ef-b856a2df672a\") " pod="openstack/root-account-create-update-ptbp8" Feb 03 12:28:44 crc kubenswrapper[4820]: I0203 12:28:44.376796 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-685tt"] Feb 03 12:28:44 crc kubenswrapper[4820]: I0203 12:28:44.390025 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-ring-rebalance-685tt"] Feb 03 12:28:44 crc kubenswrapper[4820]: I0203 12:28:44.401168 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/091fcf4f-9f71-4b4a-92ef-b856a2df672a-operator-scripts\") pod \"root-account-create-update-ptbp8\" (UID: \"091fcf4f-9f71-4b4a-92ef-b856a2df672a\") " pod="openstack/root-account-create-update-ptbp8" Feb 03 12:28:44 crc kubenswrapper[4820]: I0203 12:28:44.401319 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qb7br\" (UniqueName: \"kubernetes.io/projected/091fcf4f-9f71-4b4a-92ef-b856a2df672a-kube-api-access-qb7br\") pod \"root-account-create-update-ptbp8\" (UID: \"091fcf4f-9f71-4b4a-92ef-b856a2df672a\") " pod="openstack/root-account-create-update-ptbp8" Feb 03 12:28:44 crc kubenswrapper[4820]: I0203 12:28:44.403097 4820 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/091fcf4f-9f71-4b4a-92ef-b856a2df672a-operator-scripts\") pod \"root-account-create-update-ptbp8\" (UID: \"091fcf4f-9f71-4b4a-92ef-b856a2df672a\") " pod="openstack/root-account-create-update-ptbp8" Feb 03 12:28:44 crc kubenswrapper[4820]: I0203 12:28:44.425114 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qb7br\" (UniqueName: \"kubernetes.io/projected/091fcf4f-9f71-4b4a-92ef-b856a2df672a-kube-api-access-qb7br\") pod \"root-account-create-update-ptbp8\" (UID: \"091fcf4f-9f71-4b4a-92ef-b856a2df672a\") " pod="openstack/root-account-create-update-ptbp8" Feb 03 12:28:44 crc kubenswrapper[4820]: I0203 12:28:44.582340 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-863d-account-create-update-sj894"] Feb 03 12:28:44 crc kubenswrapper[4820]: I0203 12:28:44.588597 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-ptbp8" Feb 03 12:28:44 crc kubenswrapper[4820]: W0203 12:28:44.610282 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbbdce215_5dd4_4a45_a099_ac2b51edf843.slice/crio-62d32b8d9b7dd6458fc2e502029827e2cfc7bde3bd7ac01409253d64a484032b WatchSource:0}: Error finding container 62d32b8d9b7dd6458fc2e502029827e2cfc7bde3bd7ac01409253d64a484032b: Status 404 returned error can't find the container with id 62d32b8d9b7dd6458fc2e502029827e2cfc7bde3bd7ac01409253d64a484032b Feb 03 12:28:44 crc kubenswrapper[4820]: I0203 12:28:44.629965 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Feb 03 12:28:45 crc kubenswrapper[4820]: I0203 12:28:45.125488 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-ptbp8"] Feb 03 12:28:45 crc kubenswrapper[4820]: I0203 12:28:45.155642 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="58891d63-85ed-47b4-be86-ad007e0f1a15" path="/var/lib/kubelet/pods/58891d63-85ed-47b4-be86-ad007e0f1a15/volumes" Feb 03 12:28:45 crc kubenswrapper[4820]: I0203 12:28:45.275912 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-zbjrj" event={"ID":"60e67a4a-d840-4bc2-9f74-4a5fbb36a829","Type":"ContainerStarted","Data":"698433907a92a5ca9104a622432656cc6950a420156cd457563a40aab7b4ca99"} Feb 03 12:28:45 crc kubenswrapper[4820]: I0203 12:28:45.276093 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-698758b865-zbjrj" Feb 03 12:28:45 crc kubenswrapper[4820]: I0203 12:28:45.278687 4820 generic.go:334] "Generic (PLEG): container finished" podID="bbdce215-5dd4-4a45-a099-ac2b51edf843" containerID="b170f9e5a1a3a837671d75b70c9f289a68468a57e434be4bccc0cfa39c9c916b" exitCode=0 Feb 03 12:28:45 crc kubenswrapper[4820]: I0203 12:28:45.278759 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-863d-account-create-update-sj894" event={"ID":"bbdce215-5dd4-4a45-a099-ac2b51edf843","Type":"ContainerDied","Data":"b170f9e5a1a3a837671d75b70c9f289a68468a57e434be4bccc0cfa39c9c916b"} Feb 03 12:28:45 crc kubenswrapper[4820]: I0203 12:28:45.278951 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-863d-account-create-update-sj894" 
event={"ID":"bbdce215-5dd4-4a45-a099-ac2b51edf843","Type":"ContainerStarted","Data":"62d32b8d9b7dd6458fc2e502029827e2cfc7bde3bd7ac01409253d64a484032b"} Feb 03 12:28:45 crc kubenswrapper[4820]: I0203 12:28:45.281257 4820 generic.go:334] "Generic (PLEG): container finished" podID="1336b61c-ed56-40f3-b2cd-1d476b33459b" containerID="d55ffb6b15094fbc988abed6f57d4a1ed290a6ca4b32e5599918774b9cb47431" exitCode=0 Feb 03 12:28:45 crc kubenswrapper[4820]: I0203 12:28:45.281341 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-6g6lc" event={"ID":"1336b61c-ed56-40f3-b2cd-1d476b33459b","Type":"ContainerDied","Data":"d55ffb6b15094fbc988abed6f57d4a1ed290a6ca4b32e5599918774b9cb47431"} Feb 03 12:28:45 crc kubenswrapper[4820]: I0203 12:28:45.281375 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-6g6lc" event={"ID":"1336b61c-ed56-40f3-b2cd-1d476b33459b","Type":"ContainerStarted","Data":"04144240d3969331015122ae034e42f79b77cd37d65a90f9c987370b96e67f15"} Feb 03 12:28:45 crc kubenswrapper[4820]: I0203 12:28:45.311075 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-698758b865-zbjrj" podStartSLOduration=5.311048169 podStartE2EDuration="5.311048169s" podCreationTimestamp="2026-02-03 12:28:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:28:45.295226032 +0000 UTC m=+1442.818301896" watchObservedRunningTime="2026-02-03 12:28:45.311048169 +0000 UTC m=+1442.834124033" Feb 03 12:28:45 crc kubenswrapper[4820]: I0203 12:28:45.528933 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/d4eb10ed-a945-4b23-8fb3-62022a90e09f-etc-swift\") pod \"swift-storage-0\" (UID: \"d4eb10ed-a945-4b23-8fb3-62022a90e09f\") " pod="openstack/swift-storage-0" Feb 03 12:28:45 crc kubenswrapper[4820]: E0203 12:28:45.529304 4820 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 03 12:28:45 crc kubenswrapper[4820]: E0203 12:28:45.529324 4820 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 03 12:28:45 crc kubenswrapper[4820]: E0203 12:28:45.529386 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d4eb10ed-a945-4b23-8fb3-62022a90e09f-etc-swift podName:d4eb10ed-a945-4b23-8fb3-62022a90e09f nodeName:}" failed. No retries permitted until 2026-02-03 12:28:49.529366925 +0000 UTC m=+1447.052442789 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/d4eb10ed-a945-4b23-8fb3-62022a90e09f-etc-swift") pod "swift-storage-0" (UID: "d4eb10ed-a945-4b23-8fb3-62022a90e09f") : configmap "swift-ring-files" not found Feb 03 12:28:46 crc kubenswrapper[4820]: I0203 12:28:46.074241 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-shb82"] Feb 03 12:28:46 crc kubenswrapper[4820]: I0203 12:28:46.075843 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-shb82" Feb 03 12:28:46 crc kubenswrapper[4820]: I0203 12:28:46.088015 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kk8sj\" (UniqueName: \"kubernetes.io/projected/32fc4e30-d6f9-431f-a147-b54659c292f4-kube-api-access-kk8sj\") pod \"keystone-db-create-shb82\" (UID: \"32fc4e30-d6f9-431f-a147-b54659c292f4\") " pod="openstack/keystone-db-create-shb82" Feb 03 12:28:46 crc kubenswrapper[4820]: I0203 12:28:46.088094 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/32fc4e30-d6f9-431f-a147-b54659c292f4-operator-scripts\") pod \"keystone-db-create-shb82\" (UID: \"32fc4e30-d6f9-431f-a147-b54659c292f4\") " pod="openstack/keystone-db-create-shb82" Feb 03 12:28:46 crc kubenswrapper[4820]: I0203 12:28:46.090931 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-shb82"] Feb 03 12:28:46 crc kubenswrapper[4820]: I0203 12:28:46.191624 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kk8sj\" (UniqueName: \"kubernetes.io/projected/32fc4e30-d6f9-431f-a147-b54659c292f4-kube-api-access-kk8sj\") pod \"keystone-db-create-shb82\" (UID: \"32fc4e30-d6f9-431f-a147-b54659c292f4\") " pod="openstack/keystone-db-create-shb82" Feb 03 12:28:46 crc kubenswrapper[4820]: I0203 12:28:46.191979 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/32fc4e30-d6f9-431f-a147-b54659c292f4-operator-scripts\") pod \"keystone-db-create-shb82\" (UID: \"32fc4e30-d6f9-431f-a147-b54659c292f4\") " pod="openstack/keystone-db-create-shb82" Feb 03 12:28:46 crc kubenswrapper[4820]: I0203 12:28:46.193769 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/32fc4e30-d6f9-431f-a147-b54659c292f4-operator-scripts\") pod \"keystone-db-create-shb82\" (UID: \"32fc4e30-d6f9-431f-a147-b54659c292f4\") " pod="openstack/keystone-db-create-shb82" Feb 03 12:28:46 crc kubenswrapper[4820]: I0203 12:28:46.219961 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kk8sj\" (UniqueName: \"kubernetes.io/projected/32fc4e30-d6f9-431f-a147-b54659c292f4-kube-api-access-kk8sj\") pod \"keystone-db-create-shb82\" (UID: \"32fc4e30-d6f9-431f-a147-b54659c292f4\") " pod="openstack/keystone-db-create-shb82" Feb 03 12:28:46 crc kubenswrapper[4820]: I0203 12:28:46.401013 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-shb82" Feb 03 12:28:46 crc kubenswrapper[4820]: I0203 12:28:46.483358 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-ec37-account-create-update-9qk8m"] Feb 03 12:28:46 crc kubenswrapper[4820]: I0203 12:28:46.486266 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-ec37-account-create-update-9qk8m" Feb 03 12:28:46 crc kubenswrapper[4820]: I0203 12:28:46.488373 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Feb 03 12:28:46 crc kubenswrapper[4820]: I0203 12:28:46.503131 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/343cdd64-3829-4d0b-bbac-d220e5442ee0-operator-scripts\") pod \"placement-ec37-account-create-update-9qk8m\" (UID: \"343cdd64-3829-4d0b-bbac-d220e5442ee0\") " pod="openstack/placement-ec37-account-create-update-9qk8m" Feb 03 12:28:46 crc kubenswrapper[4820]: I0203 12:28:46.503280 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pp5jd\" (UniqueName: \"kubernetes.io/projected/343cdd64-3829-4d0b-bbac-d220e5442ee0-kube-api-access-pp5jd\") pod \"placement-ec37-account-create-update-9qk8m\" (UID: \"343cdd64-3829-4d0b-bbac-d220e5442ee0\") " pod="openstack/placement-ec37-account-create-update-9qk8m" Feb 03 12:28:46 crc kubenswrapper[4820]: I0203 12:28:46.508587 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-ec37-account-create-update-9qk8m"] Feb 03 12:28:46 crc kubenswrapper[4820]: I0203 12:28:46.565951 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-97lpw"] Feb 03 12:28:46 crc kubenswrapper[4820]: I0203 12:28:46.567720 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-97lpw" Feb 03 12:28:46 crc kubenswrapper[4820]: I0203 12:28:46.572803 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-97lpw"] Feb 03 12:28:46 crc kubenswrapper[4820]: I0203 12:28:46.605626 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pp5jd\" (UniqueName: \"kubernetes.io/projected/343cdd64-3829-4d0b-bbac-d220e5442ee0-kube-api-access-pp5jd\") pod \"placement-ec37-account-create-update-9qk8m\" (UID: \"343cdd64-3829-4d0b-bbac-d220e5442ee0\") " pod="openstack/placement-ec37-account-create-update-9qk8m" Feb 03 12:28:46 crc kubenswrapper[4820]: I0203 12:28:46.605802 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/343cdd64-3829-4d0b-bbac-d220e5442ee0-operator-scripts\") pod \"placement-ec37-account-create-update-9qk8m\" (UID: \"343cdd64-3829-4d0b-bbac-d220e5442ee0\") " pod="openstack/placement-ec37-account-create-update-9qk8m" Feb 03 12:28:46 crc kubenswrapper[4820]: I0203 12:28:46.605859 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cpkqd\" (UniqueName: \"kubernetes.io/projected/fea163e7-ea8b-4888-8634-18323a2dfc2d-kube-api-access-cpkqd\") pod \"placement-db-create-97lpw\" (UID: \"fea163e7-ea8b-4888-8634-18323a2dfc2d\") " pod="openstack/placement-db-create-97lpw" Feb 03 12:28:46 crc kubenswrapper[4820]: I0203 12:28:46.606654 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/343cdd64-3829-4d0b-bbac-d220e5442ee0-operator-scripts\") pod \"placement-ec37-account-create-update-9qk8m\" (UID: \"343cdd64-3829-4d0b-bbac-d220e5442ee0\") " pod="openstack/placement-ec37-account-create-update-9qk8m" Feb 03 12:28:46 crc kubenswrapper[4820]: I0203 
12:28:46.606717 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fea163e7-ea8b-4888-8634-18323a2dfc2d-operator-scripts\") pod \"placement-db-create-97lpw\" (UID: \"fea163e7-ea8b-4888-8634-18323a2dfc2d\") " pod="openstack/placement-db-create-97lpw" Feb 03 12:28:46 crc kubenswrapper[4820]: I0203 12:28:46.627381 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pp5jd\" (UniqueName: \"kubernetes.io/projected/343cdd64-3829-4d0b-bbac-d220e5442ee0-kube-api-access-pp5jd\") pod \"placement-ec37-account-create-update-9qk8m\" (UID: \"343cdd64-3829-4d0b-bbac-d220e5442ee0\") " pod="openstack/placement-ec37-account-create-update-9qk8m" Feb 03 12:28:46 crc kubenswrapper[4820]: I0203 12:28:46.708867 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fea163e7-ea8b-4888-8634-18323a2dfc2d-operator-scripts\") pod \"placement-db-create-97lpw\" (UID: \"fea163e7-ea8b-4888-8634-18323a2dfc2d\") " pod="openstack/placement-db-create-97lpw" Feb 03 12:28:46 crc kubenswrapper[4820]: I0203 12:28:46.709188 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cpkqd\" (UniqueName: \"kubernetes.io/projected/fea163e7-ea8b-4888-8634-18323a2dfc2d-kube-api-access-cpkqd\") pod \"placement-db-create-97lpw\" (UID: \"fea163e7-ea8b-4888-8634-18323a2dfc2d\") " pod="openstack/placement-db-create-97lpw" Feb 03 12:28:46 crc kubenswrapper[4820]: I0203 12:28:46.710265 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fea163e7-ea8b-4888-8634-18323a2dfc2d-operator-scripts\") pod \"placement-db-create-97lpw\" (UID: \"fea163e7-ea8b-4888-8634-18323a2dfc2d\") " pod="openstack/placement-db-create-97lpw" Feb 03 12:28:46 crc kubenswrapper[4820]: I0203 12:28:46.727544 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cpkqd\" (UniqueName: \"kubernetes.io/projected/fea163e7-ea8b-4888-8634-18323a2dfc2d-kube-api-access-cpkqd\") pod \"placement-db-create-97lpw\" (UID: \"fea163e7-ea8b-4888-8634-18323a2dfc2d\") " pod="openstack/placement-db-create-97lpw" Feb 03 12:28:46 crc kubenswrapper[4820]: I0203 12:28:46.815587 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-ec37-account-create-update-9qk8m" Feb 03 12:28:46 crc kubenswrapper[4820]: I0203 12:28:46.888603 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-97lpw" Feb 03 12:28:46 crc kubenswrapper[4820]: I0203 12:28:46.898096 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-119e-account-create-update-gctg8"] Feb 03 12:28:46 crc kubenswrapper[4820]: I0203 12:28:46.899581 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-119e-account-create-update-gctg8" Feb 03 12:28:46 crc kubenswrapper[4820]: I0203 12:28:46.903403 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Feb 03 12:28:46 crc kubenswrapper[4820]: I0203 12:28:46.914702 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rcnbt\" (UniqueName: \"kubernetes.io/projected/ab7fb74b-aa61-420d-b013-f663b159cf8b-kube-api-access-rcnbt\") pod \"keystone-119e-account-create-update-gctg8\" (UID: \"ab7fb74b-aa61-420d-b013-f663b159cf8b\") " pod="openstack/keystone-119e-account-create-update-gctg8" Feb 03 12:28:46 crc kubenswrapper[4820]: I0203 12:28:46.914850 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ab7fb74b-aa61-420d-b013-f663b159cf8b-operator-scripts\") pod \"keystone-119e-account-create-update-gctg8\" (UID: \"ab7fb74b-aa61-420d-b013-f663b159cf8b\") " pod="openstack/keystone-119e-account-create-update-gctg8" Feb 03 12:28:46 crc kubenswrapper[4820]: I0203 12:28:46.925713 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-119e-account-create-update-gctg8"] Feb 03 12:28:47 crc kubenswrapper[4820]: I0203 12:28:47.027373 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rcnbt\" (UniqueName: \"kubernetes.io/projected/ab7fb74b-aa61-420d-b013-f663b159cf8b-kube-api-access-rcnbt\") pod \"keystone-119e-account-create-update-gctg8\" (UID: \"ab7fb74b-aa61-420d-b013-f663b159cf8b\") " pod="openstack/keystone-119e-account-create-update-gctg8" Feb 03 12:28:47 crc kubenswrapper[4820]: I0203 12:28:47.027527 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ab7fb74b-aa61-420d-b013-f663b159cf8b-operator-scripts\") pod \"keystone-119e-account-create-update-gctg8\" (UID: \"ab7fb74b-aa61-420d-b013-f663b159cf8b\") " pod="openstack/keystone-119e-account-create-update-gctg8" Feb 03 12:28:47 crc kubenswrapper[4820]: I0203 12:28:47.028566 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ab7fb74b-aa61-420d-b013-f663b159cf8b-operator-scripts\") pod \"keystone-119e-account-create-update-gctg8\" (UID: \"ab7fb74b-aa61-420d-b013-f663b159cf8b\") " pod="openstack/keystone-119e-account-create-update-gctg8" Feb 03 12:28:47 crc kubenswrapper[4820]: I0203 12:28:47.052361 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rcnbt\" (UniqueName: \"kubernetes.io/projected/ab7fb74b-aa61-420d-b013-f663b159cf8b-kube-api-access-rcnbt\") pod \"keystone-119e-account-create-update-gctg8\" (UID: \"ab7fb74b-aa61-420d-b013-f663b159cf8b\") " pod="openstack/keystone-119e-account-create-update-gctg8" Feb 03 12:28:47 crc kubenswrapper[4820]: I0203 12:28:47.227421 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-119e-account-create-update-gctg8" Feb 03 12:28:47 crc kubenswrapper[4820]: I0203 12:28:47.431216 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Feb 03 12:28:48 crc kubenswrapper[4820]: W0203 12:28:48.864358 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod091fcf4f_9f71_4b4a_92ef_b856a2df672a.slice/crio-79daef99490811fff227bd2bed2848dff10635eb9f085652174420a0dab47d13 WatchSource:0}: Error finding container 79daef99490811fff227bd2bed2848dff10635eb9f085652174420a0dab47d13: Status 404 returned error can't find the container with id 79daef99490811fff227bd2bed2848dff10635eb9f085652174420a0dab47d13 Feb 03 12:28:48 crc kubenswrapper[4820]: I0203 12:28:48.945769 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-49a9-account-create-update-fpqm2" Feb 03 12:28:48 crc kubenswrapper[4820]: I0203 12:28:48.954339 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-863d-account-create-update-sj894" Feb 03 12:28:48 crc kubenswrapper[4820]: I0203 12:28:48.966086 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bbdce215-5dd4-4a45-a099-ac2b51edf843-operator-scripts\") pod \"bbdce215-5dd4-4a45-a099-ac2b51edf843\" (UID: \"bbdce215-5dd4-4a45-a099-ac2b51edf843\") " Feb 03 12:28:48 crc kubenswrapper[4820]: I0203 12:28:48.966319 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hmrr8\" (UniqueName: \"kubernetes.io/projected/bbdce215-5dd4-4a45-a099-ac2b51edf843-kube-api-access-hmrr8\") pod \"bbdce215-5dd4-4a45-a099-ac2b51edf843\" (UID: \"bbdce215-5dd4-4a45-a099-ac2b51edf843\") " Feb 03 12:28:48 crc kubenswrapper[4820]: I0203 12:28:48.966372 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bjgrs\" (UniqueName: \"kubernetes.io/projected/eb6cc3c6-65e8-4def-9490-6d1a4a5f13eb-kube-api-access-bjgrs\") pod \"eb6cc3c6-65e8-4def-9490-6d1a4a5f13eb\" (UID: \"eb6cc3c6-65e8-4def-9490-6d1a4a5f13eb\") " Feb 03 12:28:48 crc kubenswrapper[4820]: I0203 12:28:48.966427 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eb6cc3c6-65e8-4def-9490-6d1a4a5f13eb-operator-scripts\") pod \"eb6cc3c6-65e8-4def-9490-6d1a4a5f13eb\" (UID: \"eb6cc3c6-65e8-4def-9490-6d1a4a5f13eb\") " Feb 03 12:28:48 crc kubenswrapper[4820]: I0203 12:28:48.966971 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bbdce215-5dd4-4a45-a099-ac2b51edf843-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "bbdce215-5dd4-4a45-a099-ac2b51edf843" (UID: "bbdce215-5dd4-4a45-a099-ac2b51edf843"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:28:48 crc kubenswrapper[4820]: I0203 12:28:48.971870 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb6cc3c6-65e8-4def-9490-6d1a4a5f13eb-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "eb6cc3c6-65e8-4def-9490-6d1a4a5f13eb" (UID: "eb6cc3c6-65e8-4def-9490-6d1a4a5f13eb"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:28:48 crc kubenswrapper[4820]: I0203 12:28:48.988513 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-create-6fs49" Feb 03 12:28:48 crc kubenswrapper[4820]: I0203 12:28:48.996512 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bbdce215-5dd4-4a45-a099-ac2b51edf843-kube-api-access-hmrr8" (OuterVolumeSpecName: "kube-api-access-hmrr8") pod "bbdce215-5dd4-4a45-a099-ac2b51edf843" (UID: "bbdce215-5dd4-4a45-a099-ac2b51edf843"). InnerVolumeSpecName "kube-api-access-hmrr8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:28:49 crc kubenswrapper[4820]: I0203 12:28:49.015608 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb6cc3c6-65e8-4def-9490-6d1a4a5f13eb-kube-api-access-bjgrs" (OuterVolumeSpecName: "kube-api-access-bjgrs") pod "eb6cc3c6-65e8-4def-9490-6d1a4a5f13eb" (UID: "eb6cc3c6-65e8-4def-9490-6d1a4a5f13eb"). InnerVolumeSpecName "kube-api-access-bjgrs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:28:49 crc kubenswrapper[4820]: I0203 12:28:49.077347 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-msqn5\" (UniqueName: \"kubernetes.io/projected/e928b295-8806-4a21-aaf7-d59749562244-kube-api-access-msqn5\") pod \"e928b295-8806-4a21-aaf7-d59749562244\" (UID: \"e928b295-8806-4a21-aaf7-d59749562244\") " Feb 03 12:28:49 crc kubenswrapper[4820]: I0203 12:28:49.077945 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e928b295-8806-4a21-aaf7-d59749562244-operator-scripts\") pod \"e928b295-8806-4a21-aaf7-d59749562244\" (UID: \"e928b295-8806-4a21-aaf7-d59749562244\") " Feb 03 12:28:49 crc kubenswrapper[4820]: I0203 12:28:49.079683 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e928b295-8806-4a21-aaf7-d59749562244-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e928b295-8806-4a21-aaf7-d59749562244" (UID: "e928b295-8806-4a21-aaf7-d59749562244"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:28:49 crc kubenswrapper[4820]: I0203 12:28:49.081213 4820 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e928b295-8806-4a21-aaf7-d59749562244-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 12:28:49 crc kubenswrapper[4820]: I0203 12:28:49.081258 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hmrr8\" (UniqueName: \"kubernetes.io/projected/bbdce215-5dd4-4a45-a099-ac2b51edf843-kube-api-access-hmrr8\") on node \"crc\" DevicePath \"\"" Feb 03 12:28:49 crc kubenswrapper[4820]: I0203 12:28:49.081275 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bjgrs\" (UniqueName: \"kubernetes.io/projected/eb6cc3c6-65e8-4def-9490-6d1a4a5f13eb-kube-api-access-bjgrs\") on node \"crc\" DevicePath \"\"" Feb 03 12:28:49 crc kubenswrapper[4820]: I0203 12:28:49.081296 4820 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eb6cc3c6-65e8-4def-9490-6d1a4a5f13eb-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 12:28:49 crc kubenswrapper[4820]: I0203 12:28:49.081310 4820 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bbdce215-5dd4-4a45-a099-ac2b51edf843-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 12:28:49 crc kubenswrapper[4820]: I0203 12:28:49.081433 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e928b295-8806-4a21-aaf7-d59749562244-kube-api-access-msqn5" (OuterVolumeSpecName: "kube-api-access-msqn5") pod "e928b295-8806-4a21-aaf7-d59749562244" (UID: "e928b295-8806-4a21-aaf7-d59749562244"). InnerVolumeSpecName "kube-api-access-msqn5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:28:49 crc kubenswrapper[4820]: I0203 12:28:49.183933 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-msqn5\" (UniqueName: \"kubernetes.io/projected/e928b295-8806-4a21-aaf7-d59749562244-kube-api-access-msqn5\") on node \"crc\" DevicePath \"\"" Feb 03 12:28:49 crc kubenswrapper[4820]: I0203 12:28:49.324800 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-49a9-account-create-update-fpqm2" event={"ID":"eb6cc3c6-65e8-4def-9490-6d1a4a5f13eb","Type":"ContainerDied","Data":"e9edeac9f1ea44c67b79ef28e7ebe811b9fb474505df316168eebedcd2c754a0"} Feb 03 12:28:49 crc kubenswrapper[4820]: I0203 12:28:49.324876 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e9edeac9f1ea44c67b79ef28e7ebe811b9fb474505df316168eebedcd2c754a0" Feb 03 12:28:49 crc kubenswrapper[4820]: I0203 12:28:49.324992 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-49a9-account-create-update-fpqm2" Feb 03 12:28:49 crc kubenswrapper[4820]: I0203 12:28:49.328648 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-ptbp8" event={"ID":"091fcf4f-9f71-4b4a-92ef-b856a2df672a","Type":"ContainerStarted","Data":"79daef99490811fff227bd2bed2848dff10635eb9f085652174420a0dab47d13"} Feb 03 12:28:49 crc kubenswrapper[4820]: I0203 12:28:49.330525 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-create-6fs49" event={"ID":"e928b295-8806-4a21-aaf7-d59749562244","Type":"ContainerDied","Data":"8de3264001ff0adc48904512f0f72e57507daf4441de47dae90c3497a82d898d"} Feb 03 12:28:49 crc kubenswrapper[4820]: I0203 12:28:49.330563 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8de3264001ff0adc48904512f0f72e57507daf4441de47dae90c3497a82d898d" Feb 03 12:28:49 crc kubenswrapper[4820]: I0203 12:28:49.330652 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-create-6fs49" Feb 03 12:28:49 crc kubenswrapper[4820]: I0203 12:28:49.332871 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-863d-account-create-update-sj894" event={"ID":"bbdce215-5dd4-4a45-a099-ac2b51edf843","Type":"ContainerDied","Data":"62d32b8d9b7dd6458fc2e502029827e2cfc7bde3bd7ac01409253d64a484032b"} Feb 03 12:28:49 crc kubenswrapper[4820]: I0203 12:28:49.332921 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="62d32b8d9b7dd6458fc2e502029827e2cfc7bde3bd7ac01409253d64a484032b" Feb 03 12:28:49 crc kubenswrapper[4820]: I0203 12:28:49.332996 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-863d-account-create-update-sj894" Feb 03 12:28:49 crc kubenswrapper[4820]: I0203 12:28:49.600150 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/d4eb10ed-a945-4b23-8fb3-62022a90e09f-etc-swift\") pod \"swift-storage-0\" (UID: \"d4eb10ed-a945-4b23-8fb3-62022a90e09f\") " pod="openstack/swift-storage-0" Feb 03 12:28:49 crc kubenswrapper[4820]: E0203 12:28:49.600407 4820 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 03 12:28:49 crc kubenswrapper[4820]: E0203 12:28:49.600545 4820 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 03 12:28:49 crc kubenswrapper[4820]: E0203 12:28:49.600628 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d4eb10ed-a945-4b23-8fb3-62022a90e09f-etc-swift podName:d4eb10ed-a945-4b23-8fb3-62022a90e09f nodeName:}" failed. No retries permitted until 2026-02-03 12:28:57.600601634 +0000 UTC m=+1455.123677498 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/d4eb10ed-a945-4b23-8fb3-62022a90e09f-etc-swift") pod "swift-storage-0" (UID: "d4eb10ed-a945-4b23-8fb3-62022a90e09f") : configmap "swift-ring-files" not found Feb 03 12:28:50 crc kubenswrapper[4820]: I0203 12:28:50.117223 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Feb 03 12:28:50 crc kubenswrapper[4820]: I0203 12:28:50.792555 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-698758b865-zbjrj" Feb 03 12:28:50 crc kubenswrapper[4820]: I0203 12:28:50.867316 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-r25hq"] Feb 03 12:28:50 crc kubenswrapper[4820]: I0203 12:28:50.867615 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-86db49b7ff-r25hq" podUID="4f272940-99d0-44a5-b16c-73b2b4f17bba" containerName="dnsmasq-dns" containerID="cri-o://79f1610b2a1f0aff89134774b059753aa69330068731a2fb3f06b79c922fa21d" gracePeriod=10 Feb 03 12:28:51 crc kubenswrapper[4820]: I0203 12:28:51.146481 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-86db49b7ff-r25hq" podUID="4f272940-99d0-44a5-b16c-73b2b4f17bba" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.117:5353: connect: connection refused" Feb 03 12:28:51 crc kubenswrapper[4820]: I0203 12:28:51.634999 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-6g6lc" Feb 03 12:28:51 crc kubenswrapper[4820]: I0203 12:28:51.769700 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1336b61c-ed56-40f3-b2cd-1d476b33459b-operator-scripts\") pod \"1336b61c-ed56-40f3-b2cd-1d476b33459b\" (UID: \"1336b61c-ed56-40f3-b2cd-1d476b33459b\") " Feb 03 12:28:51 crc kubenswrapper[4820]: I0203 12:28:51.769778 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ss8rr\" (UniqueName: \"kubernetes.io/projected/1336b61c-ed56-40f3-b2cd-1d476b33459b-kube-api-access-ss8rr\") pod \"1336b61c-ed56-40f3-b2cd-1d476b33459b\" (UID: \"1336b61c-ed56-40f3-b2cd-1d476b33459b\") " Feb 03 12:28:51 crc kubenswrapper[4820]: I0203 12:28:51.770962 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1336b61c-ed56-40f3-b2cd-1d476b33459b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1336b61c-ed56-40f3-b2cd-1d476b33459b" (UID: "1336b61c-ed56-40f3-b2cd-1d476b33459b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:28:51 crc kubenswrapper[4820]: I0203 12:28:51.789111 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1336b61c-ed56-40f3-b2cd-1d476b33459b-kube-api-access-ss8rr" (OuterVolumeSpecName: "kube-api-access-ss8rr") pod "1336b61c-ed56-40f3-b2cd-1d476b33459b" (UID: "1336b61c-ed56-40f3-b2cd-1d476b33459b"). InnerVolumeSpecName "kube-api-access-ss8rr". 
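The etc-swift failures above come from a projected volume whose ConfigMap source does not exist yet: the kubelet cannot prepare the volume until openstack/swift-ring-files is created, so swift-storage-0 cannot start and the mount is retried with backoff. A minimal sketch of such a volume, using the names from the log inside an otherwise illustrative Go snippet built on the k8s.io/api/core/v1 types:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// etcSwiftVolume sketches the projected volume the kubelet is trying to
// set up for swift-storage-0: its only source is the swift-ring-files
// ConfigMap, so MountVolume.SetUp must fail until that ConfigMap exists.
func etcSwiftVolume() corev1.Volume {
	return corev1.Volume{
		Name: "etc-swift",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					ConfigMap: &corev1.ConfigMapProjection{
						LocalObjectReference: corev1.LocalObjectReference{
							Name: "swift-ring-files",
						},
						// Optional is nil (false) by default: a missing
						// ConfigMap is an error, not an empty directory.
					},
				}},
			},
		},
	}
}

func main() {
	fmt.Println(etcSwiftVolume().Name) // etc-swift
}
```

Marking the source Optional would let the pod start with an empty volume instead, which is usually not what a storage service wants.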
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:28:51 crc kubenswrapper[4820]: I0203 12:28:51.871909 4820 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1336b61c-ed56-40f3-b2cd-1d476b33459b-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 12:28:51 crc kubenswrapper[4820]: I0203 12:28:51.872241 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ss8rr\" (UniqueName: \"kubernetes.io/projected/1336b61c-ed56-40f3-b2cd-1d476b33459b-kube-api-access-ss8rr\") on node \"crc\" DevicePath \"\"" Feb 03 12:28:52 crc kubenswrapper[4820]: I0203 12:28:52.378967 4820 generic.go:334] "Generic (PLEG): container finished" podID="4f272940-99d0-44a5-b16c-73b2b4f17bba" containerID="79f1610b2a1f0aff89134774b059753aa69330068731a2fb3f06b79c922fa21d" exitCode=0 Feb 03 12:28:52 crc kubenswrapper[4820]: I0203 12:28:52.379095 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-r25hq" event={"ID":"4f272940-99d0-44a5-b16c-73b2b4f17bba","Type":"ContainerDied","Data":"79f1610b2a1f0aff89134774b059753aa69330068731a2fb3f06b79c922fa21d"} Feb 03 12:28:52 crc kubenswrapper[4820]: I0203 12:28:52.381262 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-6g6lc" event={"ID":"1336b61c-ed56-40f3-b2cd-1d476b33459b","Type":"ContainerDied","Data":"04144240d3969331015122ae034e42f79b77cd37d65a90f9c987370b96e67f15"} Feb 03 12:28:52 crc kubenswrapper[4820]: I0203 12:28:52.381295 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="04144240d3969331015122ae034e42f79b77cd37d65a90f9c987370b96e67f15" Feb 03 12:28:52 crc kubenswrapper[4820]: I0203 12:28:52.381382 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-6g6lc" Feb 03 12:28:53 crc kubenswrapper[4820]: I0203 12:28:53.401658 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-zlksq"] Feb 03 12:28:53 crc kubenswrapper[4820]: E0203 12:28:53.404119 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb6cc3c6-65e8-4def-9490-6d1a4a5f13eb" containerName="mariadb-account-create-update" Feb 03 12:28:53 crc kubenswrapper[4820]: I0203 12:28:53.404651 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb6cc3c6-65e8-4def-9490-6d1a4a5f13eb" containerName="mariadb-account-create-update" Feb 03 12:28:53 crc kubenswrapper[4820]: E0203 12:28:53.404670 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbdce215-5dd4-4a45-a099-ac2b51edf843" containerName="mariadb-account-create-update" Feb 03 12:28:53 crc kubenswrapper[4820]: I0203 12:28:53.404678 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbdce215-5dd4-4a45-a099-ac2b51edf843" containerName="mariadb-account-create-update" Feb 03 12:28:53 crc kubenswrapper[4820]: E0203 12:28:53.404838 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e928b295-8806-4a21-aaf7-d59749562244" containerName="mariadb-database-create" Feb 03 12:28:53 crc kubenswrapper[4820]: I0203 12:28:53.404850 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="e928b295-8806-4a21-aaf7-d59749562244" containerName="mariadb-database-create" Feb 03 12:28:53 crc kubenswrapper[4820]: E0203 12:28:53.404873 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1336b61c-ed56-40f3-b2cd-1d476b33459b" containerName="mariadb-database-create" Feb 03 12:28:53 crc kubenswrapper[4820]: I0203 12:28:53.404916 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="1336b61c-ed56-40f3-b2cd-1d476b33459b" containerName="mariadb-database-create" Feb 03 12:28:53 crc kubenswrapper[4820]: I0203 12:28:53.405423 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="e928b295-8806-4a21-aaf7-d59749562244" containerName="mariadb-database-create" Feb 03 12:28:53 crc kubenswrapper[4820]: I0203 12:28:53.405440 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="bbdce215-5dd4-4a45-a099-ac2b51edf843" containerName="mariadb-account-create-update" Feb 03 12:28:53 crc kubenswrapper[4820]: I0203 12:28:53.405561 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb6cc3c6-65e8-4def-9490-6d1a4a5f13eb" containerName="mariadb-account-create-update" Feb 03 12:28:53 crc kubenswrapper[4820]: I0203 12:28:53.405576 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="1336b61c-ed56-40f3-b2cd-1d476b33459b" containerName="mariadb-database-create" Feb 03 12:28:53 crc kubenswrapper[4820]: I0203 12:28:53.407480 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-zlksq" Feb 03 12:28:53 crc kubenswrapper[4820]: I0203 12:28:53.410325 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-rp7hq" Feb 03 12:28:53 crc kubenswrapper[4820]: I0203 12:28:53.412313 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-zlksq"] Feb 03 12:28:53 crc kubenswrapper[4820]: I0203 12:28:53.412454 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Feb 03 12:28:53 crc kubenswrapper[4820]: I0203 12:28:53.421171 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-r25hq" event={"ID":"4f272940-99d0-44a5-b16c-73b2b4f17bba","Type":"ContainerDied","Data":"bd6342753f672c2365ee01a2bca696cf9cc221410684669169cd63e2a8ae546e"} Feb 03 12:28:53 crc kubenswrapper[4820]: I0203 12:28:53.421222 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bd6342753f672c2365ee01a2bca696cf9cc221410684669169cd63e2a8ae546e" Feb 03 12:28:53 crc kubenswrapper[4820]: I0203 12:28:53.446334 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vpj24\" (UniqueName: \"kubernetes.io/projected/b897af0d-2b67-45c6-b17f-3686d5a419c0-kube-api-access-vpj24\") pod \"glance-db-sync-zlksq\" (UID: \"b897af0d-2b67-45c6-b17f-3686d5a419c0\") " pod="openstack/glance-db-sync-zlksq" Feb 03 12:28:53 crc kubenswrapper[4820]: I0203 12:28:53.446839 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b897af0d-2b67-45c6-b17f-3686d5a419c0-db-sync-config-data\") pod \"glance-db-sync-zlksq\" (UID: \"b897af0d-2b67-45c6-b17f-3686d5a419c0\") " pod="openstack/glance-db-sync-zlksq" Feb 03 12:28:53 crc kubenswrapper[4820]: I0203 12:28:53.446981 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b897af0d-2b67-45c6-b17f-3686d5a419c0-config-data\") pod \"glance-db-sync-zlksq\" (UID: \"b897af0d-2b67-45c6-b17f-3686d5a419c0\") " pod="openstack/glance-db-sync-zlksq" Feb 03 12:28:53 crc kubenswrapper[4820]: I0203 12:28:53.447034 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b897af0d-2b67-45c6-b17f-3686d5a419c0-combined-ca-bundle\") pod \"glance-db-sync-zlksq\" (UID: \"b897af0d-2b67-45c6-b17f-3686d5a419c0\") " pod="openstack/glance-db-sync-zlksq" Feb 03 12:28:53 crc kubenswrapper[4820]: I0203 12:28:53.672067 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vpj24\" (UniqueName: \"kubernetes.io/projected/b897af0d-2b67-45c6-b17f-3686d5a419c0-kube-api-access-vpj24\") pod \"glance-db-sync-zlksq\" (UID: \"b897af0d-2b67-45c6-b17f-3686d5a419c0\") " pod="openstack/glance-db-sync-zlksq" Feb 03 12:28:53 crc kubenswrapper[4820]: I0203 12:28:53.672653 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b897af0d-2b67-45c6-b17f-3686d5a419c0-db-sync-config-data\") pod \"glance-db-sync-zlksq\" (UID: \"b897af0d-2b67-45c6-b17f-3686d5a419c0\") " pod="openstack/glance-db-sync-zlksq" Feb 03 12:28:53 crc kubenswrapper[4820]: I0203 12:28:53.672706 4820 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b897af0d-2b67-45c6-b17f-3686d5a419c0-config-data\") pod \"glance-db-sync-zlksq\" (UID: \"b897af0d-2b67-45c6-b17f-3686d5a419c0\") " pod="openstack/glance-db-sync-zlksq" Feb 03 12:28:53 crc kubenswrapper[4820]: I0203 12:28:53.672740 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b897af0d-2b67-45c6-b17f-3686d5a419c0-combined-ca-bundle\") pod \"glance-db-sync-zlksq\" (UID: \"b897af0d-2b67-45c6-b17f-3686d5a419c0\") " pod="openstack/glance-db-sync-zlksq" Feb 03 12:28:53 crc kubenswrapper[4820]: I0203 12:28:53.688525 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b897af0d-2b67-45c6-b17f-3686d5a419c0-db-sync-config-data\") pod \"glance-db-sync-zlksq\" (UID: \"b897af0d-2b67-45c6-b17f-3686d5a419c0\") " pod="openstack/glance-db-sync-zlksq" Feb 03 12:28:53 crc kubenswrapper[4820]: I0203 12:28:53.691067 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b897af0d-2b67-45c6-b17f-3686d5a419c0-config-data\") pod \"glance-db-sync-zlksq\" (UID: \"b897af0d-2b67-45c6-b17f-3686d5a419c0\") " pod="openstack/glance-db-sync-zlksq" Feb 03 12:28:53 crc kubenswrapper[4820]: I0203 12:28:53.692525 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b897af0d-2b67-45c6-b17f-3686d5a419c0-combined-ca-bundle\") pod \"glance-db-sync-zlksq\" (UID: \"b897af0d-2b67-45c6-b17f-3686d5a419c0\") " pod="openstack/glance-db-sync-zlksq" Feb 03 12:28:53 crc kubenswrapper[4820]: I0203 12:28:53.717390 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-r25hq" Feb 03 12:28:53 crc kubenswrapper[4820]: I0203 12:28:53.727086 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vpj24\" (UniqueName: \"kubernetes.io/projected/b897af0d-2b67-45c6-b17f-3686d5a419c0-kube-api-access-vpj24\") pod \"glance-db-sync-zlksq\" (UID: \"b897af0d-2b67-45c6-b17f-3686d5a419c0\") " pod="openstack/glance-db-sync-zlksq" Feb 03 12:28:53 crc kubenswrapper[4820]: I0203 12:28:53.778158 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4f272940-99d0-44a5-b16c-73b2b4f17bba-dns-svc\") pod \"4f272940-99d0-44a5-b16c-73b2b4f17bba\" (UID: \"4f272940-99d0-44a5-b16c-73b2b4f17bba\") " Feb 03 12:28:53 crc kubenswrapper[4820]: I0203 12:28:53.778234 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f272940-99d0-44a5-b16c-73b2b4f17bba-config\") pod \"4f272940-99d0-44a5-b16c-73b2b4f17bba\" (UID: \"4f272940-99d0-44a5-b16c-73b2b4f17bba\") " Feb 03 12:28:53 crc kubenswrapper[4820]: I0203 12:28:53.778293 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4f272940-99d0-44a5-b16c-73b2b4f17bba-ovsdbserver-sb\") pod \"4f272940-99d0-44a5-b16c-73b2b4f17bba\" (UID: \"4f272940-99d0-44a5-b16c-73b2b4f17bba\") " Feb 03 12:28:53 crc kubenswrapper[4820]: I0203 12:28:53.778331 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4f272940-99d0-44a5-b16c-73b2b4f17bba-ovsdbserver-nb\") pod \"4f272940-99d0-44a5-b16c-73b2b4f17bba\" (UID: \"4f272940-99d0-44a5-b16c-73b2b4f17bba\") " Feb 03 12:28:53 crc kubenswrapper[4820]: I0203 12:28:53.778443 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bnqht\" (UniqueName: \"kubernetes.io/projected/4f272940-99d0-44a5-b16c-73b2b4f17bba-kube-api-access-bnqht\") pod \"4f272940-99d0-44a5-b16c-73b2b4f17bba\" (UID: \"4f272940-99d0-44a5-b16c-73b2b4f17bba\") " Feb 03 12:28:53 crc kubenswrapper[4820]: I0203 12:28:53.806040 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4f272940-99d0-44a5-b16c-73b2b4f17bba-kube-api-access-bnqht" (OuterVolumeSpecName: "kube-api-access-bnqht") pod "4f272940-99d0-44a5-b16c-73b2b4f17bba" (UID: "4f272940-99d0-44a5-b16c-73b2b4f17bba"). InnerVolumeSpecName "kube-api-access-bnqht". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:28:53 crc kubenswrapper[4820]: I0203 12:28:53.828098 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-zlksq" Feb 03 12:28:53 crc kubenswrapper[4820]: I0203 12:28:53.882579 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bnqht\" (UniqueName: \"kubernetes.io/projected/4f272940-99d0-44a5-b16c-73b2b4f17bba-kube-api-access-bnqht\") on node \"crc\" DevicePath \"\"" Feb 03 12:28:53 crc kubenswrapper[4820]: I0203 12:28:53.965667 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4f272940-99d0-44a5-b16c-73b2b4f17bba-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "4f272940-99d0-44a5-b16c-73b2b4f17bba" (UID: "4f272940-99d0-44a5-b16c-73b2b4f17bba"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:28:53 crc kubenswrapper[4820]: I0203 12:28:53.965703 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4f272940-99d0-44a5-b16c-73b2b4f17bba-config" (OuterVolumeSpecName: "config") pod "4f272940-99d0-44a5-b16c-73b2b4f17bba" (UID: "4f272940-99d0-44a5-b16c-73b2b4f17bba"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:28:53 crc kubenswrapper[4820]: I0203 12:28:53.971909 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4f272940-99d0-44a5-b16c-73b2b4f17bba-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "4f272940-99d0-44a5-b16c-73b2b4f17bba" (UID: "4f272940-99d0-44a5-b16c-73b2b4f17bba"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:28:53 crc kubenswrapper[4820]: I0203 12:28:53.990212 4820 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4f272940-99d0-44a5-b16c-73b2b4f17bba-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 03 12:28:53 crc kubenswrapper[4820]: I0203 12:28:53.990252 4820 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f272940-99d0-44a5-b16c-73b2b4f17bba-config\") on node \"crc\" DevicePath \"\"" Feb 03 12:28:53 crc kubenswrapper[4820]: I0203 12:28:53.990265 4820 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4f272940-99d0-44a5-b16c-73b2b4f17bba-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 03 12:28:54 crc kubenswrapper[4820]: I0203 12:28:54.008411 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4f272940-99d0-44a5-b16c-73b2b4f17bba-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "4f272940-99d0-44a5-b16c-73b2b4f17bba" (UID: "4f272940-99d0-44a5-b16c-73b2b4f17bba"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:28:54 crc kubenswrapper[4820]: I0203 12:28:54.070058 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-119e-account-create-update-gctg8"] Feb 03 12:28:54 crc kubenswrapper[4820]: I0203 12:28:54.091615 4820 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4f272940-99d0-44a5-b16c-73b2b4f17bba-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 03 12:28:54 crc kubenswrapper[4820]: I0203 12:28:54.333304 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Feb 03 12:28:54 crc kubenswrapper[4820]: I0203 12:28:54.457323 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-119e-account-create-update-gctg8" event={"ID":"ab7fb74b-aa61-420d-b013-f663b159cf8b","Type":"ContainerStarted","Data":"f3b97011600bc8093340f66a3e94591dbe76e043a3226995eb789b0184ba1ef8"} Feb 03 12:28:54 crc kubenswrapper[4820]: I0203 12:28:54.489328 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-ptbp8" event={"ID":"091fcf4f-9f71-4b4a-92ef-b856a2df672a","Type":"ContainerStarted","Data":"2f8b0bda5672c4fb02adb6f8a6223f20960d97b21d6cac3eda7f9992132626c9"} Feb 03 12:28:54 crc kubenswrapper[4820]: I0203 12:28:54.490512 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-97lpw"] Feb 03 12:28:54 crc kubenswrapper[4820]: I0203 12:28:54.504447 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-ec37-account-create-update-9qk8m" event={"ID":"343cdd64-3829-4d0b-bbac-d220e5442ee0","Type":"ContainerStarted","Data":"96b2c444d0b9be21d1627fb563cf381c65cef3408d5f807e714b7235927d61cb"} Feb 03 12:28:54 crc kubenswrapper[4820]: I0203 12:28:54.506904 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-97lpw" event={"ID":"fea163e7-ea8b-4888-8634-18323a2dfc2d","Type":"ContainerStarted","Data":"18b8e6fe41ff8a2e4c1406aa08364d07d893c3b149f7f6ab70c7a247ea61ff27"} Feb 03 12:28:54 crc kubenswrapper[4820]: I0203 12:28:54.517264 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"7b4c739c-d87f-478c-aec7-07a49da53d46","Type":"ContainerStarted","Data":"5a8edec5e62c8c83f4c0a9e78ca21c0e03b70b98b2306d94b41f019541c2c591"} Feb 03 12:28:54 crc kubenswrapper[4820]: I0203 12:28:54.520712 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-r25hq" Feb 03 12:28:54 crc kubenswrapper[4820]: I0203 12:28:54.521246 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-pslmr" event={"ID":"94423319-f57f-47dd-80db-db41374dcb25","Type":"ContainerStarted","Data":"999c06d20e5fa8c1348b8eda49de76849a801f9e4d5b38b29fc594b89b2c9015"} Feb 03 12:28:54 crc kubenswrapper[4820]: I0203 12:28:54.538614 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-ec37-account-create-update-9qk8m"] Feb 03 12:28:54 crc kubenswrapper[4820]: I0203 12:28:54.540704 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-ptbp8" podStartSLOduration=10.540689857 podStartE2EDuration="10.540689857s" podCreationTimestamp="2026-02-03 12:28:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:28:54.529858774 +0000 UTC m=+1452.052934648" watchObservedRunningTime="2026-02-03 12:28:54.540689857 +0000 UTC m=+1452.063765721" Feb 03 12:28:54 crc kubenswrapper[4820]: I0203 12:28:54.587828 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-pslmr" podStartSLOduration=2.712204518 podStartE2EDuration="12.587755198s" podCreationTimestamp="2026-02-03 12:28:42 +0000 UTC" firstStartedPulling="2026-02-03 12:28:43.473624191 +0000 UTC m=+1440.996700055" lastFinishedPulling="2026-02-03 12:28:53.349174871 +0000 UTC m=+1450.872250735" observedRunningTime="2026-02-03 12:28:54.577746528 +0000 UTC m=+1452.100822412" watchObservedRunningTime="2026-02-03 12:28:54.587755198 +0000 UTC m=+1452.110831062" Feb 03 12:28:54 crc kubenswrapper[4820]: I0203 12:28:54.616435 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-shb82"] Feb 03 12:28:55 crc kubenswrapper[4820]: I0203 12:28:55.204406 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-r25hq"] Feb 03 12:28:55 crc kubenswrapper[4820]: I0203 12:28:55.292478 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-r25hq"] Feb 03 12:28:55 crc kubenswrapper[4820]: I0203 12:28:55.317528 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-zlksq"] Feb 03 12:28:55 crc kubenswrapper[4820]: W0203 12:28:55.326348 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb897af0d_2b67_45c6_b17f_3686d5a419c0.slice/crio-970a95b7943ff4e2dc8c19a3bee1aedd4a2f3157d23140bac9eef67a33476fdf WatchSource:0}: Error finding container 970a95b7943ff4e2dc8c19a3bee1aedd4a2f3157d23140bac9eef67a33476fdf: Status 404 returned error can't find the container with id 970a95b7943ff4e2dc8c19a3bee1aedd4a2f3157d23140bac9eef67a33476fdf Feb 03 12:28:55 crc kubenswrapper[4820]: I0203 12:28:55.536502 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-119e-account-create-update-gctg8" event={"ID":"ab7fb74b-aa61-420d-b013-f663b159cf8b","Type":"ContainerStarted","Data":"08da51c59a4a0840a2624deec089ea2c857bcd5d44291be7cd6ad4f51bc9054c"} Feb 03 12:28:55 crc kubenswrapper[4820]: I0203 12:28:55.543436 4820 generic.go:334] "Generic (PLEG): container finished" podID="091fcf4f-9f71-4b4a-92ef-b856a2df672a" containerID="2f8b0bda5672c4fb02adb6f8a6223f20960d97b21d6cac3eda7f9992132626c9" exitCode=0 Feb 03 12:28:55 crc 
kubenswrapper[4820]: I0203 12:28:55.543519 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-ptbp8" event={"ID":"091fcf4f-9f71-4b4a-92ef-b856a2df672a","Type":"ContainerDied","Data":"2f8b0bda5672c4fb02adb6f8a6223f20960d97b21d6cac3eda7f9992132626c9"} Feb 03 12:28:55 crc kubenswrapper[4820]: I0203 12:28:55.548756 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-ec37-account-create-update-9qk8m" event={"ID":"343cdd64-3829-4d0b-bbac-d220e5442ee0","Type":"ContainerStarted","Data":"8674220873c0d60a64cf2e9ce9f44eb5bf1ab3d9ea0043e909c64e75272b07cc"} Feb 03 12:28:55 crc kubenswrapper[4820]: I0203 12:28:55.553461 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-shb82" event={"ID":"32fc4e30-d6f9-431f-a147-b54659c292f4","Type":"ContainerStarted","Data":"93ead090a8653d9ab82ded32d8eb3d77fab8ab0ee9ffa5746d05c40c40fe3593"} Feb 03 12:28:55 crc kubenswrapper[4820]: I0203 12:28:55.553520 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-shb82" event={"ID":"32fc4e30-d6f9-431f-a147-b54659c292f4","Type":"ContainerStarted","Data":"4384d22feb0517b8b43ee79a700c8a4bf5e5dce0ba6c6bfea75db80b9147a27d"} Feb 03 12:28:55 crc kubenswrapper[4820]: I0203 12:28:55.560394 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-zlksq" event={"ID":"b897af0d-2b67-45c6-b17f-3686d5a419c0","Type":"ContainerStarted","Data":"970a95b7943ff4e2dc8c19a3bee1aedd4a2f3157d23140bac9eef67a33476fdf"} Feb 03 12:28:55 crc kubenswrapper[4820]: I0203 12:28:55.565248 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-97lpw" event={"ID":"fea163e7-ea8b-4888-8634-18323a2dfc2d","Type":"ContainerStarted","Data":"6fe5b7f0a2310a30738bfb5c2b610a72a585291d5dfce440e596fd4d2d057993"} Feb 03 12:28:55 crc kubenswrapper[4820]: I0203 12:28:55.571546 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-119e-account-create-update-gctg8" podStartSLOduration=9.571518553 podStartE2EDuration="9.571518553s" podCreationTimestamp="2026-02-03 12:28:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:28:55.561219275 +0000 UTC m=+1453.084295149" watchObservedRunningTime="2026-02-03 12:28:55.571518553 +0000 UTC m=+1453.094594417" Feb 03 12:28:55 crc kubenswrapper[4820]: I0203 12:28:55.591052 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-create-shb82" podStartSLOduration=9.59101855 podStartE2EDuration="9.59101855s" podCreationTimestamp="2026-02-03 12:28:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:28:55.576414126 +0000 UTC m=+1453.099490000" watchObservedRunningTime="2026-02-03 12:28:55.59101855 +0000 UTC m=+1453.114094414" Feb 03 12:28:55 crc kubenswrapper[4820]: I0203 12:28:55.976678 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-ec37-account-create-update-9qk8m" podStartSLOduration=9.976657464 podStartE2EDuration="9.976657464s" podCreationTimestamp="2026-02-03 12:28:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:28:55.975419341 +0000 UTC m=+1453.498495225" watchObservedRunningTime="2026-02-03 
12:28:55.976657464 +0000 UTC m=+1453.499733328" Feb 03 12:28:56 crc kubenswrapper[4820]: I0203 12:28:56.004954 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-create-97lpw" podStartSLOduration=10.004928567 podStartE2EDuration="10.004928567s" podCreationTimestamp="2026-02-03 12:28:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:28:55.994474475 +0000 UTC m=+1453.517550339" watchObservedRunningTime="2026-02-03 12:28:56.004928567 +0000 UTC m=+1453.528004431" Feb 03 12:28:56 crc kubenswrapper[4820]: I0203 12:28:56.772742 4820 generic.go:334] "Generic (PLEG): container finished" podID="fea163e7-ea8b-4888-8634-18323a2dfc2d" containerID="6fe5b7f0a2310a30738bfb5c2b610a72a585291d5dfce440e596fd4d2d057993" exitCode=0 Feb 03 12:28:56 crc kubenswrapper[4820]: I0203 12:28:56.772840 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-97lpw" event={"ID":"fea163e7-ea8b-4888-8634-18323a2dfc2d","Type":"ContainerDied","Data":"6fe5b7f0a2310a30738bfb5c2b610a72a585291d5dfce440e596fd4d2d057993"} Feb 03 12:28:56 crc kubenswrapper[4820]: I0203 12:28:56.787494 4820 generic.go:334] "Generic (PLEG): container finished" podID="ab7fb74b-aa61-420d-b013-f663b159cf8b" containerID="08da51c59a4a0840a2624deec089ea2c857bcd5d44291be7cd6ad4f51bc9054c" exitCode=0 Feb 03 12:28:56 crc kubenswrapper[4820]: I0203 12:28:56.787618 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-119e-account-create-update-gctg8" event={"ID":"ab7fb74b-aa61-420d-b013-f663b159cf8b","Type":"ContainerDied","Data":"08da51c59a4a0840a2624deec089ea2c857bcd5d44291be7cd6ad4f51bc9054c"} Feb 03 12:28:56 crc kubenswrapper[4820]: I0203 12:28:56.791727 4820 generic.go:334] "Generic (PLEG): container finished" podID="343cdd64-3829-4d0b-bbac-d220e5442ee0" containerID="8674220873c0d60a64cf2e9ce9f44eb5bf1ab3d9ea0043e909c64e75272b07cc" exitCode=0 Feb 03 12:28:56 crc kubenswrapper[4820]: I0203 12:28:56.791793 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-ec37-account-create-update-9qk8m" event={"ID":"343cdd64-3829-4d0b-bbac-d220e5442ee0","Type":"ContainerDied","Data":"8674220873c0d60a64cf2e9ce9f44eb5bf1ab3d9ea0043e909c64e75272b07cc"} Feb 03 12:28:56 crc kubenswrapper[4820]: I0203 12:28:56.798959 4820 generic.go:334] "Generic (PLEG): container finished" podID="32fc4e30-d6f9-431f-a147-b54659c292f4" containerID="93ead090a8653d9ab82ded32d8eb3d77fab8ab0ee9ffa5746d05c40c40fe3593" exitCode=0 Feb 03 12:28:56 crc kubenswrapper[4820]: I0203 12:28:56.799359 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-shb82" event={"ID":"32fc4e30-d6f9-431f-a147-b54659c292f4","Type":"ContainerDied","Data":"93ead090a8653d9ab82ded32d8eb3d77fab8ab0ee9ffa5746d05c40c40fe3593"} Feb 03 12:28:56 crc kubenswrapper[4820]: I0203 12:28:56.833210 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-kk5zn" Feb 03 12:28:57 crc kubenswrapper[4820]: I0203 12:28:57.414456 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4f272940-99d0-44a5-b16c-73b2b4f17bba" path="/var/lib/kubelet/pods/4f272940-99d0-44a5-b16c-73b2b4f17bba/volumes" Feb 03 12:28:57 crc kubenswrapper[4820]: I0203 12:28:57.521056 4820 util.go:48] "No ready sandbox for pod can be found. 
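The pod_startup_latency_tracker lines encode a small calculation: podStartE2EDuration is observedRunningTime minus podCreationTimestamp, and podStartSLOduration additionally excludes image-pull time (lastFinishedPulling minus firstStartedPulling; pods whose pull timestamps are the zero time pulled nothing, so the two durations match). Checking the swift-ring-rebalance-pslmr numbers with timestamps taken from the log:

```go
package main

import (
	"fmt"
	"time"
)

func mustParse(s string) time.Time {
	t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2026-02-03 12:28:42 +0000 UTC")
	firstPull := mustParse("2026-02-03 12:28:43.473624191 +0000 UTC")
	lastPull := mustParse("2026-02-03 12:28:53.349174871 +0000 UTC")
	running := mustParse("2026-02-03 12:28:54.587755198 +0000 UTC")

	e2e := running.Sub(created)          // 12.587755198s, the logged E2E duration
	slo := e2e - lastPull.Sub(firstPull) // 2.712204518s, the logged SLO duration
	fmt.Println(e2e, slo)
}
```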
Need to start a new one" pod="openstack/root-account-create-update-ptbp8" Feb 03 12:28:57 crc kubenswrapper[4820]: I0203 12:28:57.587790 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qb7br\" (UniqueName: \"kubernetes.io/projected/091fcf4f-9f71-4b4a-92ef-b856a2df672a-kube-api-access-qb7br\") pod \"091fcf4f-9f71-4b4a-92ef-b856a2df672a\" (UID: \"091fcf4f-9f71-4b4a-92ef-b856a2df672a\") " Feb 03 12:28:57 crc kubenswrapper[4820]: I0203 12:28:57.587843 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/091fcf4f-9f71-4b4a-92ef-b856a2df672a-operator-scripts\") pod \"091fcf4f-9f71-4b4a-92ef-b856a2df672a\" (UID: \"091fcf4f-9f71-4b4a-92ef-b856a2df672a\") " Feb 03 12:28:57 crc kubenswrapper[4820]: I0203 12:28:57.588629 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/091fcf4f-9f71-4b4a-92ef-b856a2df672a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "091fcf4f-9f71-4b4a-92ef-b856a2df672a" (UID: "091fcf4f-9f71-4b4a-92ef-b856a2df672a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:28:57 crc kubenswrapper[4820]: I0203 12:28:57.596533 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/091fcf4f-9f71-4b4a-92ef-b856a2df672a-kube-api-access-qb7br" (OuterVolumeSpecName: "kube-api-access-qb7br") pod "091fcf4f-9f71-4b4a-92ef-b856a2df672a" (UID: "091fcf4f-9f71-4b4a-92ef-b856a2df672a"). InnerVolumeSpecName "kube-api-access-qb7br". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:28:57 crc kubenswrapper[4820]: I0203 12:28:57.690456 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/d4eb10ed-a945-4b23-8fb3-62022a90e09f-etc-swift\") pod \"swift-storage-0\" (UID: \"d4eb10ed-a945-4b23-8fb3-62022a90e09f\") " pod="openstack/swift-storage-0" Feb 03 12:28:57 crc kubenswrapper[4820]: I0203 12:28:57.690597 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qb7br\" (UniqueName: \"kubernetes.io/projected/091fcf4f-9f71-4b4a-92ef-b856a2df672a-kube-api-access-qb7br\") on node \"crc\" DevicePath \"\"" Feb 03 12:28:57 crc kubenswrapper[4820]: I0203 12:28:57.690612 4820 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/091fcf4f-9f71-4b4a-92ef-b856a2df672a-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 12:28:57 crc kubenswrapper[4820]: E0203 12:28:57.690755 4820 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 03 12:28:57 crc kubenswrapper[4820]: E0203 12:28:57.690782 4820 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 03 12:28:57 crc kubenswrapper[4820]: E0203 12:28:57.690855 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d4eb10ed-a945-4b23-8fb3-62022a90e09f-etc-swift podName:d4eb10ed-a945-4b23-8fb3-62022a90e09f nodeName:}" failed. No retries permitted until 2026-02-03 12:29:13.690838514 +0000 UTC m=+1471.213914368 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/d4eb10ed-a945-4b23-8fb3-62022a90e09f-etc-swift") pod "swift-storage-0" (UID: "d4eb10ed-a945-4b23-8fb3-62022a90e09f") : configmap "swift-ring-files" not found Feb 03 12:28:57 crc kubenswrapper[4820]: I0203 12:28:57.811043 4820 generic.go:334] "Generic (PLEG): container finished" podID="18ae976d-57fb-4c6e-8f3d-af9748d3058a" containerID="c10ede440eb95f68092eed228fe7a5cbe1cfc99cc437c5af3a0964e3f2b6c398" exitCode=0 Feb 03 12:28:57 crc kubenswrapper[4820]: I0203 12:28:57.811141 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"18ae976d-57fb-4c6e-8f3d-af9748d3058a","Type":"ContainerDied","Data":"c10ede440eb95f68092eed228fe7a5cbe1cfc99cc437c5af3a0964e3f2b6c398"} Feb 03 12:28:57 crc kubenswrapper[4820]: I0203 12:28:57.817902 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-ptbp8" event={"ID":"091fcf4f-9f71-4b4a-92ef-b856a2df672a","Type":"ContainerDied","Data":"79daef99490811fff227bd2bed2848dff10635eb9f085652174420a0dab47d13"} Feb 03 12:28:57 crc kubenswrapper[4820]: I0203 12:28:57.817950 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="79daef99490811fff227bd2bed2848dff10635eb9f085652174420a0dab47d13" Feb 03 12:28:57 crc kubenswrapper[4820]: I0203 12:28:57.818044 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-ptbp8" Feb 03 12:28:57 crc kubenswrapper[4820]: I0203 12:28:57.821087 4820 generic.go:334] "Generic (PLEG): container finished" podID="62eb6ec6-669b-476d-929f-919b7f533a5a" containerID="16baf9e9f5f87ab6f2f078df976f2bf016e5daf6fdba848fbad1d934eb79f9e2" exitCode=0 Feb 03 12:28:57 crc kubenswrapper[4820]: I0203 12:28:57.821351 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"62eb6ec6-669b-476d-929f-919b7f533a5a","Type":"ContainerDied","Data":"16baf9e9f5f87ab6f2f078df976f2bf016e5daf6fdba848fbad1d934eb79f9e2"} Feb 03 12:28:58 crc kubenswrapper[4820]: I0203 12:28:58.228653 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-ec37-account-create-update-9qk8m" Feb 03 12:28:58 crc kubenswrapper[4820]: I0203 12:28:58.408195 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pp5jd\" (UniqueName: \"kubernetes.io/projected/343cdd64-3829-4d0b-bbac-d220e5442ee0-kube-api-access-pp5jd\") pod \"343cdd64-3829-4d0b-bbac-d220e5442ee0\" (UID: \"343cdd64-3829-4d0b-bbac-d220e5442ee0\") " Feb 03 12:28:58 crc kubenswrapper[4820]: I0203 12:28:58.409151 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/343cdd64-3829-4d0b-bbac-d220e5442ee0-operator-scripts\") pod \"343cdd64-3829-4d0b-bbac-d220e5442ee0\" (UID: \"343cdd64-3829-4d0b-bbac-d220e5442ee0\") " Feb 03 12:28:58 crc kubenswrapper[4820]: I0203 12:28:58.410319 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/343cdd64-3829-4d0b-bbac-d220e5442ee0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "343cdd64-3829-4d0b-bbac-d220e5442ee0" (UID: "343cdd64-3829-4d0b-bbac-d220e5442ee0"). InnerVolumeSpecName "operator-scripts". 
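Note the retry spacing on the etc-swift mount: the failure at 12:28:49 schedules a retry in 8s, and the failure at 12:28:57 schedules the next in 16s, consistent with an exponential backoff that doubles the delay after each consecutive error. A sketch of that progression (the starting delay and cap below are illustrative assumptions, not values read from the log):

```go
package main

import (
	"fmt"
	"time"
)

// nextDelay doubles the retry delay after each consecutive failure and
// clamps it at maxDelay, reproducing the 8s -> 16s progression above.
func nextDelay(current, maxDelay time.Duration) time.Duration {
	next := current * 2
	if next > maxDelay {
		return maxDelay
	}
	return next
}

func main() {
	delay := 4 * time.Second // assumed starting point; the log only shows 8s and 16s
	for i := 0; i < 4; i++ {
		delay = nextDelay(delay, 2*time.Minute)
		fmt.Printf("durationBeforeRetry %s\n", delay) // 8s, 16s, 32s, 1m4s
	}
}
```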
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:28:58 crc kubenswrapper[4820]: I0203 12:28:58.425697 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/343cdd64-3829-4d0b-bbac-d220e5442ee0-kube-api-access-pp5jd" (OuterVolumeSpecName: "kube-api-access-pp5jd") pod "343cdd64-3829-4d0b-bbac-d220e5442ee0" (UID: "343cdd64-3829-4d0b-bbac-d220e5442ee0"). InnerVolumeSpecName "kube-api-access-pp5jd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:28:58 crc kubenswrapper[4820]: I0203 12:28:58.512987 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pp5jd\" (UniqueName: \"kubernetes.io/projected/343cdd64-3829-4d0b-bbac-d220e5442ee0-kube-api-access-pp5jd\") on node \"crc\" DevicePath \"\"" Feb 03 12:28:58 crc kubenswrapper[4820]: I0203 12:28:58.513035 4820 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/343cdd64-3829-4d0b-bbac-d220e5442ee0-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 12:28:58 crc kubenswrapper[4820]: I0203 12:28:58.538513 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-119e-account-create-update-gctg8" Feb 03 12:28:58 crc kubenswrapper[4820]: I0203 12:28:58.661752 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-97lpw" Feb 03 12:28:58 crc kubenswrapper[4820]: I0203 12:28:58.680366 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-shb82" Feb 03 12:28:58 crc kubenswrapper[4820]: I0203 12:28:58.716484 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rcnbt\" (UniqueName: \"kubernetes.io/projected/ab7fb74b-aa61-420d-b013-f663b159cf8b-kube-api-access-rcnbt\") pod \"ab7fb74b-aa61-420d-b013-f663b159cf8b\" (UID: \"ab7fb74b-aa61-420d-b013-f663b159cf8b\") " Feb 03 12:28:58 crc kubenswrapper[4820]: I0203 12:28:58.716797 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ab7fb74b-aa61-420d-b013-f663b159cf8b-operator-scripts\") pod \"ab7fb74b-aa61-420d-b013-f663b159cf8b\" (UID: \"ab7fb74b-aa61-420d-b013-f663b159cf8b\") " Feb 03 12:28:58 crc kubenswrapper[4820]: I0203 12:28:58.717931 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ab7fb74b-aa61-420d-b013-f663b159cf8b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ab7fb74b-aa61-420d-b013-f663b159cf8b" (UID: "ab7fb74b-aa61-420d-b013-f663b159cf8b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:28:58 crc kubenswrapper[4820]: I0203 12:28:58.726228 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab7fb74b-aa61-420d-b013-f663b159cf8b-kube-api-access-rcnbt" (OuterVolumeSpecName: "kube-api-access-rcnbt") pod "ab7fb74b-aa61-420d-b013-f663b159cf8b" (UID: "ab7fb74b-aa61-420d-b013-f663b159cf8b"). InnerVolumeSpecName "kube-api-access-rcnbt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:28:58 crc kubenswrapper[4820]: I0203 12:28:58.818567 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fea163e7-ea8b-4888-8634-18323a2dfc2d-operator-scripts\") pod \"fea163e7-ea8b-4888-8634-18323a2dfc2d\" (UID: \"fea163e7-ea8b-4888-8634-18323a2dfc2d\") " Feb 03 12:28:58 crc kubenswrapper[4820]: I0203 12:28:58.818675 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/32fc4e30-d6f9-431f-a147-b54659c292f4-operator-scripts\") pod \"32fc4e30-d6f9-431f-a147-b54659c292f4\" (UID: \"32fc4e30-d6f9-431f-a147-b54659c292f4\") " Feb 03 12:28:58 crc kubenswrapper[4820]: I0203 12:28:58.818796 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cpkqd\" (UniqueName: \"kubernetes.io/projected/fea163e7-ea8b-4888-8634-18323a2dfc2d-kube-api-access-cpkqd\") pod \"fea163e7-ea8b-4888-8634-18323a2dfc2d\" (UID: \"fea163e7-ea8b-4888-8634-18323a2dfc2d\") " Feb 03 12:28:58 crc kubenswrapper[4820]: I0203 12:28:58.819301 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/32fc4e30-d6f9-431f-a147-b54659c292f4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "32fc4e30-d6f9-431f-a147-b54659c292f4" (UID: "32fc4e30-d6f9-431f-a147-b54659c292f4"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:28:58 crc kubenswrapper[4820]: I0203 12:28:58.819338 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fea163e7-ea8b-4888-8634-18323a2dfc2d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "fea163e7-ea8b-4888-8634-18323a2dfc2d" (UID: "fea163e7-ea8b-4888-8634-18323a2dfc2d"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:28:58 crc kubenswrapper[4820]: I0203 12:28:58.819509 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kk8sj\" (UniqueName: \"kubernetes.io/projected/32fc4e30-d6f9-431f-a147-b54659c292f4-kube-api-access-kk8sj\") pod \"32fc4e30-d6f9-431f-a147-b54659c292f4\" (UID: \"32fc4e30-d6f9-431f-a147-b54659c292f4\") " Feb 03 12:28:58 crc kubenswrapper[4820]: I0203 12:28:58.820377 4820 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fea163e7-ea8b-4888-8634-18323a2dfc2d-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 12:28:58 crc kubenswrapper[4820]: I0203 12:28:58.820402 4820 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/32fc4e30-d6f9-431f-a147-b54659c292f4-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 12:28:58 crc kubenswrapper[4820]: I0203 12:28:58.820416 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rcnbt\" (UniqueName: \"kubernetes.io/projected/ab7fb74b-aa61-420d-b013-f663b159cf8b-kube-api-access-rcnbt\") on node \"crc\" DevicePath \"\"" Feb 03 12:28:58 crc kubenswrapper[4820]: I0203 12:28:58.820433 4820 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ab7fb74b-aa61-420d-b013-f663b159cf8b-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 12:28:58 crc kubenswrapper[4820]: I0203 12:28:58.823696 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32fc4e30-d6f9-431f-a147-b54659c292f4-kube-api-access-kk8sj" (OuterVolumeSpecName: "kube-api-access-kk8sj") pod "32fc4e30-d6f9-431f-a147-b54659c292f4" (UID: "32fc4e30-d6f9-431f-a147-b54659c292f4"). InnerVolumeSpecName "kube-api-access-kk8sj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:28:58 crc kubenswrapper[4820]: I0203 12:28:58.823817 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fea163e7-ea8b-4888-8634-18323a2dfc2d-kube-api-access-cpkqd" (OuterVolumeSpecName: "kube-api-access-cpkqd") pod "fea163e7-ea8b-4888-8634-18323a2dfc2d" (UID: "fea163e7-ea8b-4888-8634-18323a2dfc2d"). InnerVolumeSpecName "kube-api-access-cpkqd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:28:58 crc kubenswrapper[4820]: I0203 12:28:58.834680 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"62eb6ec6-669b-476d-929f-919b7f533a5a","Type":"ContainerStarted","Data":"2f33cc05658334d8533fe376a75f11b566384089517933d61870c37049109c62"} Feb 03 12:28:58 crc kubenswrapper[4820]: I0203 12:28:58.835083 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Feb 03 12:28:58 crc kubenswrapper[4820]: I0203 12:28:58.840467 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-119e-account-create-update-gctg8" event={"ID":"ab7fb74b-aa61-420d-b013-f663b159cf8b","Type":"ContainerDied","Data":"f3b97011600bc8093340f66a3e94591dbe76e043a3226995eb789b0184ba1ef8"} Feb 03 12:28:58 crc kubenswrapper[4820]: I0203 12:28:58.840494 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-119e-account-create-update-gctg8" Feb 03 12:28:58 crc kubenswrapper[4820]: I0203 12:28:58.840511 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f3b97011600bc8093340f66a3e94591dbe76e043a3226995eb789b0184ba1ef8" Feb 03 12:28:58 crc kubenswrapper[4820]: I0203 12:28:58.842838 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"18ae976d-57fb-4c6e-8f3d-af9748d3058a","Type":"ContainerStarted","Data":"3c539d00fce621b18344b74f0c49d894626d0364db7118e68c2f1ca3ce327a39"} Feb 03 12:28:58 crc kubenswrapper[4820]: I0203 12:28:58.843168 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Feb 03 12:28:58 crc kubenswrapper[4820]: I0203 12:28:58.845630 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-ec37-account-create-update-9qk8m" event={"ID":"343cdd64-3829-4d0b-bbac-d220e5442ee0","Type":"ContainerDied","Data":"96b2c444d0b9be21d1627fb563cf381c65cef3408d5f807e714b7235927d61cb"} Feb 03 12:28:58 crc kubenswrapper[4820]: I0203 12:28:58.845675 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="96b2c444d0b9be21d1627fb563cf381c65cef3408d5f807e714b7235927d61cb" Feb 03 12:28:58 crc kubenswrapper[4820]: I0203 12:28:58.845741 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-ec37-account-create-update-9qk8m" Feb 03 12:28:58 crc kubenswrapper[4820]: I0203 12:28:58.851977 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-shb82" event={"ID":"32fc4e30-d6f9-431f-a147-b54659c292f4","Type":"ContainerDied","Data":"4384d22feb0517b8b43ee79a700c8a4bf5e5dce0ba6c6bfea75db80b9147a27d"} Feb 03 12:28:58 crc kubenswrapper[4820]: I0203 12:28:58.852032 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4384d22feb0517b8b43ee79a700c8a4bf5e5dce0ba6c6bfea75db80b9147a27d" Feb 03 12:28:58 crc kubenswrapper[4820]: I0203 12:28:58.852109 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-shb82" Feb 03 12:28:58 crc kubenswrapper[4820]: I0203 12:28:58.857732 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-97lpw" event={"ID":"fea163e7-ea8b-4888-8634-18323a2dfc2d","Type":"ContainerDied","Data":"18b8e6fe41ff8a2e4c1406aa08364d07d893c3b149f7f6ab70c7a247ea61ff27"} Feb 03 12:28:58 crc kubenswrapper[4820]: I0203 12:28:58.857771 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="18b8e6fe41ff8a2e4c1406aa08364d07d893c3b149f7f6ab70c7a247ea61ff27" Feb 03 12:28:58 crc kubenswrapper[4820]: I0203 12:28:58.857832 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-97lpw" Feb 03 12:28:58 crc kubenswrapper[4820]: I0203 12:28:58.899723 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=39.91421366 podStartE2EDuration="1m24.899700838s" podCreationTimestamp="2026-02-03 12:27:34 +0000 UTC" firstStartedPulling="2026-02-03 12:27:37.55568145 +0000 UTC m=+1375.078757314" lastFinishedPulling="2026-02-03 12:28:22.541168638 +0000 UTC m=+1420.064244492" observedRunningTime="2026-02-03 12:28:58.874160458 +0000 UTC m=+1456.397236342" watchObservedRunningTime="2026-02-03 12:28:58.899700838 +0000 UTC m=+1456.422776702" Feb 03 12:28:58 crc kubenswrapper[4820]: I0203 12:28:58.923397 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cpkqd\" (UniqueName: \"kubernetes.io/projected/fea163e7-ea8b-4888-8634-18323a2dfc2d-kube-api-access-cpkqd\") on node \"crc\" DevicePath \"\"" Feb 03 12:28:58 crc kubenswrapper[4820]: I0203 12:28:58.923440 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kk8sj\" (UniqueName: \"kubernetes.io/projected/32fc4e30-d6f9-431f-a147-b54659c292f4-kube-api-access-kk8sj\") on node \"crc\" DevicePath \"\"" Feb 03 12:28:58 crc kubenswrapper[4820]: I0203 12:28:58.944010 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=41.384562108 podStartE2EDuration="1m26.943985124s" podCreationTimestamp="2026-02-03 12:27:32 +0000 UTC" firstStartedPulling="2026-02-03 12:27:35.243310766 +0000 UTC m=+1372.766386640" lastFinishedPulling="2026-02-03 12:28:20.802733792 +0000 UTC m=+1418.325809656" observedRunningTime="2026-02-03 12:28:58.926489771 +0000 UTC m=+1456.449565655" watchObservedRunningTime="2026-02-03 12:28:58.943985124 +0000 UTC m=+1456.467060978" Feb 03 12:29:00 crc kubenswrapper[4820]: I0203 12:29:00.271217 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-ptbp8"] Feb 03 12:29:00 crc kubenswrapper[4820]: I0203 12:29:00.280035 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-ptbp8"] Feb 03 12:29:01 crc kubenswrapper[4820]: I0203 12:29:01.159249 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="091fcf4f-9f71-4b4a-92ef-b856a2df672a" path="/var/lib/kubelet/pods/091fcf4f-9f71-4b4a-92ef-b856a2df672a/volumes" Feb 03 12:29:01 crc kubenswrapper[4820]: I0203 12:29:01.596733 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-kk5zn" Feb 03 12:29:02 crc kubenswrapper[4820]: I0203 12:29:02.135810 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-96p5d-config-j62mf"] Feb 03 12:29:02 crc kubenswrapper[4820]: E0203 12:29:02.136343 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f272940-99d0-44a5-b16c-73b2b4f17bba" containerName="dnsmasq-dns" Feb 03 12:29:02 crc kubenswrapper[4820]: I0203 12:29:02.136360 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f272940-99d0-44a5-b16c-73b2b4f17bba" containerName="dnsmasq-dns" Feb 03 12:29:02 crc kubenswrapper[4820]: E0203 12:29:02.136376 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fea163e7-ea8b-4888-8634-18323a2dfc2d" containerName="mariadb-database-create" Feb 03 12:29:02 crc kubenswrapper[4820]: I0203 12:29:02.136384 4820 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="fea163e7-ea8b-4888-8634-18323a2dfc2d" containerName="mariadb-database-create" Feb 03 12:29:02 crc kubenswrapper[4820]: E0203 12:29:02.136402 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32fc4e30-d6f9-431f-a147-b54659c292f4" containerName="mariadb-database-create" Feb 03 12:29:02 crc kubenswrapper[4820]: I0203 12:29:02.136413 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="32fc4e30-d6f9-431f-a147-b54659c292f4" containerName="mariadb-database-create" Feb 03 12:29:02 crc kubenswrapper[4820]: E0203 12:29:02.136426 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f272940-99d0-44a5-b16c-73b2b4f17bba" containerName="init" Feb 03 12:29:02 crc kubenswrapper[4820]: I0203 12:29:02.136433 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f272940-99d0-44a5-b16c-73b2b4f17bba" containerName="init" Feb 03 12:29:02 crc kubenswrapper[4820]: E0203 12:29:02.136466 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab7fb74b-aa61-420d-b013-f663b159cf8b" containerName="mariadb-account-create-update" Feb 03 12:29:02 crc kubenswrapper[4820]: I0203 12:29:02.136476 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab7fb74b-aa61-420d-b013-f663b159cf8b" containerName="mariadb-account-create-update" Feb 03 12:29:02 crc kubenswrapper[4820]: E0203 12:29:02.136493 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="343cdd64-3829-4d0b-bbac-d220e5442ee0" containerName="mariadb-account-create-update" Feb 03 12:29:02 crc kubenswrapper[4820]: I0203 12:29:02.136502 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="343cdd64-3829-4d0b-bbac-d220e5442ee0" containerName="mariadb-account-create-update" Feb 03 12:29:02 crc kubenswrapper[4820]: E0203 12:29:02.136515 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="091fcf4f-9f71-4b4a-92ef-b856a2df672a" containerName="mariadb-account-create-update" Feb 03 12:29:02 crc kubenswrapper[4820]: I0203 12:29:02.136523 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="091fcf4f-9f71-4b4a-92ef-b856a2df672a" containerName="mariadb-account-create-update" Feb 03 12:29:02 crc kubenswrapper[4820]: I0203 12:29:02.136749 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="091fcf4f-9f71-4b4a-92ef-b856a2df672a" containerName="mariadb-account-create-update" Feb 03 12:29:02 crc kubenswrapper[4820]: I0203 12:29:02.136766 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f272940-99d0-44a5-b16c-73b2b4f17bba" containerName="dnsmasq-dns" Feb 03 12:29:02 crc kubenswrapper[4820]: I0203 12:29:02.136811 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="32fc4e30-d6f9-431f-a147-b54659c292f4" containerName="mariadb-database-create" Feb 03 12:29:02 crc kubenswrapper[4820]: I0203 12:29:02.136826 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="fea163e7-ea8b-4888-8634-18323a2dfc2d" containerName="mariadb-database-create" Feb 03 12:29:02 crc kubenswrapper[4820]: I0203 12:29:02.136838 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab7fb74b-aa61-420d-b013-f663b159cf8b" containerName="mariadb-account-create-update" Feb 03 12:29:02 crc kubenswrapper[4820]: I0203 12:29:02.136851 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="343cdd64-3829-4d0b-bbac-d220e5442ee0" containerName="mariadb-account-create-update" Feb 03 12:29:02 crc kubenswrapper[4820]: I0203 12:29:02.137653 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-96p5d-config-j62mf" Feb 03 12:29:02 crc kubenswrapper[4820]: I0203 12:29:02.146239 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Feb 03 12:29:02 crc kubenswrapper[4820]: I0203 12:29:02.214077 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-96p5d-config-j62mf"] Feb 03 12:29:02 crc kubenswrapper[4820]: I0203 12:29:02.317200 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2dbb291d-04ff-4ac0-a811-dc26bd62c6f2-scripts\") pod \"ovn-controller-96p5d-config-j62mf\" (UID: \"2dbb291d-04ff-4ac0-a811-dc26bd62c6f2\") " pod="openstack/ovn-controller-96p5d-config-j62mf" Feb 03 12:29:02 crc kubenswrapper[4820]: I0203 12:29:02.317251 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2dbb291d-04ff-4ac0-a811-dc26bd62c6f2-var-run\") pod \"ovn-controller-96p5d-config-j62mf\" (UID: \"2dbb291d-04ff-4ac0-a811-dc26bd62c6f2\") " pod="openstack/ovn-controller-96p5d-config-j62mf" Feb 03 12:29:02 crc kubenswrapper[4820]: I0203 12:29:02.317297 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/2dbb291d-04ff-4ac0-a811-dc26bd62c6f2-var-log-ovn\") pod \"ovn-controller-96p5d-config-j62mf\" (UID: \"2dbb291d-04ff-4ac0-a811-dc26bd62c6f2\") " pod="openstack/ovn-controller-96p5d-config-j62mf" Feb 03 12:29:02 crc kubenswrapper[4820]: I0203 12:29:02.317325 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/2dbb291d-04ff-4ac0-a811-dc26bd62c6f2-var-run-ovn\") pod \"ovn-controller-96p5d-config-j62mf\" (UID: \"2dbb291d-04ff-4ac0-a811-dc26bd62c6f2\") " pod="openstack/ovn-controller-96p5d-config-j62mf" Feb 03 12:29:02 crc kubenswrapper[4820]: I0203 12:29:02.317411 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7dq84\" (UniqueName: \"kubernetes.io/projected/2dbb291d-04ff-4ac0-a811-dc26bd62c6f2-kube-api-access-7dq84\") pod \"ovn-controller-96p5d-config-j62mf\" (UID: \"2dbb291d-04ff-4ac0-a811-dc26bd62c6f2\") " pod="openstack/ovn-controller-96p5d-config-j62mf" Feb 03 12:29:02 crc kubenswrapper[4820]: I0203 12:29:02.317428 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/2dbb291d-04ff-4ac0-a811-dc26bd62c6f2-additional-scripts\") pod \"ovn-controller-96p5d-config-j62mf\" (UID: \"2dbb291d-04ff-4ac0-a811-dc26bd62c6f2\") " pod="openstack/ovn-controller-96p5d-config-j62mf" Feb 03 12:29:02 crc kubenswrapper[4820]: I0203 12:29:02.419766 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2dbb291d-04ff-4ac0-a811-dc26bd62c6f2-scripts\") pod \"ovn-controller-96p5d-config-j62mf\" (UID: \"2dbb291d-04ff-4ac0-a811-dc26bd62c6f2\") " pod="openstack/ovn-controller-96p5d-config-j62mf" Feb 03 12:29:02 crc kubenswrapper[4820]: I0203 12:29:02.420147 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2dbb291d-04ff-4ac0-a811-dc26bd62c6f2-var-run\") pod 
\"ovn-controller-96p5d-config-j62mf\" (UID: \"2dbb291d-04ff-4ac0-a811-dc26bd62c6f2\") " pod="openstack/ovn-controller-96p5d-config-j62mf" Feb 03 12:29:02 crc kubenswrapper[4820]: I0203 12:29:02.420200 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/2dbb291d-04ff-4ac0-a811-dc26bd62c6f2-var-log-ovn\") pod \"ovn-controller-96p5d-config-j62mf\" (UID: \"2dbb291d-04ff-4ac0-a811-dc26bd62c6f2\") " pod="openstack/ovn-controller-96p5d-config-j62mf" Feb 03 12:29:02 crc kubenswrapper[4820]: I0203 12:29:02.420237 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/2dbb291d-04ff-4ac0-a811-dc26bd62c6f2-var-run-ovn\") pod \"ovn-controller-96p5d-config-j62mf\" (UID: \"2dbb291d-04ff-4ac0-a811-dc26bd62c6f2\") " pod="openstack/ovn-controller-96p5d-config-j62mf" Feb 03 12:29:02 crc kubenswrapper[4820]: I0203 12:29:02.420334 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7dq84\" (UniqueName: \"kubernetes.io/projected/2dbb291d-04ff-4ac0-a811-dc26bd62c6f2-kube-api-access-7dq84\") pod \"ovn-controller-96p5d-config-j62mf\" (UID: \"2dbb291d-04ff-4ac0-a811-dc26bd62c6f2\") " pod="openstack/ovn-controller-96p5d-config-j62mf" Feb 03 12:29:02 crc kubenswrapper[4820]: I0203 12:29:02.420359 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/2dbb291d-04ff-4ac0-a811-dc26bd62c6f2-additional-scripts\") pod \"ovn-controller-96p5d-config-j62mf\" (UID: \"2dbb291d-04ff-4ac0-a811-dc26bd62c6f2\") " pod="openstack/ovn-controller-96p5d-config-j62mf" Feb 03 12:29:02 crc kubenswrapper[4820]: I0203 12:29:02.420672 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2dbb291d-04ff-4ac0-a811-dc26bd62c6f2-var-run\") pod \"ovn-controller-96p5d-config-j62mf\" (UID: \"2dbb291d-04ff-4ac0-a811-dc26bd62c6f2\") " pod="openstack/ovn-controller-96p5d-config-j62mf" Feb 03 12:29:02 crc kubenswrapper[4820]: I0203 12:29:02.420685 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/2dbb291d-04ff-4ac0-a811-dc26bd62c6f2-var-log-ovn\") pod \"ovn-controller-96p5d-config-j62mf\" (UID: \"2dbb291d-04ff-4ac0-a811-dc26bd62c6f2\") " pod="openstack/ovn-controller-96p5d-config-j62mf" Feb 03 12:29:02 crc kubenswrapper[4820]: I0203 12:29:02.420800 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/2dbb291d-04ff-4ac0-a811-dc26bd62c6f2-var-run-ovn\") pod \"ovn-controller-96p5d-config-j62mf\" (UID: \"2dbb291d-04ff-4ac0-a811-dc26bd62c6f2\") " pod="openstack/ovn-controller-96p5d-config-j62mf" Feb 03 12:29:02 crc kubenswrapper[4820]: I0203 12:29:02.421723 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/2dbb291d-04ff-4ac0-a811-dc26bd62c6f2-additional-scripts\") pod \"ovn-controller-96p5d-config-j62mf\" (UID: \"2dbb291d-04ff-4ac0-a811-dc26bd62c6f2\") " pod="openstack/ovn-controller-96p5d-config-j62mf" Feb 03 12:29:02 crc kubenswrapper[4820]: I0203 12:29:02.423593 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2dbb291d-04ff-4ac0-a811-dc26bd62c6f2-scripts\") pod 
\"ovn-controller-96p5d-config-j62mf\" (UID: \"2dbb291d-04ff-4ac0-a811-dc26bd62c6f2\") " pod="openstack/ovn-controller-96p5d-config-j62mf" Feb 03 12:29:02 crc kubenswrapper[4820]: I0203 12:29:02.462767 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7dq84\" (UniqueName: \"kubernetes.io/projected/2dbb291d-04ff-4ac0-a811-dc26bd62c6f2-kube-api-access-7dq84\") pod \"ovn-controller-96p5d-config-j62mf\" (UID: \"2dbb291d-04ff-4ac0-a811-dc26bd62c6f2\") " pod="openstack/ovn-controller-96p5d-config-j62mf" Feb 03 12:29:02 crc kubenswrapper[4820]: I0203 12:29:02.759788 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-96p5d-config-j62mf" Feb 03 12:29:03 crc kubenswrapper[4820]: I0203 12:29:03.290471 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-96p5d-config-j62mf"] Feb 03 12:29:03 crc kubenswrapper[4820]: I0203 12:29:03.917794 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-96p5d-config-j62mf" event={"ID":"2dbb291d-04ff-4ac0-a811-dc26bd62c6f2","Type":"ContainerStarted","Data":"9ddb0db6be8f029bdec295dfbfd0f3ab899c22be7083edb646ad9e31ab3eb30d"} Feb 03 12:29:03 crc kubenswrapper[4820]: I0203 12:29:03.918183 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-96p5d-config-j62mf" event={"ID":"2dbb291d-04ff-4ac0-a811-dc26bd62c6f2","Type":"ContainerStarted","Data":"752b5b1003ee9d5a60d331d8265ab05341ae1f13b9542a4620164c9c17f6a557"} Feb 03 12:29:03 crc kubenswrapper[4820]: I0203 12:29:03.938534 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-96p5d-config-j62mf" podStartSLOduration=1.9385153960000001 podStartE2EDuration="1.938515396s" podCreationTimestamp="2026-02-03 12:29:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:29:03.936462061 +0000 UTC m=+1461.459537965" watchObservedRunningTime="2026-02-03 12:29:03.938515396 +0000 UTC m=+1461.461591260" Feb 03 12:29:04 crc kubenswrapper[4820]: I0203 12:29:04.929353 4820 generic.go:334] "Generic (PLEG): container finished" podID="94423319-f57f-47dd-80db-db41374dcb25" containerID="999c06d20e5fa8c1348b8eda49de76849a801f9e4d5b38b29fc594b89b2c9015" exitCode=0 Feb 03 12:29:04 crc kubenswrapper[4820]: I0203 12:29:04.929472 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-pslmr" event={"ID":"94423319-f57f-47dd-80db-db41374dcb25","Type":"ContainerDied","Data":"999c06d20e5fa8c1348b8eda49de76849a801f9e4d5b38b29fc594b89b2c9015"} Feb 03 12:29:04 crc kubenswrapper[4820]: I0203 12:29:04.932488 4820 generic.go:334] "Generic (PLEG): container finished" podID="2dbb291d-04ff-4ac0-a811-dc26bd62c6f2" containerID="9ddb0db6be8f029bdec295dfbfd0f3ab899c22be7083edb646ad9e31ab3eb30d" exitCode=0 Feb 03 12:29:04 crc kubenswrapper[4820]: I0203 12:29:04.932540 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-96p5d-config-j62mf" event={"ID":"2dbb291d-04ff-4ac0-a811-dc26bd62c6f2","Type":"ContainerDied","Data":"9ddb0db6be8f029bdec295dfbfd0f3ab899c22be7083edb646ad9e31ab3eb30d"} Feb 03 12:29:05 crc kubenswrapper[4820]: I0203 12:29:05.262466 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-rg9bk"] Feb 03 12:29:05 crc kubenswrapper[4820]: I0203 12:29:05.265354 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-rg9bk" Feb 03 12:29:05 crc kubenswrapper[4820]: I0203 12:29:05.267765 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Feb 03 12:29:05 crc kubenswrapper[4820]: I0203 12:29:05.290608 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-rg9bk"] Feb 03 12:29:05 crc kubenswrapper[4820]: I0203 12:29:05.394373 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wc72m\" (UniqueName: \"kubernetes.io/projected/8ebed9e0-c26c-435a-b024-b9e768922743-kube-api-access-wc72m\") pod \"root-account-create-update-rg9bk\" (UID: \"8ebed9e0-c26c-435a-b024-b9e768922743\") " pod="openstack/root-account-create-update-rg9bk" Feb 03 12:29:05 crc kubenswrapper[4820]: I0203 12:29:05.394448 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8ebed9e0-c26c-435a-b024-b9e768922743-operator-scripts\") pod \"root-account-create-update-rg9bk\" (UID: \"8ebed9e0-c26c-435a-b024-b9e768922743\") " pod="openstack/root-account-create-update-rg9bk" Feb 03 12:29:05 crc kubenswrapper[4820]: I0203 12:29:05.496475 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wc72m\" (UniqueName: \"kubernetes.io/projected/8ebed9e0-c26c-435a-b024-b9e768922743-kube-api-access-wc72m\") pod \"root-account-create-update-rg9bk\" (UID: \"8ebed9e0-c26c-435a-b024-b9e768922743\") " pod="openstack/root-account-create-update-rg9bk" Feb 03 12:29:05 crc kubenswrapper[4820]: I0203 12:29:05.496593 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8ebed9e0-c26c-435a-b024-b9e768922743-operator-scripts\") pod \"root-account-create-update-rg9bk\" (UID: \"8ebed9e0-c26c-435a-b024-b9e768922743\") " pod="openstack/root-account-create-update-rg9bk" Feb 03 12:29:05 crc kubenswrapper[4820]: I0203 12:29:05.497565 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8ebed9e0-c26c-435a-b024-b9e768922743-operator-scripts\") pod \"root-account-create-update-rg9bk\" (UID: \"8ebed9e0-c26c-435a-b024-b9e768922743\") " pod="openstack/root-account-create-update-rg9bk" Feb 03 12:29:05 crc kubenswrapper[4820]: I0203 12:29:05.519156 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wc72m\" (UniqueName: \"kubernetes.io/projected/8ebed9e0-c26c-435a-b024-b9e768922743-kube-api-access-wc72m\") pod \"root-account-create-update-rg9bk\" (UID: \"8ebed9e0-c26c-435a-b024-b9e768922743\") " pod="openstack/root-account-create-update-rg9bk" Feb 03 12:29:05 crc kubenswrapper[4820]: I0203 12:29:05.584259 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-rg9bk" Feb 03 12:29:06 crc kubenswrapper[4820]: I0203 12:29:06.422262 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-96p5d" Feb 03 12:29:07 crc kubenswrapper[4820]: I0203 12:29:07.962645 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"7b4c739c-d87f-478c-aec7-07a49da53d46","Type":"ContainerStarted","Data":"b19b83985ad7a3072169606b1626b31ec7359eade993fb7b14ca4cd26efba095"} Feb 03 12:29:11 crc kubenswrapper[4820]: I0203 12:29:11.343143 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-96p5d-config-j62mf" Feb 03 12:29:11 crc kubenswrapper[4820]: I0203 12:29:11.360335 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-pslmr" Feb 03 12:29:11 crc kubenswrapper[4820]: I0203 12:29:11.523228 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/94423319-f57f-47dd-80db-db41374dcb25-swiftconf\") pod \"94423319-f57f-47dd-80db-db41374dcb25\" (UID: \"94423319-f57f-47dd-80db-db41374dcb25\") " Feb 03 12:29:11 crc kubenswrapper[4820]: I0203 12:29:11.523592 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2dbb291d-04ff-4ac0-a811-dc26bd62c6f2-scripts\") pod \"2dbb291d-04ff-4ac0-a811-dc26bd62c6f2\" (UID: \"2dbb291d-04ff-4ac0-a811-dc26bd62c6f2\") " Feb 03 12:29:11 crc kubenswrapper[4820]: I0203 12:29:11.523671 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/94423319-f57f-47dd-80db-db41374dcb25-ring-data-devices\") pod \"94423319-f57f-47dd-80db-db41374dcb25\" (UID: \"94423319-f57f-47dd-80db-db41374dcb25\") " Feb 03 12:29:11 crc kubenswrapper[4820]: I0203 12:29:11.523697 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/94423319-f57f-47dd-80db-db41374dcb25-etc-swift\") pod \"94423319-f57f-47dd-80db-db41374dcb25\" (UID: \"94423319-f57f-47dd-80db-db41374dcb25\") " Feb 03 12:29:11 crc kubenswrapper[4820]: I0203 12:29:11.523762 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94423319-f57f-47dd-80db-db41374dcb25-combined-ca-bundle\") pod \"94423319-f57f-47dd-80db-db41374dcb25\" (UID: \"94423319-f57f-47dd-80db-db41374dcb25\") " Feb 03 12:29:11 crc kubenswrapper[4820]: I0203 12:29:11.523792 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/2dbb291d-04ff-4ac0-a811-dc26bd62c6f2-var-run-ovn\") pod \"2dbb291d-04ff-4ac0-a811-dc26bd62c6f2\" (UID: \"2dbb291d-04ff-4ac0-a811-dc26bd62c6f2\") " Feb 03 12:29:11 crc kubenswrapper[4820]: I0203 12:29:11.523836 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/2dbb291d-04ff-4ac0-a811-dc26bd62c6f2-var-log-ovn\") pod \"2dbb291d-04ff-4ac0-a811-dc26bd62c6f2\" (UID: \"2dbb291d-04ff-4ac0-a811-dc26bd62c6f2\") " Feb 03 12:29:11 crc kubenswrapper[4820]: I0203 12:29:11.523866 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" 
(UniqueName: \"kubernetes.io/configmap/2dbb291d-04ff-4ac0-a811-dc26bd62c6f2-additional-scripts\") pod \"2dbb291d-04ff-4ac0-a811-dc26bd62c6f2\" (UID: \"2dbb291d-04ff-4ac0-a811-dc26bd62c6f2\") " Feb 03 12:29:11 crc kubenswrapper[4820]: I0203 12:29:11.523908 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/94423319-f57f-47dd-80db-db41374dcb25-dispersionconf\") pod \"94423319-f57f-47dd-80db-db41374dcb25\" (UID: \"94423319-f57f-47dd-80db-db41374dcb25\") " Feb 03 12:29:11 crc kubenswrapper[4820]: I0203 12:29:11.523930 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/94423319-f57f-47dd-80db-db41374dcb25-scripts\") pod \"94423319-f57f-47dd-80db-db41374dcb25\" (UID: \"94423319-f57f-47dd-80db-db41374dcb25\") " Feb 03 12:29:11 crc kubenswrapper[4820]: I0203 12:29:11.523985 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7dq84\" (UniqueName: \"kubernetes.io/projected/2dbb291d-04ff-4ac0-a811-dc26bd62c6f2-kube-api-access-7dq84\") pod \"2dbb291d-04ff-4ac0-a811-dc26bd62c6f2\" (UID: \"2dbb291d-04ff-4ac0-a811-dc26bd62c6f2\") " Feb 03 12:29:11 crc kubenswrapper[4820]: I0203 12:29:11.524021 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lj4qf\" (UniqueName: \"kubernetes.io/projected/94423319-f57f-47dd-80db-db41374dcb25-kube-api-access-lj4qf\") pod \"94423319-f57f-47dd-80db-db41374dcb25\" (UID: \"94423319-f57f-47dd-80db-db41374dcb25\") " Feb 03 12:29:11 crc kubenswrapper[4820]: I0203 12:29:11.524068 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2dbb291d-04ff-4ac0-a811-dc26bd62c6f2-var-run\") pod \"2dbb291d-04ff-4ac0-a811-dc26bd62c6f2\" (UID: \"2dbb291d-04ff-4ac0-a811-dc26bd62c6f2\") " Feb 03 12:29:11 crc kubenswrapper[4820]: I0203 12:29:11.524314 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2dbb291d-04ff-4ac0-a811-dc26bd62c6f2-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "2dbb291d-04ff-4ac0-a811-dc26bd62c6f2" (UID: "2dbb291d-04ff-4ac0-a811-dc26bd62c6f2"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 03 12:29:11 crc kubenswrapper[4820]: I0203 12:29:11.524414 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2dbb291d-04ff-4ac0-a811-dc26bd62c6f2-var-run" (OuterVolumeSpecName: "var-run") pod "2dbb291d-04ff-4ac0-a811-dc26bd62c6f2" (UID: "2dbb291d-04ff-4ac0-a811-dc26bd62c6f2"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 03 12:29:11 crc kubenswrapper[4820]: I0203 12:29:11.524439 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2dbb291d-04ff-4ac0-a811-dc26bd62c6f2-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "2dbb291d-04ff-4ac0-a811-dc26bd62c6f2" (UID: "2dbb291d-04ff-4ac0-a811-dc26bd62c6f2"). InnerVolumeSpecName "var-log-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 03 12:29:11 crc kubenswrapper[4820]: I0203 12:29:11.524968 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/94423319-f57f-47dd-80db-db41374dcb25-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "94423319-f57f-47dd-80db-db41374dcb25" (UID: "94423319-f57f-47dd-80db-db41374dcb25"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:29:11 crc kubenswrapper[4820]: I0203 12:29:11.525528 4820 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2dbb291d-04ff-4ac0-a811-dc26bd62c6f2-var-run\") on node \"crc\" DevicePath \"\"" Feb 03 12:29:11 crc kubenswrapper[4820]: I0203 12:29:11.525569 4820 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/94423319-f57f-47dd-80db-db41374dcb25-ring-data-devices\") on node \"crc\" DevicePath \"\"" Feb 03 12:29:11 crc kubenswrapper[4820]: I0203 12:29:11.525592 4820 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/2dbb291d-04ff-4ac0-a811-dc26bd62c6f2-var-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 03 12:29:11 crc kubenswrapper[4820]: I0203 12:29:11.525604 4820 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/2dbb291d-04ff-4ac0-a811-dc26bd62c6f2-var-log-ovn\") on node \"crc\" DevicePath \"\"" Feb 03 12:29:11 crc kubenswrapper[4820]: I0203 12:29:11.530008 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2dbb291d-04ff-4ac0-a811-dc26bd62c6f2-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "2dbb291d-04ff-4ac0-a811-dc26bd62c6f2" (UID: "2dbb291d-04ff-4ac0-a811-dc26bd62c6f2"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:29:11 crc kubenswrapper[4820]: I0203 12:29:11.530348 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2dbb291d-04ff-4ac0-a811-dc26bd62c6f2-scripts" (OuterVolumeSpecName: "scripts") pod "2dbb291d-04ff-4ac0-a811-dc26bd62c6f2" (UID: "2dbb291d-04ff-4ac0-a811-dc26bd62c6f2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:29:11 crc kubenswrapper[4820]: I0203 12:29:11.530401 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/94423319-f57f-47dd-80db-db41374dcb25-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "94423319-f57f-47dd-80db-db41374dcb25" (UID: "94423319-f57f-47dd-80db-db41374dcb25"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 12:29:11 crc kubenswrapper[4820]: I0203 12:29:11.532924 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94423319-f57f-47dd-80db-db41374dcb25-kube-api-access-lj4qf" (OuterVolumeSpecName: "kube-api-access-lj4qf") pod "94423319-f57f-47dd-80db-db41374dcb25" (UID: "94423319-f57f-47dd-80db-db41374dcb25"). InnerVolumeSpecName "kube-api-access-lj4qf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:29:11 crc kubenswrapper[4820]: I0203 12:29:11.536049 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2dbb291d-04ff-4ac0-a811-dc26bd62c6f2-kube-api-access-7dq84" (OuterVolumeSpecName: "kube-api-access-7dq84") pod "2dbb291d-04ff-4ac0-a811-dc26bd62c6f2" (UID: "2dbb291d-04ff-4ac0-a811-dc26bd62c6f2"). InnerVolumeSpecName "kube-api-access-7dq84". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:29:11 crc kubenswrapper[4820]: I0203 12:29:11.537590 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94423319-f57f-47dd-80db-db41374dcb25-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "94423319-f57f-47dd-80db-db41374dcb25" (UID: "94423319-f57f-47dd-80db-db41374dcb25"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:29:11 crc kubenswrapper[4820]: I0203 12:29:11.554284 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/94423319-f57f-47dd-80db-db41374dcb25-scripts" (OuterVolumeSpecName: "scripts") pod "94423319-f57f-47dd-80db-db41374dcb25" (UID: "94423319-f57f-47dd-80db-db41374dcb25"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:29:11 crc kubenswrapper[4820]: I0203 12:29:11.562395 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94423319-f57f-47dd-80db-db41374dcb25-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "94423319-f57f-47dd-80db-db41374dcb25" (UID: "94423319-f57f-47dd-80db-db41374dcb25"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:29:11 crc kubenswrapper[4820]: I0203 12:29:11.569010 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94423319-f57f-47dd-80db-db41374dcb25-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "94423319-f57f-47dd-80db-db41374dcb25" (UID: "94423319-f57f-47dd-80db-db41374dcb25"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:29:11 crc kubenswrapper[4820]: I0203 12:29:11.627623 4820 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/94423319-f57f-47dd-80db-db41374dcb25-dispersionconf\") on node \"crc\" DevicePath \"\"" Feb 03 12:29:11 crc kubenswrapper[4820]: I0203 12:29:11.627653 4820 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/94423319-f57f-47dd-80db-db41374dcb25-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 12:29:11 crc kubenswrapper[4820]: I0203 12:29:11.627665 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7dq84\" (UniqueName: \"kubernetes.io/projected/2dbb291d-04ff-4ac0-a811-dc26bd62c6f2-kube-api-access-7dq84\") on node \"crc\" DevicePath \"\"" Feb 03 12:29:11 crc kubenswrapper[4820]: I0203 12:29:11.627677 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lj4qf\" (UniqueName: \"kubernetes.io/projected/94423319-f57f-47dd-80db-db41374dcb25-kube-api-access-lj4qf\") on node \"crc\" DevicePath \"\"" Feb 03 12:29:11 crc kubenswrapper[4820]: I0203 12:29:11.627685 4820 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/94423319-f57f-47dd-80db-db41374dcb25-swiftconf\") on node \"crc\" DevicePath \"\"" Feb 03 12:29:11 crc kubenswrapper[4820]: I0203 12:29:11.627695 4820 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2dbb291d-04ff-4ac0-a811-dc26bd62c6f2-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 12:29:11 crc kubenswrapper[4820]: I0203 12:29:11.627704 4820 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/94423319-f57f-47dd-80db-db41374dcb25-etc-swift\") on node \"crc\" DevicePath \"\"" Feb 03 12:29:11 crc kubenswrapper[4820]: I0203 12:29:11.627712 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94423319-f57f-47dd-80db-db41374dcb25-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 12:29:11 crc kubenswrapper[4820]: I0203 12:29:11.627722 4820 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/2dbb291d-04ff-4ac0-a811-dc26bd62c6f2-additional-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 12:29:11 crc kubenswrapper[4820]: I0203 12:29:11.715072 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-rg9bk"] Feb 03 12:29:11 crc kubenswrapper[4820]: W0203 12:29:11.733516 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8ebed9e0_c26c_435a_b024_b9e768922743.slice/crio-c7231266a615e55b0e00fbb39e6361e72f519e6ee823ce2deeeecda2571e2271 WatchSource:0}: Error finding container c7231266a615e55b0e00fbb39e6361e72f519e6ee823ce2deeeecda2571e2271: Status 404 returned error can't find the container with id c7231266a615e55b0e00fbb39e6361e72f519e6ee823ce2deeeecda2571e2271 Feb 03 12:29:12 crc kubenswrapper[4820]: I0203 12:29:12.022235 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-zlksq" event={"ID":"b897af0d-2b67-45c6-b17f-3686d5a419c0","Type":"ContainerStarted","Data":"d3fee94a4fab8c8fed28cce3e70b696c0512dd6b3cb9216afa62e9ed717bb306"} Feb 03 12:29:12 crc kubenswrapper[4820]: I0203 12:29:12.026784 4820 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-pslmr" event={"ID":"94423319-f57f-47dd-80db-db41374dcb25","Type":"ContainerDied","Data":"7997fd2c5eeed02bd5d71fb5f47d4345b705ee009ae143e78e599dc69db46a5a"} Feb 03 12:29:12 crc kubenswrapper[4820]: I0203 12:29:12.027135 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7997fd2c5eeed02bd5d71fb5f47d4345b705ee009ae143e78e599dc69db46a5a" Feb 03 12:29:12 crc kubenswrapper[4820]: I0203 12:29:12.026925 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-pslmr" Feb 03 12:29:12 crc kubenswrapper[4820]: I0203 12:29:12.030054 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-96p5d-config-j62mf" Feb 03 12:29:12 crc kubenswrapper[4820]: I0203 12:29:12.030088 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-96p5d-config-j62mf" event={"ID":"2dbb291d-04ff-4ac0-a811-dc26bd62c6f2","Type":"ContainerDied","Data":"752b5b1003ee9d5a60d331d8265ab05341ae1f13b9542a4620164c9c17f6a557"} Feb 03 12:29:12 crc kubenswrapper[4820]: I0203 12:29:12.030187 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="752b5b1003ee9d5a60d331d8265ab05341ae1f13b9542a4620164c9c17f6a557" Feb 03 12:29:12 crc kubenswrapper[4820]: I0203 12:29:12.035754 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-rg9bk" event={"ID":"8ebed9e0-c26c-435a-b024-b9e768922743","Type":"ContainerStarted","Data":"4c18f9f3d5bde2a0f49e729b1cdca1a0258ceca11a7d7acf5f0d3b2546c20880"} Feb 03 12:29:12 crc kubenswrapper[4820]: I0203 12:29:12.036022 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-rg9bk" event={"ID":"8ebed9e0-c26c-435a-b024-b9e768922743","Type":"ContainerStarted","Data":"c7231266a615e55b0e00fbb39e6361e72f519e6ee823ce2deeeecda2571e2271"} Feb 03 12:29:12 crc kubenswrapper[4820]: I0203 12:29:12.040825 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-zlksq" podStartSLOduration=3.026522421 podStartE2EDuration="19.04079685s" podCreationTimestamp="2026-02-03 12:28:53 +0000 UTC" firstStartedPulling="2026-02-03 12:28:55.331072151 +0000 UTC m=+1452.854148015" lastFinishedPulling="2026-02-03 12:29:11.34534658 +0000 UTC m=+1468.868422444" observedRunningTime="2026-02-03 12:29:12.036981257 +0000 UTC m=+1469.560057131" watchObservedRunningTime="2026-02-03 12:29:12.04079685 +0000 UTC m=+1469.563873054" Feb 03 12:29:12 crc kubenswrapper[4820]: I0203 12:29:12.060497 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-rg9bk" podStartSLOduration=7.060477362 podStartE2EDuration="7.060477362s" podCreationTimestamp="2026-02-03 12:29:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:29:12.059435104 +0000 UTC m=+1469.582510968" watchObservedRunningTime="2026-02-03 12:29:12.060477362 +0000 UTC m=+1469.583553226" Feb 03 12:29:12 crc kubenswrapper[4820]: I0203 12:29:12.448940 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-96p5d-config-j62mf"] Feb 03 12:29:12 crc kubenswrapper[4820]: I0203 12:29:12.459701 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-96p5d-config-j62mf"] Feb 03 12:29:12 crc 
kubenswrapper[4820]: I0203 12:29:12.616014 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-96p5d-config-lj4n2"] Feb 03 12:29:12 crc kubenswrapper[4820]: E0203 12:29:12.616485 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2dbb291d-04ff-4ac0-a811-dc26bd62c6f2" containerName="ovn-config" Feb 03 12:29:12 crc kubenswrapper[4820]: I0203 12:29:12.616507 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="2dbb291d-04ff-4ac0-a811-dc26bd62c6f2" containerName="ovn-config" Feb 03 12:29:12 crc kubenswrapper[4820]: E0203 12:29:12.616535 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94423319-f57f-47dd-80db-db41374dcb25" containerName="swift-ring-rebalance" Feb 03 12:29:12 crc kubenswrapper[4820]: I0203 12:29:12.616544 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="94423319-f57f-47dd-80db-db41374dcb25" containerName="swift-ring-rebalance" Feb 03 12:29:12 crc kubenswrapper[4820]: I0203 12:29:12.616774 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="2dbb291d-04ff-4ac0-a811-dc26bd62c6f2" containerName="ovn-config" Feb 03 12:29:12 crc kubenswrapper[4820]: I0203 12:29:12.616806 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="94423319-f57f-47dd-80db-db41374dcb25" containerName="swift-ring-rebalance" Feb 03 12:29:12 crc kubenswrapper[4820]: I0203 12:29:12.617587 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-96p5d-config-lj4n2" Feb 03 12:29:12 crc kubenswrapper[4820]: I0203 12:29:12.621298 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Feb 03 12:29:12 crc kubenswrapper[4820]: I0203 12:29:12.636470 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-96p5d-config-lj4n2"] Feb 03 12:29:12 crc kubenswrapper[4820]: I0203 12:29:12.750762 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9650bfa4-56cb-482b-935b-9234fbbf7a42-scripts\") pod \"ovn-controller-96p5d-config-lj4n2\" (UID: \"9650bfa4-56cb-482b-935b-9234fbbf7a42\") " pod="openstack/ovn-controller-96p5d-config-lj4n2" Feb 03 12:29:12 crc kubenswrapper[4820]: I0203 12:29:12.750912 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9650bfa4-56cb-482b-935b-9234fbbf7a42-var-run\") pod \"ovn-controller-96p5d-config-lj4n2\" (UID: \"9650bfa4-56cb-482b-935b-9234fbbf7a42\") " pod="openstack/ovn-controller-96p5d-config-lj4n2" Feb 03 12:29:12 crc kubenswrapper[4820]: I0203 12:29:12.750975 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/9650bfa4-56cb-482b-935b-9234fbbf7a42-var-run-ovn\") pod \"ovn-controller-96p5d-config-lj4n2\" (UID: \"9650bfa4-56cb-482b-935b-9234fbbf7a42\") " pod="openstack/ovn-controller-96p5d-config-lj4n2" Feb 03 12:29:12 crc kubenswrapper[4820]: I0203 12:29:12.750996 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9wth7\" (UniqueName: \"kubernetes.io/projected/9650bfa4-56cb-482b-935b-9234fbbf7a42-kube-api-access-9wth7\") pod \"ovn-controller-96p5d-config-lj4n2\" (UID: \"9650bfa4-56cb-482b-935b-9234fbbf7a42\") " pod="openstack/ovn-controller-96p5d-config-lj4n2" Feb 03 12:29:12 crc 
kubenswrapper[4820]: I0203 12:29:12.751175 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/9650bfa4-56cb-482b-935b-9234fbbf7a42-additional-scripts\") pod \"ovn-controller-96p5d-config-lj4n2\" (UID: \"9650bfa4-56cb-482b-935b-9234fbbf7a42\") " pod="openstack/ovn-controller-96p5d-config-lj4n2" Feb 03 12:29:12 crc kubenswrapper[4820]: I0203 12:29:12.751220 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/9650bfa4-56cb-482b-935b-9234fbbf7a42-var-log-ovn\") pod \"ovn-controller-96p5d-config-lj4n2\" (UID: \"9650bfa4-56cb-482b-935b-9234fbbf7a42\") " pod="openstack/ovn-controller-96p5d-config-lj4n2" Feb 03 12:29:12 crc kubenswrapper[4820]: I0203 12:29:12.852827 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9wth7\" (UniqueName: \"kubernetes.io/projected/9650bfa4-56cb-482b-935b-9234fbbf7a42-kube-api-access-9wth7\") pod \"ovn-controller-96p5d-config-lj4n2\" (UID: \"9650bfa4-56cb-482b-935b-9234fbbf7a42\") " pod="openstack/ovn-controller-96p5d-config-lj4n2" Feb 03 12:29:12 crc kubenswrapper[4820]: I0203 12:29:12.852978 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/9650bfa4-56cb-482b-935b-9234fbbf7a42-additional-scripts\") pod \"ovn-controller-96p5d-config-lj4n2\" (UID: \"9650bfa4-56cb-482b-935b-9234fbbf7a42\") " pod="openstack/ovn-controller-96p5d-config-lj4n2" Feb 03 12:29:12 crc kubenswrapper[4820]: I0203 12:29:12.853002 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/9650bfa4-56cb-482b-935b-9234fbbf7a42-var-log-ovn\") pod \"ovn-controller-96p5d-config-lj4n2\" (UID: \"9650bfa4-56cb-482b-935b-9234fbbf7a42\") " pod="openstack/ovn-controller-96p5d-config-lj4n2" Feb 03 12:29:12 crc kubenswrapper[4820]: I0203 12:29:12.853035 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9650bfa4-56cb-482b-935b-9234fbbf7a42-scripts\") pod \"ovn-controller-96p5d-config-lj4n2\" (UID: \"9650bfa4-56cb-482b-935b-9234fbbf7a42\") " pod="openstack/ovn-controller-96p5d-config-lj4n2" Feb 03 12:29:12 crc kubenswrapper[4820]: I0203 12:29:12.853093 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9650bfa4-56cb-482b-935b-9234fbbf7a42-var-run\") pod \"ovn-controller-96p5d-config-lj4n2\" (UID: \"9650bfa4-56cb-482b-935b-9234fbbf7a42\") " pod="openstack/ovn-controller-96p5d-config-lj4n2" Feb 03 12:29:12 crc kubenswrapper[4820]: I0203 12:29:12.853146 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/9650bfa4-56cb-482b-935b-9234fbbf7a42-var-run-ovn\") pod \"ovn-controller-96p5d-config-lj4n2\" (UID: \"9650bfa4-56cb-482b-935b-9234fbbf7a42\") " pod="openstack/ovn-controller-96p5d-config-lj4n2" Feb 03 12:29:12 crc kubenswrapper[4820]: I0203 12:29:12.853447 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9650bfa4-56cb-482b-935b-9234fbbf7a42-var-run\") pod \"ovn-controller-96p5d-config-lj4n2\" (UID: \"9650bfa4-56cb-482b-935b-9234fbbf7a42\") " 
pod="openstack/ovn-controller-96p5d-config-lj4n2" Feb 03 12:29:12 crc kubenswrapper[4820]: I0203 12:29:12.853485 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/9650bfa4-56cb-482b-935b-9234fbbf7a42-var-log-ovn\") pod \"ovn-controller-96p5d-config-lj4n2\" (UID: \"9650bfa4-56cb-482b-935b-9234fbbf7a42\") " pod="openstack/ovn-controller-96p5d-config-lj4n2" Feb 03 12:29:12 crc kubenswrapper[4820]: I0203 12:29:12.853927 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/9650bfa4-56cb-482b-935b-9234fbbf7a42-additional-scripts\") pod \"ovn-controller-96p5d-config-lj4n2\" (UID: \"9650bfa4-56cb-482b-935b-9234fbbf7a42\") " pod="openstack/ovn-controller-96p5d-config-lj4n2" Feb 03 12:29:12 crc kubenswrapper[4820]: I0203 12:29:12.854700 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/9650bfa4-56cb-482b-935b-9234fbbf7a42-var-run-ovn\") pod \"ovn-controller-96p5d-config-lj4n2\" (UID: \"9650bfa4-56cb-482b-935b-9234fbbf7a42\") " pod="openstack/ovn-controller-96p5d-config-lj4n2" Feb 03 12:29:12 crc kubenswrapper[4820]: I0203 12:29:12.855029 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9650bfa4-56cb-482b-935b-9234fbbf7a42-scripts\") pod \"ovn-controller-96p5d-config-lj4n2\" (UID: \"9650bfa4-56cb-482b-935b-9234fbbf7a42\") " pod="openstack/ovn-controller-96p5d-config-lj4n2" Feb 03 12:29:12 crc kubenswrapper[4820]: I0203 12:29:12.886386 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9wth7\" (UniqueName: \"kubernetes.io/projected/9650bfa4-56cb-482b-935b-9234fbbf7a42-kube-api-access-9wth7\") pod \"ovn-controller-96p5d-config-lj4n2\" (UID: \"9650bfa4-56cb-482b-935b-9234fbbf7a42\") " pod="openstack/ovn-controller-96p5d-config-lj4n2" Feb 03 12:29:12 crc kubenswrapper[4820]: I0203 12:29:12.940100 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-96p5d-config-lj4n2" Feb 03 12:29:13 crc kubenswrapper[4820]: I0203 12:29:13.045592 4820 generic.go:334] "Generic (PLEG): container finished" podID="8ebed9e0-c26c-435a-b024-b9e768922743" containerID="4c18f9f3d5bde2a0f49e729b1cdca1a0258ceca11a7d7acf5f0d3b2546c20880" exitCode=0 Feb 03 12:29:13 crc kubenswrapper[4820]: I0203 12:29:13.045773 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-rg9bk" event={"ID":"8ebed9e0-c26c-435a-b024-b9e768922743","Type":"ContainerDied","Data":"4c18f9f3d5bde2a0f49e729b1cdca1a0258ceca11a7d7acf5f0d3b2546c20880"} Feb 03 12:29:13 crc kubenswrapper[4820]: I0203 12:29:13.156253 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2dbb291d-04ff-4ac0-a811-dc26bd62c6f2" path="/var/lib/kubelet/pods/2dbb291d-04ff-4ac0-a811-dc26bd62c6f2/volumes" Feb 03 12:29:13 crc kubenswrapper[4820]: I0203 12:29:13.771643 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/d4eb10ed-a945-4b23-8fb3-62022a90e09f-etc-swift\") pod \"swift-storage-0\" (UID: \"d4eb10ed-a945-4b23-8fb3-62022a90e09f\") " pod="openstack/swift-storage-0" Feb 03 12:29:13 crc kubenswrapper[4820]: I0203 12:29:13.778428 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/d4eb10ed-a945-4b23-8fb3-62022a90e09f-etc-swift\") pod \"swift-storage-0\" (UID: \"d4eb10ed-a945-4b23-8fb3-62022a90e09f\") " pod="openstack/swift-storage-0" Feb 03 12:29:13 crc kubenswrapper[4820]: I0203 12:29:13.888865 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-96p5d-config-lj4n2"] Feb 03 12:29:14 crc kubenswrapper[4820]: I0203 12:29:14.012304 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Feb 03 12:29:14 crc kubenswrapper[4820]: I0203 12:29:14.055170 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-96p5d-config-lj4n2" event={"ID":"9650bfa4-56cb-482b-935b-9234fbbf7a42","Type":"ContainerStarted","Data":"9dcbb91d32ca55d66ac24d34aecd7fa799b04b6e064c899a4648da8aba202912"} Feb 03 12:29:14 crc kubenswrapper[4820]: I0203 12:29:14.058621 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"7b4c739c-d87f-478c-aec7-07a49da53d46","Type":"ContainerStarted","Data":"be65c3a9461153121494e27523a7bd3e433a2fb7373a387862cdee772f4fe4a5"} Feb 03 12:29:14 crc kubenswrapper[4820]: I0203 12:29:14.092431 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=4.115081848 podStartE2EDuration="1m34.092408852s" podCreationTimestamp="2026-02-03 12:27:40 +0000 UTC" firstStartedPulling="2026-02-03 12:27:43.527139641 +0000 UTC m=+1381.050215505" lastFinishedPulling="2026-02-03 12:29:13.504466645 +0000 UTC m=+1471.027542509" observedRunningTime="2026-02-03 12:29:14.086754269 +0000 UTC m=+1471.609830153" watchObservedRunningTime="2026-02-03 12:29:14.092408852 +0000 UTC m=+1471.615484736" Feb 03 12:29:14 crc kubenswrapper[4820]: I0203 12:29:14.128340 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Feb 03 12:29:14 crc kubenswrapper[4820]: I0203 12:29:14.823571 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-zb5gb"] Feb 03 12:29:14 crc kubenswrapper[4820]: I0203 12:29:14.826449 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-zb5gb" Feb 03 12:29:14 crc kubenswrapper[4820]: I0203 12:29:14.868253 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-zb5gb"] Feb 03 12:29:14 crc kubenswrapper[4820]: I0203 12:29:14.950096 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-lhh84"] Feb 03 12:29:14 crc kubenswrapper[4820]: I0203 12:29:14.951779 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-lhh84" Feb 03 12:29:14 crc kubenswrapper[4820]: I0203 12:29:14.958924 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8b750b09-8d9b-49f8-bed1-b20fa047bbc4-operator-scripts\") pod \"cinder-db-create-zb5gb\" (UID: \"8b750b09-8d9b-49f8-bed1-b20fa047bbc4\") " pod="openstack/cinder-db-create-zb5gb" Feb 03 12:29:14 crc kubenswrapper[4820]: I0203 12:29:14.958985 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bs74x\" (UniqueName: \"kubernetes.io/projected/8b750b09-8d9b-49f8-bed1-b20fa047bbc4-kube-api-access-bs74x\") pod \"cinder-db-create-zb5gb\" (UID: \"8b750b09-8d9b-49f8-bed1-b20fa047bbc4\") " pod="openstack/cinder-db-create-zb5gb" Feb 03 12:29:14 crc kubenswrapper[4820]: I0203 12:29:14.981124 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-rg9bk" Feb 03 12:29:14 crc kubenswrapper[4820]: I0203 12:29:14.997900 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-db-sync-g8wq4"] Feb 03 12:29:14 crc kubenswrapper[4820]: E0203 12:29:14.998390 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ebed9e0-c26c-435a-b024-b9e768922743" containerName="mariadb-account-create-update" Feb 03 12:29:14 crc kubenswrapper[4820]: I0203 12:29:14.998411 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ebed9e0-c26c-435a-b024-b9e768922743" containerName="mariadb-account-create-update" Feb 03 12:29:14 crc kubenswrapper[4820]: I0203 12:29:14.998650 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ebed9e0-c26c-435a-b024-b9e768922743" containerName="mariadb-account-create-update" Feb 03 12:29:14 crc kubenswrapper[4820]: I0203 12:29:14.999305 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-sync-g8wq4" Feb 03 12:29:15 crc kubenswrapper[4820]: I0203 12:29:15.006356 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-config-data" Feb 03 12:29:15 crc kubenswrapper[4820]: I0203 12:29:15.006946 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-watcher-dockercfg-nprkm" Feb 03 12:29:15 crc kubenswrapper[4820]: I0203 12:29:15.017358 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-lhh84"] Feb 03 12:29:15 crc kubenswrapper[4820]: I0203 12:29:15.282592 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8ebed9e0-c26c-435a-b024-b9e768922743-operator-scripts\") pod \"8ebed9e0-c26c-435a-b024-b9e768922743\" (UID: \"8ebed9e0-c26c-435a-b024-b9e768922743\") " Feb 03 12:29:15 crc kubenswrapper[4820]: I0203 12:29:15.282777 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wc72m\" (UniqueName: \"kubernetes.io/projected/8ebed9e0-c26c-435a-b024-b9e768922743-kube-api-access-wc72m\") pod \"8ebed9e0-c26c-435a-b024-b9e768922743\" (UID: \"8ebed9e0-c26c-435a-b024-b9e768922743\") " Feb 03 12:29:15 crc kubenswrapper[4820]: I0203 12:29:15.283337 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8b750b09-8d9b-49f8-bed1-b20fa047bbc4-operator-scripts\") pod \"cinder-db-create-zb5gb\" (UID: \"8b750b09-8d9b-49f8-bed1-b20fa047bbc4\") " pod="openstack/cinder-db-create-zb5gb" Feb 03 12:29:15 crc kubenswrapper[4820]: I0203 12:29:15.283389 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b594ebbd-4a60-46ca-92f6-0e4869499849-config-data\") pod \"watcher-db-sync-g8wq4\" (UID: \"b594ebbd-4a60-46ca-92f6-0e4869499849\") " pod="openstack/watcher-db-sync-g8wq4" Feb 03 12:29:15 crc kubenswrapper[4820]: I0203 12:29:15.283413 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r2zpd\" (UniqueName: \"kubernetes.io/projected/b594ebbd-4a60-46ca-92f6-0e4869499849-kube-api-access-r2zpd\") pod \"watcher-db-sync-g8wq4\" (UID: \"b594ebbd-4a60-46ca-92f6-0e4869499849\") " pod="openstack/watcher-db-sync-g8wq4" Feb 03 12:29:15 crc kubenswrapper[4820]: I0203 12:29:15.283443 4820 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-bs74x\" (UniqueName: \"kubernetes.io/projected/8b750b09-8d9b-49f8-bed1-b20fa047bbc4-kube-api-access-bs74x\") pod \"cinder-db-create-zb5gb\" (UID: \"8b750b09-8d9b-49f8-bed1-b20fa047bbc4\") " pod="openstack/cinder-db-create-zb5gb" Feb 03 12:29:15 crc kubenswrapper[4820]: I0203 12:29:15.283492 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/217d8f0c-123f-42da-b679-dbefeac99a4f-operator-scripts\") pod \"barbican-db-create-lhh84\" (UID: \"217d8f0c-123f-42da-b679-dbefeac99a4f\") " pod="openstack/barbican-db-create-lhh84" Feb 03 12:29:15 crc kubenswrapper[4820]: I0203 12:29:15.283550 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b594ebbd-4a60-46ca-92f6-0e4869499849-db-sync-config-data\") pod \"watcher-db-sync-g8wq4\" (UID: \"b594ebbd-4a60-46ca-92f6-0e4869499849\") " pod="openstack/watcher-db-sync-g8wq4" Feb 03 12:29:15 crc kubenswrapper[4820]: I0203 12:29:15.283621 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b594ebbd-4a60-46ca-92f6-0e4869499849-combined-ca-bundle\") pod \"watcher-db-sync-g8wq4\" (UID: \"b594ebbd-4a60-46ca-92f6-0e4869499849\") " pod="openstack/watcher-db-sync-g8wq4" Feb 03 12:29:15 crc kubenswrapper[4820]: I0203 12:29:15.283712 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hzlk4\" (UniqueName: \"kubernetes.io/projected/217d8f0c-123f-42da-b679-dbefeac99a4f-kube-api-access-hzlk4\") pod \"barbican-db-create-lhh84\" (UID: \"217d8f0c-123f-42da-b679-dbefeac99a4f\") " pod="openstack/barbican-db-create-lhh84" Feb 03 12:29:15 crc kubenswrapper[4820]: I0203 12:29:15.286146 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8b750b09-8d9b-49f8-bed1-b20fa047bbc4-operator-scripts\") pod \"cinder-db-create-zb5gb\" (UID: \"8b750b09-8d9b-49f8-bed1-b20fa047bbc4\") " pod="openstack/cinder-db-create-zb5gb" Feb 03 12:29:15 crc kubenswrapper[4820]: I0203 12:29:15.287805 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8ebed9e0-c26c-435a-b024-b9e768922743-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8ebed9e0-c26c-435a-b024-b9e768922743" (UID: "8ebed9e0-c26c-435a-b024-b9e768922743"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:29:15 crc kubenswrapper[4820]: I0203 12:29:15.297310 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ebed9e0-c26c-435a-b024-b9e768922743-kube-api-access-wc72m" (OuterVolumeSpecName: "kube-api-access-wc72m") pod "8ebed9e0-c26c-435a-b024-b9e768922743" (UID: "8ebed9e0-c26c-435a-b024-b9e768922743"). InnerVolumeSpecName "kube-api-access-wc72m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:29:15 crc kubenswrapper[4820]: I0203 12:29:15.384975 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bs74x\" (UniqueName: \"kubernetes.io/projected/8b750b09-8d9b-49f8-bed1-b20fa047bbc4-kube-api-access-bs74x\") pod \"cinder-db-create-zb5gb\" (UID: \"8b750b09-8d9b-49f8-bed1-b20fa047bbc4\") " pod="openstack/cinder-db-create-zb5gb" Feb 03 12:29:15 crc kubenswrapper[4820]: I0203 12:29:15.418473 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b594ebbd-4a60-46ca-92f6-0e4869499849-config-data\") pod \"watcher-db-sync-g8wq4\" (UID: \"b594ebbd-4a60-46ca-92f6-0e4869499849\") " pod="openstack/watcher-db-sync-g8wq4" Feb 03 12:29:15 crc kubenswrapper[4820]: I0203 12:29:15.418536 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r2zpd\" (UniqueName: \"kubernetes.io/projected/b594ebbd-4a60-46ca-92f6-0e4869499849-kube-api-access-r2zpd\") pod \"watcher-db-sync-g8wq4\" (UID: \"b594ebbd-4a60-46ca-92f6-0e4869499849\") " pod="openstack/watcher-db-sync-g8wq4" Feb 03 12:29:15 crc kubenswrapper[4820]: I0203 12:29:15.418621 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/217d8f0c-123f-42da-b679-dbefeac99a4f-operator-scripts\") pod \"barbican-db-create-lhh84\" (UID: \"217d8f0c-123f-42da-b679-dbefeac99a4f\") " pod="openstack/barbican-db-create-lhh84" Feb 03 12:29:15 crc kubenswrapper[4820]: I0203 12:29:15.418693 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b594ebbd-4a60-46ca-92f6-0e4869499849-db-sync-config-data\") pod \"watcher-db-sync-g8wq4\" (UID: \"b594ebbd-4a60-46ca-92f6-0e4869499849\") " pod="openstack/watcher-db-sync-g8wq4" Feb 03 12:29:15 crc kubenswrapper[4820]: I0203 12:29:15.418752 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b594ebbd-4a60-46ca-92f6-0e4869499849-combined-ca-bundle\") pod \"watcher-db-sync-g8wq4\" (UID: \"b594ebbd-4a60-46ca-92f6-0e4869499849\") " pod="openstack/watcher-db-sync-g8wq4" Feb 03 12:29:15 crc kubenswrapper[4820]: I0203 12:29:15.421596 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hzlk4\" (UniqueName: \"kubernetes.io/projected/217d8f0c-123f-42da-b679-dbefeac99a4f-kube-api-access-hzlk4\") pod \"barbican-db-create-lhh84\" (UID: \"217d8f0c-123f-42da-b679-dbefeac99a4f\") " pod="openstack/barbican-db-create-lhh84" Feb 03 12:29:15 crc kubenswrapper[4820]: I0203 12:29:15.421850 4820 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8ebed9e0-c26c-435a-b024-b9e768922743-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 12:29:15 crc kubenswrapper[4820]: I0203 12:29:15.421873 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wc72m\" (UniqueName: \"kubernetes.io/projected/8ebed9e0-c26c-435a-b024-b9e768922743-kube-api-access-wc72m\") on node \"crc\" DevicePath \"\"" Feb 03 12:29:15 crc kubenswrapper[4820]: I0203 12:29:15.422104 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/217d8f0c-123f-42da-b679-dbefeac99a4f-operator-scripts\") pod 
\"barbican-db-create-lhh84\" (UID: \"217d8f0c-123f-42da-b679-dbefeac99a4f\") " pod="openstack/barbican-db-create-lhh84" Feb 03 12:29:15 crc kubenswrapper[4820]: I0203 12:29:15.429419 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-rg9bk" Feb 03 12:29:15 crc kubenswrapper[4820]: I0203 12:29:15.430105 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-96p5d-config-lj4n2" event={"ID":"9650bfa4-56cb-482b-935b-9234fbbf7a42","Type":"ContainerStarted","Data":"eee92b9e627a6f88e77bdbc2740db58043e286daf18a6e2594f5ee9b8e73705f"} Feb 03 12:29:15 crc kubenswrapper[4820]: I0203 12:29:15.430147 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-db-sync-g8wq4"] Feb 03 12:29:15 crc kubenswrapper[4820]: I0203 12:29:15.430170 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-rg9bk" event={"ID":"8ebed9e0-c26c-435a-b024-b9e768922743","Type":"ContainerDied","Data":"c7231266a615e55b0e00fbb39e6361e72f519e6ee823ce2deeeecda2571e2271"} Feb 03 12:29:15 crc kubenswrapper[4820]: I0203 12:29:15.430186 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c7231266a615e55b0e00fbb39e6361e72f519e6ee823ce2deeeecda2571e2271" Feb 03 12:29:15 crc kubenswrapper[4820]: I0203 12:29:15.430205 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-0156-account-create-update-7k4px"] Feb 03 12:29:15 crc kubenswrapper[4820]: I0203 12:29:15.431763 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-0156-account-create-update-7k4px"] Feb 03 12:29:15 crc kubenswrapper[4820]: I0203 12:29:15.431866 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-0156-account-create-update-7k4px" Feb 03 12:29:15 crc kubenswrapper[4820]: I0203 12:29:15.439059 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Feb 03 12:29:15 crc kubenswrapper[4820]: I0203 12:29:15.447553 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b594ebbd-4a60-46ca-92f6-0e4869499849-combined-ca-bundle\") pod \"watcher-db-sync-g8wq4\" (UID: \"b594ebbd-4a60-46ca-92f6-0e4869499849\") " pod="openstack/watcher-db-sync-g8wq4" Feb 03 12:29:15 crc kubenswrapper[4820]: I0203 12:29:15.448444 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r2zpd\" (UniqueName: \"kubernetes.io/projected/b594ebbd-4a60-46ca-92f6-0e4869499849-kube-api-access-r2zpd\") pod \"watcher-db-sync-g8wq4\" (UID: \"b594ebbd-4a60-46ca-92f6-0e4869499849\") " pod="openstack/watcher-db-sync-g8wq4" Feb 03 12:29:15 crc kubenswrapper[4820]: I0203 12:29:15.450287 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b594ebbd-4a60-46ca-92f6-0e4869499849-db-sync-config-data\") pod \"watcher-db-sync-g8wq4\" (UID: \"b594ebbd-4a60-46ca-92f6-0e4869499849\") " pod="openstack/watcher-db-sync-g8wq4" Feb 03 12:29:15 crc kubenswrapper[4820]: I0203 12:29:15.455235 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b594ebbd-4a60-46ca-92f6-0e4869499849-config-data\") pod \"watcher-db-sync-g8wq4\" (UID: \"b594ebbd-4a60-46ca-92f6-0e4869499849\") " pod="openstack/watcher-db-sync-g8wq4" Feb 03 12:29:15 crc kubenswrapper[4820]: I0203 12:29:15.460806 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hzlk4\" (UniqueName: \"kubernetes.io/projected/217d8f0c-123f-42da-b679-dbefeac99a4f-kube-api-access-hzlk4\") pod \"barbican-db-create-lhh84\" (UID: \"217d8f0c-123f-42da-b679-dbefeac99a4f\") " pod="openstack/barbican-db-create-lhh84" Feb 03 12:29:15 crc kubenswrapper[4820]: W0203 12:29:15.477602 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd4eb10ed_a945_4b23_8fb3_62022a90e09f.slice/crio-e67d3c323d8f7cb479f560f9f6ca6f75a3d625ed82a445ce08e80a8f9dec6e65 WatchSource:0}: Error finding container e67d3c323d8f7cb479f560f9f6ca6f75a3d625ed82a445ce08e80a8f9dec6e65: Status 404 returned error can't find the container with id e67d3c323d8f7cb479f560f9f6ca6f75a3d625ed82a445ce08e80a8f9dec6e65 Feb 03 12:29:15 crc kubenswrapper[4820]: I0203 12:29:15.487072 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Feb 03 12:29:15 crc kubenswrapper[4820]: I0203 12:29:15.523783 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/33378f72-8501-4ce0-bafe-a2584fd27c90-operator-scripts\") pod \"cinder-0156-account-create-update-7k4px\" (UID: \"33378f72-8501-4ce0-bafe-a2584fd27c90\") " pod="openstack/cinder-0156-account-create-update-7k4px" Feb 03 12:29:15 crc kubenswrapper[4820]: I0203 12:29:15.523854 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zrz4d\" (UniqueName: \"kubernetes.io/projected/33378f72-8501-4ce0-bafe-a2584fd27c90-kube-api-access-zrz4d\") pod 
\"cinder-0156-account-create-update-7k4px\" (UID: \"33378f72-8501-4ce0-bafe-a2584fd27c90\") " pod="openstack/cinder-0156-account-create-update-7k4px" Feb 03 12:29:15 crc kubenswrapper[4820]: I0203 12:29:15.727479 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-sync-g8wq4" Feb 03 12:29:15 crc kubenswrapper[4820]: I0203 12:29:15.728212 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-lhh84" Feb 03 12:29:15 crc kubenswrapper[4820]: I0203 12:29:15.728562 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-zb5gb" Feb 03 12:29:15 crc kubenswrapper[4820]: I0203 12:29:15.729447 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/33378f72-8501-4ce0-bafe-a2584fd27c90-operator-scripts\") pod \"cinder-0156-account-create-update-7k4px\" (UID: \"33378f72-8501-4ce0-bafe-a2584fd27c90\") " pod="openstack/cinder-0156-account-create-update-7k4px" Feb 03 12:29:15 crc kubenswrapper[4820]: I0203 12:29:15.729528 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zrz4d\" (UniqueName: \"kubernetes.io/projected/33378f72-8501-4ce0-bafe-a2584fd27c90-kube-api-access-zrz4d\") pod \"cinder-0156-account-create-update-7k4px\" (UID: \"33378f72-8501-4ce0-bafe-a2584fd27c90\") " pod="openstack/cinder-0156-account-create-update-7k4px" Feb 03 12:29:15 crc kubenswrapper[4820]: I0203 12:29:15.730865 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/33378f72-8501-4ce0-bafe-a2584fd27c90-operator-scripts\") pod \"cinder-0156-account-create-update-7k4px\" (UID: \"33378f72-8501-4ce0-bafe-a2584fd27c90\") " pod="openstack/cinder-0156-account-create-update-7k4px" Feb 03 12:29:15 crc kubenswrapper[4820]: I0203 12:29:15.732794 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-gdddx"] Feb 03 12:29:15 crc kubenswrapper[4820]: I0203 12:29:15.742419 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-gdddx" Feb 03 12:29:15 crc kubenswrapper[4820]: I0203 12:29:15.743399 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-gdddx"] Feb 03 12:29:15 crc kubenswrapper[4820]: I0203 12:29:15.762622 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Feb 03 12:29:15 crc kubenswrapper[4820]: I0203 12:29:15.771782 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zrz4d\" (UniqueName: \"kubernetes.io/projected/33378f72-8501-4ce0-bafe-a2584fd27c90-kube-api-access-zrz4d\") pod \"cinder-0156-account-create-update-7k4px\" (UID: \"33378f72-8501-4ce0-bafe-a2584fd27c90\") " pod="openstack/cinder-0156-account-create-update-7k4px" Feb 03 12:29:15 crc kubenswrapper[4820]: I0203 12:29:15.962744 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e85a9b64-cf9e-4f04-9adc-2500e3f8df60-operator-scripts\") pod \"neutron-db-create-gdddx\" (UID: \"e85a9b64-cf9e-4f04-9adc-2500e3f8df60\") " pod="openstack/neutron-db-create-gdddx" Feb 03 12:29:15 crc kubenswrapper[4820]: I0203 12:29:15.967037 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vrr6p\" (UniqueName: \"kubernetes.io/projected/e85a9b64-cf9e-4f04-9adc-2500e3f8df60-kube-api-access-vrr6p\") pod \"neutron-db-create-gdddx\" (UID: \"e85a9b64-cf9e-4f04-9adc-2500e3f8df60\") " pod="openstack/neutron-db-create-gdddx" Feb 03 12:29:16 crc kubenswrapper[4820]: I0203 12:29:16.026018 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-0156-account-create-update-7k4px" Feb 03 12:29:16 crc kubenswrapper[4820]: I0203 12:29:16.039680 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-0eaa-account-create-update-r999r"] Feb 03 12:29:16 crc kubenswrapper[4820]: I0203 12:29:16.041443 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-0eaa-account-create-update-r999r" Feb 03 12:29:16 crc kubenswrapper[4820]: I0203 12:29:16.046277 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Feb 03 12:29:16 crc kubenswrapper[4820]: I0203 12:29:16.375763 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-0eaa-account-create-update-r999r"] Feb 03 12:29:16 crc kubenswrapper[4820]: I0203 12:29:16.399349 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vrr6p\" (UniqueName: \"kubernetes.io/projected/e85a9b64-cf9e-4f04-9adc-2500e3f8df60-kube-api-access-vrr6p\") pod \"neutron-db-create-gdddx\" (UID: \"e85a9b64-cf9e-4f04-9adc-2500e3f8df60\") " pod="openstack/neutron-db-create-gdddx" Feb 03 12:29:16 crc kubenswrapper[4820]: I0203 12:29:16.399511 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f7ca31e7-36f8-449b-b4ca-fca64c76bf77-operator-scripts\") pod \"barbican-0eaa-account-create-update-r999r\" (UID: \"f7ca31e7-36f8-449b-b4ca-fca64c76bf77\") " pod="openstack/barbican-0eaa-account-create-update-r999r" Feb 03 12:29:16 crc kubenswrapper[4820]: I0203 12:29:16.399576 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lgp5d\" (UniqueName: \"kubernetes.io/projected/f7ca31e7-36f8-449b-b4ca-fca64c76bf77-kube-api-access-lgp5d\") pod \"barbican-0eaa-account-create-update-r999r\" (UID: \"f7ca31e7-36f8-449b-b4ca-fca64c76bf77\") " pod="openstack/barbican-0eaa-account-create-update-r999r" Feb 03 12:29:16 crc kubenswrapper[4820]: I0203 12:29:16.399914 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e85a9b64-cf9e-4f04-9adc-2500e3f8df60-operator-scripts\") pod \"neutron-db-create-gdddx\" (UID: \"e85a9b64-cf9e-4f04-9adc-2500e3f8df60\") " pod="openstack/neutron-db-create-gdddx" Feb 03 12:29:16 crc kubenswrapper[4820]: I0203 12:29:16.400909 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e85a9b64-cf9e-4f04-9adc-2500e3f8df60-operator-scripts\") pod \"neutron-db-create-gdddx\" (UID: \"e85a9b64-cf9e-4f04-9adc-2500e3f8df60\") " pod="openstack/neutron-db-create-gdddx" Feb 03 12:29:16 crc kubenswrapper[4820]: I0203 12:29:16.470979 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-gvhr2"] Feb 03 12:29:16 crc kubenswrapper[4820]: I0203 12:29:16.473009 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-gvhr2" Feb 03 12:29:16 crc kubenswrapper[4820]: I0203 12:29:16.482416 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-gvhr2"] Feb 03 12:29:16 crc kubenswrapper[4820]: I0203 12:29:16.492333 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 03 12:29:16 crc kubenswrapper[4820]: I0203 12:29:16.492486 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 03 12:29:16 crc kubenswrapper[4820]: I0203 12:29:16.492589 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 03 12:29:16 crc kubenswrapper[4820]: I0203 12:29:16.492716 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-rvkjd" Feb 03 12:29:16 crc kubenswrapper[4820]: I0203 12:29:16.504509 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"d4eb10ed-a945-4b23-8fb3-62022a90e09f","Type":"ContainerStarted","Data":"e67d3c323d8f7cb479f560f9f6ca6f75a3d625ed82a445ce08e80a8f9dec6e65"} Feb 03 12:29:16 crc kubenswrapper[4820]: I0203 12:29:16.513173 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5jm7\" (UniqueName: \"kubernetes.io/projected/66f26be7-edcc-4d55-b7e4-d5f4d16cf58e-kube-api-access-b5jm7\") pod \"keystone-db-sync-gvhr2\" (UID: \"66f26be7-edcc-4d55-b7e4-d5f4d16cf58e\") " pod="openstack/keystone-db-sync-gvhr2" Feb 03 12:29:16 crc kubenswrapper[4820]: I0203 12:29:16.513230 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f7ca31e7-36f8-449b-b4ca-fca64c76bf77-operator-scripts\") pod \"barbican-0eaa-account-create-update-r999r\" (UID: \"f7ca31e7-36f8-449b-b4ca-fca64c76bf77\") " pod="openstack/barbican-0eaa-account-create-update-r999r" Feb 03 12:29:16 crc kubenswrapper[4820]: I0203 12:29:16.513301 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lgp5d\" (UniqueName: \"kubernetes.io/projected/f7ca31e7-36f8-449b-b4ca-fca64c76bf77-kube-api-access-lgp5d\") pod \"barbican-0eaa-account-create-update-r999r\" (UID: \"f7ca31e7-36f8-449b-b4ca-fca64c76bf77\") " pod="openstack/barbican-0eaa-account-create-update-r999r" Feb 03 12:29:16 crc kubenswrapper[4820]: I0203 12:29:16.513385 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/66f26be7-edcc-4d55-b7e4-d5f4d16cf58e-config-data\") pod \"keystone-db-sync-gvhr2\" (UID: \"66f26be7-edcc-4d55-b7e4-d5f4d16cf58e\") " pod="openstack/keystone-db-sync-gvhr2" Feb 03 12:29:16 crc kubenswrapper[4820]: I0203 12:29:16.513567 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66f26be7-edcc-4d55-b7e4-d5f4d16cf58e-combined-ca-bundle\") pod \"keystone-db-sync-gvhr2\" (UID: \"66f26be7-edcc-4d55-b7e4-d5f4d16cf58e\") " pod="openstack/keystone-db-sync-gvhr2" Feb 03 12:29:16 crc kubenswrapper[4820]: I0203 12:29:16.517104 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f7ca31e7-36f8-449b-b4ca-fca64c76bf77-operator-scripts\") pod \"barbican-0eaa-account-create-update-r999r\" (UID: 
\"f7ca31e7-36f8-449b-b4ca-fca64c76bf77\") " pod="openstack/barbican-0eaa-account-create-update-r999r" Feb 03 12:29:16 crc kubenswrapper[4820]: I0203 12:29:16.517746 4820 generic.go:334] "Generic (PLEG): container finished" podID="9650bfa4-56cb-482b-935b-9234fbbf7a42" containerID="eee92b9e627a6f88e77bdbc2740db58043e286daf18a6e2594f5ee9b8e73705f" exitCode=0 Feb 03 12:29:16 crc kubenswrapper[4820]: I0203 12:29:16.517780 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-96p5d-config-lj4n2" event={"ID":"9650bfa4-56cb-482b-935b-9234fbbf7a42","Type":"ContainerDied","Data":"eee92b9e627a6f88e77bdbc2740db58043e286daf18a6e2594f5ee9b8e73705f"} Feb 03 12:29:16 crc kubenswrapper[4820]: I0203 12:29:16.540850 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vrr6p\" (UniqueName: \"kubernetes.io/projected/e85a9b64-cf9e-4f04-9adc-2500e3f8df60-kube-api-access-vrr6p\") pod \"neutron-db-create-gdddx\" (UID: \"e85a9b64-cf9e-4f04-9adc-2500e3f8df60\") " pod="openstack/neutron-db-create-gdddx" Feb 03 12:29:16 crc kubenswrapper[4820]: I0203 12:29:16.551728 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-gdddx" Feb 03 12:29:16 crc kubenswrapper[4820]: I0203 12:29:16.557182 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lgp5d\" (UniqueName: \"kubernetes.io/projected/f7ca31e7-36f8-449b-b4ca-fca64c76bf77-kube-api-access-lgp5d\") pod \"barbican-0eaa-account-create-update-r999r\" (UID: \"f7ca31e7-36f8-449b-b4ca-fca64c76bf77\") " pod="openstack/barbican-0eaa-account-create-update-r999r" Feb 03 12:29:16 crc kubenswrapper[4820]: I0203 12:29:16.697073 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/66f26be7-edcc-4d55-b7e4-d5f4d16cf58e-config-data\") pod \"keystone-db-sync-gvhr2\" (UID: \"66f26be7-edcc-4d55-b7e4-d5f4d16cf58e\") " pod="openstack/keystone-db-sync-gvhr2" Feb 03 12:29:16 crc kubenswrapper[4820]: I0203 12:29:16.697202 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66f26be7-edcc-4d55-b7e4-d5f4d16cf58e-combined-ca-bundle\") pod \"keystone-db-sync-gvhr2\" (UID: \"66f26be7-edcc-4d55-b7e4-d5f4d16cf58e\") " pod="openstack/keystone-db-sync-gvhr2" Feb 03 12:29:16 crc kubenswrapper[4820]: I0203 12:29:16.697320 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b5jm7\" (UniqueName: \"kubernetes.io/projected/66f26be7-edcc-4d55-b7e4-d5f4d16cf58e-kube-api-access-b5jm7\") pod \"keystone-db-sync-gvhr2\" (UID: \"66f26be7-edcc-4d55-b7e4-d5f4d16cf58e\") " pod="openstack/keystone-db-sync-gvhr2" Feb 03 12:29:16 crc kubenswrapper[4820]: I0203 12:29:16.698955 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-3a2c-account-create-update-7wqgs"] Feb 03 12:29:16 crc kubenswrapper[4820]: I0203 12:29:16.700454 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-3a2c-account-create-update-7wqgs" Feb 03 12:29:16 crc kubenswrapper[4820]: I0203 12:29:16.703140 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/66f26be7-edcc-4d55-b7e4-d5f4d16cf58e-config-data\") pod \"keystone-db-sync-gvhr2\" (UID: \"66f26be7-edcc-4d55-b7e4-d5f4d16cf58e\") " pod="openstack/keystone-db-sync-gvhr2" Feb 03 12:29:16 crc kubenswrapper[4820]: I0203 12:29:16.703212 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66f26be7-edcc-4d55-b7e4-d5f4d16cf58e-combined-ca-bundle\") pod \"keystone-db-sync-gvhr2\" (UID: \"66f26be7-edcc-4d55-b7e4-d5f4d16cf58e\") " pod="openstack/keystone-db-sync-gvhr2" Feb 03 12:29:16 crc kubenswrapper[4820]: I0203 12:29:16.709262 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Feb 03 12:29:16 crc kubenswrapper[4820]: I0203 12:29:16.709453 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-3a2c-account-create-update-7wqgs"] Feb 03 12:29:16 crc kubenswrapper[4820]: I0203 12:29:16.809741 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b5jm7\" (UniqueName: \"kubernetes.io/projected/66f26be7-edcc-4d55-b7e4-d5f4d16cf58e-kube-api-access-b5jm7\") pod \"keystone-db-sync-gvhr2\" (UID: \"66f26be7-edcc-4d55-b7e4-d5f4d16cf58e\") " pod="openstack/keystone-db-sync-gvhr2" Feb 03 12:29:16 crc kubenswrapper[4820]: I0203 12:29:16.811037 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rtrqc\" (UniqueName: \"kubernetes.io/projected/0a7242ff-a34b-4b5f-8200-026040ca1c5d-kube-api-access-rtrqc\") pod \"neutron-3a2c-account-create-update-7wqgs\" (UID: \"0a7242ff-a34b-4b5f-8200-026040ca1c5d\") " pod="openstack/neutron-3a2c-account-create-update-7wqgs" Feb 03 12:29:16 crc kubenswrapper[4820]: I0203 12:29:16.816090 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-0eaa-account-create-update-r999r" Feb 03 12:29:16 crc kubenswrapper[4820]: I0203 12:29:16.827415 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0a7242ff-a34b-4b5f-8200-026040ca1c5d-operator-scripts\") pod \"neutron-3a2c-account-create-update-7wqgs\" (UID: \"0a7242ff-a34b-4b5f-8200-026040ca1c5d\") " pod="openstack/neutron-3a2c-account-create-update-7wqgs" Feb 03 12:29:16 crc kubenswrapper[4820]: I0203 12:29:16.865940 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-gvhr2" Feb 03 12:29:16 crc kubenswrapper[4820]: I0203 12:29:16.929198 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rtrqc\" (UniqueName: \"kubernetes.io/projected/0a7242ff-a34b-4b5f-8200-026040ca1c5d-kube-api-access-rtrqc\") pod \"neutron-3a2c-account-create-update-7wqgs\" (UID: \"0a7242ff-a34b-4b5f-8200-026040ca1c5d\") " pod="openstack/neutron-3a2c-account-create-update-7wqgs" Feb 03 12:29:16 crc kubenswrapper[4820]: I0203 12:29:16.929453 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0a7242ff-a34b-4b5f-8200-026040ca1c5d-operator-scripts\") pod \"neutron-3a2c-account-create-update-7wqgs\" (UID: \"0a7242ff-a34b-4b5f-8200-026040ca1c5d\") " pod="openstack/neutron-3a2c-account-create-update-7wqgs" Feb 03 12:29:16 crc kubenswrapper[4820]: I0203 12:29:16.931549 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0a7242ff-a34b-4b5f-8200-026040ca1c5d-operator-scripts\") pod \"neutron-3a2c-account-create-update-7wqgs\" (UID: \"0a7242ff-a34b-4b5f-8200-026040ca1c5d\") " pod="openstack/neutron-3a2c-account-create-update-7wqgs" Feb 03 12:29:16 crc kubenswrapper[4820]: I0203 12:29:16.971278 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rtrqc\" (UniqueName: \"kubernetes.io/projected/0a7242ff-a34b-4b5f-8200-026040ca1c5d-kube-api-access-rtrqc\") pod \"neutron-3a2c-account-create-update-7wqgs\" (UID: \"0a7242ff-a34b-4b5f-8200-026040ca1c5d\") " pod="openstack/neutron-3a2c-account-create-update-7wqgs" Feb 03 12:29:17 crc kubenswrapper[4820]: I0203 12:29:17.042530 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Feb 03 12:29:17 crc kubenswrapper[4820]: I0203 12:29:17.323557 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-3a2c-account-create-update-7wqgs" Feb 03 12:29:17 crc kubenswrapper[4820]: I0203 12:29:17.962286 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-zb5gb"] Feb 03 12:29:18 crc kubenswrapper[4820]: I0203 12:29:18.084447 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-db-sync-g8wq4"] Feb 03 12:29:18 crc kubenswrapper[4820]: I0203 12:29:18.341325 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-96p5d-config-lj4n2" Feb 03 12:29:18 crc kubenswrapper[4820]: I0203 12:29:18.347826 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-lhh84"] Feb 03 12:29:18 crc kubenswrapper[4820]: I0203 12:29:18.444159 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/9650bfa4-56cb-482b-935b-9234fbbf7a42-additional-scripts\") pod \"9650bfa4-56cb-482b-935b-9234fbbf7a42\" (UID: \"9650bfa4-56cb-482b-935b-9234fbbf7a42\") " Feb 03 12:29:18 crc kubenswrapper[4820]: I0203 12:29:18.444618 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/9650bfa4-56cb-482b-935b-9234fbbf7a42-var-log-ovn\") pod \"9650bfa4-56cb-482b-935b-9234fbbf7a42\" (UID: \"9650bfa4-56cb-482b-935b-9234fbbf7a42\") " Feb 03 12:29:18 crc kubenswrapper[4820]: I0203 12:29:18.444756 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9wth7\" (UniqueName: \"kubernetes.io/projected/9650bfa4-56cb-482b-935b-9234fbbf7a42-kube-api-access-9wth7\") pod \"9650bfa4-56cb-482b-935b-9234fbbf7a42\" (UID: \"9650bfa4-56cb-482b-935b-9234fbbf7a42\") " Feb 03 12:29:18 crc kubenswrapper[4820]: I0203 12:29:18.444810 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9650bfa4-56cb-482b-935b-9234fbbf7a42-var-run\") pod \"9650bfa4-56cb-482b-935b-9234fbbf7a42\" (UID: \"9650bfa4-56cb-482b-935b-9234fbbf7a42\") " Feb 03 12:29:18 crc kubenswrapper[4820]: I0203 12:29:18.444875 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/9650bfa4-56cb-482b-935b-9234fbbf7a42-var-run-ovn\") pod \"9650bfa4-56cb-482b-935b-9234fbbf7a42\" (UID: \"9650bfa4-56cb-482b-935b-9234fbbf7a42\") " Feb 03 12:29:18 crc kubenswrapper[4820]: I0203 12:29:18.444939 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9650bfa4-56cb-482b-935b-9234fbbf7a42-scripts\") pod \"9650bfa4-56cb-482b-935b-9234fbbf7a42\" (UID: \"9650bfa4-56cb-482b-935b-9234fbbf7a42\") " Feb 03 12:29:18 crc kubenswrapper[4820]: I0203 12:29:18.456901 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9650bfa4-56cb-482b-935b-9234fbbf7a42-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "9650bfa4-56cb-482b-935b-9234fbbf7a42" (UID: "9650bfa4-56cb-482b-935b-9234fbbf7a42"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:29:18 crc kubenswrapper[4820]: I0203 12:29:18.457008 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9650bfa4-56cb-482b-935b-9234fbbf7a42-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "9650bfa4-56cb-482b-935b-9234fbbf7a42" (UID: "9650bfa4-56cb-482b-935b-9234fbbf7a42"). InnerVolumeSpecName "var-log-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 03 12:29:18 crc kubenswrapper[4820]: I0203 12:29:18.458159 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9650bfa4-56cb-482b-935b-9234fbbf7a42-scripts" (OuterVolumeSpecName: "scripts") pod "9650bfa4-56cb-482b-935b-9234fbbf7a42" (UID: "9650bfa4-56cb-482b-935b-9234fbbf7a42"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:29:18 crc kubenswrapper[4820]: I0203 12:29:18.458266 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9650bfa4-56cb-482b-935b-9234fbbf7a42-var-run" (OuterVolumeSpecName: "var-run") pod "9650bfa4-56cb-482b-935b-9234fbbf7a42" (UID: "9650bfa4-56cb-482b-935b-9234fbbf7a42"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 03 12:29:18 crc kubenswrapper[4820]: I0203 12:29:18.458296 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9650bfa4-56cb-482b-935b-9234fbbf7a42-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "9650bfa4-56cb-482b-935b-9234fbbf7a42" (UID: "9650bfa4-56cb-482b-935b-9234fbbf7a42"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 03 12:29:18 crc kubenswrapper[4820]: I0203 12:29:18.471121 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9650bfa4-56cb-482b-935b-9234fbbf7a42-kube-api-access-9wth7" (OuterVolumeSpecName: "kube-api-access-9wth7") pod "9650bfa4-56cb-482b-935b-9234fbbf7a42" (UID: "9650bfa4-56cb-482b-935b-9234fbbf7a42"). InnerVolumeSpecName "kube-api-access-9wth7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:29:18 crc kubenswrapper[4820]: I0203 12:29:18.496563 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-gdddx"] Feb 03 12:29:18 crc kubenswrapper[4820]: I0203 12:29:18.513791 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-0156-account-create-update-7k4px"] Feb 03 12:29:18 crc kubenswrapper[4820]: I0203 12:29:18.555119 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9wth7\" (UniqueName: \"kubernetes.io/projected/9650bfa4-56cb-482b-935b-9234fbbf7a42-kube-api-access-9wth7\") on node \"crc\" DevicePath \"\"" Feb 03 12:29:18 crc kubenswrapper[4820]: I0203 12:29:18.555958 4820 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9650bfa4-56cb-482b-935b-9234fbbf7a42-var-run\") on node \"crc\" DevicePath \"\"" Feb 03 12:29:18 crc kubenswrapper[4820]: I0203 12:29:18.556008 4820 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/9650bfa4-56cb-482b-935b-9234fbbf7a42-var-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 03 12:29:18 crc kubenswrapper[4820]: I0203 12:29:18.556021 4820 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9650bfa4-56cb-482b-935b-9234fbbf7a42-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 12:29:18 crc kubenswrapper[4820]: I0203 12:29:18.556031 4820 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/9650bfa4-56cb-482b-935b-9234fbbf7a42-additional-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 12:29:18 crc kubenswrapper[4820]: I0203 12:29:18.556040 4820 reconciler_common.go:293] 
"Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/9650bfa4-56cb-482b-935b-9234fbbf7a42-var-log-ovn\") on node \"crc\" DevicePath \"\"" Feb 03 12:29:18 crc kubenswrapper[4820]: I0203 12:29:18.718344 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-zb5gb" event={"ID":"8b750b09-8d9b-49f8-bed1-b20fa047bbc4","Type":"ContainerStarted","Data":"af245bdc8393cc1280ba947489b806df12ae9f8a81ff72ee28a7ea311cb60c8e"} Feb 03 12:29:18 crc kubenswrapper[4820]: I0203 12:29:18.727160 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-lhh84" event={"ID":"217d8f0c-123f-42da-b679-dbefeac99a4f","Type":"ContainerStarted","Data":"2090946cafbadccea2a86052c765a90fee73c1f8d35406ab480854b3c011f058"} Feb 03 12:29:18 crc kubenswrapper[4820]: I0203 12:29:18.742696 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-96p5d-config-lj4n2" event={"ID":"9650bfa4-56cb-482b-935b-9234fbbf7a42","Type":"ContainerDied","Data":"9dcbb91d32ca55d66ac24d34aecd7fa799b04b6e064c899a4648da8aba202912"} Feb 03 12:29:18 crc kubenswrapper[4820]: I0203 12:29:18.742745 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9dcbb91d32ca55d66ac24d34aecd7fa799b04b6e064c899a4648da8aba202912" Feb 03 12:29:18 crc kubenswrapper[4820]: I0203 12:29:18.742824 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-96p5d-config-lj4n2" Feb 03 12:29:18 crc kubenswrapper[4820]: I0203 12:29:18.758772 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-sync-g8wq4" event={"ID":"b594ebbd-4a60-46ca-92f6-0e4869499849","Type":"ContainerStarted","Data":"a9fa679bd0a21c32302ee6dadab9bb0146cc8b3b3a6a65fea13a40b241bc3f41"} Feb 03 12:29:19 crc kubenswrapper[4820]: I0203 12:29:19.496455 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-96p5d-config-lj4n2"] Feb 03 12:29:19 crc kubenswrapper[4820]: I0203 12:29:19.515473 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-96p5d-config-lj4n2"] Feb 03 12:29:19 crc kubenswrapper[4820]: I0203 12:29:19.545667 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-0eaa-account-create-update-r999r"] Feb 03 12:29:19 crc kubenswrapper[4820]: I0203 12:29:19.789303 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-gvhr2"] Feb 03 12:29:19 crc kubenswrapper[4820]: I0203 12:29:19.802161 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-gdddx" event={"ID":"e85a9b64-cf9e-4f04-9adc-2500e3f8df60","Type":"ContainerStarted","Data":"a20c201845e19ea814c2f754e29b17a3d44f8e6e00dab3834afbdd606dc3f2d2"} Feb 03 12:29:19 crc kubenswrapper[4820]: I0203 12:29:19.815312 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-0eaa-account-create-update-r999r" event={"ID":"f7ca31e7-36f8-449b-b4ca-fca64c76bf77","Type":"ContainerStarted","Data":"0f7c231a57bf577fdfba7d14631e06b956b110de0286848f5d70771a85022a2e"} Feb 03 12:29:19 crc kubenswrapper[4820]: I0203 12:29:19.817640 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-0156-account-create-update-7k4px" event={"ID":"33378f72-8501-4ce0-bafe-a2584fd27c90","Type":"ContainerStarted","Data":"12b1b138c6bd82851253dd20e8869baa96fda4f74d6413428eaea25c0d40c240"} Feb 03 12:29:19 crc kubenswrapper[4820]: W0203 12:29:19.838946 4820 manager.go:1169] Failed 
to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod66f26be7_edcc_4d55_b7e4_d5f4d16cf58e.slice/crio-ee4e3eff04c970a3feb2eca4abd9a77dbc8d225163fbd1c3f23370ce4dbf6a1c WatchSource:0}: Error finding container ee4e3eff04c970a3feb2eca4abd9a77dbc8d225163fbd1c3f23370ce4dbf6a1c: Status 404 returned error can't find the container with id ee4e3eff04c970a3feb2eca4abd9a77dbc8d225163fbd1c3f23370ce4dbf6a1c Feb 03 12:29:19 crc kubenswrapper[4820]: I0203 12:29:19.846918 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-3a2c-account-create-update-7wqgs"] Feb 03 12:29:19 crc kubenswrapper[4820]: W0203 12:29:19.856832 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0a7242ff_a34b_4b5f_8200_026040ca1c5d.slice/crio-fabeabbc16b172d519592732c407971df0bf699b8793836446f83b790fa163d6 WatchSource:0}: Error finding container fabeabbc16b172d519592732c407971df0bf699b8793836446f83b790fa163d6: Status 404 returned error can't find the container with id fabeabbc16b172d519592732c407971df0bf699b8793836446f83b790fa163d6 Feb 03 12:29:20 crc kubenswrapper[4820]: I0203 12:29:20.841556 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-lhh84" event={"ID":"217d8f0c-123f-42da-b679-dbefeac99a4f","Type":"ContainerStarted","Data":"55a86545657197ec8f90a7dbf82f8fdc4bfeb41ed556ff8fd366c49474254679"} Feb 03 12:29:20 crc kubenswrapper[4820]: I0203 12:29:20.851381 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-3a2c-account-create-update-7wqgs" event={"ID":"0a7242ff-a34b-4b5f-8200-026040ca1c5d","Type":"ContainerStarted","Data":"6b5221a92ea33b1d7d4489dc4f2347b465cc9b83af893aeae66736934a699433"} Feb 03 12:29:20 crc kubenswrapper[4820]: I0203 12:29:20.851440 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-3a2c-account-create-update-7wqgs" event={"ID":"0a7242ff-a34b-4b5f-8200-026040ca1c5d","Type":"ContainerStarted","Data":"fabeabbc16b172d519592732c407971df0bf699b8793836446f83b790fa163d6"} Feb 03 12:29:20 crc kubenswrapper[4820]: I0203 12:29:20.855554 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-gvhr2" event={"ID":"66f26be7-edcc-4d55-b7e4-d5f4d16cf58e","Type":"ContainerStarted","Data":"ee4e3eff04c970a3feb2eca4abd9a77dbc8d225163fbd1c3f23370ce4dbf6a1c"} Feb 03 12:29:20 crc kubenswrapper[4820]: I0203 12:29:20.859743 4820 generic.go:334] "Generic (PLEG): container finished" podID="e85a9b64-cf9e-4f04-9adc-2500e3f8df60" containerID="e569fa1a6a713eafd9673f3b2544d9e53f295e77ec2a9dea32740d6348894412" exitCode=0 Feb 03 12:29:20 crc kubenswrapper[4820]: I0203 12:29:20.859834 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-gdddx" event={"ID":"e85a9b64-cf9e-4f04-9adc-2500e3f8df60","Type":"ContainerDied","Data":"e569fa1a6a713eafd9673f3b2544d9e53f295e77ec2a9dea32740d6348894412"} Feb 03 12:29:20 crc kubenswrapper[4820]: I0203 12:29:20.870501 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-0156-account-create-update-7k4px" event={"ID":"33378f72-8501-4ce0-bafe-a2584fd27c90","Type":"ContainerStarted","Data":"60cd6834439e6ed28d66d9928c52e93817c9636b3a847b4168bd9de8b8d74bd4"} Feb 03 12:29:20 crc kubenswrapper[4820]: I0203 12:29:20.877409 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-0eaa-account-create-update-r999r" 
event={"ID":"f7ca31e7-36f8-449b-b4ca-fca64c76bf77","Type":"ContainerStarted","Data":"6e980d3275a16e52b1522dec77770d6ff0c67de8bf3a8fe55d7ac1e0451dc9c9"} Feb 03 12:29:20 crc kubenswrapper[4820]: I0203 12:29:20.883969 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-create-lhh84" podStartSLOduration=6.883952332 podStartE2EDuration="6.883952332s" podCreationTimestamp="2026-02-03 12:29:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:29:20.873922981 +0000 UTC m=+1478.396998855" watchObservedRunningTime="2026-02-03 12:29:20.883952332 +0000 UTC m=+1478.407028196" Feb 03 12:29:20 crc kubenswrapper[4820]: I0203 12:29:20.885406 4820 generic.go:334] "Generic (PLEG): container finished" podID="8b750b09-8d9b-49f8-bed1-b20fa047bbc4" containerID="7d40e515e2bf3126f5f536a3bf5ef5f28153e44e74e1741015b2ade2574386fb" exitCode=0 Feb 03 12:29:20 crc kubenswrapper[4820]: I0203 12:29:20.885453 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-zb5gb" event={"ID":"8b750b09-8d9b-49f8-bed1-b20fa047bbc4","Type":"ContainerDied","Data":"7d40e515e2bf3126f5f536a3bf5ef5f28153e44e74e1741015b2ade2574386fb"} Feb 03 12:29:21 crc kubenswrapper[4820]: I0203 12:29:21.527216 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9650bfa4-56cb-482b-935b-9234fbbf7a42" path="/var/lib/kubelet/pods/9650bfa4-56cb-482b-935b-9234fbbf7a42/volumes" Feb 03 12:29:21 crc kubenswrapper[4820]: I0203 12:29:21.529386 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-3a2c-account-create-update-7wqgs" podStartSLOduration=5.5293742649999995 podStartE2EDuration="5.529374265s" podCreationTimestamp="2026-02-03 12:29:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:29:21.528421109 +0000 UTC m=+1479.051496983" watchObservedRunningTime="2026-02-03 12:29:21.529374265 +0000 UTC m=+1479.052450129" Feb 03 12:29:21 crc kubenswrapper[4820]: I0203 12:29:21.571094 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-0156-account-create-update-7k4px" podStartSLOduration=6.5710762769999995 podStartE2EDuration="6.571076277s" podCreationTimestamp="2026-02-03 12:29:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:29:21.56037551 +0000 UTC m=+1479.083451374" watchObservedRunningTime="2026-02-03 12:29:21.571076277 +0000 UTC m=+1479.094152141" Feb 03 12:29:21 crc kubenswrapper[4820]: I0203 12:29:21.782046 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-0eaa-account-create-update-r999r" podStartSLOduration=6.782022693 podStartE2EDuration="6.782022693s" podCreationTimestamp="2026-02-03 12:29:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:29:21.776025051 +0000 UTC m=+1479.299100915" watchObservedRunningTime="2026-02-03 12:29:21.782022693 +0000 UTC m=+1479.305098557" Feb 03 12:29:22 crc kubenswrapper[4820]: I0203 12:29:22.018128 4820 generic.go:334] "Generic (PLEG): container finished" podID="217d8f0c-123f-42da-b679-dbefeac99a4f" containerID="55a86545657197ec8f90a7dbf82f8fdc4bfeb41ed556ff8fd366c49474254679" exitCode=0 Feb 03 
12:29:22 crc kubenswrapper[4820]: I0203 12:29:22.019661 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-lhh84" event={"ID":"217d8f0c-123f-42da-b679-dbefeac99a4f","Type":"ContainerDied","Data":"55a86545657197ec8f90a7dbf82f8fdc4bfeb41ed556ff8fd366c49474254679"} Feb 03 12:29:23 crc kubenswrapper[4820]: I0203 12:29:23.233121 4820 generic.go:334] "Generic (PLEG): container finished" podID="f7ca31e7-36f8-449b-b4ca-fca64c76bf77" containerID="6e980d3275a16e52b1522dec77770d6ff0c67de8bf3a8fe55d7ac1e0451dc9c9" exitCode=0 Feb 03 12:29:23 crc kubenswrapper[4820]: I0203 12:29:23.240411 4820 generic.go:334] "Generic (PLEG): container finished" podID="33378f72-8501-4ce0-bafe-a2584fd27c90" containerID="60cd6834439e6ed28d66d9928c52e93817c9636b3a847b4168bd9de8b8d74bd4" exitCode=0 Feb 03 12:29:23 crc kubenswrapper[4820]: I0203 12:29:23.245629 4820 generic.go:334] "Generic (PLEG): container finished" podID="0a7242ff-a34b-4b5f-8200-026040ca1c5d" containerID="6b5221a92ea33b1d7d4489dc4f2347b465cc9b83af893aeae66736934a699433" exitCode=0 Feb 03 12:29:23 crc kubenswrapper[4820]: I0203 12:29:23.249535 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-0eaa-account-create-update-r999r" event={"ID":"f7ca31e7-36f8-449b-b4ca-fca64c76bf77","Type":"ContainerDied","Data":"6e980d3275a16e52b1522dec77770d6ff0c67de8bf3a8fe55d7ac1e0451dc9c9"} Feb 03 12:29:23 crc kubenswrapper[4820]: I0203 12:29:23.249906 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-0156-account-create-update-7k4px" event={"ID":"33378f72-8501-4ce0-bafe-a2584fd27c90","Type":"ContainerDied","Data":"60cd6834439e6ed28d66d9928c52e93817c9636b3a847b4168bd9de8b8d74bd4"} Feb 03 12:29:23 crc kubenswrapper[4820]: I0203 12:29:23.250052 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-3a2c-account-create-update-7wqgs" event={"ID":"0a7242ff-a34b-4b5f-8200-026040ca1c5d","Type":"ContainerDied","Data":"6b5221a92ea33b1d7d4489dc4f2347b465cc9b83af893aeae66736934a699433"} Feb 03 12:29:23 crc kubenswrapper[4820]: I0203 12:29:23.822767 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-gdddx" Feb 03 12:29:24 crc kubenswrapper[4820]: I0203 12:29:23.910035 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e85a9b64-cf9e-4f04-9adc-2500e3f8df60-operator-scripts\") pod \"e85a9b64-cf9e-4f04-9adc-2500e3f8df60\" (UID: \"e85a9b64-cf9e-4f04-9adc-2500e3f8df60\") " Feb 03 12:29:24 crc kubenswrapper[4820]: I0203 12:29:23.911200 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e85a9b64-cf9e-4f04-9adc-2500e3f8df60-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e85a9b64-cf9e-4f04-9adc-2500e3f8df60" (UID: "e85a9b64-cf9e-4f04-9adc-2500e3f8df60"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:29:24 crc kubenswrapper[4820]: I0203 12:29:24.012098 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vrr6p\" (UniqueName: \"kubernetes.io/projected/e85a9b64-cf9e-4f04-9adc-2500e3f8df60-kube-api-access-vrr6p\") pod \"e85a9b64-cf9e-4f04-9adc-2500e3f8df60\" (UID: \"e85a9b64-cf9e-4f04-9adc-2500e3f8df60\") " Feb 03 12:29:24 crc kubenswrapper[4820]: I0203 12:29:24.012764 4820 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e85a9b64-cf9e-4f04-9adc-2500e3f8df60-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 12:29:24 crc kubenswrapper[4820]: I0203 12:29:24.023456 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e85a9b64-cf9e-4f04-9adc-2500e3f8df60-kube-api-access-vrr6p" (OuterVolumeSpecName: "kube-api-access-vrr6p") pod "e85a9b64-cf9e-4f04-9adc-2500e3f8df60" (UID: "e85a9b64-cf9e-4f04-9adc-2500e3f8df60"). InnerVolumeSpecName "kube-api-access-vrr6p". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:29:24 crc kubenswrapper[4820]: I0203 12:29:24.097393 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-zb5gb" Feb 03 12:29:24 crc kubenswrapper[4820]: I0203 12:29:24.116594 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vrr6p\" (UniqueName: \"kubernetes.io/projected/e85a9b64-cf9e-4f04-9adc-2500e3f8df60-kube-api-access-vrr6p\") on node \"crc\" DevicePath \"\"" Feb 03 12:29:24 crc kubenswrapper[4820]: I0203 12:29:24.147835 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-lhh84" Feb 03 12:29:24 crc kubenswrapper[4820]: I0203 12:29:24.221731 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8b750b09-8d9b-49f8-bed1-b20fa047bbc4-operator-scripts\") pod \"8b750b09-8d9b-49f8-bed1-b20fa047bbc4\" (UID: \"8b750b09-8d9b-49f8-bed1-b20fa047bbc4\") " Feb 03 12:29:24 crc kubenswrapper[4820]: I0203 12:29:24.221885 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bs74x\" (UniqueName: \"kubernetes.io/projected/8b750b09-8d9b-49f8-bed1-b20fa047bbc4-kube-api-access-bs74x\") pod \"8b750b09-8d9b-49f8-bed1-b20fa047bbc4\" (UID: \"8b750b09-8d9b-49f8-bed1-b20fa047bbc4\") " Feb 03 12:29:24 crc kubenswrapper[4820]: I0203 12:29:24.222868 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8b750b09-8d9b-49f8-bed1-b20fa047bbc4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8b750b09-8d9b-49f8-bed1-b20fa047bbc4" (UID: "8b750b09-8d9b-49f8-bed1-b20fa047bbc4"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:29:24 crc kubenswrapper[4820]: I0203 12:29:24.233903 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b750b09-8d9b-49f8-bed1-b20fa047bbc4-kube-api-access-bs74x" (OuterVolumeSpecName: "kube-api-access-bs74x") pod "8b750b09-8d9b-49f8-bed1-b20fa047bbc4" (UID: "8b750b09-8d9b-49f8-bed1-b20fa047bbc4"). InnerVolumeSpecName "kube-api-access-bs74x". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:29:24 crc kubenswrapper[4820]: I0203 12:29:24.323961 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/217d8f0c-123f-42da-b679-dbefeac99a4f-operator-scripts\") pod \"217d8f0c-123f-42da-b679-dbefeac99a4f\" (UID: \"217d8f0c-123f-42da-b679-dbefeac99a4f\") " Feb 03 12:29:24 crc kubenswrapper[4820]: I0203 12:29:24.324088 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hzlk4\" (UniqueName: \"kubernetes.io/projected/217d8f0c-123f-42da-b679-dbefeac99a4f-kube-api-access-hzlk4\") pod \"217d8f0c-123f-42da-b679-dbefeac99a4f\" (UID: \"217d8f0c-123f-42da-b679-dbefeac99a4f\") " Feb 03 12:29:24 crc kubenswrapper[4820]: I0203 12:29:24.324634 4820 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8b750b09-8d9b-49f8-bed1-b20fa047bbc4-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 12:29:24 crc kubenswrapper[4820]: I0203 12:29:24.324654 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bs74x\" (UniqueName: \"kubernetes.io/projected/8b750b09-8d9b-49f8-bed1-b20fa047bbc4-kube-api-access-bs74x\") on node \"crc\" DevicePath \"\"" Feb 03 12:29:24 crc kubenswrapper[4820]: I0203 12:29:24.328239 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-gdddx" event={"ID":"e85a9b64-cf9e-4f04-9adc-2500e3f8df60","Type":"ContainerDied","Data":"a20c201845e19ea814c2f754e29b17a3d44f8e6e00dab3834afbdd606dc3f2d2"} Feb 03 12:29:24 crc kubenswrapper[4820]: I0203 12:29:24.328280 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a20c201845e19ea814c2f754e29b17a3d44f8e6e00dab3834afbdd606dc3f2d2" Feb 03 12:29:24 crc kubenswrapper[4820]: I0203 12:29:24.328358 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-gdddx" Feb 03 12:29:24 crc kubenswrapper[4820]: I0203 12:29:24.330246 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/217d8f0c-123f-42da-b679-dbefeac99a4f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "217d8f0c-123f-42da-b679-dbefeac99a4f" (UID: "217d8f0c-123f-42da-b679-dbefeac99a4f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:29:24 crc kubenswrapper[4820]: I0203 12:29:24.344592 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/217d8f0c-123f-42da-b679-dbefeac99a4f-kube-api-access-hzlk4" (OuterVolumeSpecName: "kube-api-access-hzlk4") pod "217d8f0c-123f-42da-b679-dbefeac99a4f" (UID: "217d8f0c-123f-42da-b679-dbefeac99a4f"). InnerVolumeSpecName "kube-api-access-hzlk4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:29:24 crc kubenswrapper[4820]: I0203 12:29:24.370214 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-zb5gb" event={"ID":"8b750b09-8d9b-49f8-bed1-b20fa047bbc4","Type":"ContainerDied","Data":"af245bdc8393cc1280ba947489b806df12ae9f8a81ff72ee28a7ea311cb60c8e"} Feb 03 12:29:24 crc kubenswrapper[4820]: I0203 12:29:24.370250 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="af245bdc8393cc1280ba947489b806df12ae9f8a81ff72ee28a7ea311cb60c8e" Feb 03 12:29:24 crc kubenswrapper[4820]: I0203 12:29:24.370325 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-zb5gb" Feb 03 12:29:24 crc kubenswrapper[4820]: I0203 12:29:24.392217 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-lhh84" Feb 03 12:29:24 crc kubenswrapper[4820]: I0203 12:29:24.396181 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-lhh84" event={"ID":"217d8f0c-123f-42da-b679-dbefeac99a4f","Type":"ContainerDied","Data":"2090946cafbadccea2a86052c765a90fee73c1f8d35406ab480854b3c011f058"} Feb 03 12:29:24 crc kubenswrapper[4820]: I0203 12:29:24.396276 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2090946cafbadccea2a86052c765a90fee73c1f8d35406ab480854b3c011f058" Feb 03 12:29:24 crc kubenswrapper[4820]: I0203 12:29:24.464394 4820 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/217d8f0c-123f-42da-b679-dbefeac99a4f-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 12:29:24 crc kubenswrapper[4820]: I0203 12:29:24.464455 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hzlk4\" (UniqueName: \"kubernetes.io/projected/217d8f0c-123f-42da-b679-dbefeac99a4f-kube-api-access-hzlk4\") on node \"crc\" DevicePath \"\"" Feb 03 12:29:25 crc kubenswrapper[4820]: I0203 12:29:25.532357 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-0eaa-account-create-update-r999r" Feb 03 12:29:25 crc kubenswrapper[4820]: I0203 12:29:25.536842 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-0eaa-account-create-update-r999r" event={"ID":"f7ca31e7-36f8-449b-b4ca-fca64c76bf77","Type":"ContainerDied","Data":"0f7c231a57bf577fdfba7d14631e06b956b110de0286848f5d70771a85022a2e"} Feb 03 12:29:25 crc kubenswrapper[4820]: I0203 12:29:25.536932 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0f7c231a57bf577fdfba7d14631e06b956b110de0286848f5d70771a85022a2e" Feb 03 12:29:25 crc kubenswrapper[4820]: I0203 12:29:25.567194 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"d4eb10ed-a945-4b23-8fb3-62022a90e09f","Type":"ContainerStarted","Data":"432bfa87e51e9d8b33878d4cd4bccf8afbd2927845f042a2c657ec2f6f185706"} Feb 03 12:29:25 crc kubenswrapper[4820]: I0203 12:29:25.567253 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"d4eb10ed-a945-4b23-8fb3-62022a90e09f","Type":"ContainerStarted","Data":"69b5e81e30e3a1e6c32bbca929fb78119a41637e46f171071e6fd9a93976ea61"} Feb 03 12:29:25 crc kubenswrapper[4820]: I0203 12:29:25.676715 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lgp5d\" (UniqueName: \"kubernetes.io/projected/f7ca31e7-36f8-449b-b4ca-fca64c76bf77-kube-api-access-lgp5d\") pod \"f7ca31e7-36f8-449b-b4ca-fca64c76bf77\" (UID: \"f7ca31e7-36f8-449b-b4ca-fca64c76bf77\") " Feb 03 12:29:25 crc kubenswrapper[4820]: I0203 12:29:25.676790 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f7ca31e7-36f8-449b-b4ca-fca64c76bf77-operator-scripts\") pod \"f7ca31e7-36f8-449b-b4ca-fca64c76bf77\" (UID: \"f7ca31e7-36f8-449b-b4ca-fca64c76bf77\") " Feb 03 12:29:25 crc kubenswrapper[4820]: I0203 12:29:25.681360 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f7ca31e7-36f8-449b-b4ca-fca64c76bf77-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f7ca31e7-36f8-449b-b4ca-fca64c76bf77" (UID: "f7ca31e7-36f8-449b-b4ca-fca64c76bf77"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:29:25 crc kubenswrapper[4820]: I0203 12:29:25.691586 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7ca31e7-36f8-449b-b4ca-fca64c76bf77-kube-api-access-lgp5d" (OuterVolumeSpecName: "kube-api-access-lgp5d") pod "f7ca31e7-36f8-449b-b4ca-fca64c76bf77" (UID: "f7ca31e7-36f8-449b-b4ca-fca64c76bf77"). InnerVolumeSpecName "kube-api-access-lgp5d". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:29:25 crc kubenswrapper[4820]: I0203 12:29:25.780717 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lgp5d\" (UniqueName: \"kubernetes.io/projected/f7ca31e7-36f8-449b-b4ca-fca64c76bf77-kube-api-access-lgp5d\") on node \"crc\" DevicePath \"\"" Feb 03 12:29:25 crc kubenswrapper[4820]: I0203 12:29:25.781071 4820 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f7ca31e7-36f8-449b-b4ca-fca64c76bf77-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 12:29:25 crc kubenswrapper[4820]: I0203 12:29:25.965348 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-0156-account-create-update-7k4px" Feb 03 12:29:25 crc kubenswrapper[4820]: I0203 12:29:25.976003 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-3a2c-account-create-update-7wqgs" Feb 03 12:29:26 crc kubenswrapper[4820]: I0203 12:29:26.160076 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/33378f72-8501-4ce0-bafe-a2584fd27c90-operator-scripts\") pod \"33378f72-8501-4ce0-bafe-a2584fd27c90\" (UID: \"33378f72-8501-4ce0-bafe-a2584fd27c90\") " Feb 03 12:29:26 crc kubenswrapper[4820]: I0203 12:29:26.160262 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zrz4d\" (UniqueName: \"kubernetes.io/projected/33378f72-8501-4ce0-bafe-a2584fd27c90-kube-api-access-zrz4d\") pod \"33378f72-8501-4ce0-bafe-a2584fd27c90\" (UID: \"33378f72-8501-4ce0-bafe-a2584fd27c90\") " Feb 03 12:29:26 crc kubenswrapper[4820]: I0203 12:29:26.160367 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0a7242ff-a34b-4b5f-8200-026040ca1c5d-operator-scripts\") pod \"0a7242ff-a34b-4b5f-8200-026040ca1c5d\" (UID: \"0a7242ff-a34b-4b5f-8200-026040ca1c5d\") " Feb 03 12:29:26 crc kubenswrapper[4820]: I0203 12:29:26.160411 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rtrqc\" (UniqueName: \"kubernetes.io/projected/0a7242ff-a34b-4b5f-8200-026040ca1c5d-kube-api-access-rtrqc\") pod \"0a7242ff-a34b-4b5f-8200-026040ca1c5d\" (UID: \"0a7242ff-a34b-4b5f-8200-026040ca1c5d\") " Feb 03 12:29:26 crc kubenswrapper[4820]: I0203 12:29:26.162214 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/33378f72-8501-4ce0-bafe-a2584fd27c90-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "33378f72-8501-4ce0-bafe-a2584fd27c90" (UID: "33378f72-8501-4ce0-bafe-a2584fd27c90"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:29:26 crc kubenswrapper[4820]: I0203 12:29:26.162697 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0a7242ff-a34b-4b5f-8200-026040ca1c5d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0a7242ff-a34b-4b5f-8200-026040ca1c5d" (UID: "0a7242ff-a34b-4b5f-8200-026040ca1c5d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:29:26 crc kubenswrapper[4820]: I0203 12:29:26.166307 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a7242ff-a34b-4b5f-8200-026040ca1c5d-kube-api-access-rtrqc" (OuterVolumeSpecName: "kube-api-access-rtrqc") pod "0a7242ff-a34b-4b5f-8200-026040ca1c5d" (UID: "0a7242ff-a34b-4b5f-8200-026040ca1c5d"). InnerVolumeSpecName "kube-api-access-rtrqc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:29:26 crc kubenswrapper[4820]: I0203 12:29:26.177751 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/33378f72-8501-4ce0-bafe-a2584fd27c90-kube-api-access-zrz4d" (OuterVolumeSpecName: "kube-api-access-zrz4d") pod "33378f72-8501-4ce0-bafe-a2584fd27c90" (UID: "33378f72-8501-4ce0-bafe-a2584fd27c90"). InnerVolumeSpecName "kube-api-access-zrz4d". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:29:26 crc kubenswrapper[4820]: I0203 12:29:26.263235 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zrz4d\" (UniqueName: \"kubernetes.io/projected/33378f72-8501-4ce0-bafe-a2584fd27c90-kube-api-access-zrz4d\") on node \"crc\" DevicePath \"\"" Feb 03 12:29:26 crc kubenswrapper[4820]: I0203 12:29:26.263273 4820 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0a7242ff-a34b-4b5f-8200-026040ca1c5d-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 12:29:26 crc kubenswrapper[4820]: I0203 12:29:26.263284 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rtrqc\" (UniqueName: \"kubernetes.io/projected/0a7242ff-a34b-4b5f-8200-026040ca1c5d-kube-api-access-rtrqc\") on node \"crc\" DevicePath \"\"" Feb 03 12:29:26 crc kubenswrapper[4820]: I0203 12:29:26.263293 4820 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/33378f72-8501-4ce0-bafe-a2584fd27c90-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 12:29:26 crc kubenswrapper[4820]: I0203 12:29:26.797503 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-0156-account-create-update-7k4px" event={"ID":"33378f72-8501-4ce0-bafe-a2584fd27c90","Type":"ContainerDied","Data":"12b1b138c6bd82851253dd20e8869baa96fda4f74d6413428eaea25c0d40c240"} Feb 03 12:29:26 crc kubenswrapper[4820]: I0203 12:29:26.797805 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="12b1b138c6bd82851253dd20e8869baa96fda4f74d6413428eaea25c0d40c240" Feb 03 12:29:26 crc kubenswrapper[4820]: I0203 12:29:26.797910 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-0156-account-create-update-7k4px" Feb 03 12:29:26 crc kubenswrapper[4820]: I0203 12:29:26.885042 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"d4eb10ed-a945-4b23-8fb3-62022a90e09f","Type":"ContainerStarted","Data":"9be775ea630256108167fad2ecfb986ede5d2eaea134facee491fbf299fdce31"} Feb 03 12:29:26 crc kubenswrapper[4820]: I0203 12:29:26.920231 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-0eaa-account-create-update-r999r" Feb 03 12:29:26 crc kubenswrapper[4820]: I0203 12:29:26.924034 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-3a2c-account-create-update-7wqgs" Feb 03 12:29:26 crc kubenswrapper[4820]: I0203 12:29:26.926971 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-3a2c-account-create-update-7wqgs" event={"ID":"0a7242ff-a34b-4b5f-8200-026040ca1c5d","Type":"ContainerDied","Data":"fabeabbc16b172d519592732c407971df0bf699b8793836446f83b790fa163d6"} Feb 03 12:29:26 crc kubenswrapper[4820]: I0203 12:29:26.927044 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fabeabbc16b172d519592732c407971df0bf699b8793836446f83b790fa163d6" Feb 03 12:29:27 crc kubenswrapper[4820]: I0203 12:29:27.124685 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Feb 03 12:29:27 crc kubenswrapper[4820]: I0203 12:29:27.130215 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Feb 03 12:29:27 crc kubenswrapper[4820]: I0203 12:29:27.984310 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"d4eb10ed-a945-4b23-8fb3-62022a90e09f","Type":"ContainerStarted","Data":"ca36f492d68f9d00749c155dfb79c124651a8e5c44158e72b0dc76eae7a87a4a"} Feb 03 12:29:27 crc kubenswrapper[4820]: I0203 12:29:27.985634 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Feb 03 12:29:34 crc kubenswrapper[4820]: I0203 12:29:34.161206 4820 generic.go:334] "Generic (PLEG): container finished" podID="b897af0d-2b67-45c6-b17f-3686d5a419c0" containerID="d3fee94a4fab8c8fed28cce3e70b696c0512dd6b3cb9216afa62e9ed717bb306" exitCode=0 Feb 03 12:29:34 crc kubenswrapper[4820]: I0203 12:29:34.161302 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-zlksq" event={"ID":"b897af0d-2b67-45c6-b17f-3686d5a419c0","Type":"ContainerDied","Data":"d3fee94a4fab8c8fed28cce3e70b696c0512dd6b3cb9216afa62e9ed717bb306"} Feb 03 12:29:35 crc kubenswrapper[4820]: I0203 12:29:35.407543 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 03 12:29:35 crc kubenswrapper[4820]: I0203 12:29:35.408141 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="7b4c739c-d87f-478c-aec7-07a49da53d46" containerName="prometheus" containerID="cri-o://5a8edec5e62c8c83f4c0a9e78ca21c0e03b70b98b2306d94b41f019541c2c591" gracePeriod=600 Feb 03 12:29:35 crc kubenswrapper[4820]: I0203 12:29:35.408634 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="7b4c739c-d87f-478c-aec7-07a49da53d46" containerName="thanos-sidecar" containerID="cri-o://be65c3a9461153121494e27523a7bd3e433a2fb7373a387862cdee772f4fe4a5" gracePeriod=600 Feb 03 12:29:35 crc kubenswrapper[4820]: I0203 12:29:35.408687 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="7b4c739c-d87f-478c-aec7-07a49da53d46" containerName="config-reloader" containerID="cri-o://b19b83985ad7a3072169606b1626b31ec7359eade993fb7b14ca4cd26efba095" gracePeriod=600 Feb 03 12:29:36 crc kubenswrapper[4820]: I0203 12:29:36.186932 4820 generic.go:334] "Generic (PLEG): container finished" podID="7b4c739c-d87f-478c-aec7-07a49da53d46" containerID="be65c3a9461153121494e27523a7bd3e433a2fb7373a387862cdee772f4fe4a5" exitCode=0 Feb 03 12:29:36 crc 
kubenswrapper[4820]: I0203 12:29:36.187182 4820 generic.go:334] "Generic (PLEG): container finished" podID="7b4c739c-d87f-478c-aec7-07a49da53d46" containerID="b19b83985ad7a3072169606b1626b31ec7359eade993fb7b14ca4cd26efba095" exitCode=0 Feb 03 12:29:36 crc kubenswrapper[4820]: I0203 12:29:36.187192 4820 generic.go:334] "Generic (PLEG): container finished" podID="7b4c739c-d87f-478c-aec7-07a49da53d46" containerID="5a8edec5e62c8c83f4c0a9e78ca21c0e03b70b98b2306d94b41f019541c2c591" exitCode=0 Feb 03 12:29:36 crc kubenswrapper[4820]: I0203 12:29:36.187213 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"7b4c739c-d87f-478c-aec7-07a49da53d46","Type":"ContainerDied","Data":"be65c3a9461153121494e27523a7bd3e433a2fb7373a387862cdee772f4fe4a5"} Feb 03 12:29:36 crc kubenswrapper[4820]: I0203 12:29:36.187237 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"7b4c739c-d87f-478c-aec7-07a49da53d46","Type":"ContainerDied","Data":"b19b83985ad7a3072169606b1626b31ec7359eade993fb7b14ca4cd26efba095"} Feb 03 12:29:36 crc kubenswrapper[4820]: I0203 12:29:36.187246 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"7b4c739c-d87f-478c-aec7-07a49da53d46","Type":"ContainerDied","Data":"5a8edec5e62c8c83f4c0a9e78ca21c0e03b70b98b2306d94b41f019541c2c591"} Feb 03 12:29:37 crc kubenswrapper[4820]: I0203 12:29:37.045717 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="7b4c739c-d87f-478c-aec7-07a49da53d46" containerName="prometheus" probeResult="failure" output="Get \"http://10.217.0.112:9090/-/ready\": dial tcp 10.217.0.112:9090: connect: connection refused" Feb 03 12:29:41 crc kubenswrapper[4820]: E0203 12:29:41.767826 4820 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-keystone:current-podified" Feb 03 12:29:41 crc kubenswrapper[4820]: E0203 12:29:41.768626 4820 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:keystone-db-sync,Image:quay.io/podified-antelope-centos9/openstack-keystone:current-podified,Command:[/bin/bash],Args:[-c keystone-manage 
db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/keystone/keystone.conf,SubPath:keystone.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-b5jm7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42425,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42425,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-db-sync-gvhr2_openstack(66f26be7-edcc-4d55-b7e4-d5f4d16cf58e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 03 12:29:41 crc kubenswrapper[4820]: E0203 12:29:41.769848 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"keystone-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/keystone-db-sync-gvhr2" podUID="66f26be7-edcc-4d55-b7e4-d5f4d16cf58e" Feb 03 12:29:41 crc kubenswrapper[4820]: I0203 12:29:41.896333 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-zlksq" Feb 03 12:29:42 crc kubenswrapper[4820]: I0203 12:29:42.092248 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="7b4c739c-d87f-478c-aec7-07a49da53d46" containerName="prometheus" probeResult="failure" output="Get \"http://10.217.0.112:9090/-/ready\": dial tcp 10.217.0.112:9090: connect: connection refused" Feb 03 12:29:42 crc kubenswrapper[4820]: I0203 12:29:42.209440 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b897af0d-2b67-45c6-b17f-3686d5a419c0-combined-ca-bundle\") pod \"b897af0d-2b67-45c6-b17f-3686d5a419c0\" (UID: \"b897af0d-2b67-45c6-b17f-3686d5a419c0\") " Feb 03 12:29:42 crc kubenswrapper[4820]: I0203 12:29:42.210025 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vpj24\" (UniqueName: \"kubernetes.io/projected/b897af0d-2b67-45c6-b17f-3686d5a419c0-kube-api-access-vpj24\") pod \"b897af0d-2b67-45c6-b17f-3686d5a419c0\" (UID: \"b897af0d-2b67-45c6-b17f-3686d5a419c0\") " Feb 03 12:29:42 crc kubenswrapper[4820]: I0203 12:29:42.210108 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b897af0d-2b67-45c6-b17f-3686d5a419c0-db-sync-config-data\") pod \"b897af0d-2b67-45c6-b17f-3686d5a419c0\" (UID: \"b897af0d-2b67-45c6-b17f-3686d5a419c0\") " Feb 03 12:29:42 crc kubenswrapper[4820]: I0203 12:29:42.210152 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b897af0d-2b67-45c6-b17f-3686d5a419c0-config-data\") pod \"b897af0d-2b67-45c6-b17f-3686d5a419c0\" (UID: \"b897af0d-2b67-45c6-b17f-3686d5a419c0\") " Feb 03 12:29:42 crc kubenswrapper[4820]: I0203 12:29:42.217805 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b897af0d-2b67-45c6-b17f-3686d5a419c0-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "b897af0d-2b67-45c6-b17f-3686d5a419c0" (UID: "b897af0d-2b67-45c6-b17f-3686d5a419c0"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:29:42 crc kubenswrapper[4820]: I0203 12:29:42.220308 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b897af0d-2b67-45c6-b17f-3686d5a419c0-kube-api-access-vpj24" (OuterVolumeSpecName: "kube-api-access-vpj24") pod "b897af0d-2b67-45c6-b17f-3686d5a419c0" (UID: "b897af0d-2b67-45c6-b17f-3686d5a419c0"). InnerVolumeSpecName "kube-api-access-vpj24". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:29:42 crc kubenswrapper[4820]: I0203 12:29:42.248145 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b897af0d-2b67-45c6-b17f-3686d5a419c0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b897af0d-2b67-45c6-b17f-3686d5a419c0" (UID: "b897af0d-2b67-45c6-b17f-3686d5a419c0"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:29:42 crc kubenswrapper[4820]: I0203 12:29:42.275003 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b897af0d-2b67-45c6-b17f-3686d5a419c0-config-data" (OuterVolumeSpecName: "config-data") pod "b897af0d-2b67-45c6-b17f-3686d5a419c0" (UID: "b897af0d-2b67-45c6-b17f-3686d5a419c0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:29:42 crc kubenswrapper[4820]: I0203 12:29:42.302644 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-zlksq" Feb 03 12:29:42 crc kubenswrapper[4820]: I0203 12:29:42.303034 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-zlksq" event={"ID":"b897af0d-2b67-45c6-b17f-3686d5a419c0","Type":"ContainerDied","Data":"970a95b7943ff4e2dc8c19a3bee1aedd4a2f3157d23140bac9eef67a33476fdf"} Feb 03 12:29:42 crc kubenswrapper[4820]: I0203 12:29:42.303123 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="970a95b7943ff4e2dc8c19a3bee1aedd4a2f3157d23140bac9eef67a33476fdf" Feb 03 12:29:42 crc kubenswrapper[4820]: E0203 12:29:42.303626 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"keystone-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-keystone:current-podified\\\"\"" pod="openstack/keystone-db-sync-gvhr2" podUID="66f26be7-edcc-4d55-b7e4-d5f4d16cf58e" Feb 03 12:29:42 crc kubenswrapper[4820]: I0203 12:29:42.313537 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b897af0d-2b67-45c6-b17f-3686d5a419c0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 12:29:42 crc kubenswrapper[4820]: I0203 12:29:42.313570 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vpj24\" (UniqueName: \"kubernetes.io/projected/b897af0d-2b67-45c6-b17f-3686d5a419c0-kube-api-access-vpj24\") on node \"crc\" DevicePath \"\"" Feb 03 12:29:42 crc kubenswrapper[4820]: I0203 12:29:42.313583 4820 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b897af0d-2b67-45c6-b17f-3686d5a419c0-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 12:29:42 crc kubenswrapper[4820]: I0203 12:29:42.313594 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b897af0d-2b67-45c6-b17f-3686d5a419c0-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 12:29:43 crc kubenswrapper[4820]: I0203 12:29:43.436536 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5b946c75cc-vbrpj"] Feb 03 12:29:43 crc kubenswrapper[4820]: E0203 12:29:43.437190 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b897af0d-2b67-45c6-b17f-3686d5a419c0" containerName="glance-db-sync" Feb 03 12:29:43 crc kubenswrapper[4820]: I0203 12:29:43.437214 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="b897af0d-2b67-45c6-b17f-3686d5a419c0" containerName="glance-db-sync" Feb 03 12:29:43 crc kubenswrapper[4820]: E0203 12:29:43.437238 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9650bfa4-56cb-482b-935b-9234fbbf7a42" containerName="ovn-config" Feb 03 12:29:43 crc kubenswrapper[4820]: I0203 12:29:43.437244 4820 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="9650bfa4-56cb-482b-935b-9234fbbf7a42" containerName="ovn-config" Feb 03 12:29:43 crc kubenswrapper[4820]: E0203 12:29:43.437258 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b750b09-8d9b-49f8-bed1-b20fa047bbc4" containerName="mariadb-database-create" Feb 03 12:29:43 crc kubenswrapper[4820]: I0203 12:29:43.437264 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b750b09-8d9b-49f8-bed1-b20fa047bbc4" containerName="mariadb-database-create" Feb 03 12:29:43 crc kubenswrapper[4820]: E0203 12:29:43.437271 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="217d8f0c-123f-42da-b679-dbefeac99a4f" containerName="mariadb-database-create" Feb 03 12:29:43 crc kubenswrapper[4820]: I0203 12:29:43.437277 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="217d8f0c-123f-42da-b679-dbefeac99a4f" containerName="mariadb-database-create" Feb 03 12:29:43 crc kubenswrapper[4820]: E0203 12:29:43.437291 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33378f72-8501-4ce0-bafe-a2584fd27c90" containerName="mariadb-account-create-update" Feb 03 12:29:43 crc kubenswrapper[4820]: I0203 12:29:43.437297 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="33378f72-8501-4ce0-bafe-a2584fd27c90" containerName="mariadb-account-create-update" Feb 03 12:29:43 crc kubenswrapper[4820]: E0203 12:29:43.437313 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a7242ff-a34b-4b5f-8200-026040ca1c5d" containerName="mariadb-account-create-update" Feb 03 12:29:43 crc kubenswrapper[4820]: I0203 12:29:43.437319 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a7242ff-a34b-4b5f-8200-026040ca1c5d" containerName="mariadb-account-create-update" Feb 03 12:29:43 crc kubenswrapper[4820]: E0203 12:29:43.437330 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7ca31e7-36f8-449b-b4ca-fca64c76bf77" containerName="mariadb-account-create-update" Feb 03 12:29:43 crc kubenswrapper[4820]: I0203 12:29:43.437336 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7ca31e7-36f8-449b-b4ca-fca64c76bf77" containerName="mariadb-account-create-update" Feb 03 12:29:43 crc kubenswrapper[4820]: E0203 12:29:43.437344 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e85a9b64-cf9e-4f04-9adc-2500e3f8df60" containerName="mariadb-database-create" Feb 03 12:29:43 crc kubenswrapper[4820]: I0203 12:29:43.437350 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="e85a9b64-cf9e-4f04-9adc-2500e3f8df60" containerName="mariadb-database-create" Feb 03 12:29:43 crc kubenswrapper[4820]: I0203 12:29:43.437525 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="e85a9b64-cf9e-4f04-9adc-2500e3f8df60" containerName="mariadb-database-create" Feb 03 12:29:43 crc kubenswrapper[4820]: I0203 12:29:43.437538 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a7242ff-a34b-4b5f-8200-026040ca1c5d" containerName="mariadb-account-create-update" Feb 03 12:29:43 crc kubenswrapper[4820]: I0203 12:29:43.437547 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7ca31e7-36f8-449b-b4ca-fca64c76bf77" containerName="mariadb-account-create-update" Feb 03 12:29:43 crc kubenswrapper[4820]: I0203 12:29:43.437555 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="9650bfa4-56cb-482b-935b-9234fbbf7a42" containerName="ovn-config" Feb 03 12:29:43 crc kubenswrapper[4820]: I0203 12:29:43.437565 4820 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="33378f72-8501-4ce0-bafe-a2584fd27c90" containerName="mariadb-account-create-update" Feb 03 12:29:43 crc kubenswrapper[4820]: I0203 12:29:43.437574 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="b897af0d-2b67-45c6-b17f-3686d5a419c0" containerName="glance-db-sync" Feb 03 12:29:43 crc kubenswrapper[4820]: I0203 12:29:43.437583 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b750b09-8d9b-49f8-bed1-b20fa047bbc4" containerName="mariadb-database-create" Feb 03 12:29:43 crc kubenswrapper[4820]: I0203 12:29:43.437593 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="217d8f0c-123f-42da-b679-dbefeac99a4f" containerName="mariadb-database-create" Feb 03 12:29:43 crc kubenswrapper[4820]: I0203 12:29:43.438636 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b946c75cc-vbrpj" Feb 03 12:29:43 crc kubenswrapper[4820]: I0203 12:29:43.449762 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b946c75cc-vbrpj"] Feb 03 12:29:43 crc kubenswrapper[4820]: I0203 12:29:43.677253 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/973038d9-ba67-4fbe-8239-ed6e47f3cf90-ovsdbserver-sb\") pod \"dnsmasq-dns-5b946c75cc-vbrpj\" (UID: \"973038d9-ba67-4fbe-8239-ed6e47f3cf90\") " pod="openstack/dnsmasq-dns-5b946c75cc-vbrpj" Feb 03 12:29:43 crc kubenswrapper[4820]: I0203 12:29:43.677398 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/973038d9-ba67-4fbe-8239-ed6e47f3cf90-dns-svc\") pod \"dnsmasq-dns-5b946c75cc-vbrpj\" (UID: \"973038d9-ba67-4fbe-8239-ed6e47f3cf90\") " pod="openstack/dnsmasq-dns-5b946c75cc-vbrpj" Feb 03 12:29:43 crc kubenswrapper[4820]: I0203 12:29:43.677434 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2xxdc\" (UniqueName: \"kubernetes.io/projected/973038d9-ba67-4fbe-8239-ed6e47f3cf90-kube-api-access-2xxdc\") pod \"dnsmasq-dns-5b946c75cc-vbrpj\" (UID: \"973038d9-ba67-4fbe-8239-ed6e47f3cf90\") " pod="openstack/dnsmasq-dns-5b946c75cc-vbrpj" Feb 03 12:29:43 crc kubenswrapper[4820]: I0203 12:29:43.677504 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/973038d9-ba67-4fbe-8239-ed6e47f3cf90-config\") pod \"dnsmasq-dns-5b946c75cc-vbrpj\" (UID: \"973038d9-ba67-4fbe-8239-ed6e47f3cf90\") " pod="openstack/dnsmasq-dns-5b946c75cc-vbrpj" Feb 03 12:29:43 crc kubenswrapper[4820]: I0203 12:29:43.677533 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/973038d9-ba67-4fbe-8239-ed6e47f3cf90-ovsdbserver-nb\") pod \"dnsmasq-dns-5b946c75cc-vbrpj\" (UID: \"973038d9-ba67-4fbe-8239-ed6e47f3cf90\") " pod="openstack/dnsmasq-dns-5b946c75cc-vbrpj" Feb 03 12:29:43 crc kubenswrapper[4820]: I0203 12:29:43.779354 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/973038d9-ba67-4fbe-8239-ed6e47f3cf90-dns-svc\") pod \"dnsmasq-dns-5b946c75cc-vbrpj\" (UID: \"973038d9-ba67-4fbe-8239-ed6e47f3cf90\") " pod="openstack/dnsmasq-dns-5b946c75cc-vbrpj" Feb 03 12:29:43 crc kubenswrapper[4820]: I0203 12:29:43.779412 4820 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2xxdc\" (UniqueName: \"kubernetes.io/projected/973038d9-ba67-4fbe-8239-ed6e47f3cf90-kube-api-access-2xxdc\") pod \"dnsmasq-dns-5b946c75cc-vbrpj\" (UID: \"973038d9-ba67-4fbe-8239-ed6e47f3cf90\") " pod="openstack/dnsmasq-dns-5b946c75cc-vbrpj" Feb 03 12:29:43 crc kubenswrapper[4820]: I0203 12:29:43.779489 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/973038d9-ba67-4fbe-8239-ed6e47f3cf90-config\") pod \"dnsmasq-dns-5b946c75cc-vbrpj\" (UID: \"973038d9-ba67-4fbe-8239-ed6e47f3cf90\") " pod="openstack/dnsmasq-dns-5b946c75cc-vbrpj" Feb 03 12:29:43 crc kubenswrapper[4820]: I0203 12:29:43.779516 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/973038d9-ba67-4fbe-8239-ed6e47f3cf90-ovsdbserver-nb\") pod \"dnsmasq-dns-5b946c75cc-vbrpj\" (UID: \"973038d9-ba67-4fbe-8239-ed6e47f3cf90\") " pod="openstack/dnsmasq-dns-5b946c75cc-vbrpj" Feb 03 12:29:43 crc kubenswrapper[4820]: I0203 12:29:43.779573 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/973038d9-ba67-4fbe-8239-ed6e47f3cf90-ovsdbserver-sb\") pod \"dnsmasq-dns-5b946c75cc-vbrpj\" (UID: \"973038d9-ba67-4fbe-8239-ed6e47f3cf90\") " pod="openstack/dnsmasq-dns-5b946c75cc-vbrpj" Feb 03 12:29:43 crc kubenswrapper[4820]: I0203 12:29:43.780346 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/973038d9-ba67-4fbe-8239-ed6e47f3cf90-dns-svc\") pod \"dnsmasq-dns-5b946c75cc-vbrpj\" (UID: \"973038d9-ba67-4fbe-8239-ed6e47f3cf90\") " pod="openstack/dnsmasq-dns-5b946c75cc-vbrpj" Feb 03 12:29:43 crc kubenswrapper[4820]: I0203 12:29:43.780433 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/973038d9-ba67-4fbe-8239-ed6e47f3cf90-ovsdbserver-sb\") pod \"dnsmasq-dns-5b946c75cc-vbrpj\" (UID: \"973038d9-ba67-4fbe-8239-ed6e47f3cf90\") " pod="openstack/dnsmasq-dns-5b946c75cc-vbrpj" Feb 03 12:29:43 crc kubenswrapper[4820]: I0203 12:29:43.781075 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/973038d9-ba67-4fbe-8239-ed6e47f3cf90-config\") pod \"dnsmasq-dns-5b946c75cc-vbrpj\" (UID: \"973038d9-ba67-4fbe-8239-ed6e47f3cf90\") " pod="openstack/dnsmasq-dns-5b946c75cc-vbrpj" Feb 03 12:29:43 crc kubenswrapper[4820]: I0203 12:29:43.781188 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/973038d9-ba67-4fbe-8239-ed6e47f3cf90-ovsdbserver-nb\") pod \"dnsmasq-dns-5b946c75cc-vbrpj\" (UID: \"973038d9-ba67-4fbe-8239-ed6e47f3cf90\") " pod="openstack/dnsmasq-dns-5b946c75cc-vbrpj" Feb 03 12:29:43 crc kubenswrapper[4820]: I0203 12:29:43.802020 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2xxdc\" (UniqueName: \"kubernetes.io/projected/973038d9-ba67-4fbe-8239-ed6e47f3cf90-kube-api-access-2xxdc\") pod \"dnsmasq-dns-5b946c75cc-vbrpj\" (UID: \"973038d9-ba67-4fbe-8239-ed6e47f3cf90\") " pod="openstack/dnsmasq-dns-5b946c75cc-vbrpj" Feb 03 12:29:43 crc kubenswrapper[4820]: I0203 12:29:43.898518 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b946c75cc-vbrpj" Feb 03 12:29:50 crc kubenswrapper[4820]: I0203 12:29:50.048055 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="7b4c739c-d87f-478c-aec7-07a49da53d46" containerName="prometheus" probeResult="failure" output="Get \"http://10.217.0.112:9090/-/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 03 12:29:50 crc kubenswrapper[4820]: I0203 12:29:50.048912 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Feb 03 12:29:52 crc kubenswrapper[4820]: E0203 12:29:52.029795 4820 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.50:5001/podified-epoxy-centos9/openstack-watcher-api:watcher_latest" Feb 03 12:29:52 crc kubenswrapper[4820]: E0203 12:29:52.030681 4820 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.50:5001/podified-epoxy-centos9/openstack-watcher-api:watcher_latest" Feb 03 12:29:52 crc kubenswrapper[4820]: E0203 12:29:52.030869 4820 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:watcher-db-sync,Image:38.102.83.50:5001/podified-epoxy-centos9/openstack-watcher-api:watcher_latest,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/watcher/watcher.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:watcher-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r2zpd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-db-sync-g8wq4_openstack(b594ebbd-4a60-46ca-92f6-0e4869499849): ErrImagePull: rpc 
error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 03 12:29:52 crc kubenswrapper[4820]: E0203 12:29:52.036187 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/watcher-db-sync-g8wq4" podUID="b594ebbd-4a60-46ca-92f6-0e4869499849" Feb 03 12:29:52 crc kubenswrapper[4820]: I0203 12:29:52.085880 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 03 12:29:52 crc kubenswrapper[4820]: I0203 12:29:52.135493 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5cnmd\" (UniqueName: \"kubernetes.io/projected/7b4c739c-d87f-478c-aec7-07a49da53d46-kube-api-access-5cnmd\") pod \"7b4c739c-d87f-478c-aec7-07a49da53d46\" (UID: \"7b4c739c-d87f-478c-aec7-07a49da53d46\") " Feb 03 12:29:52 crc kubenswrapper[4820]: I0203 12:29:52.138879 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9d241bdd-b4a8-44a7-af98-0d864047887a\") pod \"7b4c739c-d87f-478c-aec7-07a49da53d46\" (UID: \"7b4c739c-d87f-478c-aec7-07a49da53d46\") " Feb 03 12:29:52 crc kubenswrapper[4820]: I0203 12:29:52.139076 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/7b4c739c-d87f-478c-aec7-07a49da53d46-prometheus-metric-storage-rulefiles-1\") pod \"7b4c739c-d87f-478c-aec7-07a49da53d46\" (UID: \"7b4c739c-d87f-478c-aec7-07a49da53d46\") " Feb 03 12:29:52 crc kubenswrapper[4820]: I0203 12:29:52.139111 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/7b4c739c-d87f-478c-aec7-07a49da53d46-web-config\") pod \"7b4c739c-d87f-478c-aec7-07a49da53d46\" (UID: \"7b4c739c-d87f-478c-aec7-07a49da53d46\") " Feb 03 12:29:52 crc kubenswrapper[4820]: I0203 12:29:52.139178 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/7b4c739c-d87f-478c-aec7-07a49da53d46-prometheus-metric-storage-rulefiles-0\") pod \"7b4c739c-d87f-478c-aec7-07a49da53d46\" (UID: \"7b4c739c-d87f-478c-aec7-07a49da53d46\") " Feb 03 12:29:52 crc kubenswrapper[4820]: I0203 12:29:52.139231 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/7b4c739c-d87f-478c-aec7-07a49da53d46-thanos-prometheus-http-client-file\") pod \"7b4c739c-d87f-478c-aec7-07a49da53d46\" (UID: \"7b4c739c-d87f-478c-aec7-07a49da53d46\") " Feb 03 12:29:52 crc kubenswrapper[4820]: I0203 12:29:52.139350 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/7b4c739c-d87f-478c-aec7-07a49da53d46-prometheus-metric-storage-rulefiles-2\") pod \"7b4c739c-d87f-478c-aec7-07a49da53d46\" (UID: \"7b4c739c-d87f-478c-aec7-07a49da53d46\") " Feb 03 12:29:52 crc kubenswrapper[4820]: I0203 12:29:52.139382 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/7b4c739c-d87f-478c-aec7-07a49da53d46-config\") 
pod \"7b4c739c-d87f-478c-aec7-07a49da53d46\" (UID: \"7b4c739c-d87f-478c-aec7-07a49da53d46\") " Feb 03 12:29:52 crc kubenswrapper[4820]: I0203 12:29:52.139434 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/7b4c739c-d87f-478c-aec7-07a49da53d46-config-out\") pod \"7b4c739c-d87f-478c-aec7-07a49da53d46\" (UID: \"7b4c739c-d87f-478c-aec7-07a49da53d46\") " Feb 03 12:29:52 crc kubenswrapper[4820]: I0203 12:29:52.139490 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/7b4c739c-d87f-478c-aec7-07a49da53d46-tls-assets\") pod \"7b4c739c-d87f-478c-aec7-07a49da53d46\" (UID: \"7b4c739c-d87f-478c-aec7-07a49da53d46\") " Feb 03 12:29:52 crc kubenswrapper[4820]: I0203 12:29:52.142254 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7b4c739c-d87f-478c-aec7-07a49da53d46-prometheus-metric-storage-rulefiles-2" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-2") pod "7b4c739c-d87f-478c-aec7-07a49da53d46" (UID: "7b4c739c-d87f-478c-aec7-07a49da53d46"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-2". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:29:52 crc kubenswrapper[4820]: I0203 12:29:52.142369 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b4c739c-d87f-478c-aec7-07a49da53d46-kube-api-access-5cnmd" (OuterVolumeSpecName: "kube-api-access-5cnmd") pod "7b4c739c-d87f-478c-aec7-07a49da53d46" (UID: "7b4c739c-d87f-478c-aec7-07a49da53d46"). InnerVolumeSpecName "kube-api-access-5cnmd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:29:52 crc kubenswrapper[4820]: I0203 12:29:52.142633 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7b4c739c-d87f-478c-aec7-07a49da53d46-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod "7b4c739c-d87f-478c-aec7-07a49da53d46" (UID: "7b4c739c-d87f-478c-aec7-07a49da53d46"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:29:52 crc kubenswrapper[4820]: I0203 12:29:52.143128 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b4c739c-d87f-478c-aec7-07a49da53d46-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "7b4c739c-d87f-478c-aec7-07a49da53d46" (UID: "7b4c739c-d87f-478c-aec7-07a49da53d46"). InnerVolumeSpecName "thanos-prometheus-http-client-file". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:29:52 crc kubenswrapper[4820]: I0203 12:29:52.146144 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7b4c739c-d87f-478c-aec7-07a49da53d46-prometheus-metric-storage-rulefiles-1" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-1") pod "7b4c739c-d87f-478c-aec7-07a49da53d46" (UID: "7b4c739c-d87f-478c-aec7-07a49da53d46"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-1". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:29:52 crc kubenswrapper[4820]: I0203 12:29:52.147755 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7b4c739c-d87f-478c-aec7-07a49da53d46-config-out" (OuterVolumeSpecName: "config-out") pod "7b4c739c-d87f-478c-aec7-07a49da53d46" (UID: "7b4c739c-d87f-478c-aec7-07a49da53d46"). InnerVolumeSpecName "config-out". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 12:29:52 crc kubenswrapper[4820]: I0203 12:29:52.147798 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b4c739c-d87f-478c-aec7-07a49da53d46-config" (OuterVolumeSpecName: "config") pod "7b4c739c-d87f-478c-aec7-07a49da53d46" (UID: "7b4c739c-d87f-478c-aec7-07a49da53d46"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:29:52 crc kubenswrapper[4820]: I0203 12:29:52.148376 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b4c739c-d87f-478c-aec7-07a49da53d46-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "7b4c739c-d87f-478c-aec7-07a49da53d46" (UID: "7b4c739c-d87f-478c-aec7-07a49da53d46"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:29:52 crc kubenswrapper[4820]: I0203 12:29:52.243983 4820 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/7b4c739c-d87f-478c-aec7-07a49da53d46-config\") on node \"crc\" DevicePath \"\"" Feb 03 12:29:52 crc kubenswrapper[4820]: I0203 12:29:52.244039 4820 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/7b4c739c-d87f-478c-aec7-07a49da53d46-prometheus-metric-storage-rulefiles-2\") on node \"crc\" DevicePath \"\"" Feb 03 12:29:52 crc kubenswrapper[4820]: I0203 12:29:52.244060 4820 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/7b4c739c-d87f-478c-aec7-07a49da53d46-config-out\") on node \"crc\" DevicePath \"\"" Feb 03 12:29:52 crc kubenswrapper[4820]: I0203 12:29:52.244076 4820 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/7b4c739c-d87f-478c-aec7-07a49da53d46-tls-assets\") on node \"crc\" DevicePath \"\"" Feb 03 12:29:52 crc kubenswrapper[4820]: I0203 12:29:52.244094 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5cnmd\" (UniqueName: \"kubernetes.io/projected/7b4c739c-d87f-478c-aec7-07a49da53d46-kube-api-access-5cnmd\") on node \"crc\" DevicePath \"\"" Feb 03 12:29:52 crc kubenswrapper[4820]: I0203 12:29:52.244109 4820 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/7b4c739c-d87f-478c-aec7-07a49da53d46-prometheus-metric-storage-rulefiles-1\") on node \"crc\" DevicePath \"\"" Feb 03 12:29:52 crc kubenswrapper[4820]: I0203 12:29:52.244125 4820 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/7b4c739c-d87f-478c-aec7-07a49da53d46-prometheus-metric-storage-rulefiles-0\") on node \"crc\" DevicePath \"\"" Feb 03 12:29:52 crc kubenswrapper[4820]: I0203 12:29:52.244145 4820 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: 
\"kubernetes.io/secret/7b4c739c-d87f-478c-aec7-07a49da53d46-thanos-prometheus-http-client-file\") on node \"crc\" DevicePath \"\"" Feb 03 12:29:52 crc kubenswrapper[4820]: I0203 12:29:52.375883 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9d241bdd-b4a8-44a7-af98-0d864047887a" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "7b4c739c-d87f-478c-aec7-07a49da53d46" (UID: "7b4c739c-d87f-478c-aec7-07a49da53d46"). InnerVolumeSpecName "pvc-9d241bdd-b4a8-44a7-af98-0d864047887a". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 03 12:29:52 crc kubenswrapper[4820]: I0203 12:29:52.379389 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b4c739c-d87f-478c-aec7-07a49da53d46-web-config" (OuterVolumeSpecName: "web-config") pod "7b4c739c-d87f-478c-aec7-07a49da53d46" (UID: "7b4c739c-d87f-478c-aec7-07a49da53d46"). InnerVolumeSpecName "web-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:29:52 crc kubenswrapper[4820]: I0203 12:29:52.467126 4820 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-9d241bdd-b4a8-44a7-af98-0d864047887a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9d241bdd-b4a8-44a7-af98-0d864047887a\") on node \"crc\" " Feb 03 12:29:52 crc kubenswrapper[4820]: I0203 12:29:52.467171 4820 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/7b4c739c-d87f-478c-aec7-07a49da53d46-web-config\") on node \"crc\" DevicePath \"\"" Feb 03 12:29:52 crc kubenswrapper[4820]: I0203 12:29:52.493372 4820 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Feb 03 12:29:52 crc kubenswrapper[4820]: I0203 12:29:52.493530 4820 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-9d241bdd-b4a8-44a7-af98-0d864047887a" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9d241bdd-b4a8-44a7-af98-0d864047887a") on node "crc"
Feb 03 12:29:52 crc kubenswrapper[4820]: I0203 12:29:52.568943 4820 reconciler_common.go:293] "Volume detached for volume \"pvc-9d241bdd-b4a8-44a7-af98-0d864047887a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9d241bdd-b4a8-44a7-af98-0d864047887a\") on node \"crc\" DevicePath \"\""
Feb 03 12:29:52 crc kubenswrapper[4820]: I0203 12:29:52.722864 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b946c75cc-vbrpj"]
Feb 03 12:29:52 crc kubenswrapper[4820]: W0203 12:29:52.733218 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod973038d9_ba67_4fbe_8239_ed6e47f3cf90.slice/crio-61492a8d572ac3a64c55587a704cb8d18e9e15a34aac3f50012364068e781124 WatchSource:0}: Error finding container 61492a8d572ac3a64c55587a704cb8d18e9e15a34aac3f50012364068e781124: Status 404 returned error can't find the container with id 61492a8d572ac3a64c55587a704cb8d18e9e15a34aac3f50012364068e781124
Feb 03 12:29:52 crc kubenswrapper[4820]: I0203 12:29:52.811299 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"7b4c739c-d87f-478c-aec7-07a49da53d46","Type":"ContainerDied","Data":"5a62058bb2d742ff3950a86fbea0cfa431b4e4363dce8504c20e35118d6654b3"}
Feb 03 12:29:52 crc kubenswrapper[4820]: I0203 12:29:52.811397 4820 scope.go:117] "RemoveContainer" containerID="be65c3a9461153121494e27523a7bd3e433a2fb7373a387862cdee772f4fe4a5"
Feb 03 12:29:52 crc kubenswrapper[4820]: I0203 12:29:52.811345 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0"
Feb 03 12:29:52 crc kubenswrapper[4820]: I0203 12:29:52.816799 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b946c75cc-vbrpj" event={"ID":"973038d9-ba67-4fbe-8239-ed6e47f3cf90","Type":"ContainerStarted","Data":"61492a8d572ac3a64c55587a704cb8d18e9e15a34aac3f50012364068e781124"}
Feb 03 12:29:52 crc kubenswrapper[4820]: I0203 12:29:52.828514 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"d4eb10ed-a945-4b23-8fb3-62022a90e09f","Type":"ContainerStarted","Data":"1a249585f53a57659ff4079b0db1f4e50f1f084ffa6c54d5eebd229d8a19592f"}
Feb 03 12:29:52 crc kubenswrapper[4820]: I0203 12:29:52.828564 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"d4eb10ed-a945-4b23-8fb3-62022a90e09f","Type":"ContainerStarted","Data":"35022b0a58e1a23d5c3d3cf548e6088771f8d87888f489cd97250f16884a2c99"}
Feb 03 12:29:52 crc kubenswrapper[4820]: E0203 12:29:52.839558 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.50:5001/podified-epoxy-centos9/openstack-watcher-api:watcher_latest\\\"\"" pod="openstack/watcher-db-sync-g8wq4" podUID="b594ebbd-4a60-46ca-92f6-0e4869499849"
Feb 03 12:29:52 crc kubenswrapper[4820]: I0203 12:29:52.859033 4820 scope.go:117] "RemoveContainer" containerID="b19b83985ad7a3072169606b1626b31ec7359eade993fb7b14ca4cd26efba095"
Feb 03 12:29:52 crc kubenswrapper[4820]: I0203 12:29:52.891360 4820 scope.go:117] "RemoveContainer" containerID="5a8edec5e62c8c83f4c0a9e78ca21c0e03b70b98b2306d94b41f019541c2c591"
Feb 03 12:29:52 crc kubenswrapper[4820]: I0203 12:29:52.894707 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Feb 03 12:29:52 crc kubenswrapper[4820]: I0203 12:29:52.916993 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Feb 03 12:29:52 crc kubenswrapper[4820]: I0203 12:29:52.922719 4820 scope.go:117] "RemoveContainer" containerID="a08002938c4907859ccd30a635dec360508222e259a56a5925c02b92bb6c2d7e"
Feb 03 12:29:52 crc kubenswrapper[4820]: I0203 12:29:52.935957 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"]
Feb 03 12:29:52 crc kubenswrapper[4820]: E0203 12:29:52.936537 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b4c739c-d87f-478c-aec7-07a49da53d46" containerName="config-reloader"
Feb 03 12:29:52 crc kubenswrapper[4820]: I0203 12:29:52.936553 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b4c739c-d87f-478c-aec7-07a49da53d46" containerName="config-reloader"
Feb 03 12:29:52 crc kubenswrapper[4820]: E0203 12:29:52.936577 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b4c739c-d87f-478c-aec7-07a49da53d46" containerName="prometheus"
Feb 03 12:29:52 crc kubenswrapper[4820]: I0203 12:29:52.936582 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b4c739c-d87f-478c-aec7-07a49da53d46" containerName="prometheus"
Feb 03 12:29:52 crc kubenswrapper[4820]: E0203 12:29:52.936595 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b4c739c-d87f-478c-aec7-07a49da53d46" containerName="thanos-sidecar"
Feb 03 12:29:52 crc kubenswrapper[4820]: I0203 12:29:52.936601 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b4c739c-d87f-478c-aec7-07a49da53d46" containerName="thanos-sidecar"
Feb 03 12:29:52 crc kubenswrapper[4820]: E0203 12:29:52.936625 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b4c739c-d87f-478c-aec7-07a49da53d46" containerName="init-config-reloader"
Feb 03 12:29:52 crc kubenswrapper[4820]: I0203 12:29:52.936632 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b4c739c-d87f-478c-aec7-07a49da53d46" containerName="init-config-reloader"
Feb 03 12:29:52 crc kubenswrapper[4820]: I0203 12:29:52.936806 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b4c739c-d87f-478c-aec7-07a49da53d46" containerName="thanos-sidecar"
Feb 03 12:29:52 crc kubenswrapper[4820]: I0203 12:29:52.936824 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b4c739c-d87f-478c-aec7-07a49da53d46" containerName="config-reloader"
Feb 03 12:29:52 crc kubenswrapper[4820]: I0203 12:29:52.936845 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b4c739c-d87f-478c-aec7-07a49da53d46" containerName="prometheus"
Feb 03 12:29:52 crc kubenswrapper[4820]: I0203 12:29:52.947141 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Feb 03 12:29:52 crc kubenswrapper[4820]: I0203 12:29:52.947322 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0"
Feb 03 12:29:52 crc kubenswrapper[4820]: I0203 12:29:52.952865 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file"
Feb 03 12:29:52 crc kubenswrapper[4820]: I0203 12:29:52.952983 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1"
Feb 03 12:29:52 crc kubenswrapper[4820]: I0203 12:29:52.953188 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config"
Feb 03 12:29:52 crc kubenswrapper[4820]: I0203 12:29:52.952865 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-metric-storage-prometheus-svc"
Feb 03 12:29:52 crc kubenswrapper[4820]: I0203 12:29:52.953417 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage"
Feb 03 12:29:52 crc kubenswrapper[4820]: I0203 12:29:52.953459 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-7hkds"
Feb 03 12:29:52 crc kubenswrapper[4820]: I0203 12:29:52.953598 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2"
Feb 03 12:29:52 crc kubenswrapper[4820]: I0203 12:29:52.954919 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0"
Feb 03 12:29:52 crc kubenswrapper[4820]: I0203 12:29:52.972065 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0"
Feb 03 12:29:53 crc kubenswrapper[4820]: I0203 12:29:53.082016 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-9d241bdd-b4a8-44a7-af98-0d864047887a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9d241bdd-b4a8-44a7-af98-0d864047887a\") pod \"prometheus-metric-storage-0\" (UID: \"0658b201-7c4e-4d71-ba2d-c2cb5dee1553\") " pod="openstack/prometheus-metric-storage-0"
Feb 03 12:29:53 crc kubenswrapper[4820]: I0203 12:29:53.082382 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/0658b201-7c4e-4d71-ba2d-c2cb5dee1553-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"0658b201-7c4e-4d71-ba2d-c2cb5dee1553\") " pod="openstack/prometheus-metric-storage-0"
Feb 03 12:29:53 crc kubenswrapper[4820]: I0203 12:29:53.082430 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/0658b201-7c4e-4d71-ba2d-c2cb5dee1553-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"0658b201-7c4e-4d71-ba2d-c2cb5dee1553\") " pod="openstack/prometheus-metric-storage-0"
Feb 03 12:29:53 crc kubenswrapper[4820]: I0203 12:29:53.082459 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/0658b201-7c4e-4d71-ba2d-c2cb5dee1553-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"0658b201-7c4e-4d71-ba2d-c2cb5dee1553\") " pod="openstack/prometheus-metric-storage-0"
Feb 03 12:29:53 crc kubenswrapper[4820]: I0203 12:29:53.082476 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/0658b201-7c4e-4d71-ba2d-c2cb5dee1553-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"0658b201-7c4e-4d71-ba2d-c2cb5dee1553\") " pod="openstack/prometheus-metric-storage-0"
Feb 03 12:29:53 crc kubenswrapper[4820]: I0203 12:29:53.082501 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/0658b201-7c4e-4d71-ba2d-c2cb5dee1553-config\") pod \"prometheus-metric-storage-0\" (UID: \"0658b201-7c4e-4d71-ba2d-c2cb5dee1553\") " pod="openstack/prometheus-metric-storage-0"
Feb 03 12:29:53 crc kubenswrapper[4820]: I0203 12:29:53.082532 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/0658b201-7c4e-4d71-ba2d-c2cb5dee1553-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"0658b201-7c4e-4d71-ba2d-c2cb5dee1553\") " pod="openstack/prometheus-metric-storage-0"
Feb 03 12:29:53 crc kubenswrapper[4820]: I0203 12:29:53.082562 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/0658b201-7c4e-4d71-ba2d-c2cb5dee1553-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"0658b201-7c4e-4d71-ba2d-c2cb5dee1553\") " pod="openstack/prometheus-metric-storage-0"
Feb 03 12:29:53 crc kubenswrapper[4820]: I0203 12:29:53.082582 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/0658b201-7c4e-4d71-ba2d-c2cb5dee1553-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"0658b201-7c4e-4d71-ba2d-c2cb5dee1553\") " pod="openstack/prometheus-metric-storage-0"
Feb 03 12:29:53 crc kubenswrapper[4820]: I0203 12:29:53.082614 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0658b201-7c4e-4d71-ba2d-c2cb5dee1553-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"0658b201-7c4e-4d71-ba2d-c2cb5dee1553\") " pod="openstack/prometheus-metric-storage-0"
Feb 03 12:29:53 crc kubenswrapper[4820]: I0203 12:29:53.082645 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/0658b201-7c4e-4d71-ba2d-c2cb5dee1553-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"0658b201-7c4e-4d71-ba2d-c2cb5dee1553\") " pod="openstack/prometheus-metric-storage-0"
Feb 03 12:29:53 crc kubenswrapper[4820]: I0203 12:29:53.082681 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vgftf\" (UniqueName: \"kubernetes.io/projected/0658b201-7c4e-4d71-ba2d-c2cb5dee1553-kube-api-access-vgftf\") pod \"prometheus-metric-storage-0\" (UID: \"0658b201-7c4e-4d71-ba2d-c2cb5dee1553\") " pod="openstack/prometheus-metric-storage-0"
Feb 03 12:29:53 crc kubenswrapper[4820]: I0203 12:29:53.082712 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/0658b201-7c4e-4d71-ba2d-c2cb5dee1553-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"0658b201-7c4e-4d71-ba2d-c2cb5dee1553\") " pod="openstack/prometheus-metric-storage-0"
Feb 03 12:29:53 crc kubenswrapper[4820]: I0203 12:29:53.244797 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/0658b201-7c4e-4d71-ba2d-c2cb5dee1553-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"0658b201-7c4e-4d71-ba2d-c2cb5dee1553\") " pod="openstack/prometheus-metric-storage-0"
Feb 03 12:29:53 crc kubenswrapper[4820]: I0203 12:29:53.244907 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/0658b201-7c4e-4d71-ba2d-c2cb5dee1553-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"0658b201-7c4e-4d71-ba2d-c2cb5dee1553\") " pod="openstack/prometheus-metric-storage-0"
Feb 03 12:29:53 crc kubenswrapper[4820]: I0203 12:29:53.244945 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/0658b201-7c4e-4d71-ba2d-c2cb5dee1553-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"0658b201-7c4e-4d71-ba2d-c2cb5dee1553\") " pod="openstack/prometheus-metric-storage-0"
Feb 03 12:29:53 crc kubenswrapper[4820]: I0203 12:29:53.244969 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/0658b201-7c4e-4d71-ba2d-c2cb5dee1553-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"0658b201-7c4e-4d71-ba2d-c2cb5dee1553\") " pod="openstack/prometheus-metric-storage-0"
Feb 03 12:29:53 crc kubenswrapper[4820]: I0203 12:29:53.245003 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/0658b201-7c4e-4d71-ba2d-c2cb5dee1553-config\") pod \"prometheus-metric-storage-0\" (UID: \"0658b201-7c4e-4d71-ba2d-c2cb5dee1553\") " pod="openstack/prometheus-metric-storage-0"
Feb 03 12:29:53 crc kubenswrapper[4820]: I0203 12:29:53.245042 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/0658b201-7c4e-4d71-ba2d-c2cb5dee1553-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"0658b201-7c4e-4d71-ba2d-c2cb5dee1553\") " pod="openstack/prometheus-metric-storage-0"
Feb 03 12:29:53 crc kubenswrapper[4820]: I0203 12:29:53.245086 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/0658b201-7c4e-4d71-ba2d-c2cb5dee1553-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"0658b201-7c4e-4d71-ba2d-c2cb5dee1553\") " pod="openstack/prometheus-metric-storage-0"
Feb 03 12:29:53 crc kubenswrapper[4820]: I0203 12:29:53.245109 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/0658b201-7c4e-4d71-ba2d-c2cb5dee1553-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"0658b201-7c4e-4d71-ba2d-c2cb5dee1553\") " pod="openstack/prometheus-metric-storage-0"
Feb 03 12:29:53 crc kubenswrapper[4820]: I0203 12:29:53.245142 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0658b201-7c4e-4d71-ba2d-c2cb5dee1553-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"0658b201-7c4e-4d71-ba2d-c2cb5dee1553\") " pod="openstack/prometheus-metric-storage-0"
Feb 03 12:29:53 crc kubenswrapper[4820]: I0203 12:29:53.245175 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/0658b201-7c4e-4d71-ba2d-c2cb5dee1553-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"0658b201-7c4e-4d71-ba2d-c2cb5dee1553\") " pod="openstack/prometheus-metric-storage-0"
Feb 03 12:29:53 crc kubenswrapper[4820]: I0203 12:29:53.245223 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vgftf\" (UniqueName: \"kubernetes.io/projected/0658b201-7c4e-4d71-ba2d-c2cb5dee1553-kube-api-access-vgftf\") pod \"prometheus-metric-storage-0\" (UID: \"0658b201-7c4e-4d71-ba2d-c2cb5dee1553\") " pod="openstack/prometheus-metric-storage-0"
Feb 03 12:29:53 crc kubenswrapper[4820]: I0203 12:29:53.245248 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/0658b201-7c4e-4d71-ba2d-c2cb5dee1553-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"0658b201-7c4e-4d71-ba2d-c2cb5dee1553\") " pod="openstack/prometheus-metric-storage-0"
Feb 03 12:29:53 crc kubenswrapper[4820]: I0203 12:29:53.245290 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-9d241bdd-b4a8-44a7-af98-0d864047887a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9d241bdd-b4a8-44a7-af98-0d864047887a\") pod \"prometheus-metric-storage-0\" (UID: \"0658b201-7c4e-4d71-ba2d-c2cb5dee1553\") " pod="openstack/prometheus-metric-storage-0"
Feb 03 12:29:53 crc kubenswrapper[4820]: I0203 12:29:53.246446 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/0658b201-7c4e-4d71-ba2d-c2cb5dee1553-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"0658b201-7c4e-4d71-ba2d-c2cb5dee1553\") " pod="openstack/prometheus-metric-storage-0"
Feb 03 12:29:53 crc kubenswrapper[4820]: I0203 12:29:53.251924 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/0658b201-7c4e-4d71-ba2d-c2cb5dee1553-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"0658b201-7c4e-4d71-ba2d-c2cb5dee1553\") " pod="openstack/prometheus-metric-storage-0"
Feb 03 12:29:53 crc kubenswrapper[4820]: I0203 12:29:53.255195 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/0658b201-7c4e-4d71-ba2d-c2cb5dee1553-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"0658b201-7c4e-4d71-ba2d-c2cb5dee1553\") " pod="openstack/prometheus-metric-storage-0"
Feb 03 12:29:53 crc kubenswrapper[4820]: I0203 12:29:53.255830 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/0658b201-7c4e-4d71-ba2d-c2cb5dee1553-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"0658b201-7c4e-4d71-ba2d-c2cb5dee1553\") " pod="openstack/prometheus-metric-storage-0"
Feb 03 12:29:53 crc kubenswrapper[4820]: I0203 12:29:53.255931 4820 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 03 12:29:53 crc kubenswrapper[4820]: I0203 12:29:53.255956 4820 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-9d241bdd-b4a8-44a7-af98-0d864047887a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9d241bdd-b4a8-44a7-af98-0d864047887a\") pod \"prometheus-metric-storage-0\" (UID: \"0658b201-7c4e-4d71-ba2d-c2cb5dee1553\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/f3f5a1a6665956e69b060824525b6e14f682a7b73f5e11dfb7e9e70ac872e663/globalmount\"" pod="openstack/prometheus-metric-storage-0" Feb 03 12:29:53 crc kubenswrapper[4820]: I0203 12:29:53.259143 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/0658b201-7c4e-4d71-ba2d-c2cb5dee1553-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"0658b201-7c4e-4d71-ba2d-c2cb5dee1553\") " pod="openstack/prometheus-metric-storage-0" Feb 03 12:29:53 crc kubenswrapper[4820]: I0203 12:29:53.269828 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/0658b201-7c4e-4d71-ba2d-c2cb5dee1553-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"0658b201-7c4e-4d71-ba2d-c2cb5dee1553\") " pod="openstack/prometheus-metric-storage-0" Feb 03 12:29:53 crc kubenswrapper[4820]: I0203 12:29:53.278159 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0658b201-7c4e-4d71-ba2d-c2cb5dee1553-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"0658b201-7c4e-4d71-ba2d-c2cb5dee1553\") " pod="openstack/prometheus-metric-storage-0" Feb 03 12:29:53 crc kubenswrapper[4820]: I0203 12:29:53.278618 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/0658b201-7c4e-4d71-ba2d-c2cb5dee1553-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"0658b201-7c4e-4d71-ba2d-c2cb5dee1553\") " pod="openstack/prometheus-metric-storage-0" Feb 03 12:29:53 crc kubenswrapper[4820]: I0203 12:29:53.280709 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/0658b201-7c4e-4d71-ba2d-c2cb5dee1553-config\") pod \"prometheus-metric-storage-0\" (UID: \"0658b201-7c4e-4d71-ba2d-c2cb5dee1553\") " pod="openstack/prometheus-metric-storage-0" Feb 03 12:29:53 crc kubenswrapper[4820]: I0203 12:29:53.281454 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7b4c739c-d87f-478c-aec7-07a49da53d46" path="/var/lib/kubelet/pods/7b4c739c-d87f-478c-aec7-07a49da53d46/volumes" Feb 03 12:29:53 crc kubenswrapper[4820]: I0203 12:29:53.304864 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/0658b201-7c4e-4d71-ba2d-c2cb5dee1553-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"0658b201-7c4e-4d71-ba2d-c2cb5dee1553\") " pod="openstack/prometheus-metric-storage-0" Feb 03 12:29:53 crc kubenswrapper[4820]: I0203 12:29:53.323718 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vgftf\" (UniqueName: 
\"kubernetes.io/projected/0658b201-7c4e-4d71-ba2d-c2cb5dee1553-kube-api-access-vgftf\") pod \"prometheus-metric-storage-0\" (UID: \"0658b201-7c4e-4d71-ba2d-c2cb5dee1553\") " pod="openstack/prometheus-metric-storage-0" Feb 03 12:29:53 crc kubenswrapper[4820]: I0203 12:29:53.338257 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/0658b201-7c4e-4d71-ba2d-c2cb5dee1553-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"0658b201-7c4e-4d71-ba2d-c2cb5dee1553\") " pod="openstack/prometheus-metric-storage-0" Feb 03 12:29:53 crc kubenswrapper[4820]: I0203 12:29:53.584990 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-9d241bdd-b4a8-44a7-af98-0d864047887a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9d241bdd-b4a8-44a7-af98-0d864047887a\") pod \"prometheus-metric-storage-0\" (UID: \"0658b201-7c4e-4d71-ba2d-c2cb5dee1553\") " pod="openstack/prometheus-metric-storage-0" Feb 03 12:29:54 crc kubenswrapper[4820]: I0203 12:29:54.111760 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 03 12:29:54 crc kubenswrapper[4820]: I0203 12:29:54.149203 4820 generic.go:334] "Generic (PLEG): container finished" podID="973038d9-ba67-4fbe-8239-ed6e47f3cf90" containerID="3d55912c4c392770ea02f041b0ae35fb92606e43d8dffbbf265234e83ae8f744" exitCode=0 Feb 03 12:29:54 crc kubenswrapper[4820]: I0203 12:29:54.149336 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b946c75cc-vbrpj" event={"ID":"973038d9-ba67-4fbe-8239-ed6e47f3cf90","Type":"ContainerDied","Data":"3d55912c4c392770ea02f041b0ae35fb92606e43d8dffbbf265234e83ae8f744"} Feb 03 12:29:54 crc kubenswrapper[4820]: I0203 12:29:54.216268 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"d4eb10ed-a945-4b23-8fb3-62022a90e09f","Type":"ContainerStarted","Data":"8b9e2482ebf5184952f84564eaf8b59b79ea626c612960fc168810dcb071b186"} Feb 03 12:29:54 crc kubenswrapper[4820]: I0203 12:29:54.976705 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 03 12:29:54 crc kubenswrapper[4820]: W0203 12:29:54.978751 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0658b201_7c4e_4d71_ba2d_c2cb5dee1553.slice/crio-018c7f8d49035e2389fbe253cc45d03b2e94d45ec847c8e01a5bbf78491681d1 WatchSource:0}: Error finding container 018c7f8d49035e2389fbe253cc45d03b2e94d45ec847c8e01a5bbf78491681d1: Status 404 returned error can't find the container with id 018c7f8d49035e2389fbe253cc45d03b2e94d45ec847c8e01a5bbf78491681d1 Feb 03 12:29:55 crc kubenswrapper[4820]: I0203 12:29:55.046015 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="7b4c739c-d87f-478c-aec7-07a49da53d46" containerName="prometheus" probeResult="failure" output="Get \"http://10.217.0.112:9090/-/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 03 12:29:55 crc kubenswrapper[4820]: I0203 12:29:55.234425 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"d4eb10ed-a945-4b23-8fb3-62022a90e09f","Type":"ContainerStarted","Data":"08f951c8c74f0de58d7c6143dc5537b6a1883c78a0b7e155cbf6b430cb3246be"} 
Feb 03 12:29:55 crc kubenswrapper[4820]: I0203 12:29:55.235871 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"0658b201-7c4e-4d71-ba2d-c2cb5dee1553","Type":"ContainerStarted","Data":"018c7f8d49035e2389fbe253cc45d03b2e94d45ec847c8e01a5bbf78491681d1"} Feb 03 12:29:59 crc kubenswrapper[4820]: I0203 12:29:59.371579 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"0658b201-7c4e-4d71-ba2d-c2cb5dee1553","Type":"ContainerStarted","Data":"203e10bb8ec4cd8d38391a67e6322fed75f528ffc84047efd3a54eb07c57c7ab"} Feb 03 12:30:00 crc kubenswrapper[4820]: I0203 12:30:00.242158 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29502030-q2tlf"] Feb 03 12:30:00 crc kubenswrapper[4820]: I0203 12:30:00.244685 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29502030-q2tlf" Feb 03 12:30:00 crc kubenswrapper[4820]: I0203 12:30:00.254252 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 03 12:30:00 crc kubenswrapper[4820]: I0203 12:30:00.254290 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 03 12:30:00 crc kubenswrapper[4820]: I0203 12:30:00.254981 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29502030-q2tlf"] Feb 03 12:30:00 crc kubenswrapper[4820]: I0203 12:30:00.702973 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/daf546c3-f063-47ae-8ab1-d9ee325ebae9-config-volume\") pod \"collect-profiles-29502030-q2tlf\" (UID: \"daf546c3-f063-47ae-8ab1-d9ee325ebae9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29502030-q2tlf" Feb 03 12:30:00 crc kubenswrapper[4820]: I0203 12:30:00.703292 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/daf546c3-f063-47ae-8ab1-d9ee325ebae9-secret-volume\") pod \"collect-profiles-29502030-q2tlf\" (UID: \"daf546c3-f063-47ae-8ab1-d9ee325ebae9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29502030-q2tlf" Feb 03 12:30:00 crc kubenswrapper[4820]: I0203 12:30:00.703347 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ngvb8\" (UniqueName: \"kubernetes.io/projected/daf546c3-f063-47ae-8ab1-d9ee325ebae9-kube-api-access-ngvb8\") pod \"collect-profiles-29502030-q2tlf\" (UID: \"daf546c3-f063-47ae-8ab1-d9ee325ebae9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29502030-q2tlf" Feb 03 12:30:00 crc kubenswrapper[4820]: I0203 12:30:00.805308 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/daf546c3-f063-47ae-8ab1-d9ee325ebae9-secret-volume\") pod \"collect-profiles-29502030-q2tlf\" (UID: \"daf546c3-f063-47ae-8ab1-d9ee325ebae9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29502030-q2tlf" Feb 03 12:30:00 crc kubenswrapper[4820]: I0203 12:30:00.805361 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ngvb8\" 
(UniqueName: \"kubernetes.io/projected/daf546c3-f063-47ae-8ab1-d9ee325ebae9-kube-api-access-ngvb8\") pod \"collect-profiles-29502030-q2tlf\" (UID: \"daf546c3-f063-47ae-8ab1-d9ee325ebae9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29502030-q2tlf" Feb 03 12:30:00 crc kubenswrapper[4820]: I0203 12:30:00.805400 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/daf546c3-f063-47ae-8ab1-d9ee325ebae9-config-volume\") pod \"collect-profiles-29502030-q2tlf\" (UID: \"daf546c3-f063-47ae-8ab1-d9ee325ebae9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29502030-q2tlf" Feb 03 12:30:00 crc kubenswrapper[4820]: I0203 12:30:00.806660 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/daf546c3-f063-47ae-8ab1-d9ee325ebae9-config-volume\") pod \"collect-profiles-29502030-q2tlf\" (UID: \"daf546c3-f063-47ae-8ab1-d9ee325ebae9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29502030-q2tlf" Feb 03 12:30:00 crc kubenswrapper[4820]: I0203 12:30:00.814495 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/daf546c3-f063-47ae-8ab1-d9ee325ebae9-secret-volume\") pod \"collect-profiles-29502030-q2tlf\" (UID: \"daf546c3-f063-47ae-8ab1-d9ee325ebae9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29502030-q2tlf" Feb 03 12:30:00 crc kubenswrapper[4820]: I0203 12:30:00.825351 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ngvb8\" (UniqueName: \"kubernetes.io/projected/daf546c3-f063-47ae-8ab1-d9ee325ebae9-kube-api-access-ngvb8\") pod \"collect-profiles-29502030-q2tlf\" (UID: \"daf546c3-f063-47ae-8ab1-d9ee325ebae9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29502030-q2tlf" Feb 03 12:30:01 crc kubenswrapper[4820]: I0203 12:30:01.030815 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29502030-q2tlf" Feb 03 12:30:01 crc kubenswrapper[4820]: I0203 12:30:01.650109 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29502030-q2tlf"] Feb 03 12:30:01 crc kubenswrapper[4820]: W0203 12:30:01.721276 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddaf546c3_f063_47ae_8ab1_d9ee325ebae9.slice/crio-33ed529e4435e50d2fee6d9df5da4419e1c99290636aeb3e2fa594a9d3f98b8b WatchSource:0}: Error finding container 33ed529e4435e50d2fee6d9df5da4419e1c99290636aeb3e2fa594a9d3f98b8b: Status 404 returned error can't find the container with id 33ed529e4435e50d2fee6d9df5da4419e1c99290636aeb3e2fa594a9d3f98b8b Feb 03 12:30:01 crc kubenswrapper[4820]: I0203 12:30:01.732664 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-gvhr2" event={"ID":"66f26be7-edcc-4d55-b7e4-d5f4d16cf58e","Type":"ContainerStarted","Data":"4d846a574f926a8cb91628cc3125d07e9c8f1a3178d0302150efc81f28ba7de0"} Feb 03 12:30:01 crc kubenswrapper[4820]: I0203 12:30:01.744129 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b946c75cc-vbrpj" event={"ID":"973038d9-ba67-4fbe-8239-ed6e47f3cf90","Type":"ContainerStarted","Data":"7919a5c460982351bebc77e30c190b4d45bc270b95b9038a47b7ceefcc038146"} Feb 03 12:30:01 crc kubenswrapper[4820]: I0203 12:30:01.744814 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5b946c75cc-vbrpj" Feb 03 12:30:01 crc kubenswrapper[4820]: I0203 12:30:01.761766 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-gvhr2" podStartSLOduration=5.607056095 podStartE2EDuration="46.761616804s" podCreationTimestamp="2026-02-03 12:29:15 +0000 UTC" firstStartedPulling="2026-02-03 12:29:19.842368135 +0000 UTC m=+1477.365443999" lastFinishedPulling="2026-02-03 12:30:00.996928844 +0000 UTC m=+1518.520004708" observedRunningTime="2026-02-03 12:30:01.75073636 +0000 UTC m=+1519.273812244" watchObservedRunningTime="2026-02-03 12:30:01.761616804 +0000 UTC m=+1519.284692668" Feb 03 12:30:02 crc kubenswrapper[4820]: I0203 12:30:02.067728 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5b946c75cc-vbrpj" podStartSLOduration=19.06770412 podStartE2EDuration="19.06770412s" podCreationTimestamp="2026-02-03 12:29:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:30:01.777085322 +0000 UTC m=+1519.300161206" watchObservedRunningTime="2026-02-03 12:30:02.06770412 +0000 UTC m=+1519.590779974" Feb 03 12:30:02 crc kubenswrapper[4820]: I0203 12:30:02.766781 4820 generic.go:334] "Generic (PLEG): container finished" podID="daf546c3-f063-47ae-8ab1-d9ee325ebae9" containerID="b204e150f2c7bc3f9c89ce24e71e2e3ccf127e220c69d16114ac279fd5ba17e5" exitCode=0 Feb 03 12:30:02 crc kubenswrapper[4820]: I0203 12:30:02.767096 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29502030-q2tlf" event={"ID":"daf546c3-f063-47ae-8ab1-d9ee325ebae9","Type":"ContainerDied","Data":"b204e150f2c7bc3f9c89ce24e71e2e3ccf127e220c69d16114ac279fd5ba17e5"} Feb 03 12:30:02 crc kubenswrapper[4820]: I0203 12:30:02.767231 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29502030-q2tlf" event={"ID":"daf546c3-f063-47ae-8ab1-d9ee325ebae9","Type":"ContainerStarted","Data":"33ed529e4435e50d2fee6d9df5da4419e1c99290636aeb3e2fa594a9d3f98b8b"} Feb 03 12:30:02 crc kubenswrapper[4820]: I0203 12:30:02.776693 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"d4eb10ed-a945-4b23-8fb3-62022a90e09f","Type":"ContainerStarted","Data":"23a6e554f901405f6e1439cfc76f9c3b41e80fdf81e45937738efff7379516cd"} Feb 03 12:30:02 crc kubenswrapper[4820]: I0203 12:30:02.776750 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"d4eb10ed-a945-4b23-8fb3-62022a90e09f","Type":"ContainerStarted","Data":"61873e69f9a8cd5420fe11296058b5536e960451da92b2da27590234aa6d1237"} Feb 03 12:30:03 crc kubenswrapper[4820]: I0203 12:30:03.968399 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"d4eb10ed-a945-4b23-8fb3-62022a90e09f","Type":"ContainerStarted","Data":"6bbafcb409bf50b883616a86c6ddf90a7978af3f8941937556ed89ce9bff40a2"} Feb 03 12:30:03 crc kubenswrapper[4820]: I0203 12:30:03.968710 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"d4eb10ed-a945-4b23-8fb3-62022a90e09f","Type":"ContainerStarted","Data":"d7043b06b1f06cc45fe2a362fdda838f7f948341b3ce858f01d1e198bb86ed93"} Feb 03 12:30:03 crc kubenswrapper[4820]: I0203 12:30:03.968741 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"d4eb10ed-a945-4b23-8fb3-62022a90e09f","Type":"ContainerStarted","Data":"444a80f868383d7dce08c7170bb2697599c4c65a48f7d1408841e3e206b0ce99"} Feb 03 12:30:04 crc kubenswrapper[4820]: I0203 12:30:04.336258 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29502030-q2tlf" Feb 03 12:30:04 crc kubenswrapper[4820]: I0203 12:30:04.526819 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/daf546c3-f063-47ae-8ab1-d9ee325ebae9-config-volume\") pod \"daf546c3-f063-47ae-8ab1-d9ee325ebae9\" (UID: \"daf546c3-f063-47ae-8ab1-d9ee325ebae9\") " Feb 03 12:30:04 crc kubenswrapper[4820]: I0203 12:30:04.526968 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/daf546c3-f063-47ae-8ab1-d9ee325ebae9-secret-volume\") pod \"daf546c3-f063-47ae-8ab1-d9ee325ebae9\" (UID: \"daf546c3-f063-47ae-8ab1-d9ee325ebae9\") " Feb 03 12:30:04 crc kubenswrapper[4820]: I0203 12:30:04.527073 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvb8\" (UniqueName: \"kubernetes.io/projected/daf546c3-f063-47ae-8ab1-d9ee325ebae9-kube-api-access-ngvb8\") pod \"daf546c3-f063-47ae-8ab1-d9ee325ebae9\" (UID: \"daf546c3-f063-47ae-8ab1-d9ee325ebae9\") " Feb 03 12:30:04 crc kubenswrapper[4820]: I0203 12:30:04.527511 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/daf546c3-f063-47ae-8ab1-d9ee325ebae9-config-volume" (OuterVolumeSpecName: "config-volume") pod "daf546c3-f063-47ae-8ab1-d9ee325ebae9" (UID: "daf546c3-f063-47ae-8ab1-d9ee325ebae9"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:30:04 crc kubenswrapper[4820]: I0203 12:30:04.527636 4820 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/daf546c3-f063-47ae-8ab1-d9ee325ebae9-config-volume\") on node \"crc\" DevicePath \"\"" Feb 03 12:30:04 crc kubenswrapper[4820]: I0203 12:30:04.533538 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/daf546c3-f063-47ae-8ab1-d9ee325ebae9-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "daf546c3-f063-47ae-8ab1-d9ee325ebae9" (UID: "daf546c3-f063-47ae-8ab1-d9ee325ebae9"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:30:04 crc kubenswrapper[4820]: I0203 12:30:04.533798 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/daf546c3-f063-47ae-8ab1-d9ee325ebae9-kube-api-access-ngvb8" (OuterVolumeSpecName: "kube-api-access-ngvb8") pod "daf546c3-f063-47ae-8ab1-d9ee325ebae9" (UID: "daf546c3-f063-47ae-8ab1-d9ee325ebae9"). InnerVolumeSpecName "kube-api-access-ngvb8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:30:04 crc kubenswrapper[4820]: I0203 12:30:04.668758 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvb8\" (UniqueName: \"kubernetes.io/projected/daf546c3-f063-47ae-8ab1-d9ee325ebae9-kube-api-access-ngvb8\") on node \"crc\" DevicePath \"\"" Feb 03 12:30:04 crc kubenswrapper[4820]: I0203 12:30:04.668804 4820 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/daf546c3-f063-47ae-8ab1-d9ee325ebae9-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 03 12:30:04 crc kubenswrapper[4820]: I0203 12:30:04.979670 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29502030-q2tlf" Feb 03 12:30:04 crc kubenswrapper[4820]: I0203 12:30:04.979653 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29502030-q2tlf" event={"ID":"daf546c3-f063-47ae-8ab1-d9ee325ebae9","Type":"ContainerDied","Data":"33ed529e4435e50d2fee6d9df5da4419e1c99290636aeb3e2fa594a9d3f98b8b"} Feb 03 12:30:04 crc kubenswrapper[4820]: I0203 12:30:04.980147 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="33ed529e4435e50d2fee6d9df5da4419e1c99290636aeb3e2fa594a9d3f98b8b" Feb 03 12:30:04 crc kubenswrapper[4820]: I0203 12:30:04.989408 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"d4eb10ed-a945-4b23-8fb3-62022a90e09f","Type":"ContainerStarted","Data":"1cd4f2d465f48671903c51f1db4501b7e7310394db31d41de2e8b23f2ef4d470"} Feb 03 12:30:04 crc kubenswrapper[4820]: I0203 12:30:04.989457 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"d4eb10ed-a945-4b23-8fb3-62022a90e09f","Type":"ContainerStarted","Data":"ba4da20bde1ef2aa2a53bcad29971b68aae775c2a3fc17563bb5516dc90896c2"} Feb 03 12:30:05 crc kubenswrapper[4820]: I0203 12:30:05.084182 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=38.768161297 podStartE2EDuration="1m25.084155946s" podCreationTimestamp="2026-02-03 12:28:40 +0000 UTC" firstStartedPulling="2026-02-03 12:29:15.481954796 +0000 UTC m=+1473.005030660" lastFinishedPulling="2026-02-03 12:30:01.797949445 +0000 UTC m=+1519.321025309" observedRunningTime="2026-02-03 12:30:05.036706895 +0000 UTC m=+1522.559782769" watchObservedRunningTime="2026-02-03 12:30:05.084155946 +0000 UTC m=+1522.607231810" Feb 03 12:30:05 crc kubenswrapper[4820]: I0203 12:30:05.610265 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b946c75cc-vbrpj"] Feb 03 12:30:05 crc kubenswrapper[4820]: I0203 12:30:05.610498 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5b946c75cc-vbrpj" podUID="973038d9-ba67-4fbe-8239-ed6e47f3cf90" containerName="dnsmasq-dns" containerID="cri-o://7919a5c460982351bebc77e30c190b4d45bc270b95b9038a47b7ceefcc038146" gracePeriod=10 Feb 03 12:30:05 crc kubenswrapper[4820]: I0203 12:30:05.611762 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5b946c75cc-vbrpj" Feb 03 12:30:05 crc kubenswrapper[4820]: I0203 12:30:05.644308 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-8bv2r"] Feb 03 12:30:05 crc kubenswrapper[4820]: E0203 12:30:05.644767 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="daf546c3-f063-47ae-8ab1-d9ee325ebae9" containerName="collect-profiles" Feb 03 12:30:05 crc kubenswrapper[4820]: I0203 12:30:05.644785 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="daf546c3-f063-47ae-8ab1-d9ee325ebae9" containerName="collect-profiles" Feb 03 12:30:05 crc kubenswrapper[4820]: I0203 12:30:05.645064 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="daf546c3-f063-47ae-8ab1-d9ee325ebae9" containerName="collect-profiles" Feb 03 12:30:05 crc kubenswrapper[4820]: I0203 12:30:05.646431 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-74f6bcbc87-8bv2r" Feb 03 12:30:05 crc kubenswrapper[4820]: I0203 12:30:05.650647 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Feb 03 12:30:05 crc kubenswrapper[4820]: I0203 12:30:05.654360 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2724e07a-0753-44be-93f3-4ecc9696f686-ovsdbserver-sb\") pod \"dnsmasq-dns-74f6bcbc87-8bv2r\" (UID: \"2724e07a-0753-44be-93f3-4ecc9696f686\") " pod="openstack/dnsmasq-dns-74f6bcbc87-8bv2r" Feb 03 12:30:05 crc kubenswrapper[4820]: I0203 12:30:05.654417 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2724e07a-0753-44be-93f3-4ecc9696f686-dns-svc\") pod \"dnsmasq-dns-74f6bcbc87-8bv2r\" (UID: \"2724e07a-0753-44be-93f3-4ecc9696f686\") " pod="openstack/dnsmasq-dns-74f6bcbc87-8bv2r" Feb 03 12:30:05 crc kubenswrapper[4820]: I0203 12:30:05.654513 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2724e07a-0753-44be-93f3-4ecc9696f686-config\") pod \"dnsmasq-dns-74f6bcbc87-8bv2r\" (UID: \"2724e07a-0753-44be-93f3-4ecc9696f686\") " pod="openstack/dnsmasq-dns-74f6bcbc87-8bv2r" Feb 03 12:30:05 crc kubenswrapper[4820]: I0203 12:30:05.654639 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jqmm\" (UniqueName: \"kubernetes.io/projected/2724e07a-0753-44be-93f3-4ecc9696f686-kube-api-access-6jqmm\") pod \"dnsmasq-dns-74f6bcbc87-8bv2r\" (UID: \"2724e07a-0753-44be-93f3-4ecc9696f686\") " pod="openstack/dnsmasq-dns-74f6bcbc87-8bv2r" Feb 03 12:30:05 crc kubenswrapper[4820]: I0203 12:30:05.654673 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2724e07a-0753-44be-93f3-4ecc9696f686-dns-swift-storage-0\") pod \"dnsmasq-dns-74f6bcbc87-8bv2r\" (UID: \"2724e07a-0753-44be-93f3-4ecc9696f686\") " pod="openstack/dnsmasq-dns-74f6bcbc87-8bv2r" Feb 03 12:30:05 crc kubenswrapper[4820]: I0203 12:30:05.654719 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2724e07a-0753-44be-93f3-4ecc9696f686-ovsdbserver-nb\") pod \"dnsmasq-dns-74f6bcbc87-8bv2r\" (UID: \"2724e07a-0753-44be-93f3-4ecc9696f686\") " pod="openstack/dnsmasq-dns-74f6bcbc87-8bv2r" Feb 03 12:30:05 crc kubenswrapper[4820]: I0203 12:30:05.678113 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-8bv2r"] Feb 03 12:30:05 crc kubenswrapper[4820]: I0203 12:30:05.773420 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2724e07a-0753-44be-93f3-4ecc9696f686-ovsdbserver-sb\") pod \"dnsmasq-dns-74f6bcbc87-8bv2r\" (UID: \"2724e07a-0753-44be-93f3-4ecc9696f686\") " pod="openstack/dnsmasq-dns-74f6bcbc87-8bv2r" Feb 03 12:30:05 crc kubenswrapper[4820]: I0203 12:30:05.773484 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2724e07a-0753-44be-93f3-4ecc9696f686-dns-svc\") pod \"dnsmasq-dns-74f6bcbc87-8bv2r\" (UID: 
\"2724e07a-0753-44be-93f3-4ecc9696f686\") " pod="openstack/dnsmasq-dns-74f6bcbc87-8bv2r" Feb 03 12:30:05 crc kubenswrapper[4820]: I0203 12:30:05.773533 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2724e07a-0753-44be-93f3-4ecc9696f686-config\") pod \"dnsmasq-dns-74f6bcbc87-8bv2r\" (UID: \"2724e07a-0753-44be-93f3-4ecc9696f686\") " pod="openstack/dnsmasq-dns-74f6bcbc87-8bv2r" Feb 03 12:30:05 crc kubenswrapper[4820]: I0203 12:30:05.773602 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6jqmm\" (UniqueName: \"kubernetes.io/projected/2724e07a-0753-44be-93f3-4ecc9696f686-kube-api-access-6jqmm\") pod \"dnsmasq-dns-74f6bcbc87-8bv2r\" (UID: \"2724e07a-0753-44be-93f3-4ecc9696f686\") " pod="openstack/dnsmasq-dns-74f6bcbc87-8bv2r" Feb 03 12:30:05 crc kubenswrapper[4820]: I0203 12:30:05.773635 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2724e07a-0753-44be-93f3-4ecc9696f686-dns-swift-storage-0\") pod \"dnsmasq-dns-74f6bcbc87-8bv2r\" (UID: \"2724e07a-0753-44be-93f3-4ecc9696f686\") " pod="openstack/dnsmasq-dns-74f6bcbc87-8bv2r" Feb 03 12:30:05 crc kubenswrapper[4820]: I0203 12:30:05.773664 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2724e07a-0753-44be-93f3-4ecc9696f686-ovsdbserver-nb\") pod \"dnsmasq-dns-74f6bcbc87-8bv2r\" (UID: \"2724e07a-0753-44be-93f3-4ecc9696f686\") " pod="openstack/dnsmasq-dns-74f6bcbc87-8bv2r" Feb 03 12:30:05 crc kubenswrapper[4820]: I0203 12:30:05.774703 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2724e07a-0753-44be-93f3-4ecc9696f686-ovsdbserver-sb\") pod \"dnsmasq-dns-74f6bcbc87-8bv2r\" (UID: \"2724e07a-0753-44be-93f3-4ecc9696f686\") " pod="openstack/dnsmasq-dns-74f6bcbc87-8bv2r" Feb 03 12:30:05 crc kubenswrapper[4820]: I0203 12:30:05.775544 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2724e07a-0753-44be-93f3-4ecc9696f686-ovsdbserver-nb\") pod \"dnsmasq-dns-74f6bcbc87-8bv2r\" (UID: \"2724e07a-0753-44be-93f3-4ecc9696f686\") " pod="openstack/dnsmasq-dns-74f6bcbc87-8bv2r" Feb 03 12:30:05 crc kubenswrapper[4820]: I0203 12:30:05.776096 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2724e07a-0753-44be-93f3-4ecc9696f686-config\") pod \"dnsmasq-dns-74f6bcbc87-8bv2r\" (UID: \"2724e07a-0753-44be-93f3-4ecc9696f686\") " pod="openstack/dnsmasq-dns-74f6bcbc87-8bv2r" Feb 03 12:30:05 crc kubenswrapper[4820]: I0203 12:30:05.776481 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2724e07a-0753-44be-93f3-4ecc9696f686-dns-swift-storage-0\") pod \"dnsmasq-dns-74f6bcbc87-8bv2r\" (UID: \"2724e07a-0753-44be-93f3-4ecc9696f686\") " pod="openstack/dnsmasq-dns-74f6bcbc87-8bv2r" Feb 03 12:30:05 crc kubenswrapper[4820]: I0203 12:30:05.779065 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2724e07a-0753-44be-93f3-4ecc9696f686-dns-svc\") pod \"dnsmasq-dns-74f6bcbc87-8bv2r\" (UID: \"2724e07a-0753-44be-93f3-4ecc9696f686\") " pod="openstack/dnsmasq-dns-74f6bcbc87-8bv2r" Feb 03 12:30:05 
crc kubenswrapper[4820]: I0203 12:30:05.802278 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6jqmm\" (UniqueName: \"kubernetes.io/projected/2724e07a-0753-44be-93f3-4ecc9696f686-kube-api-access-6jqmm\") pod \"dnsmasq-dns-74f6bcbc87-8bv2r\" (UID: \"2724e07a-0753-44be-93f3-4ecc9696f686\") " pod="openstack/dnsmasq-dns-74f6bcbc87-8bv2r" Feb 03 12:30:06 crc kubenswrapper[4820]: I0203 12:30:06.041269 4820 generic.go:334] "Generic (PLEG): container finished" podID="973038d9-ba67-4fbe-8239-ed6e47f3cf90" containerID="7919a5c460982351bebc77e30c190b4d45bc270b95b9038a47b7ceefcc038146" exitCode=0 Feb 03 12:30:06 crc kubenswrapper[4820]: I0203 12:30:06.042857 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b946c75cc-vbrpj" event={"ID":"973038d9-ba67-4fbe-8239-ed6e47f3cf90","Type":"ContainerDied","Data":"7919a5c460982351bebc77e30c190b4d45bc270b95b9038a47b7ceefcc038146"} Feb 03 12:30:06 crc kubenswrapper[4820]: I0203 12:30:06.095821 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74f6bcbc87-8bv2r" Feb 03 12:30:06 crc kubenswrapper[4820]: I0203 12:30:06.279963 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b946c75cc-vbrpj" Feb 03 12:30:06 crc kubenswrapper[4820]: I0203 12:30:06.441873 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/973038d9-ba67-4fbe-8239-ed6e47f3cf90-config\") pod \"973038d9-ba67-4fbe-8239-ed6e47f3cf90\" (UID: \"973038d9-ba67-4fbe-8239-ed6e47f3cf90\") " Feb 03 12:30:06 crc kubenswrapper[4820]: I0203 12:30:06.441950 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/973038d9-ba67-4fbe-8239-ed6e47f3cf90-ovsdbserver-sb\") pod \"973038d9-ba67-4fbe-8239-ed6e47f3cf90\" (UID: \"973038d9-ba67-4fbe-8239-ed6e47f3cf90\") " Feb 03 12:30:06 crc kubenswrapper[4820]: I0203 12:30:06.442106 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/973038d9-ba67-4fbe-8239-ed6e47f3cf90-dns-svc\") pod \"973038d9-ba67-4fbe-8239-ed6e47f3cf90\" (UID: \"973038d9-ba67-4fbe-8239-ed6e47f3cf90\") " Feb 03 12:30:06 crc kubenswrapper[4820]: I0203 12:30:06.442177 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/973038d9-ba67-4fbe-8239-ed6e47f3cf90-ovsdbserver-nb\") pod \"973038d9-ba67-4fbe-8239-ed6e47f3cf90\" (UID: \"973038d9-ba67-4fbe-8239-ed6e47f3cf90\") " Feb 03 12:30:06 crc kubenswrapper[4820]: I0203 12:30:06.442276 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2xxdc\" (UniqueName: \"kubernetes.io/projected/973038d9-ba67-4fbe-8239-ed6e47f3cf90-kube-api-access-2xxdc\") pod \"973038d9-ba67-4fbe-8239-ed6e47f3cf90\" (UID: \"973038d9-ba67-4fbe-8239-ed6e47f3cf90\") " Feb 03 12:30:06 crc kubenswrapper[4820]: I0203 12:30:06.451581 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/973038d9-ba67-4fbe-8239-ed6e47f3cf90-kube-api-access-2xxdc" (OuterVolumeSpecName: "kube-api-access-2xxdc") pod "973038d9-ba67-4fbe-8239-ed6e47f3cf90" (UID: "973038d9-ba67-4fbe-8239-ed6e47f3cf90"). InnerVolumeSpecName "kube-api-access-2xxdc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:30:06 crc kubenswrapper[4820]: I0203 12:30:06.502587 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/973038d9-ba67-4fbe-8239-ed6e47f3cf90-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "973038d9-ba67-4fbe-8239-ed6e47f3cf90" (UID: "973038d9-ba67-4fbe-8239-ed6e47f3cf90"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:30:06 crc kubenswrapper[4820]: I0203 12:30:06.515529 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/973038d9-ba67-4fbe-8239-ed6e47f3cf90-config" (OuterVolumeSpecName: "config") pod "973038d9-ba67-4fbe-8239-ed6e47f3cf90" (UID: "973038d9-ba67-4fbe-8239-ed6e47f3cf90"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:30:06 crc kubenswrapper[4820]: I0203 12:30:06.515575 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/973038d9-ba67-4fbe-8239-ed6e47f3cf90-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "973038d9-ba67-4fbe-8239-ed6e47f3cf90" (UID: "973038d9-ba67-4fbe-8239-ed6e47f3cf90"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:30:06 crc kubenswrapper[4820]: I0203 12:30:06.546337 4820 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/973038d9-ba67-4fbe-8239-ed6e47f3cf90-config\") on node \"crc\" DevicePath \"\"" Feb 03 12:30:06 crc kubenswrapper[4820]: I0203 12:30:06.546377 4820 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/973038d9-ba67-4fbe-8239-ed6e47f3cf90-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 03 12:30:06 crc kubenswrapper[4820]: I0203 12:30:06.546388 4820 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/973038d9-ba67-4fbe-8239-ed6e47f3cf90-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 03 12:30:06 crc kubenswrapper[4820]: I0203 12:30:06.546398 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2xxdc\" (UniqueName: \"kubernetes.io/projected/973038d9-ba67-4fbe-8239-ed6e47f3cf90-kube-api-access-2xxdc\") on node \"crc\" DevicePath \"\"" Feb 03 12:30:06 crc kubenswrapper[4820]: I0203 12:30:06.567609 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/973038d9-ba67-4fbe-8239-ed6e47f3cf90-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "973038d9-ba67-4fbe-8239-ed6e47f3cf90" (UID: "973038d9-ba67-4fbe-8239-ed6e47f3cf90"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:30:06 crc kubenswrapper[4820]: I0203 12:30:06.911495 4820 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/973038d9-ba67-4fbe-8239-ed6e47f3cf90-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 03 12:30:07 crc kubenswrapper[4820]: I0203 12:30:06.999097 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-8bv2r"] Feb 03 12:30:07 crc kubenswrapper[4820]: I0203 12:30:07.077916 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-8bv2r" event={"ID":"2724e07a-0753-44be-93f3-4ecc9696f686","Type":"ContainerStarted","Data":"0c0e5f6300108ee468115f1b1e167c51b0ccc2933f6eea17174a88e222050ae7"} Feb 03 12:30:07 crc kubenswrapper[4820]: I0203 12:30:07.087486 4820 generic.go:334] "Generic (PLEG): container finished" podID="66f26be7-edcc-4d55-b7e4-d5f4d16cf58e" containerID="4d846a574f926a8cb91628cc3125d07e9c8f1a3178d0302150efc81f28ba7de0" exitCode=0 Feb 03 12:30:07 crc kubenswrapper[4820]: I0203 12:30:07.087561 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-gvhr2" event={"ID":"66f26be7-edcc-4d55-b7e4-d5f4d16cf58e","Type":"ContainerDied","Data":"4d846a574f926a8cb91628cc3125d07e9c8f1a3178d0302150efc81f28ba7de0"} Feb 03 12:30:07 crc kubenswrapper[4820]: I0203 12:30:07.111351 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b946c75cc-vbrpj" event={"ID":"973038d9-ba67-4fbe-8239-ed6e47f3cf90","Type":"ContainerDied","Data":"61492a8d572ac3a64c55587a704cb8d18e9e15a34aac3f50012364068e781124"} Feb 03 12:30:07 crc kubenswrapper[4820]: I0203 12:30:07.111423 4820 scope.go:117] "RemoveContainer" containerID="7919a5c460982351bebc77e30c190b4d45bc270b95b9038a47b7ceefcc038146" Feb 03 12:30:07 crc kubenswrapper[4820]: I0203 12:30:07.111572 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b946c75cc-vbrpj" Feb 03 12:30:07 crc kubenswrapper[4820]: I0203 12:30:07.443075 4820 scope.go:117] "RemoveContainer" containerID="3d55912c4c392770ea02f041b0ae35fb92606e43d8dffbbf265234e83ae8f744" Feb 03 12:30:08 crc kubenswrapper[4820]: I0203 12:30:08.130050 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-sync-g8wq4" event={"ID":"b594ebbd-4a60-46ca-92f6-0e4869499849","Type":"ContainerStarted","Data":"8df804dfd8e904c3d0861dd203d9e73de473141a0420f782c3aa592df09a484d"} Feb 03 12:30:08 crc kubenswrapper[4820]: I0203 12:30:08.138674 4820 generic.go:334] "Generic (PLEG): container finished" podID="2724e07a-0753-44be-93f3-4ecc9696f686" containerID="859781cf0997c5f3b1eef00a0628d35848f36283b0ddddd6a7a875ad7baa2951" exitCode=0 Feb 03 12:30:08 crc kubenswrapper[4820]: I0203 12:30:08.138783 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-8bv2r" event={"ID":"2724e07a-0753-44be-93f3-4ecc9696f686","Type":"ContainerDied","Data":"859781cf0997c5f3b1eef00a0628d35848f36283b0ddddd6a7a875ad7baa2951"} Feb 03 12:30:08 crc kubenswrapper[4820]: I0203 12:30:08.166088 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-db-sync-g8wq4" podStartSLOduration=4.983820887 podStartE2EDuration="54.16604532s" podCreationTimestamp="2026-02-03 12:29:14 +0000 UTC" firstStartedPulling="2026-02-03 12:29:18.34584993 +0000 UTC m=+1475.868925794" lastFinishedPulling="2026-02-03 12:30:07.528074363 +0000 UTC m=+1525.051150227" observedRunningTime="2026-02-03 12:30:08.15386349 +0000 UTC m=+1525.676939374" watchObservedRunningTime="2026-02-03 12:30:08.16604532 +0000 UTC m=+1525.689121374" Feb 03 12:30:08 crc kubenswrapper[4820]: I0203 12:30:08.703388 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-gvhr2" Feb 03 12:30:08 crc kubenswrapper[4820]: I0203 12:30:08.764740 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66f26be7-edcc-4d55-b7e4-d5f4d16cf58e-combined-ca-bundle\") pod \"66f26be7-edcc-4d55-b7e4-d5f4d16cf58e\" (UID: \"66f26be7-edcc-4d55-b7e4-d5f4d16cf58e\") " Feb 03 12:30:08 crc kubenswrapper[4820]: I0203 12:30:08.765192 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b5jm7\" (UniqueName: \"kubernetes.io/projected/66f26be7-edcc-4d55-b7e4-d5f4d16cf58e-kube-api-access-b5jm7\") pod \"66f26be7-edcc-4d55-b7e4-d5f4d16cf58e\" (UID: \"66f26be7-edcc-4d55-b7e4-d5f4d16cf58e\") " Feb 03 12:30:08 crc kubenswrapper[4820]: I0203 12:30:08.765360 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/66f26be7-edcc-4d55-b7e4-d5f4d16cf58e-config-data\") pod \"66f26be7-edcc-4d55-b7e4-d5f4d16cf58e\" (UID: \"66f26be7-edcc-4d55-b7e4-d5f4d16cf58e\") " Feb 03 12:30:08 crc kubenswrapper[4820]: I0203 12:30:08.772163 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/66f26be7-edcc-4d55-b7e4-d5f4d16cf58e-kube-api-access-b5jm7" (OuterVolumeSpecName: "kube-api-access-b5jm7") pod "66f26be7-edcc-4d55-b7e4-d5f4d16cf58e" (UID: "66f26be7-edcc-4d55-b7e4-d5f4d16cf58e"). InnerVolumeSpecName "kube-api-access-b5jm7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:30:08 crc kubenswrapper[4820]: I0203 12:30:08.794065 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/66f26be7-edcc-4d55-b7e4-d5f4d16cf58e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "66f26be7-edcc-4d55-b7e4-d5f4d16cf58e" (UID: "66f26be7-edcc-4d55-b7e4-d5f4d16cf58e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:30:08 crc kubenswrapper[4820]: I0203 12:30:08.809965 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/66f26be7-edcc-4d55-b7e4-d5f4d16cf58e-config-data" (OuterVolumeSpecName: "config-data") pod "66f26be7-edcc-4d55-b7e4-d5f4d16cf58e" (UID: "66f26be7-edcc-4d55-b7e4-d5f4d16cf58e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:30:08 crc kubenswrapper[4820]: I0203 12:30:08.867849 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b5jm7\" (UniqueName: \"kubernetes.io/projected/66f26be7-edcc-4d55-b7e4-d5f4d16cf58e-kube-api-access-b5jm7\") on node \"crc\" DevicePath \"\"" Feb 03 12:30:08 crc kubenswrapper[4820]: I0203 12:30:08.867910 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/66f26be7-edcc-4d55-b7e4-d5f4d16cf58e-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 12:30:08 crc kubenswrapper[4820]: I0203 12:30:08.867935 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66f26be7-edcc-4d55-b7e4-d5f4d16cf58e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 12:30:09 crc kubenswrapper[4820]: I0203 12:30:09.156499 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-gvhr2" Feb 03 12:30:09 crc kubenswrapper[4820]: I0203 12:30:09.156695 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-8bv2r" event={"ID":"2724e07a-0753-44be-93f3-4ecc9696f686","Type":"ContainerStarted","Data":"90d511d52ef853aaddbe3b0656e6a1bf1224f7d55ea5bd0fb10faecfa6d8996f"} Feb 03 12:30:09 crc kubenswrapper[4820]: I0203 12:30:09.156736 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-74f6bcbc87-8bv2r" Feb 03 12:30:09 crc kubenswrapper[4820]: I0203 12:30:09.156747 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-gvhr2" event={"ID":"66f26be7-edcc-4d55-b7e4-d5f4d16cf58e","Type":"ContainerDied","Data":"ee4e3eff04c970a3feb2eca4abd9a77dbc8d225163fbd1c3f23370ce4dbf6a1c"} Feb 03 12:30:09 crc kubenswrapper[4820]: I0203 12:30:09.156761 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ee4e3eff04c970a3feb2eca4abd9a77dbc8d225163fbd1c3f23370ce4dbf6a1c" Feb 03 12:30:09 crc kubenswrapper[4820]: I0203 12:30:09.199782 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-74f6bcbc87-8bv2r" podStartSLOduration=4.199761194 podStartE2EDuration="4.199761194s" podCreationTimestamp="2026-02-03 12:30:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:30:09.192343914 +0000 UTC m=+1526.715419808" watchObservedRunningTime="2026-02-03 12:30:09.199761194 +0000 UTC m=+1526.722837058" Feb 03 12:30:09 crc kubenswrapper[4820]: I0203 12:30:09.517555 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-gjsm4"] Feb 03 12:30:09 crc kubenswrapper[4820]: E0203 12:30:09.519413 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="973038d9-ba67-4fbe-8239-ed6e47f3cf90" containerName="init" Feb 03 12:30:09 crc kubenswrapper[4820]: I0203 12:30:09.519540 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="973038d9-ba67-4fbe-8239-ed6e47f3cf90" containerName="init" Feb 03 12:30:09 crc kubenswrapper[4820]: E0203 12:30:09.519642 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66f26be7-edcc-4d55-b7e4-d5f4d16cf58e" containerName="keystone-db-sync" Feb 03 12:30:09 crc kubenswrapper[4820]: I0203 12:30:09.519748 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="66f26be7-edcc-4d55-b7e4-d5f4d16cf58e" containerName="keystone-db-sync" Feb 03 12:30:09 crc kubenswrapper[4820]: E0203 12:30:09.519818 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="973038d9-ba67-4fbe-8239-ed6e47f3cf90" containerName="dnsmasq-dns" Feb 03 12:30:09 crc kubenswrapper[4820]: I0203 12:30:09.519941 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="973038d9-ba67-4fbe-8239-ed6e47f3cf90" containerName="dnsmasq-dns" Feb 03 12:30:09 crc kubenswrapper[4820]: I0203 12:30:09.520226 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="973038d9-ba67-4fbe-8239-ed6e47f3cf90" containerName="dnsmasq-dns" Feb 03 12:30:09 crc kubenswrapper[4820]: I0203 12:30:09.520323 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="66f26be7-edcc-4d55-b7e4-d5f4d16cf58e" containerName="keystone-db-sync" Feb 03 12:30:09 crc kubenswrapper[4820]: I0203 12:30:09.521184 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-gjsm4" Feb 03 12:30:09 crc kubenswrapper[4820]: I0203 12:30:09.526558 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 03 12:30:09 crc kubenswrapper[4820]: I0203 12:30:09.526779 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 03 12:30:09 crc kubenswrapper[4820]: I0203 12:30:09.527806 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-rvkjd" Feb 03 12:30:09 crc kubenswrapper[4820]: I0203 12:30:09.528002 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 03 12:30:09 crc kubenswrapper[4820]: I0203 12:30:09.536248 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 03 12:30:09 crc kubenswrapper[4820]: I0203 12:30:09.570594 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-8bv2r"] Feb 03 12:30:09 crc kubenswrapper[4820]: I0203 12:30:09.582149 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ssvrz\" (UniqueName: \"kubernetes.io/projected/2c88a5c9-941c-40ef-a3c8-7ff304ea0517-kube-api-access-ssvrz\") pod \"keystone-bootstrap-gjsm4\" (UID: \"2c88a5c9-941c-40ef-a3c8-7ff304ea0517\") " pod="openstack/keystone-bootstrap-gjsm4" Feb 03 12:30:09 crc kubenswrapper[4820]: I0203 12:30:09.582229 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/2c88a5c9-941c-40ef-a3c8-7ff304ea0517-credential-keys\") pod \"keystone-bootstrap-gjsm4\" (UID: \"2c88a5c9-941c-40ef-a3c8-7ff304ea0517\") " pod="openstack/keystone-bootstrap-gjsm4" Feb 03 12:30:09 crc kubenswrapper[4820]: I0203 12:30:09.582338 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2c88a5c9-941c-40ef-a3c8-7ff304ea0517-fernet-keys\") pod \"keystone-bootstrap-gjsm4\" (UID: \"2c88a5c9-941c-40ef-a3c8-7ff304ea0517\") " pod="openstack/keystone-bootstrap-gjsm4" Feb 03 12:30:09 crc kubenswrapper[4820]: I0203 12:30:09.582384 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2c88a5c9-941c-40ef-a3c8-7ff304ea0517-scripts\") pod \"keystone-bootstrap-gjsm4\" (UID: \"2c88a5c9-941c-40ef-a3c8-7ff304ea0517\") " pod="openstack/keystone-bootstrap-gjsm4" Feb 03 12:30:09 crc kubenswrapper[4820]: I0203 12:30:09.582434 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c88a5c9-941c-40ef-a3c8-7ff304ea0517-combined-ca-bundle\") pod \"keystone-bootstrap-gjsm4\" (UID: \"2c88a5c9-941c-40ef-a3c8-7ff304ea0517\") " pod="openstack/keystone-bootstrap-gjsm4" Feb 03 12:30:09 crc kubenswrapper[4820]: I0203 12:30:09.582456 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c88a5c9-941c-40ef-a3c8-7ff304ea0517-config-data\") pod \"keystone-bootstrap-gjsm4\" (UID: \"2c88a5c9-941c-40ef-a3c8-7ff304ea0517\") " pod="openstack/keystone-bootstrap-gjsm4" Feb 03 12:30:09 crc kubenswrapper[4820]: I0203 12:30:09.591317 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/keystone-bootstrap-gjsm4"] Feb 03 12:30:09 crc kubenswrapper[4820]: I0203 12:30:09.673189 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-vzphk"] Feb 03 12:30:09 crc kubenswrapper[4820]: I0203 12:30:09.675038 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-847c4cc679-vzphk" Feb 03 12:30:09 crc kubenswrapper[4820]: I0203 12:30:09.685281 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2c88a5c9-941c-40ef-a3c8-7ff304ea0517-scripts\") pod \"keystone-bootstrap-gjsm4\" (UID: \"2c88a5c9-941c-40ef-a3c8-7ff304ea0517\") " pod="openstack/keystone-bootstrap-gjsm4" Feb 03 12:30:09 crc kubenswrapper[4820]: I0203 12:30:09.685353 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c88a5c9-941c-40ef-a3c8-7ff304ea0517-combined-ca-bundle\") pod \"keystone-bootstrap-gjsm4\" (UID: \"2c88a5c9-941c-40ef-a3c8-7ff304ea0517\") " pod="openstack/keystone-bootstrap-gjsm4" Feb 03 12:30:09 crc kubenswrapper[4820]: I0203 12:30:09.685407 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c88a5c9-941c-40ef-a3c8-7ff304ea0517-config-data\") pod \"keystone-bootstrap-gjsm4\" (UID: \"2c88a5c9-941c-40ef-a3c8-7ff304ea0517\") " pod="openstack/keystone-bootstrap-gjsm4" Feb 03 12:30:09 crc kubenswrapper[4820]: I0203 12:30:09.685468 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ssvrz\" (UniqueName: \"kubernetes.io/projected/2c88a5c9-941c-40ef-a3c8-7ff304ea0517-kube-api-access-ssvrz\") pod \"keystone-bootstrap-gjsm4\" (UID: \"2c88a5c9-941c-40ef-a3c8-7ff304ea0517\") " pod="openstack/keystone-bootstrap-gjsm4" Feb 03 12:30:09 crc kubenswrapper[4820]: I0203 12:30:09.685505 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/2c88a5c9-941c-40ef-a3c8-7ff304ea0517-credential-keys\") pod \"keystone-bootstrap-gjsm4\" (UID: \"2c88a5c9-941c-40ef-a3c8-7ff304ea0517\") " pod="openstack/keystone-bootstrap-gjsm4" Feb 03 12:30:09 crc kubenswrapper[4820]: I0203 12:30:09.685577 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2c88a5c9-941c-40ef-a3c8-7ff304ea0517-fernet-keys\") pod \"keystone-bootstrap-gjsm4\" (UID: \"2c88a5c9-941c-40ef-a3c8-7ff304ea0517\") " pod="openstack/keystone-bootstrap-gjsm4" Feb 03 12:30:09 crc kubenswrapper[4820]: I0203 12:30:09.697699 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/2c88a5c9-941c-40ef-a3c8-7ff304ea0517-credential-keys\") pod \"keystone-bootstrap-gjsm4\" (UID: \"2c88a5c9-941c-40ef-a3c8-7ff304ea0517\") " pod="openstack/keystone-bootstrap-gjsm4" Feb 03 12:30:09 crc kubenswrapper[4820]: I0203 12:30:09.697778 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2c88a5c9-941c-40ef-a3c8-7ff304ea0517-fernet-keys\") pod \"keystone-bootstrap-gjsm4\" (UID: \"2c88a5c9-941c-40ef-a3c8-7ff304ea0517\") " pod="openstack/keystone-bootstrap-gjsm4" Feb 03 12:30:09 crc kubenswrapper[4820]: I0203 12:30:09.698709 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/2c88a5c9-941c-40ef-a3c8-7ff304ea0517-combined-ca-bundle\") pod \"keystone-bootstrap-gjsm4\" (UID: \"2c88a5c9-941c-40ef-a3c8-7ff304ea0517\") " pod="openstack/keystone-bootstrap-gjsm4" Feb 03 12:30:09 crc kubenswrapper[4820]: I0203 12:30:09.708606 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c88a5c9-941c-40ef-a3c8-7ff304ea0517-config-data\") pod \"keystone-bootstrap-gjsm4\" (UID: \"2c88a5c9-941c-40ef-a3c8-7ff304ea0517\") " pod="openstack/keystone-bootstrap-gjsm4" Feb 03 12:30:09 crc kubenswrapper[4820]: I0203 12:30:09.709483 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2c88a5c9-941c-40ef-a3c8-7ff304ea0517-scripts\") pod \"keystone-bootstrap-gjsm4\" (UID: \"2c88a5c9-941c-40ef-a3c8-7ff304ea0517\") " pod="openstack/keystone-bootstrap-gjsm4" Feb 03 12:30:09 crc kubenswrapper[4820]: I0203 12:30:09.709578 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-vzphk"] Feb 03 12:30:09 crc kubenswrapper[4820]: I0203 12:30:09.732505 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ssvrz\" (UniqueName: \"kubernetes.io/projected/2c88a5c9-941c-40ef-a3c8-7ff304ea0517-kube-api-access-ssvrz\") pod \"keystone-bootstrap-gjsm4\" (UID: \"2c88a5c9-941c-40ef-a3c8-7ff304ea0517\") " pod="openstack/keystone-bootstrap-gjsm4" Feb 03 12:30:09 crc kubenswrapper[4820]: I0203 12:30:09.753007 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-b4rms"] Feb 03 12:30:09 crc kubenswrapper[4820]: I0203 12:30:09.754628 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-b4rms" Feb 03 12:30:09 crc kubenswrapper[4820]: I0203 12:30:09.762292 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-vfvnz" Feb 03 12:30:09 crc kubenswrapper[4820]: I0203 12:30:09.772048 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Feb 03 12:30:09 crc kubenswrapper[4820]: I0203 12:30:09.772913 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Feb 03 12:30:09 crc kubenswrapper[4820]: I0203 12:30:09.789262 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4b8f1b47-829d-4da3-a6a9-b73bbe7d20c0-scripts\") pod \"cinder-db-sync-b4rms\" (UID: \"4b8f1b47-829d-4da3-a6a9-b73bbe7d20c0\") " pod="openstack/cinder-db-sync-b4rms" Feb 03 12:30:09 crc kubenswrapper[4820]: I0203 12:30:09.789679 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4b8f1b47-829d-4da3-a6a9-b73bbe7d20c0-etc-machine-id\") pod \"cinder-db-sync-b4rms\" (UID: \"4b8f1b47-829d-4da3-a6a9-b73bbe7d20c0\") " pod="openstack/cinder-db-sync-b4rms" Feb 03 12:30:09 crc kubenswrapper[4820]: I0203 12:30:09.789848 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4d60ff7f-22cd-4518-a7d4-77dd49b9df10-ovsdbserver-sb\") pod \"dnsmasq-dns-847c4cc679-vzphk\" (UID: \"4d60ff7f-22cd-4518-a7d4-77dd49b9df10\") " pod="openstack/dnsmasq-dns-847c4cc679-vzphk" Feb 03 12:30:09 crc kubenswrapper[4820]: I0203 12:30:09.789980 
4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b8f1b47-829d-4da3-a6a9-b73bbe7d20c0-config-data\") pod \"cinder-db-sync-b4rms\" (UID: \"4b8f1b47-829d-4da3-a6a9-b73bbe7d20c0\") " pod="openstack/cinder-db-sync-b4rms" Feb 03 12:30:09 crc kubenswrapper[4820]: I0203 12:30:09.790080 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lrjrf\" (UniqueName: \"kubernetes.io/projected/4b8f1b47-829d-4da3-a6a9-b73bbe7d20c0-kube-api-access-lrjrf\") pod \"cinder-db-sync-b4rms\" (UID: \"4b8f1b47-829d-4da3-a6a9-b73bbe7d20c0\") " pod="openstack/cinder-db-sync-b4rms" Feb 03 12:30:09 crc kubenswrapper[4820]: I0203 12:30:09.790202 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fjp8h\" (UniqueName: \"kubernetes.io/projected/4d60ff7f-22cd-4518-a7d4-77dd49b9df10-kube-api-access-fjp8h\") pod \"dnsmasq-dns-847c4cc679-vzphk\" (UID: \"4d60ff7f-22cd-4518-a7d4-77dd49b9df10\") " pod="openstack/dnsmasq-dns-847c4cc679-vzphk" Feb 03 12:30:09 crc kubenswrapper[4820]: I0203 12:30:09.790329 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4b8f1b47-829d-4da3-a6a9-b73bbe7d20c0-db-sync-config-data\") pod \"cinder-db-sync-b4rms\" (UID: \"4b8f1b47-829d-4da3-a6a9-b73bbe7d20c0\") " pod="openstack/cinder-db-sync-b4rms" Feb 03 12:30:09 crc kubenswrapper[4820]: I0203 12:30:09.790447 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4d60ff7f-22cd-4518-a7d4-77dd49b9df10-dns-swift-storage-0\") pod \"dnsmasq-dns-847c4cc679-vzphk\" (UID: \"4d60ff7f-22cd-4518-a7d4-77dd49b9df10\") " pod="openstack/dnsmasq-dns-847c4cc679-vzphk" Feb 03 12:30:09 crc kubenswrapper[4820]: I0203 12:30:09.790659 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4d60ff7f-22cd-4518-a7d4-77dd49b9df10-config\") pod \"dnsmasq-dns-847c4cc679-vzphk\" (UID: \"4d60ff7f-22cd-4518-a7d4-77dd49b9df10\") " pod="openstack/dnsmasq-dns-847c4cc679-vzphk" Feb 03 12:30:09 crc kubenswrapper[4820]: I0203 12:30:09.790820 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4d60ff7f-22cd-4518-a7d4-77dd49b9df10-dns-svc\") pod \"dnsmasq-dns-847c4cc679-vzphk\" (UID: \"4d60ff7f-22cd-4518-a7d4-77dd49b9df10\") " pod="openstack/dnsmasq-dns-847c4cc679-vzphk" Feb 03 12:30:09 crc kubenswrapper[4820]: I0203 12:30:09.792711 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4d60ff7f-22cd-4518-a7d4-77dd49b9df10-ovsdbserver-nb\") pod \"dnsmasq-dns-847c4cc679-vzphk\" (UID: \"4d60ff7f-22cd-4518-a7d4-77dd49b9df10\") " pod="openstack/dnsmasq-dns-847c4cc679-vzphk" Feb 03 12:30:09 crc kubenswrapper[4820]: I0203 12:30:09.792878 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b8f1b47-829d-4da3-a6a9-b73bbe7d20c0-combined-ca-bundle\") pod \"cinder-db-sync-b4rms\" (UID: \"4b8f1b47-829d-4da3-a6a9-b73bbe7d20c0\") " 
pod="openstack/cinder-db-sync-b4rms" Feb 03 12:30:09 crc kubenswrapper[4820]: I0203 12:30:09.789634 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-b4rms"] Feb 03 12:30:09 crc kubenswrapper[4820]: I0203 12:30:09.848079 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-gjsm4" Feb 03 12:30:09 crc kubenswrapper[4820]: I0203 12:30:09.897486 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4b8f1b47-829d-4da3-a6a9-b73bbe7d20c0-scripts\") pod \"cinder-db-sync-b4rms\" (UID: \"4b8f1b47-829d-4da3-a6a9-b73bbe7d20c0\") " pod="openstack/cinder-db-sync-b4rms" Feb 03 12:30:09 crc kubenswrapper[4820]: I0203 12:30:09.897543 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4b8f1b47-829d-4da3-a6a9-b73bbe7d20c0-etc-machine-id\") pod \"cinder-db-sync-b4rms\" (UID: \"4b8f1b47-829d-4da3-a6a9-b73bbe7d20c0\") " pod="openstack/cinder-db-sync-b4rms" Feb 03 12:30:09 crc kubenswrapper[4820]: I0203 12:30:09.897575 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4d60ff7f-22cd-4518-a7d4-77dd49b9df10-ovsdbserver-sb\") pod \"dnsmasq-dns-847c4cc679-vzphk\" (UID: \"4d60ff7f-22cd-4518-a7d4-77dd49b9df10\") " pod="openstack/dnsmasq-dns-847c4cc679-vzphk" Feb 03 12:30:09 crc kubenswrapper[4820]: I0203 12:30:09.897601 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b8f1b47-829d-4da3-a6a9-b73bbe7d20c0-config-data\") pod \"cinder-db-sync-b4rms\" (UID: \"4b8f1b47-829d-4da3-a6a9-b73bbe7d20c0\") " pod="openstack/cinder-db-sync-b4rms" Feb 03 12:30:09 crc kubenswrapper[4820]: I0203 12:30:09.897626 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lrjrf\" (UniqueName: \"kubernetes.io/projected/4b8f1b47-829d-4da3-a6a9-b73bbe7d20c0-kube-api-access-lrjrf\") pod \"cinder-db-sync-b4rms\" (UID: \"4b8f1b47-829d-4da3-a6a9-b73bbe7d20c0\") " pod="openstack/cinder-db-sync-b4rms" Feb 03 12:30:09 crc kubenswrapper[4820]: I0203 12:30:09.897655 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fjp8h\" (UniqueName: \"kubernetes.io/projected/4d60ff7f-22cd-4518-a7d4-77dd49b9df10-kube-api-access-fjp8h\") pod \"dnsmasq-dns-847c4cc679-vzphk\" (UID: \"4d60ff7f-22cd-4518-a7d4-77dd49b9df10\") " pod="openstack/dnsmasq-dns-847c4cc679-vzphk" Feb 03 12:30:09 crc kubenswrapper[4820]: I0203 12:30:09.897694 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4b8f1b47-829d-4da3-a6a9-b73bbe7d20c0-db-sync-config-data\") pod \"cinder-db-sync-b4rms\" (UID: \"4b8f1b47-829d-4da3-a6a9-b73bbe7d20c0\") " pod="openstack/cinder-db-sync-b4rms" Feb 03 12:30:09 crc kubenswrapper[4820]: I0203 12:30:09.897716 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4d60ff7f-22cd-4518-a7d4-77dd49b9df10-dns-swift-storage-0\") pod \"dnsmasq-dns-847c4cc679-vzphk\" (UID: \"4d60ff7f-22cd-4518-a7d4-77dd49b9df10\") " pod="openstack/dnsmasq-dns-847c4cc679-vzphk" Feb 03 12:30:09 crc kubenswrapper[4820]: I0203 12:30:09.897758 4820 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4d60ff7f-22cd-4518-a7d4-77dd49b9df10-config\") pod \"dnsmasq-dns-847c4cc679-vzphk\" (UID: \"4d60ff7f-22cd-4518-a7d4-77dd49b9df10\") " pod="openstack/dnsmasq-dns-847c4cc679-vzphk" Feb 03 12:30:09 crc kubenswrapper[4820]: I0203 12:30:09.897780 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4d60ff7f-22cd-4518-a7d4-77dd49b9df10-dns-svc\") pod \"dnsmasq-dns-847c4cc679-vzphk\" (UID: \"4d60ff7f-22cd-4518-a7d4-77dd49b9df10\") " pod="openstack/dnsmasq-dns-847c4cc679-vzphk" Feb 03 12:30:09 crc kubenswrapper[4820]: I0203 12:30:09.897878 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4d60ff7f-22cd-4518-a7d4-77dd49b9df10-ovsdbserver-nb\") pod \"dnsmasq-dns-847c4cc679-vzphk\" (UID: \"4d60ff7f-22cd-4518-a7d4-77dd49b9df10\") " pod="openstack/dnsmasq-dns-847c4cc679-vzphk" Feb 03 12:30:09 crc kubenswrapper[4820]: I0203 12:30:09.897961 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b8f1b47-829d-4da3-a6a9-b73bbe7d20c0-combined-ca-bundle\") pod \"cinder-db-sync-b4rms\" (UID: \"4b8f1b47-829d-4da3-a6a9-b73bbe7d20c0\") " pod="openstack/cinder-db-sync-b4rms" Feb 03 12:30:09 crc kubenswrapper[4820]: I0203 12:30:09.905799 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4d60ff7f-22cd-4518-a7d4-77dd49b9df10-ovsdbserver-sb\") pod \"dnsmasq-dns-847c4cc679-vzphk\" (UID: \"4d60ff7f-22cd-4518-a7d4-77dd49b9df10\") " pod="openstack/dnsmasq-dns-847c4cc679-vzphk" Feb 03 12:30:09 crc kubenswrapper[4820]: I0203 12:30:09.905905 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4b8f1b47-829d-4da3-a6a9-b73bbe7d20c0-etc-machine-id\") pod \"cinder-db-sync-b4rms\" (UID: \"4b8f1b47-829d-4da3-a6a9-b73bbe7d20c0\") " pod="openstack/cinder-db-sync-b4rms" Feb 03 12:30:09 crc kubenswrapper[4820]: I0203 12:30:09.907330 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4d60ff7f-22cd-4518-a7d4-77dd49b9df10-config\") pod \"dnsmasq-dns-847c4cc679-vzphk\" (UID: \"4d60ff7f-22cd-4518-a7d4-77dd49b9df10\") " pod="openstack/dnsmasq-dns-847c4cc679-vzphk" Feb 03 12:30:09 crc kubenswrapper[4820]: I0203 12:30:09.909039 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b8f1b47-829d-4da3-a6a9-b73bbe7d20c0-combined-ca-bundle\") pod \"cinder-db-sync-b4rms\" (UID: \"4b8f1b47-829d-4da3-a6a9-b73bbe7d20c0\") " pod="openstack/cinder-db-sync-b4rms" Feb 03 12:30:09 crc kubenswrapper[4820]: I0203 12:30:09.908701 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4d60ff7f-22cd-4518-a7d4-77dd49b9df10-ovsdbserver-nb\") pod \"dnsmasq-dns-847c4cc679-vzphk\" (UID: \"4d60ff7f-22cd-4518-a7d4-77dd49b9df10\") " pod="openstack/dnsmasq-dns-847c4cc679-vzphk" Feb 03 12:30:09 crc kubenswrapper[4820]: I0203 12:30:09.908107 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4d60ff7f-22cd-4518-a7d4-77dd49b9df10-dns-svc\") pod \"dnsmasq-dns-847c4cc679-vzphk\" (UID: 
\"4d60ff7f-22cd-4518-a7d4-77dd49b9df10\") " pod="openstack/dnsmasq-dns-847c4cc679-vzphk" Feb 03 12:30:09 crc kubenswrapper[4820]: I0203 12:30:09.909962 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4d60ff7f-22cd-4518-a7d4-77dd49b9df10-dns-swift-storage-0\") pod \"dnsmasq-dns-847c4cc679-vzphk\" (UID: \"4d60ff7f-22cd-4518-a7d4-77dd49b9df10\") " pod="openstack/dnsmasq-dns-847c4cc679-vzphk" Feb 03 12:30:09 crc kubenswrapper[4820]: I0203 12:30:09.911423 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4b8f1b47-829d-4da3-a6a9-b73bbe7d20c0-scripts\") pod \"cinder-db-sync-b4rms\" (UID: \"4b8f1b47-829d-4da3-a6a9-b73bbe7d20c0\") " pod="openstack/cinder-db-sync-b4rms" Feb 03 12:30:09 crc kubenswrapper[4820]: I0203 12:30:09.916340 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b8f1b47-829d-4da3-a6a9-b73bbe7d20c0-config-data\") pod \"cinder-db-sync-b4rms\" (UID: \"4b8f1b47-829d-4da3-a6a9-b73bbe7d20c0\") " pod="openstack/cinder-db-sync-b4rms" Feb 03 12:30:09 crc kubenswrapper[4820]: I0203 12:30:09.925960 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4b8f1b47-829d-4da3-a6a9-b73bbe7d20c0-db-sync-config-data\") pod \"cinder-db-sync-b4rms\" (UID: \"4b8f1b47-829d-4da3-a6a9-b73bbe7d20c0\") " pod="openstack/cinder-db-sync-b4rms" Feb 03 12:30:09 crc kubenswrapper[4820]: I0203 12:30:09.979915 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fjp8h\" (UniqueName: \"kubernetes.io/projected/4d60ff7f-22cd-4518-a7d4-77dd49b9df10-kube-api-access-fjp8h\") pod \"dnsmasq-dns-847c4cc679-vzphk\" (UID: \"4d60ff7f-22cd-4518-a7d4-77dd49b9df10\") " pod="openstack/dnsmasq-dns-847c4cc679-vzphk" Feb 03 12:30:10 crc kubenswrapper[4820]: I0203 12:30:10.014610 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lrjrf\" (UniqueName: \"kubernetes.io/projected/4b8f1b47-829d-4da3-a6a9-b73bbe7d20c0-kube-api-access-lrjrf\") pod \"cinder-db-sync-b4rms\" (UID: \"4b8f1b47-829d-4da3-a6a9-b73bbe7d20c0\") " pod="openstack/cinder-db-sync-b4rms" Feb 03 12:30:10 crc kubenswrapper[4820]: I0203 12:30:10.299259 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-847c4cc679-vzphk" Feb 03 12:30:10 crc kubenswrapper[4820]: I0203 12:30:10.299704 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-b4rms" Feb 03 12:30:10 crc kubenswrapper[4820]: I0203 12:30:10.356869 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-lb2jr"] Feb 03 12:30:10 crc kubenswrapper[4820]: I0203 12:30:10.358249 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-75b875c965-2f4nl"] Feb 03 12:30:10 crc kubenswrapper[4820]: I0203 12:30:10.359542 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-75b875c965-2f4nl" Feb 03 12:30:10 crc kubenswrapper[4820]: I0203 12:30:10.361298 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-lb2jr" Feb 03 12:30:10 crc kubenswrapper[4820]: W0203 12:30:10.365129 4820 reflector.go:561] object-"openstack"/"horizon": failed to list *v1.Secret: secrets "horizon" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openstack": no relationship found between node 'crc' and this object Feb 03 12:30:10 crc kubenswrapper[4820]: E0203 12:30:10.365174 4820 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"horizon\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"horizon\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openstack\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 03 12:30:10 crc kubenswrapper[4820]: I0203 12:30:10.372120 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Feb 03 12:30:10 crc kubenswrapper[4820]: I0203 12:30:10.377215 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts" Feb 03 12:30:10 crc kubenswrapper[4820]: I0203 12:30:10.394401 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-lb2jr"] Feb 03 12:30:10 crc kubenswrapper[4820]: I0203 12:30:10.405673 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-r8dpn" Feb 03 12:30:10 crc kubenswrapper[4820]: I0203 12:30:10.405910 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data" Feb 03 12:30:10 crc kubenswrapper[4820]: I0203 12:30:10.406051 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Feb 03 12:30:10 crc kubenswrapper[4820]: I0203 12:30:10.406181 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-nn75w" Feb 03 12:30:10 crc kubenswrapper[4820]: I0203 12:30:10.429483 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-75b875c965-2f4nl"] Feb 03 12:30:10 crc kubenswrapper[4820]: I0203 12:30:10.506258 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/06a34b04-0b0b-41bc-bfa4-17ef6e1fbddb-config\") pod \"neutron-db-sync-lb2jr\" (UID: \"06a34b04-0b0b-41bc-bfa4-17ef6e1fbddb\") " pod="openstack/neutron-db-sync-lb2jr" Feb 03 12:30:10 crc kubenswrapper[4820]: I0203 12:30:10.506363 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/64388daf-4e84-4468-a9cb-484c0a4a8ab2-horizon-secret-key\") pod \"horizon-75b875c965-2f4nl\" (UID: \"64388daf-4e84-4468-a9cb-484c0a4a8ab2\") " pod="openstack/horizon-75b875c965-2f4nl" Feb 03 12:30:10 crc kubenswrapper[4820]: I0203 12:30:10.506395 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6slrc\" (UniqueName: \"kubernetes.io/projected/06a34b04-0b0b-41bc-bfa4-17ef6e1fbddb-kube-api-access-6slrc\") pod \"neutron-db-sync-lb2jr\" (UID: \"06a34b04-0b0b-41bc-bfa4-17ef6e1fbddb\") " pod="openstack/neutron-db-sync-lb2jr" Feb 03 12:30:10 crc kubenswrapper[4820]: I0203 12:30:10.506483 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/64388daf-4e84-4468-a9cb-484c0a4a8ab2-logs\") pod \"horizon-75b875c965-2f4nl\" (UID: \"64388daf-4e84-4468-a9cb-484c0a4a8ab2\") " pod="openstack/horizon-75b875c965-2f4nl" Feb 03 12:30:10 crc kubenswrapper[4820]: I0203 12:30:10.506568 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p4mm6\" (UniqueName: \"kubernetes.io/projected/64388daf-4e84-4468-a9cb-484c0a4a8ab2-kube-api-access-p4mm6\") pod \"horizon-75b875c965-2f4nl\" (UID: \"64388daf-4e84-4468-a9cb-484c0a4a8ab2\") " pod="openstack/horizon-75b875c965-2f4nl" Feb 03 12:30:10 crc kubenswrapper[4820]: I0203 12:30:10.514526 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06a34b04-0b0b-41bc-bfa4-17ef6e1fbddb-combined-ca-bundle\") pod \"neutron-db-sync-lb2jr\" (UID: \"06a34b04-0b0b-41bc-bfa4-17ef6e1fbddb\") " pod="openstack/neutron-db-sync-lb2jr" Feb 03 12:30:10 crc kubenswrapper[4820]: I0203 12:30:10.514666 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/64388daf-4e84-4468-a9cb-484c0a4a8ab2-config-data\") pod \"horizon-75b875c965-2f4nl\" (UID: \"64388daf-4e84-4468-a9cb-484c0a4a8ab2\") " pod="openstack/horizon-75b875c965-2f4nl" Feb 03 12:30:10 crc kubenswrapper[4820]: I0203 12:30:10.514861 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/64388daf-4e84-4468-a9cb-484c0a4a8ab2-scripts\") pod \"horizon-75b875c965-2f4nl\" (UID: \"64388daf-4e84-4468-a9cb-484c0a4a8ab2\") " pod="openstack/horizon-75b875c965-2f4nl" Feb 03 12:30:10 crc kubenswrapper[4820]: I0203 12:30:10.805382 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06a34b04-0b0b-41bc-bfa4-17ef6e1fbddb-combined-ca-bundle\") pod \"neutron-db-sync-lb2jr\" (UID: \"06a34b04-0b0b-41bc-bfa4-17ef6e1fbddb\") " pod="openstack/neutron-db-sync-lb2jr" Feb 03 12:30:10 crc kubenswrapper[4820]: I0203 12:30:10.805574 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/64388daf-4e84-4468-a9cb-484c0a4a8ab2-config-data\") pod \"horizon-75b875c965-2f4nl\" (UID: \"64388daf-4e84-4468-a9cb-484c0a4a8ab2\") " pod="openstack/horizon-75b875c965-2f4nl" Feb 03 12:30:10 crc kubenswrapper[4820]: I0203 12:30:10.805638 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/64388daf-4e84-4468-a9cb-484c0a4a8ab2-scripts\") pod \"horizon-75b875c965-2f4nl\" (UID: \"64388daf-4e84-4468-a9cb-484c0a4a8ab2\") " pod="openstack/horizon-75b875c965-2f4nl" Feb 03 12:30:10 crc kubenswrapper[4820]: I0203 12:30:10.805758 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/06a34b04-0b0b-41bc-bfa4-17ef6e1fbddb-config\") pod \"neutron-db-sync-lb2jr\" (UID: \"06a34b04-0b0b-41bc-bfa4-17ef6e1fbddb\") " pod="openstack/neutron-db-sync-lb2jr" Feb 03 12:30:10 crc kubenswrapper[4820]: I0203 12:30:10.805785 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/64388daf-4e84-4468-a9cb-484c0a4a8ab2-horizon-secret-key\") pod 
\"horizon-75b875c965-2f4nl\" (UID: \"64388daf-4e84-4468-a9cb-484c0a4a8ab2\") " pod="openstack/horizon-75b875c965-2f4nl" Feb 03 12:30:10 crc kubenswrapper[4820]: I0203 12:30:10.805824 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6slrc\" (UniqueName: \"kubernetes.io/projected/06a34b04-0b0b-41bc-bfa4-17ef6e1fbddb-kube-api-access-6slrc\") pod \"neutron-db-sync-lb2jr\" (UID: \"06a34b04-0b0b-41bc-bfa4-17ef6e1fbddb\") " pod="openstack/neutron-db-sync-lb2jr" Feb 03 12:30:10 crc kubenswrapper[4820]: I0203 12:30:10.805858 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/64388daf-4e84-4468-a9cb-484c0a4a8ab2-logs\") pod \"horizon-75b875c965-2f4nl\" (UID: \"64388daf-4e84-4468-a9cb-484c0a4a8ab2\") " pod="openstack/horizon-75b875c965-2f4nl" Feb 03 12:30:10 crc kubenswrapper[4820]: I0203 12:30:10.805955 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p4mm6\" (UniqueName: \"kubernetes.io/projected/64388daf-4e84-4468-a9cb-484c0a4a8ab2-kube-api-access-p4mm6\") pod \"horizon-75b875c965-2f4nl\" (UID: \"64388daf-4e84-4468-a9cb-484c0a4a8ab2\") " pod="openstack/horizon-75b875c965-2f4nl" Feb 03 12:30:10 crc kubenswrapper[4820]: I0203 12:30:10.808262 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/64388daf-4e84-4468-a9cb-484c0a4a8ab2-logs\") pod \"horizon-75b875c965-2f4nl\" (UID: \"64388daf-4e84-4468-a9cb-484c0a4a8ab2\") " pod="openstack/horizon-75b875c965-2f4nl" Feb 03 12:30:10 crc kubenswrapper[4820]: I0203 12:30:10.808960 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/64388daf-4e84-4468-a9cb-484c0a4a8ab2-scripts\") pod \"horizon-75b875c965-2f4nl\" (UID: \"64388daf-4e84-4468-a9cb-484c0a4a8ab2\") " pod="openstack/horizon-75b875c965-2f4nl" Feb 03 12:30:10 crc kubenswrapper[4820]: I0203 12:30:10.809507 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/64388daf-4e84-4468-a9cb-484c0a4a8ab2-config-data\") pod \"horizon-75b875c965-2f4nl\" (UID: \"64388daf-4e84-4468-a9cb-484c0a4a8ab2\") " pod="openstack/horizon-75b875c965-2f4nl" Feb 03 12:30:10 crc kubenswrapper[4820]: I0203 12:30:10.823299 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/06a34b04-0b0b-41bc-bfa4-17ef6e1fbddb-config\") pod \"neutron-db-sync-lb2jr\" (UID: \"06a34b04-0b0b-41bc-bfa4-17ef6e1fbddb\") " pod="openstack/neutron-db-sync-lb2jr" Feb 03 12:30:10 crc kubenswrapper[4820]: I0203 12:30:10.827416 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06a34b04-0b0b-41bc-bfa4-17ef6e1fbddb-combined-ca-bundle\") pod \"neutron-db-sync-lb2jr\" (UID: \"06a34b04-0b0b-41bc-bfa4-17ef6e1fbddb\") " pod="openstack/neutron-db-sync-lb2jr" Feb 03 12:30:11 crc kubenswrapper[4820]: I0203 12:30:11.037181 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6slrc\" (UniqueName: \"kubernetes.io/projected/06a34b04-0b0b-41bc-bfa4-17ef6e1fbddb-kube-api-access-6slrc\") pod \"neutron-db-sync-lb2jr\" (UID: \"06a34b04-0b0b-41bc-bfa4-17ef6e1fbddb\") " pod="openstack/neutron-db-sync-lb2jr" Feb 03 12:30:11 crc kubenswrapper[4820]: I0203 12:30:11.216160 4820 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-p4mm6\" (UniqueName: \"kubernetes.io/projected/64388daf-4e84-4468-a9cb-484c0a4a8ab2-kube-api-access-p4mm6\") pod \"horizon-75b875c965-2f4nl\" (UID: \"64388daf-4e84-4468-a9cb-484c0a4a8ab2\") " pod="openstack/horizon-75b875c965-2f4nl" Feb 03 12:30:11 crc kubenswrapper[4820]: I0203 12:30:11.221625 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-lb2jr" Feb 03 12:30:11 crc kubenswrapper[4820]: I0203 12:30:11.372469 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon" Feb 03 12:30:11 crc kubenswrapper[4820]: I0203 12:30:11.426093 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/64388daf-4e84-4468-a9cb-484c0a4a8ab2-horizon-secret-key\") pod \"horizon-75b875c965-2f4nl\" (UID: \"64388daf-4e84-4468-a9cb-484c0a4a8ab2\") " pod="openstack/horizon-75b875c965-2f4nl" Feb 03 12:30:11 crc kubenswrapper[4820]: I0203 12:30:11.446667 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 03 12:30:11 crc kubenswrapper[4820]: I0203 12:30:11.521745 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-74f6bcbc87-8bv2r" podUID="2724e07a-0753-44be-93f3-4ecc9696f686" containerName="dnsmasq-dns" containerID="cri-o://90d511d52ef853aaddbe3b0656e6a1bf1224f7d55ea5bd0fb10faecfa6d8996f" gracePeriod=10 Feb 03 12:30:11 crc kubenswrapper[4820]: I0203 12:30:11.822582 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-75b875c965-2f4nl" Feb 03 12:30:11 crc kubenswrapper[4820]: I0203 12:30:11.822603 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 03 12:30:11 crc kubenswrapper[4820]: I0203 12:30:11.823498 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-9csj4"] Feb 03 12:30:11 crc kubenswrapper[4820]: I0203 12:30:11.825507 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-vzphk"] Feb 03 12:30:11 crc kubenswrapper[4820]: I0203 12:30:11.825632 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-9csj4"] Feb 03 12:30:11 crc kubenswrapper[4820]: I0203 12:30:11.825837 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-9csj4" Feb 03 12:30:11 crc kubenswrapper[4820]: I0203 12:30:11.827264 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 03 12:30:11 crc kubenswrapper[4820]: I0203 12:30:11.870159 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Feb 03 12:30:11 crc kubenswrapper[4820]: I0203 12:30:11.870843 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Feb 03 12:30:11 crc kubenswrapper[4820]: I0203 12:30:11.871255 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-rp7hq" Feb 03 12:30:11 crc kubenswrapper[4820]: I0203 12:30:11.872039 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 03 12:30:11 crc kubenswrapper[4820]: I0203 12:30:11.873318 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 03 12:30:11 crc kubenswrapper[4820]: I0203 12:30:11.873517 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Feb 03 12:30:11 crc kubenswrapper[4820]: I0203 12:30:11.873684 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-twgm4" Feb 03 12:30:11 crc kubenswrapper[4820]: I0203 12:30:11.925533 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"bd8c517f-dae9-49ac-ad81-ef5659ce80c9\") " pod="openstack/glance-default-external-api-0" Feb 03 12:30:11 crc kubenswrapper[4820]: I0203 12:30:11.925590 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9xfqb\" (UniqueName: \"kubernetes.io/projected/470b8f27-2959-4890-aed3-361530b83b73-kube-api-access-9xfqb\") pod \"placement-db-sync-9csj4\" (UID: \"470b8f27-2959-4890-aed3-361530b83b73\") " pod="openstack/placement-db-sync-9csj4" Feb 03 12:30:11 crc kubenswrapper[4820]: I0203 12:30:11.925636 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/bd8c517f-dae9-49ac-ad81-ef5659ce80c9-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"bd8c517f-dae9-49ac-ad81-ef5659ce80c9\") " pod="openstack/glance-default-external-api-0" Feb 03 12:30:11 crc kubenswrapper[4820]: I0203 12:30:11.925662 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd8c517f-dae9-49ac-ad81-ef5659ce80c9-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"bd8c517f-dae9-49ac-ad81-ef5659ce80c9\") " pod="openstack/glance-default-external-api-0" Feb 03 12:30:11 crc kubenswrapper[4820]: I0203 12:30:11.925677 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bd8c517f-dae9-49ac-ad81-ef5659ce80c9-logs\") pod \"glance-default-external-api-0\" (UID: \"bd8c517f-dae9-49ac-ad81-ef5659ce80c9\") " pod="openstack/glance-default-external-api-0" Feb 03 12:30:11 crc kubenswrapper[4820]: I0203 12:30:11.925710 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bd8c517f-dae9-49ac-ad81-ef5659ce80c9-scripts\") pod 
\"glance-default-external-api-0\" (UID: \"bd8c517f-dae9-49ac-ad81-ef5659ce80c9\") " pod="openstack/glance-default-external-api-0" Feb 03 12:30:11 crc kubenswrapper[4820]: I0203 12:30:11.925772 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jtfmw\" (UniqueName: \"kubernetes.io/projected/bd8c517f-dae9-49ac-ad81-ef5659ce80c9-kube-api-access-jtfmw\") pod \"glance-default-external-api-0\" (UID: \"bd8c517f-dae9-49ac-ad81-ef5659ce80c9\") " pod="openstack/glance-default-external-api-0" Feb 03 12:30:11 crc kubenswrapper[4820]: I0203 12:30:11.925820 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/470b8f27-2959-4890-aed3-361530b83b73-config-data\") pod \"placement-db-sync-9csj4\" (UID: \"470b8f27-2959-4890-aed3-361530b83b73\") " pod="openstack/placement-db-sync-9csj4" Feb 03 12:30:11 crc kubenswrapper[4820]: I0203 12:30:11.925846 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/470b8f27-2959-4890-aed3-361530b83b73-scripts\") pod \"placement-db-sync-9csj4\" (UID: \"470b8f27-2959-4890-aed3-361530b83b73\") " pod="openstack/placement-db-sync-9csj4" Feb 03 12:30:11 crc kubenswrapper[4820]: I0203 12:30:11.925869 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/470b8f27-2959-4890-aed3-361530b83b73-combined-ca-bundle\") pod \"placement-db-sync-9csj4\" (UID: \"470b8f27-2959-4890-aed3-361530b83b73\") " pod="openstack/placement-db-sync-9csj4" Feb 03 12:30:11 crc kubenswrapper[4820]: I0203 12:30:11.925951 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bd8c517f-dae9-49ac-ad81-ef5659ce80c9-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"bd8c517f-dae9-49ac-ad81-ef5659ce80c9\") " pod="openstack/glance-default-external-api-0" Feb 03 12:30:11 crc kubenswrapper[4820]: I0203 12:30:11.925991 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd8c517f-dae9-49ac-ad81-ef5659ce80c9-config-data\") pod \"glance-default-external-api-0\" (UID: \"bd8c517f-dae9-49ac-ad81-ef5659ce80c9\") " pod="openstack/glance-default-external-api-0" Feb 03 12:30:11 crc kubenswrapper[4820]: I0203 12:30:11.926042 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/470b8f27-2959-4890-aed3-361530b83b73-logs\") pod \"placement-db-sync-9csj4\" (UID: \"470b8f27-2959-4890-aed3-361530b83b73\") " pod="openstack/placement-db-sync-9csj4" Feb 03 12:30:12 crc kubenswrapper[4820]: I0203 12:30:12.029434 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bd8c517f-dae9-49ac-ad81-ef5659ce80c9-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"bd8c517f-dae9-49ac-ad81-ef5659ce80c9\") " pod="openstack/glance-default-external-api-0" Feb 03 12:30:12 crc kubenswrapper[4820]: I0203 12:30:12.029756 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/bd8c517f-dae9-49ac-ad81-ef5659ce80c9-config-data\") pod \"glance-default-external-api-0\" (UID: \"bd8c517f-dae9-49ac-ad81-ef5659ce80c9\") " pod="openstack/glance-default-external-api-0" Feb 03 12:30:12 crc kubenswrapper[4820]: I0203 12:30:12.029807 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/470b8f27-2959-4890-aed3-361530b83b73-logs\") pod \"placement-db-sync-9csj4\" (UID: \"470b8f27-2959-4890-aed3-361530b83b73\") " pod="openstack/placement-db-sync-9csj4" Feb 03 12:30:12 crc kubenswrapper[4820]: I0203 12:30:12.029834 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"bd8c517f-dae9-49ac-ad81-ef5659ce80c9\") " pod="openstack/glance-default-external-api-0" Feb 03 12:30:12 crc kubenswrapper[4820]: I0203 12:30:12.029854 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9xfqb\" (UniqueName: \"kubernetes.io/projected/470b8f27-2959-4890-aed3-361530b83b73-kube-api-access-9xfqb\") pod \"placement-db-sync-9csj4\" (UID: \"470b8f27-2959-4890-aed3-361530b83b73\") " pod="openstack/placement-db-sync-9csj4" Feb 03 12:30:12 crc kubenswrapper[4820]: I0203 12:30:12.029907 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/bd8c517f-dae9-49ac-ad81-ef5659ce80c9-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"bd8c517f-dae9-49ac-ad81-ef5659ce80c9\") " pod="openstack/glance-default-external-api-0" Feb 03 12:30:12 crc kubenswrapper[4820]: I0203 12:30:12.029928 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd8c517f-dae9-49ac-ad81-ef5659ce80c9-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"bd8c517f-dae9-49ac-ad81-ef5659ce80c9\") " pod="openstack/glance-default-external-api-0" Feb 03 12:30:12 crc kubenswrapper[4820]: I0203 12:30:12.029958 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bd8c517f-dae9-49ac-ad81-ef5659ce80c9-logs\") pod \"glance-default-external-api-0\" (UID: \"bd8c517f-dae9-49ac-ad81-ef5659ce80c9\") " pod="openstack/glance-default-external-api-0" Feb 03 12:30:12 crc kubenswrapper[4820]: I0203 12:30:12.029975 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bd8c517f-dae9-49ac-ad81-ef5659ce80c9-scripts\") pod \"glance-default-external-api-0\" (UID: \"bd8c517f-dae9-49ac-ad81-ef5659ce80c9\") " pod="openstack/glance-default-external-api-0" Feb 03 12:30:12 crc kubenswrapper[4820]: I0203 12:30:12.030023 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jtfmw\" (UniqueName: \"kubernetes.io/projected/bd8c517f-dae9-49ac-ad81-ef5659ce80c9-kube-api-access-jtfmw\") pod \"glance-default-external-api-0\" (UID: \"bd8c517f-dae9-49ac-ad81-ef5659ce80c9\") " pod="openstack/glance-default-external-api-0" Feb 03 12:30:12 crc kubenswrapper[4820]: I0203 12:30:12.030060 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/470b8f27-2959-4890-aed3-361530b83b73-config-data\") pod \"placement-db-sync-9csj4\" (UID: 
\"470b8f27-2959-4890-aed3-361530b83b73\") " pod="openstack/placement-db-sync-9csj4" Feb 03 12:30:12 crc kubenswrapper[4820]: I0203 12:30:12.030075 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/470b8f27-2959-4890-aed3-361530b83b73-scripts\") pod \"placement-db-sync-9csj4\" (UID: \"470b8f27-2959-4890-aed3-361530b83b73\") " pod="openstack/placement-db-sync-9csj4" Feb 03 12:30:12 crc kubenswrapper[4820]: I0203 12:30:12.030090 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/470b8f27-2959-4890-aed3-361530b83b73-combined-ca-bundle\") pod \"placement-db-sync-9csj4\" (UID: \"470b8f27-2959-4890-aed3-361530b83b73\") " pod="openstack/placement-db-sync-9csj4" Feb 03 12:30:12 crc kubenswrapper[4820]: I0203 12:30:12.030638 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 03 12:30:12 crc kubenswrapper[4820]: I0203 12:30:12.032480 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/470b8f27-2959-4890-aed3-361530b83b73-logs\") pod \"placement-db-sync-9csj4\" (UID: \"470b8f27-2959-4890-aed3-361530b83b73\") " pod="openstack/placement-db-sync-9csj4" Feb 03 12:30:12 crc kubenswrapper[4820]: I0203 12:30:12.043436 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bd8c517f-dae9-49ac-ad81-ef5659ce80c9-logs\") pod \"glance-default-external-api-0\" (UID: \"bd8c517f-dae9-49ac-ad81-ef5659ce80c9\") " pod="openstack/glance-default-external-api-0" Feb 03 12:30:12 crc kubenswrapper[4820]: I0203 12:30:12.045340 4820 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"bd8c517f-dae9-49ac-ad81-ef5659ce80c9\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/glance-default-external-api-0" Feb 03 12:30:12 crc kubenswrapper[4820]: I0203 12:30:12.047013 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/470b8f27-2959-4890-aed3-361530b83b73-combined-ca-bundle\") pod \"placement-db-sync-9csj4\" (UID: \"470b8f27-2959-4890-aed3-361530b83b73\") " pod="openstack/placement-db-sync-9csj4" Feb 03 12:30:12 crc kubenswrapper[4820]: I0203 12:30:12.056058 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-d5vcp"] Feb 03 12:30:12 crc kubenswrapper[4820]: I0203 12:30:12.099653 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-d5vcp" Feb 03 12:30:12 crc kubenswrapper[4820]: I0203 12:30:12.056391 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/bd8c517f-dae9-49ac-ad81-ef5659ce80c9-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"bd8c517f-dae9-49ac-ad81-ef5659ce80c9\") " pod="openstack/glance-default-external-api-0" Feb 03 12:30:12 crc kubenswrapper[4820]: I0203 12:30:12.069881 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bd8c517f-dae9-49ac-ad81-ef5659ce80c9-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"bd8c517f-dae9-49ac-ad81-ef5659ce80c9\") " pod="openstack/glance-default-external-api-0" Feb 03 12:30:12 crc kubenswrapper[4820]: I0203 12:30:12.072554 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bd8c517f-dae9-49ac-ad81-ef5659ce80c9-scripts\") pod \"glance-default-external-api-0\" (UID: \"bd8c517f-dae9-49ac-ad81-ef5659ce80c9\") " pod="openstack/glance-default-external-api-0" Feb 03 12:30:12 crc kubenswrapper[4820]: I0203 12:30:12.072864 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/470b8f27-2959-4890-aed3-361530b83b73-scripts\") pod \"placement-db-sync-9csj4\" (UID: \"470b8f27-2959-4890-aed3-361530b83b73\") " pod="openstack/placement-db-sync-9csj4" Feb 03 12:30:12 crc kubenswrapper[4820]: I0203 12:30:12.084739 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/470b8f27-2959-4890-aed3-361530b83b73-config-data\") pod \"placement-db-sync-9csj4\" (UID: \"470b8f27-2959-4890-aed3-361530b83b73\") " pod="openstack/placement-db-sync-9csj4" Feb 03 12:30:12 crc kubenswrapper[4820]: I0203 12:30:12.094543 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd8c517f-dae9-49ac-ad81-ef5659ce80c9-config-data\") pod \"glance-default-external-api-0\" (UID: \"bd8c517f-dae9-49ac-ad81-ef5659ce80c9\") " pod="openstack/glance-default-external-api-0" Feb 03 12:30:12 crc kubenswrapper[4820]: I0203 12:30:12.095832 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd8c517f-dae9-49ac-ad81-ef5659ce80c9-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"bd8c517f-dae9-49ac-ad81-ef5659ce80c9\") " pod="openstack/glance-default-external-api-0" Feb 03 12:30:12 crc kubenswrapper[4820]: E0203 12:30:12.068625 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[combined-ca-bundle config-data glance httpd-run kube-api-access-jtfmw logs public-tls-certs scripts], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/glance-default-external-api-0" podUID="bd8c517f-dae9-49ac-ad81-ef5659ce80c9" Feb 03 12:30:12 crc kubenswrapper[4820]: I0203 12:30:12.187554 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9xfqb\" (UniqueName: \"kubernetes.io/projected/470b8f27-2959-4890-aed3-361530b83b73-kube-api-access-9xfqb\") pod \"placement-db-sync-9csj4\" (UID: \"470b8f27-2959-4890-aed3-361530b83b73\") " pod="openstack/placement-db-sync-9csj4" Feb 03 12:30:12 crc kubenswrapper[4820]: I0203 12:30:12.258072 4820 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"bd8c517f-dae9-49ac-ad81-ef5659ce80c9\") " pod="openstack/glance-default-external-api-0" Feb 03 12:30:12 crc kubenswrapper[4820]: I0203 12:30:12.261733 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 03 12:30:12 crc kubenswrapper[4820]: I0203 12:30:12.266867 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 03 12:30:12 crc kubenswrapper[4820]: I0203 12:30:12.269236 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jtfmw\" (UniqueName: \"kubernetes.io/projected/bd8c517f-dae9-49ac-ad81-ef5659ce80c9-kube-api-access-jtfmw\") pod \"glance-default-external-api-0\" (UID: \"bd8c517f-dae9-49ac-ad81-ef5659ce80c9\") " pod="openstack/glance-default-external-api-0" Feb 03 12:30:12 crc kubenswrapper[4820]: I0203 12:30:12.273198 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 03 12:30:12 crc kubenswrapper[4820]: I0203 12:30:12.273528 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 03 12:30:12 crc kubenswrapper[4820]: I0203 12:30:12.298461 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-xsjm7"] Feb 03 12:30:12 crc kubenswrapper[4820]: I0203 12:30:12.308302 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-xsjm7" Feb 03 12:30:12 crc kubenswrapper[4820]: I0203 12:30:12.309573 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-9csj4"
Feb 03 12:30:12 crc kubenswrapper[4820]: I0203 12:30:12.321611 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data"
Feb 03 12:30:12 crc kubenswrapper[4820]: I0203 12:30:12.321855 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-kddvd"
Feb 03 12:30:12 crc kubenswrapper[4820]: I0203 12:30:12.357022 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e42f59b9-8816-4e43-869b-1c6cd36b4034-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"e42f59b9-8816-4e43-869b-1c6cd36b4034\") " pod="openstack/glance-default-internal-api-0"
Feb 03 12:30:12 crc kubenswrapper[4820]: I0203 12:30:12.357069 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e42f59b9-8816-4e43-869b-1c6cd36b4034-logs\") pod \"glance-default-internal-api-0\" (UID: \"e42f59b9-8816-4e43-869b-1c6cd36b4034\") " pod="openstack/glance-default-internal-api-0"
Feb 03 12:30:12 crc kubenswrapper[4820]: I0203 12:30:12.357115 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5f5c5f87-b592-4f5d-86bc-3069985ae61a-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-d5vcp\" (UID: \"5f5c5f87-b592-4f5d-86bc-3069985ae61a\") " pod="openstack/dnsmasq-dns-785d8bcb8c-d5vcp"
Feb 03 12:30:12 crc kubenswrapper[4820]: I0203 12:30:12.357224 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"e42f59b9-8816-4e43-869b-1c6cd36b4034\") " pod="openstack/glance-default-internal-api-0"
Feb 03 12:30:12 crc kubenswrapper[4820]: I0203 12:30:12.357254 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e42f59b9-8816-4e43-869b-1c6cd36b4034-scripts\") pod \"glance-default-internal-api-0\" (UID: \"e42f59b9-8816-4e43-869b-1c6cd36b4034\") " pod="openstack/glance-default-internal-api-0"
Feb 03 12:30:12 crc kubenswrapper[4820]: I0203 12:30:12.357292 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5f5c5f87-b592-4f5d-86bc-3069985ae61a-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-d5vcp\" (UID: \"5f5c5f87-b592-4f5d-86bc-3069985ae61a\") " pod="openstack/dnsmasq-dns-785d8bcb8c-d5vcp"
Feb 03 12:30:12 crc kubenswrapper[4820]: I0203 12:30:12.357345 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5f5c5f87-b592-4f5d-86bc-3069985ae61a-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-d5vcp\" (UID: \"5f5c5f87-b592-4f5d-86bc-3069985ae61a\") " pod="openstack/dnsmasq-dns-785d8bcb8c-d5vcp"
Feb 03 12:30:12 crc kubenswrapper[4820]: I0203 12:30:12.357375 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5f5c5f87-b592-4f5d-86bc-3069985ae61a-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-d5vcp\" (UID: \"5f5c5f87-b592-4f5d-86bc-3069985ae61a\") " pod="openstack/dnsmasq-dns-785d8bcb8c-d5vcp"
Feb 03 12:30:12 crc kubenswrapper[4820]: I0203 12:30:12.357412 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e42f59b9-8816-4e43-869b-1c6cd36b4034-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"e42f59b9-8816-4e43-869b-1c6cd36b4034\") " pod="openstack/glance-default-internal-api-0"
Feb 03 12:30:12 crc kubenswrapper[4820]: I0203 12:30:12.357458 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fp2w6\" (UniqueName: \"kubernetes.io/projected/e42f59b9-8816-4e43-869b-1c6cd36b4034-kube-api-access-fp2w6\") pod \"glance-default-internal-api-0\" (UID: \"e42f59b9-8816-4e43-869b-1c6cd36b4034\") " pod="openstack/glance-default-internal-api-0"
Feb 03 12:30:12 crc kubenswrapper[4820]: I0203 12:30:12.357507 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f5c5f87-b592-4f5d-86bc-3069985ae61a-config\") pod \"dnsmasq-dns-785d8bcb8c-d5vcp\" (UID: \"5f5c5f87-b592-4f5d-86bc-3069985ae61a\") " pod="openstack/dnsmasq-dns-785d8bcb8c-d5vcp"
Feb 03 12:30:12 crc kubenswrapper[4820]: I0203 12:30:12.357555 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e42f59b9-8816-4e43-869b-1c6cd36b4034-config-data\") pod \"glance-default-internal-api-0\" (UID: \"e42f59b9-8816-4e43-869b-1c6cd36b4034\") " pod="openstack/glance-default-internal-api-0"
Feb 03 12:30:12 crc kubenswrapper[4820]: I0203 12:30:12.357578 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e42f59b9-8816-4e43-869b-1c6cd36b4034-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"e42f59b9-8816-4e43-869b-1c6cd36b4034\") " pod="openstack/glance-default-internal-api-0"
Feb 03 12:30:12 crc kubenswrapper[4820]: I0203 12:30:12.357628 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5k59s\" (UniqueName: \"kubernetes.io/projected/5f5c5f87-b592-4f5d-86bc-3069985ae61a-kube-api-access-5k59s\") pod \"dnsmasq-dns-785d8bcb8c-d5vcp\" (UID: \"5f5c5f87-b592-4f5d-86bc-3069985ae61a\") " pod="openstack/dnsmasq-dns-785d8bcb8c-d5vcp"
Feb 03 12:30:12 crc kubenswrapper[4820]: I0203 12:30:12.357831 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-d5vcp"]
Feb 03 12:30:12 crc kubenswrapper[4820]: I0203 12:30:12.513846 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 03 12:30:12 crc kubenswrapper[4820]: I0203 12:30:12.517337 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5k59s\" (UniqueName: \"kubernetes.io/projected/5f5c5f87-b592-4f5d-86bc-3069985ae61a-kube-api-access-5k59s\") pod \"dnsmasq-dns-785d8bcb8c-d5vcp\" (UID: \"5f5c5f87-b592-4f5d-86bc-3069985ae61a\") " pod="openstack/dnsmasq-dns-785d8bcb8c-d5vcp"
Feb 03 12:30:12 crc kubenswrapper[4820]: I0203 12:30:12.517433 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4116aff-b63f-47f1-b4bd-5bde84226d87-combined-ca-bundle\") pod \"barbican-db-sync-xsjm7\" (UID: \"f4116aff-b63f-47f1-b4bd-5bde84226d87\") " pod="openstack/barbican-db-sync-xsjm7"
Feb 03 12:30:12 crc kubenswrapper[4820]: I0203 12:30:12.517491 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hhp8l\" (UniqueName: \"kubernetes.io/projected/f4116aff-b63f-47f1-b4bd-5bde84226d87-kube-api-access-hhp8l\") pod \"barbican-db-sync-xsjm7\" (UID: \"f4116aff-b63f-47f1-b4bd-5bde84226d87\") " pod="openstack/barbican-db-sync-xsjm7"
Feb 03 12:30:12 crc kubenswrapper[4820]: I0203 12:30:12.517531 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f4116aff-b63f-47f1-b4bd-5bde84226d87-db-sync-config-data\") pod \"barbican-db-sync-xsjm7\" (UID: \"f4116aff-b63f-47f1-b4bd-5bde84226d87\") " pod="openstack/barbican-db-sync-xsjm7"
Feb 03 12:30:12 crc kubenswrapper[4820]: I0203 12:30:12.517591 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e42f59b9-8816-4e43-869b-1c6cd36b4034-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"e42f59b9-8816-4e43-869b-1c6cd36b4034\") " pod="openstack/glance-default-internal-api-0"
Feb 03 12:30:12 crc kubenswrapper[4820]: I0203 12:30:12.517615 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e42f59b9-8816-4e43-869b-1c6cd36b4034-logs\") pod \"glance-default-internal-api-0\" (UID: \"e42f59b9-8816-4e43-869b-1c6cd36b4034\") " pod="openstack/glance-default-internal-api-0"
Feb 03 12:30:12 crc kubenswrapper[4820]: I0203 12:30:12.517685 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5f5c5f87-b592-4f5d-86bc-3069985ae61a-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-d5vcp\" (UID: \"5f5c5f87-b592-4f5d-86bc-3069985ae61a\") " pod="openstack/dnsmasq-dns-785d8bcb8c-d5vcp"
Feb 03 12:30:12 crc kubenswrapper[4820]: I0203 12:30:12.517841 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"e42f59b9-8816-4e43-869b-1c6cd36b4034\") " pod="openstack/glance-default-internal-api-0"
Feb 03 12:30:12 crc kubenswrapper[4820]: I0203 12:30:12.517909 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e42f59b9-8816-4e43-869b-1c6cd36b4034-scripts\") pod \"glance-default-internal-api-0\" (UID: \"e42f59b9-8816-4e43-869b-1c6cd36b4034\") " pod="openstack/glance-default-internal-api-0"
Feb 03 12:30:12 crc kubenswrapper[4820]: I0203 12:30:12.517936 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5f5c5f87-b592-4f5d-86bc-3069985ae61a-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-d5vcp\" (UID: \"5f5c5f87-b592-4f5d-86bc-3069985ae61a\") " pod="openstack/dnsmasq-dns-785d8bcb8c-d5vcp"
Feb 03 12:30:12 crc kubenswrapper[4820]: I0203 12:30:12.518006 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5f5c5f87-b592-4f5d-86bc-3069985ae61a-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-d5vcp\" (UID: \"5f5c5f87-b592-4f5d-86bc-3069985ae61a\") " pod="openstack/dnsmasq-dns-785d8bcb8c-d5vcp"
Feb 03 12:30:12 crc kubenswrapper[4820]: I0203 12:30:12.518029 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5f5c5f87-b592-4f5d-86bc-3069985ae61a-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-d5vcp\" (UID: \"5f5c5f87-b592-4f5d-86bc-3069985ae61a\") " pod="openstack/dnsmasq-dns-785d8bcb8c-d5vcp"
Feb 03 12:30:12 crc kubenswrapper[4820]: I0203 12:30:12.518080 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e42f59b9-8816-4e43-869b-1c6cd36b4034-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"e42f59b9-8816-4e43-869b-1c6cd36b4034\") " pod="openstack/glance-default-internal-api-0"
Feb 03 12:30:12 crc kubenswrapper[4820]: I0203 12:30:12.518123 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fp2w6\" (UniqueName: \"kubernetes.io/projected/e42f59b9-8816-4e43-869b-1c6cd36b4034-kube-api-access-fp2w6\") pod \"glance-default-internal-api-0\" (UID: \"e42f59b9-8816-4e43-869b-1c6cd36b4034\") " pod="openstack/glance-default-internal-api-0"
Feb 03 12:30:12 crc kubenswrapper[4820]: I0203 12:30:12.518197 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f5c5f87-b592-4f5d-86bc-3069985ae61a-config\") pod \"dnsmasq-dns-785d8bcb8c-d5vcp\" (UID: \"5f5c5f87-b592-4f5d-86bc-3069985ae61a\") " pod="openstack/dnsmasq-dns-785d8bcb8c-d5vcp"
Feb 03 12:30:12 crc kubenswrapper[4820]: I0203 12:30:12.518273 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e42f59b9-8816-4e43-869b-1c6cd36b4034-config-data\") pod \"glance-default-internal-api-0\" (UID: \"e42f59b9-8816-4e43-869b-1c6cd36b4034\") " pod="openstack/glance-default-internal-api-0"
Feb 03 12:30:12 crc kubenswrapper[4820]: I0203 12:30:12.518322 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e42f59b9-8816-4e43-869b-1c6cd36b4034-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"e42f59b9-8816-4e43-869b-1c6cd36b4034\") " pod="openstack/glance-default-internal-api-0"
Feb 03 12:30:12 crc kubenswrapper[4820]: I0203 12:30:12.519620 4820 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"e42f59b9-8816-4e43-869b-1c6cd36b4034\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/glance-default-internal-api-0"
Feb 03 12:30:12 crc kubenswrapper[4820]: I0203 12:30:12.520720 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e42f59b9-8816-4e43-869b-1c6cd36b4034-logs\") pod \"glance-default-internal-api-0\" (UID: \"e42f59b9-8816-4e43-869b-1c6cd36b4034\") " pod="openstack/glance-default-internal-api-0"
Feb 03 12:30:12 crc kubenswrapper[4820]: I0203 12:30:12.521523 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e42f59b9-8816-4e43-869b-1c6cd36b4034-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"e42f59b9-8816-4e43-869b-1c6cd36b4034\") " pod="openstack/glance-default-internal-api-0"
Feb 03 12:30:12 crc kubenswrapper[4820]: I0203 12:30:12.531775 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f5c5f87-b592-4f5d-86bc-3069985ae61a-config\") pod \"dnsmasq-dns-785d8bcb8c-d5vcp\" (UID: \"5f5c5f87-b592-4f5d-86bc-3069985ae61a\") " pod="openstack/dnsmasq-dns-785d8bcb8c-d5vcp"
Feb 03 12:30:12 crc kubenswrapper[4820]: I0203 12:30:12.535553 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5f5c5f87-b592-4f5d-86bc-3069985ae61a-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-d5vcp\" (UID: \"5f5c5f87-b592-4f5d-86bc-3069985ae61a\") " pod="openstack/dnsmasq-dns-785d8bcb8c-d5vcp"
Feb 03 12:30:12 crc kubenswrapper[4820]: I0203 12:30:12.537506 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5f5c5f87-b592-4f5d-86bc-3069985ae61a-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-d5vcp\" (UID: \"5f5c5f87-b592-4f5d-86bc-3069985ae61a\") " pod="openstack/dnsmasq-dns-785d8bcb8c-d5vcp"
Feb 03 12:30:12 crc kubenswrapper[4820]: I0203 12:30:12.561310 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-xsjm7"]
Feb 03 12:30:12 crc kubenswrapper[4820]: I0203 12:30:12.563654 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5f5c5f87-b592-4f5d-86bc-3069985ae61a-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-d5vcp\" (UID: \"5f5c5f87-b592-4f5d-86bc-3069985ae61a\") " pod="openstack/dnsmasq-dns-785d8bcb8c-d5vcp"
Feb 03 12:30:12 crc kubenswrapper[4820]: I0203 12:30:12.563705 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e42f59b9-8816-4e43-869b-1c6cd36b4034-scripts\") pod \"glance-default-internal-api-0\" (UID: \"e42f59b9-8816-4e43-869b-1c6cd36b4034\") " pod="openstack/glance-default-internal-api-0"
Feb 03 12:30:12 crc kubenswrapper[4820]: I0203 12:30:12.564334 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5f5c5f87-b592-4f5d-86bc-3069985ae61a-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-d5vcp\" (UID: \"5f5c5f87-b592-4f5d-86bc-3069985ae61a\") " pod="openstack/dnsmasq-dns-785d8bcb8c-d5vcp"
Feb 03 12:30:12 crc kubenswrapper[4820]: I0203 12:30:12.570192 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e42f59b9-8816-4e43-869b-1c6cd36b4034-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"e42f59b9-8816-4e43-869b-1c6cd36b4034\") " pod="openstack/glance-default-internal-api-0"
Feb 03 12:30:12 crc kubenswrapper[4820]: I0203 12:30:12.581270 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e42f59b9-8816-4e43-869b-1c6cd36b4034-config-data\") pod \"glance-default-internal-api-0\" (UID: \"e42f59b9-8816-4e43-869b-1c6cd36b4034\") " pod="openstack/glance-default-internal-api-0"
Feb 03 12:30:12 crc kubenswrapper[4820]: I0203 12:30:12.587328 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5k59s\" (UniqueName: \"kubernetes.io/projected/5f5c5f87-b592-4f5d-86bc-3069985ae61a-kube-api-access-5k59s\") pod \"dnsmasq-dns-785d8bcb8c-d5vcp\" (UID: \"5f5c5f87-b592-4f5d-86bc-3069985ae61a\") " pod="openstack/dnsmasq-dns-785d8bcb8c-d5vcp"
Feb 03 12:30:12 crc kubenswrapper[4820]: I0203 12:30:12.589792 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fp2w6\" (UniqueName: \"kubernetes.io/projected/e42f59b9-8816-4e43-869b-1c6cd36b4034-kube-api-access-fp2w6\") pod \"glance-default-internal-api-0\" (UID: \"e42f59b9-8816-4e43-869b-1c6cd36b4034\") " pod="openstack/glance-default-internal-api-0"
Feb 03 12:30:12 crc kubenswrapper[4820]: I0203 12:30:12.590253 4820 generic.go:334] "Generic (PLEG): container finished" podID="2724e07a-0753-44be-93f3-4ecc9696f686" containerID="90d511d52ef853aaddbe3b0656e6a1bf1224f7d55ea5bd0fb10faecfa6d8996f" exitCode=0
Feb 03 12:30:12 crc kubenswrapper[4820]: I0203 12:30:12.590397 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-8bv2r" event={"ID":"2724e07a-0753-44be-93f3-4ecc9696f686","Type":"ContainerDied","Data":"90d511d52ef853aaddbe3b0656e6a1bf1224f7d55ea5bd0fb10faecfa6d8996f"}
Feb 03 12:30:12 crc kubenswrapper[4820]: I0203 12:30:12.597796 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e42f59b9-8816-4e43-869b-1c6cd36b4034-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"e42f59b9-8816-4e43-869b-1c6cd36b4034\") " pod="openstack/glance-default-internal-api-0"
Feb 03 12:30:12 crc kubenswrapper[4820]: I0203 12:30:12.601044 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-gjsm4" event={"ID":"2c88a5c9-941c-40ef-a3c8-7ff304ea0517","Type":"ContainerStarted","Data":"55658c94bb3910517c0386f573788ae49cf5f61ff7f3f996ad20f8fe0100e516"}
Feb 03 12:30:12 crc kubenswrapper[4820]: I0203 12:30:12.605485 4820 generic.go:334] "Generic (PLEG): container finished" podID="0658b201-7c4e-4d71-ba2d-c2cb5dee1553" containerID="203e10bb8ec4cd8d38391a67e6322fed75f528ffc84047efd3a54eb07c57c7ab" exitCode=0
Feb 03 12:30:12 crc kubenswrapper[4820]: I0203 12:30:12.605593 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Feb 03 12:30:12 crc kubenswrapper[4820]: I0203 12:30:12.606196 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"0658b201-7c4e-4d71-ba2d-c2cb5dee1553","Type":"ContainerDied","Data":"203e10bb8ec4cd8d38391a67e6322fed75f528ffc84047efd3a54eb07c57c7ab"}
Feb 03 12:30:12 crc kubenswrapper[4820]: I0203 12:30:12.621645 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-gjsm4"]
Feb 03 12:30:12 crc kubenswrapper[4820]: I0203 12:30:12.623699 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4116aff-b63f-47f1-b4bd-5bde84226d87-combined-ca-bundle\") pod \"barbican-db-sync-xsjm7\" (UID: \"f4116aff-b63f-47f1-b4bd-5bde84226d87\") " pod="openstack/barbican-db-sync-xsjm7"
Feb 03 12:30:12 crc kubenswrapper[4820]: I0203 12:30:12.623856 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hhp8l\" (UniqueName: \"kubernetes.io/projected/f4116aff-b63f-47f1-b4bd-5bde84226d87-kube-api-access-hhp8l\") pod \"barbican-db-sync-xsjm7\" (UID: \"f4116aff-b63f-47f1-b4bd-5bde84226d87\") " pod="openstack/barbican-db-sync-xsjm7"
Feb 03 12:30:12 crc kubenswrapper[4820]: I0203 12:30:12.623955 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f4116aff-b63f-47f1-b4bd-5bde84226d87-db-sync-config-data\") pod \"barbican-db-sync-xsjm7\" (UID: \"f4116aff-b63f-47f1-b4bd-5bde84226d87\") " pod="openstack/barbican-db-sync-xsjm7"
Feb 03 12:30:12 crc kubenswrapper[4820]: I0203 12:30:12.644486 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-7c85bb46f7-qz2f2"]
Feb 03 12:30:12 crc kubenswrapper[4820]: I0203 12:30:12.656285 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7c85bb46f7-qz2f2"
Feb 03 12:30:12 crc kubenswrapper[4820]: I0203 12:30:12.656793 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hhp8l\" (UniqueName: \"kubernetes.io/projected/f4116aff-b63f-47f1-b4bd-5bde84226d87-kube-api-access-hhp8l\") pod \"barbican-db-sync-xsjm7\" (UID: \"f4116aff-b63f-47f1-b4bd-5bde84226d87\") " pod="openstack/barbican-db-sync-xsjm7"
Feb 03 12:30:12 crc kubenswrapper[4820]: I0203 12:30:12.662021 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4116aff-b63f-47f1-b4bd-5bde84226d87-combined-ca-bundle\") pod \"barbican-db-sync-xsjm7\" (UID: \"f4116aff-b63f-47f1-b4bd-5bde84226d87\") " pod="openstack/barbican-db-sync-xsjm7"
Feb 03 12:30:12 crc kubenswrapper[4820]: I0203 12:30:12.674967 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f4116aff-b63f-47f1-b4bd-5bde84226d87-db-sync-config-data\") pod \"barbican-db-sync-xsjm7\" (UID: \"f4116aff-b63f-47f1-b4bd-5bde84226d87\") " pod="openstack/barbican-db-sync-xsjm7"
Feb 03 12:30:12 crc kubenswrapper[4820]: I0203 12:30:12.681131 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"e42f59b9-8816-4e43-869b-1c6cd36b4034\") " pod="openstack/glance-default-internal-api-0"
Feb 03 12:30:12 crc kubenswrapper[4820]: I0203 12:30:12.705631 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Feb 03 12:30:12 crc kubenswrapper[4820]: I0203 12:30:12.706460 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7c85bb46f7-qz2f2"]
Feb 03 12:30:12 crc kubenswrapper[4820]: I0203 12:30:12.739870 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-xsjm7"
Feb 03 12:30:12 crc kubenswrapper[4820]: I0203 12:30:12.754103 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Feb 03 12:30:12 crc kubenswrapper[4820]: I0203 12:30:12.756572 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 03 12:30:12 crc kubenswrapper[4820]: I0203 12:30:12.767737 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Feb 03 12:30:12 crc kubenswrapper[4820]: I0203 12:30:12.773690 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Feb 03 12:30:12 crc kubenswrapper[4820]: I0203 12:30:12.777729 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-d5vcp"
Feb 03 12:30:12 crc kubenswrapper[4820]: I0203 12:30:12.821830 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 03 12:30:12 crc kubenswrapper[4820]: I0203 12:30:12.841212 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bb707be0-2a48-4886-894b-cc7554a1be6f-logs\") pod \"horizon-7c85bb46f7-qz2f2\" (UID: \"bb707be0-2a48-4886-894b-cc7554a1be6f\") " pod="openstack/horizon-7c85bb46f7-qz2f2"
Feb 03 12:30:12 crc kubenswrapper[4820]: I0203 12:30:12.841368 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bb707be0-2a48-4886-894b-cc7554a1be6f-scripts\") pod \"horizon-7c85bb46f7-qz2f2\" (UID: \"bb707be0-2a48-4886-894b-cc7554a1be6f\") " pod="openstack/horizon-7c85bb46f7-qz2f2"
Feb 03 12:30:12 crc kubenswrapper[4820]: I0203 12:30:12.841397 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bb707be0-2a48-4886-894b-cc7554a1be6f-config-data\") pod \"horizon-7c85bb46f7-qz2f2\" (UID: \"bb707be0-2a48-4886-894b-cc7554a1be6f\") " pod="openstack/horizon-7c85bb46f7-qz2f2"
Feb 03 12:30:12 crc kubenswrapper[4820]: I0203 12:30:12.841458 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/bb707be0-2a48-4886-894b-cc7554a1be6f-horizon-secret-key\") pod \"horizon-7c85bb46f7-qz2f2\" (UID: \"bb707be0-2a48-4886-894b-cc7554a1be6f\") " pod="openstack/horizon-7c85bb46f7-qz2f2"
Feb 03 12:30:12 crc kubenswrapper[4820]: I0203 12:30:12.841551 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2hmfz\" (UniqueName: \"kubernetes.io/projected/bb707be0-2a48-4886-894b-cc7554a1be6f-kube-api-access-2hmfz\") pod \"horizon-7c85bb46f7-qz2f2\" (UID: \"bb707be0-2a48-4886-894b-cc7554a1be6f\") " pod="openstack/horizon-7c85bb46f7-qz2f2"
Feb 03 12:30:12 crc kubenswrapper[4820]: I0203 12:30:12.862950 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Feb 03 12:30:12 crc kubenswrapper[4820]: W0203 12:30:12.896980 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4d60ff7f_22cd_4518_a7d4_77dd49b9df10.slice/crio-b74fea519164049da04c9dbe373c4d12068ca1d54c35d5ddca66edd43f9d5093 WatchSource:0}: Error finding container b74fea519164049da04c9dbe373c4d12068ca1d54c35d5ddca66edd43f9d5093: Status 404 returned error can't find the container with id b74fea519164049da04c9dbe373c4d12068ca1d54c35d5ddca66edd43f9d5093
Feb 03 12:30:12 crc kubenswrapper[4820]: I0203 12:30:12.928514 4820 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 03 12:30:12 crc kubenswrapper[4820]: I0203 12:30:12.950627 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bd8c517f-dae9-49ac-ad81-ef5659ce80c9-scripts\") pod \"bd8c517f-dae9-49ac-ad81-ef5659ce80c9\" (UID: \"bd8c517f-dae9-49ac-ad81-ef5659ce80c9\") "
Feb 03 12:30:12 crc kubenswrapper[4820]: I0203 12:30:12.951964 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd8c517f-dae9-49ac-ad81-ef5659ce80c9-config-data\") pod \"bd8c517f-dae9-49ac-ad81-ef5659ce80c9\" (UID: \"bd8c517f-dae9-49ac-ad81-ef5659ce80c9\") "
Feb 03 12:30:12 crc kubenswrapper[4820]: I0203 12:30:12.955994 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd8c517f-dae9-49ac-ad81-ef5659ce80c9-combined-ca-bundle\") pod \"bd8c517f-dae9-49ac-ad81-ef5659ce80c9\" (UID: \"bd8c517f-dae9-49ac-ad81-ef5659ce80c9\") "
Feb 03 12:30:12 crc kubenswrapper[4820]: I0203 12:30:12.956736 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/bd8c517f-dae9-49ac-ad81-ef5659ce80c9-httpd-run\") pod \"bd8c517f-dae9-49ac-ad81-ef5659ce80c9\" (UID: \"bd8c517f-dae9-49ac-ad81-ef5659ce80c9\") "
Feb 03 12:30:12 crc kubenswrapper[4820]: I0203 12:30:12.958666 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bd8c517f-dae9-49ac-ad81-ef5659ce80c9-logs\") pod \"bd8c517f-dae9-49ac-ad81-ef5659ce80c9\" (UID: \"bd8c517f-dae9-49ac-ad81-ef5659ce80c9\") "
Feb 03 12:30:12 crc kubenswrapper[4820]: I0203 12:30:12.958790 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"bd8c517f-dae9-49ac-ad81-ef5659ce80c9\" (UID: \"bd8c517f-dae9-49ac-ad81-ef5659ce80c9\") "
Feb 03 12:30:12 crc kubenswrapper[4820]: I0203 12:30:12.959157 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bd8c517f-dae9-49ac-ad81-ef5659ce80c9-public-tls-certs\") pod \"bd8c517f-dae9-49ac-ad81-ef5659ce80c9\" (UID: \"bd8c517f-dae9-49ac-ad81-ef5659ce80c9\") "
Feb 03 12:30:12 crc kubenswrapper[4820]: I0203 12:30:12.959349 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jtfmw\" (UniqueName: \"kubernetes.io/projected/bd8c517f-dae9-49ac-ad81-ef5659ce80c9-kube-api-access-jtfmw\") pod \"bd8c517f-dae9-49ac-ad81-ef5659ce80c9\" (UID: \"bd8c517f-dae9-49ac-ad81-ef5659ce80c9\") "
Feb 03 12:30:12 crc kubenswrapper[4820]: I0203 12:30:12.961117 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd8c517f-dae9-49ac-ad81-ef5659ce80c9-config-data" (OuterVolumeSpecName: "config-data") pod "bd8c517f-dae9-49ac-ad81-ef5659ce80c9" (UID: "bd8c517f-dae9-49ac-ad81-ef5659ce80c9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 12:30:12 crc kubenswrapper[4820]: I0203 12:30:12.961617 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bd8c517f-dae9-49ac-ad81-ef5659ce80c9-logs" (OuterVolumeSpecName: "logs") pod "bd8c517f-dae9-49ac-ad81-ef5659ce80c9" (UID: "bd8c517f-dae9-49ac-ad81-ef5659ce80c9"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 03 12:30:12 crc kubenswrapper[4820]: I0203 12:30:12.962122 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bd8c517f-dae9-49ac-ad81-ef5659ce80c9-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "bd8c517f-dae9-49ac-ad81-ef5659ce80c9" (UID: "bd8c517f-dae9-49ac-ad81-ef5659ce80c9"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 03 12:30:12 crc kubenswrapper[4820]: I0203 12:30:12.965606 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd8c517f-dae9-49ac-ad81-ef5659ce80c9-scripts" (OuterVolumeSpecName: "scripts") pod "bd8c517f-dae9-49ac-ad81-ef5659ce80c9" (UID: "bd8c517f-dae9-49ac-ad81-ef5659ce80c9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 12:30:12 crc kubenswrapper[4820]: I0203 12:30:12.967555 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bb707be0-2a48-4886-894b-cc7554a1be6f-logs\") pod \"horizon-7c85bb46f7-qz2f2\" (UID: \"bb707be0-2a48-4886-894b-cc7554a1be6f\") " pod="openstack/horizon-7c85bb46f7-qz2f2"
Feb 03 12:30:12 crc kubenswrapper[4820]: I0203 12:30:12.966646 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bb707be0-2a48-4886-894b-cc7554a1be6f-logs\") pod \"horizon-7c85bb46f7-qz2f2\" (UID: \"bb707be0-2a48-4886-894b-cc7554a1be6f\") " pod="openstack/horizon-7c85bb46f7-qz2f2"
Feb 03 12:30:12 crc kubenswrapper[4820]: I0203 12:30:12.969882 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "glance") pod "bd8c517f-dae9-49ac-ad81-ef5659ce80c9" (UID: "bd8c517f-dae9-49ac-ad81-ef5659ce80c9"). InnerVolumeSpecName "local-storage08-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Feb 03 12:30:12 crc kubenswrapper[4820]: I0203 12:30:12.971075 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd8c517f-dae9-49ac-ad81-ef5659ce80c9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bd8c517f-dae9-49ac-ad81-ef5659ce80c9" (UID: "bd8c517f-dae9-49ac-ad81-ef5659ce80c9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 12:30:12 crc kubenswrapper[4820]: I0203 12:30:12.974465 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bb707be0-2a48-4886-894b-cc7554a1be6f-scripts\") pod \"horizon-7c85bb46f7-qz2f2\" (UID: \"bb707be0-2a48-4886-894b-cc7554a1be6f\") " pod="openstack/horizon-7c85bb46f7-qz2f2"
Feb 03 12:30:12 crc kubenswrapper[4820]: I0203 12:30:12.974591 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bb707be0-2a48-4886-894b-cc7554a1be6f-scripts\") pod \"horizon-7c85bb46f7-qz2f2\" (UID: \"bb707be0-2a48-4886-894b-cc7554a1be6f\") " pod="openstack/horizon-7c85bb46f7-qz2f2"
Feb 03 12:30:12 crc kubenswrapper[4820]: I0203 12:30:12.974719 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e2cc54f2-167c-4c79-b616-2e1cd122fed2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e2cc54f2-167c-4c79-b616-2e1cd122fed2\") " pod="openstack/ceilometer-0"
Feb 03 12:30:12 crc kubenswrapper[4820]: I0203 12:30:12.974927 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bb707be0-2a48-4886-894b-cc7554a1be6f-config-data\") pod \"horizon-7c85bb46f7-qz2f2\" (UID: \"bb707be0-2a48-4886-894b-cc7554a1be6f\") " pod="openstack/horizon-7c85bb46f7-qz2f2"
Feb 03 12:30:12 crc kubenswrapper[4820]: I0203 12:30:12.975083 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/bb707be0-2a48-4886-894b-cc7554a1be6f-horizon-secret-key\") pod \"horizon-7c85bb46f7-qz2f2\" (UID: \"bb707be0-2a48-4886-894b-cc7554a1be6f\") " pod="openstack/horizon-7c85bb46f7-qz2f2"
Feb 03 12:30:12 crc kubenswrapper[4820]: I0203 12:30:12.975204 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e2cc54f2-167c-4c79-b616-2e1cd122fed2-log-httpd\") pod \"ceilometer-0\" (UID: \"e2cc54f2-167c-4c79-b616-2e1cd122fed2\") " pod="openstack/ceilometer-0"
Feb 03 12:30:12 crc kubenswrapper[4820]: I0203 12:30:12.975474 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2hmfz\" (UniqueName: \"kubernetes.io/projected/bb707be0-2a48-4886-894b-cc7554a1be6f-kube-api-access-2hmfz\") pod \"horizon-7c85bb46f7-qz2f2\" (UID: \"bb707be0-2a48-4886-894b-cc7554a1be6f\") " pod="openstack/horizon-7c85bb46f7-qz2f2"
Feb 03 12:30:12 crc kubenswrapper[4820]: I0203 12:30:12.975662 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2cc54f2-167c-4c79-b616-2e1cd122fed2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e2cc54f2-167c-4c79-b616-2e1cd122fed2\") " pod="openstack/ceilometer-0"
Feb 03 12:30:12 crc kubenswrapper[4820]: I0203 12:30:12.975843 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2cc54f2-167c-4c79-b616-2e1cd122fed2-config-data\") pod \"ceilometer-0\" (UID: \"e2cc54f2-167c-4c79-b616-2e1cd122fed2\") " pod="openstack/ceilometer-0"
Feb 03 12:30:12 crc kubenswrapper[4820]: I0203 12:30:12.976102 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e2cc54f2-167c-4c79-b616-2e1cd122fed2-run-httpd\") pod \"ceilometer-0\" (UID: \"e2cc54f2-167c-4c79-b616-2e1cd122fed2\") " pod="openstack/ceilometer-0"
Feb 03 12:30:12 crc kubenswrapper[4820]: I0203 12:30:12.976260 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e2cc54f2-167c-4c79-b616-2e1cd122fed2-scripts\") pod \"ceilometer-0\" (UID: \"e2cc54f2-167c-4c79-b616-2e1cd122fed2\") " pod="openstack/ceilometer-0"
Feb 03 12:30:12 crc kubenswrapper[4820]: I0203 12:30:12.976400 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fftdb\" (UniqueName: \"kubernetes.io/projected/e2cc54f2-167c-4c79-b616-2e1cd122fed2-kube-api-access-fftdb\") pod \"ceilometer-0\" (UID: \"e2cc54f2-167c-4c79-b616-2e1cd122fed2\") " pod="openstack/ceilometer-0"
Feb 03 12:30:12 crc kubenswrapper[4820]: I0203 12:30:12.978805 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd8c517f-dae9-49ac-ad81-ef5659ce80c9-config-data\") on node \"crc\" DevicePath \"\""
Feb 03 12:30:12 crc kubenswrapper[4820]: I0203 12:30:12.978975 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd8c517f-dae9-49ac-ad81-ef5659ce80c9-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 03 12:30:12 crc kubenswrapper[4820]: I0203 12:30:12.979088 4820 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/bd8c517f-dae9-49ac-ad81-ef5659ce80c9-httpd-run\") on node \"crc\" DevicePath \"\""
Feb 03 12:30:12 crc kubenswrapper[4820]: I0203 12:30:12.979206 4820 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bd8c517f-dae9-49ac-ad81-ef5659ce80c9-logs\") on node \"crc\" DevicePath \"\""
Feb 03 12:30:12 crc kubenswrapper[4820]: I0203 12:30:12.979318 4820 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" "
Feb 03 12:30:12 crc kubenswrapper[4820]: I0203 12:30:12.979411 4820 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bd8c517f-dae9-49ac-ad81-ef5659ce80c9-scripts\") on node \"crc\" DevicePath \"\""
Feb 03 12:30:12 crc kubenswrapper[4820]: I0203 12:30:12.977773 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bb707be0-2a48-4886-894b-cc7554a1be6f-config-data\") pod \"horizon-7c85bb46f7-qz2f2\" (UID: \"bb707be0-2a48-4886-894b-cc7554a1be6f\") " pod="openstack/horizon-7c85bb46f7-qz2f2"
Feb 03 12:30:12 crc kubenswrapper[4820]: I0203 12:30:12.982174 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd8c517f-dae9-49ac-ad81-ef5659ce80c9-kube-api-access-jtfmw" (OuterVolumeSpecName: "kube-api-access-jtfmw") pod "bd8c517f-dae9-49ac-ad81-ef5659ce80c9" (UID: "bd8c517f-dae9-49ac-ad81-ef5659ce80c9"). InnerVolumeSpecName "kube-api-access-jtfmw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 12:30:12 crc kubenswrapper[4820]: I0203 12:30:12.984813 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd8c517f-dae9-49ac-ad81-ef5659ce80c9-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "bd8c517f-dae9-49ac-ad81-ef5659ce80c9" (UID: "bd8c517f-dae9-49ac-ad81-ef5659ce80c9"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 12:30:12 crc kubenswrapper[4820]: I0203 12:30:12.995310 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/bb707be0-2a48-4886-894b-cc7554a1be6f-horizon-secret-key\") pod \"horizon-7c85bb46f7-qz2f2\" (UID: \"bb707be0-2a48-4886-894b-cc7554a1be6f\") " pod="openstack/horizon-7c85bb46f7-qz2f2"
Feb 03 12:30:13 crc kubenswrapper[4820]: I0203 12:30:13.008559 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2hmfz\" (UniqueName: \"kubernetes.io/projected/bb707be0-2a48-4886-894b-cc7554a1be6f-kube-api-access-2hmfz\") pod \"horizon-7c85bb46f7-qz2f2\" (UID: \"bb707be0-2a48-4886-894b-cc7554a1be6f\") " pod="openstack/horizon-7c85bb46f7-qz2f2"
Feb 03 12:30:13 crc kubenswrapper[4820]: I0203 12:30:13.016050 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-vzphk"]
Feb 03 12:30:13 crc kubenswrapper[4820]: I0203 12:30:13.174604 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7c85bb46f7-qz2f2"
Feb 03 12:30:13 crc kubenswrapper[4820]: I0203 12:30:13.179944 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e2cc54f2-167c-4c79-b616-2e1cd122fed2-scripts\") pod \"ceilometer-0\" (UID: \"e2cc54f2-167c-4c79-b616-2e1cd122fed2\") " pod="openstack/ceilometer-0"
Feb 03 12:30:13 crc kubenswrapper[4820]: I0203 12:30:13.179992 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fftdb\" (UniqueName: \"kubernetes.io/projected/e2cc54f2-167c-4c79-b616-2e1cd122fed2-kube-api-access-fftdb\") pod \"ceilometer-0\" (UID: \"e2cc54f2-167c-4c79-b616-2e1cd122fed2\") " pod="openstack/ceilometer-0"
Feb 03 12:30:13 crc kubenswrapper[4820]: I0203 12:30:13.180065 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e2cc54f2-167c-4c79-b616-2e1cd122fed2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e2cc54f2-167c-4c79-b616-2e1cd122fed2\") " pod="openstack/ceilometer-0"
Feb 03 12:30:13 crc kubenswrapper[4820]: I0203 12:30:13.180101 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e2cc54f2-167c-4c79-b616-2e1cd122fed2-log-httpd\") pod \"ceilometer-0\" (UID: \"e2cc54f2-167c-4c79-b616-2e1cd122fed2\") " pod="openstack/ceilometer-0"
Feb 03 12:30:13 crc kubenswrapper[4820]: I0203 12:30:13.180167 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2cc54f2-167c-4c79-b616-2e1cd122fed2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e2cc54f2-167c-4c79-b616-2e1cd122fed2\") " pod="openstack/ceilometer-0"
Feb 03 12:30:13 crc kubenswrapper[4820]: I0203 12:30:13.180217 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2cc54f2-167c-4c79-b616-2e1cd122fed2-config-data\") pod \"ceilometer-0\" (UID: \"e2cc54f2-167c-4c79-b616-2e1cd122fed2\") " pod="openstack/ceilometer-0"
Feb 03 12:30:13 crc kubenswrapper[4820]: I0203 12:30:13.180264 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e2cc54f2-167c-4c79-b616-2e1cd122fed2-run-httpd\") pod \"ceilometer-0\" (UID: \"e2cc54f2-167c-4c79-b616-2e1cd122fed2\") " pod="openstack/ceilometer-0"
Feb 03 12:30:13 crc kubenswrapper[4820]: I0203 12:30:13.182127 4820 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bd8c517f-dae9-49ac-ad81-ef5659ce80c9-public-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 03 12:30:13 crc kubenswrapper[4820]: I0203 12:30:13.188480 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e2cc54f2-167c-4c79-b616-2e1cd122fed2-run-httpd\") pod \"ceilometer-0\" (UID: \"e2cc54f2-167c-4c79-b616-2e1cd122fed2\") " pod="openstack/ceilometer-0"
Feb 03 12:30:13 crc kubenswrapper[4820]: I0203 12:30:13.193461 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jtfmw\" (UniqueName: \"kubernetes.io/projected/bd8c517f-dae9-49ac-ad81-ef5659ce80c9-kube-api-access-jtfmw\") on node \"crc\" DevicePath \"\""
Feb 03 12:30:13 crc kubenswrapper[4820]: I0203 12:30:13.193598 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e2cc54f2-167c-4c79-b616-2e1cd122fed2-log-httpd\") pod \"ceilometer-0\" (UID: \"e2cc54f2-167c-4c79-b616-2e1cd122fed2\") " pod="openstack/ceilometer-0"
Feb 03 12:30:13 crc kubenswrapper[4820]: I0203 12:30:13.193934 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e2cc54f2-167c-4c79-b616-2e1cd122fed2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e2cc54f2-167c-4c79-b616-2e1cd122fed2\") " pod="openstack/ceilometer-0"
Feb 03 12:30:13 crc kubenswrapper[4820]: I0203 12:30:13.196332 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2cc54f2-167c-4c79-b616-2e1cd122fed2-config-data\") pod \"ceilometer-0\" (UID: \"e2cc54f2-167c-4c79-b616-2e1cd122fed2\") " pod="openstack/ceilometer-0"
Feb 03 12:30:13 crc kubenswrapper[4820]: I0203 12:30:13.201696 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2cc54f2-167c-4c79-b616-2e1cd122fed2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e2cc54f2-167c-4c79-b616-2e1cd122fed2\") " pod="openstack/ceilometer-0"
Feb 03 12:30:13 crc kubenswrapper[4820]: I0203 12:30:13.202094 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e2cc54f2-167c-4c79-b616-2e1cd122fed2-scripts\") pod \"ceilometer-0\" (UID: \"e2cc54f2-167c-4c79-b616-2e1cd122fed2\") " pod="openstack/ceilometer-0"
Feb 03 12:30:13 crc kubenswrapper[4820]: I0203 12:30:13.217345 4820 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc"
Feb 03 12:30:13 crc kubenswrapper[4820]: I0203 12:30:13.218853 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fftdb\" (UniqueName: \"kubernetes.io/projected/e2cc54f2-167c-4c79-b616-2e1cd122fed2-kube-api-access-fftdb\") pod \"ceilometer-0\" (UID: \"e2cc54f2-167c-4c79-b616-2e1cd122fed2\") " pod="openstack/ceilometer-0"
Feb 03 12:30:13 crc kubenswrapper[4820]: I0203 12:30:13.296412 4820 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\""
Feb 03 12:30:13 crc kubenswrapper[4820]: I0203 12:30:13.333403 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-b4rms"]
Feb 03 12:30:13 crc kubenswrapper[4820]: I0203 12:30:13.439725 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 03 12:30:13 crc kubenswrapper[4820]: I0203 12:30:13.692253 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74f6bcbc87-8bv2r"
Feb 03 12:30:13 crc kubenswrapper[4820]: I0203 12:30:13.707573 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-gjsm4" event={"ID":"2c88a5c9-941c-40ef-a3c8-7ff304ea0517","Type":"ContainerStarted","Data":"a1ff861ad6ee50e7673d412707100050bf5dc95a1a5eef2f3c9d1d19ec15a594"}
Feb 03 12:30:13 crc kubenswrapper[4820]: I0203 12:30:13.717245 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"0658b201-7c4e-4d71-ba2d-c2cb5dee1553","Type":"ContainerStarted","Data":"1b83b16b953f35ac683a42f9df773b773b85c664aa19af779b648cf193bddfb5"}
Feb 03 12:30:13 crc kubenswrapper[4820]: I0203 12:30:13.727309 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-847c4cc679-vzphk" event={"ID":"4d60ff7f-22cd-4518-a7d4-77dd49b9df10","Type":"ContainerStarted","Data":"b74fea519164049da04c9dbe373c4d12068ca1d54c35d5ddca66edd43f9d5093"}
Feb 03 12:30:13 crc kubenswrapper[4820]: I0203 12:30:13.730028 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2724e07a-0753-44be-93f3-4ecc9696f686-ovsdbserver-nb\") pod \"2724e07a-0753-44be-93f3-4ecc9696f686\" (UID: \"2724e07a-0753-44be-93f3-4ecc9696f686\") "
Feb 03 12:30:13 crc kubenswrapper[4820]: I0203 12:30:13.730087 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2724e07a-0753-44be-93f3-4ecc9696f686-dns-svc\") pod \"2724e07a-0753-44be-93f3-4ecc9696f686\" (UID: \"2724e07a-0753-44be-93f3-4ecc9696f686\") "
Feb 03 12:30:13 crc kubenswrapper[4820]: I0203 12:30:13.730266 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2724e07a-0753-44be-93f3-4ecc9696f686-dns-swift-storage-0\") pod \"2724e07a-0753-44be-93f3-4ecc9696f686\" (UID: \"2724e07a-0753-44be-93f3-4ecc9696f686\") "
Feb 03 12:30:13 crc kubenswrapper[4820]: I0203 12:30:13.730340 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2724e07a-0753-44be-93f3-4ecc9696f686-ovsdbserver-sb\") pod \"2724e07a-0753-44be-93f3-4ecc9696f686\" (UID: \"2724e07a-0753-44be-93f3-4ecc9696f686\") "
Feb 03 12:30:13 crc kubenswrapper[4820]: I0203 12:30:13.730416 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6jqmm\" (UniqueName: \"kubernetes.io/projected/2724e07a-0753-44be-93f3-4ecc9696f686-kube-api-access-6jqmm\") pod \"2724e07a-0753-44be-93f3-4ecc9696f686\" (UID: \"2724e07a-0753-44be-93f3-4ecc9696f686\") "
Feb 03 12:30:13 crc kubenswrapper[4820]: I0203 12:30:13.730524 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2724e07a-0753-44be-93f3-4ecc9696f686-config\") pod \"2724e07a-0753-44be-93f3-4ecc9696f686\" (UID: \"2724e07a-0753-44be-93f3-4ecc9696f686\") "
Feb 03 12:30:13 crc kubenswrapper[4820]: I0203 12:30:13.754197 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2724e07a-0753-44be-93f3-4ecc9696f686-kube-api-access-6jqmm" (OuterVolumeSpecName: "kube-api-access-6jqmm") pod "2724e07a-0753-44be-93f3-4ecc9696f686" (UID: "2724e07a-0753-44be-93f3-4ecc9696f686"). InnerVolumeSpecName "kube-api-access-6jqmm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 12:30:13 crc kubenswrapper[4820]: I0203 12:30:13.758163 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-8bv2r" event={"ID":"2724e07a-0753-44be-93f3-4ecc9696f686","Type":"ContainerDied","Data":"0c0e5f6300108ee468115f1b1e167c51b0ccc2933f6eea17174a88e222050ae7"}
Feb 03 12:30:13 crc kubenswrapper[4820]: I0203 12:30:13.758217 4820 scope.go:117] "RemoveContainer" containerID="90d511d52ef853aaddbe3b0656e6a1bf1224f7d55ea5bd0fb10faecfa6d8996f"
Feb 03 12:30:13 crc kubenswrapper[4820]: I0203 12:30:13.758356 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74f6bcbc87-8bv2r"
Feb 03 12:30:13 crc kubenswrapper[4820]: I0203 12:30:13.759817 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-lb2jr"]
Feb 03 12:30:13 crc kubenswrapper[4820]: I0203 12:30:13.792176 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Feb 03 12:30:13 crc kubenswrapper[4820]: I0203 12:30:13.792589 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-b4rms" event={"ID":"4b8f1b47-829d-4da3-a6a9-b73bbe7d20c0","Type":"ContainerStarted","Data":"2be3fe58abe533c005d039e6eac08a044a0d5ed04a5f4dbddca03cfcbfda2436"}
Feb 03 12:30:13 crc kubenswrapper[4820]: I0203 12:30:13.820531 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-9csj4"]
Feb 03 12:30:13 crc kubenswrapper[4820]: I0203 12:30:13.836492 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6jqmm\" (UniqueName: \"kubernetes.io/projected/2724e07a-0753-44be-93f3-4ecc9696f686-kube-api-access-6jqmm\") on node \"crc\" DevicePath \"\""
Feb 03 12:30:13 crc kubenswrapper[4820]: I0203 12:30:13.894759 4820 scope.go:117] "RemoveContainer" containerID="859781cf0997c5f3b1eef00a0628d35848f36283b0ddddd6a7a875ad7baa2951"
Feb 03 12:30:13 crc kubenswrapper[4820]: I0203 12:30:13.907591 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2724e07a-0753-44be-93f3-4ecc9696f686-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "2724e07a-0753-44be-93f3-4ecc9696f686" (UID: "2724e07a-0753-44be-93f3-4ecc9696f686"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 03 12:30:13 crc kubenswrapper[4820]: I0203 12:30:13.935008 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-gjsm4" podStartSLOduration=4.93482109 podStartE2EDuration="4.93482109s" podCreationTimestamp="2026-02-03 12:30:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:30:13.865613431 +0000 UTC m=+1531.388689295" watchObservedRunningTime="2026-02-03 12:30:13.93482109 +0000 UTC m=+1531.457896954"
Feb 03 12:30:13 crc kubenswrapper[4820]: I0203 12:30:13.940782 4820 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2724e07a-0753-44be-93f3-4ecc9696f686-dns-svc\") on node \"crc\" DevicePath \"\""
Feb 03 12:30:13 crc kubenswrapper[4820]: I0203 12:30:13.978440 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-75b875c965-2f4nl"]
Feb 03 12:30:14 crc kubenswrapper[4820]: I0203 12:30:14.057250 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 03 12:30:14 crc kubenswrapper[4820]: I0203 12:30:14.090788 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 03 12:30:14 crc kubenswrapper[4820]: I0203 12:30:14.101697 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2724e07a-0753-44be-93f3-4ecc9696f686-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "2724e07a-0753-44be-93f3-4ecc9696f686" (UID: "2724e07a-0753-44be-93f3-4ecc9696f686"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 03 12:30:14 crc kubenswrapper[4820]: I0203 12:30:14.148314 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2724e07a-0753-44be-93f3-4ecc9696f686-config" (OuterVolumeSpecName: "config") pod "2724e07a-0753-44be-93f3-4ecc9696f686" (UID: "2724e07a-0753-44be-93f3-4ecc9696f686"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 03 12:30:14 crc kubenswrapper[4820]: I0203 12:30:14.159816 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 03 12:30:14 crc kubenswrapper[4820]: I0203 12:30:14.162407 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2724e07a-0753-44be-93f3-4ecc9696f686-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "2724e07a-0753-44be-93f3-4ecc9696f686" (UID: "2724e07a-0753-44be-93f3-4ecc9696f686"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 03 12:30:14 crc kubenswrapper[4820]: E0203 12:30:14.165558 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2724e07a-0753-44be-93f3-4ecc9696f686" containerName="dnsmasq-dns"
Feb 03 12:30:14 crc kubenswrapper[4820]: I0203 12:30:14.165600 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="2724e07a-0753-44be-93f3-4ecc9696f686" containerName="dnsmasq-dns"
Feb 03 12:30:14 crc kubenswrapper[4820]: E0203 12:30:14.165633 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2724e07a-0753-44be-93f3-4ecc9696f686" containerName="init"
Feb 03 12:30:14 crc kubenswrapper[4820]: I0203 12:30:14.165643 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="2724e07a-0753-44be-93f3-4ecc9696f686" containerName="init"
Feb 03 12:30:14 crc kubenswrapper[4820]: I0203 12:30:14.165964 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="2724e07a-0753-44be-93f3-4ecc9696f686" containerName="dnsmasq-dns"
Feb 03 12:30:14 crc kubenswrapper[4820]: I0203 12:30:14.167412 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Feb 03 12:30:14 crc kubenswrapper[4820]: I0203 12:30:14.168613 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2724e07a-0753-44be-93f3-4ecc9696f686-config\") pod \"2724e07a-0753-44be-93f3-4ecc9696f686\" (UID: \"2724e07a-0753-44be-93f3-4ecc9696f686\") "
Feb 03 12:30:14 crc kubenswrapper[4820]: I0203 12:30:14.168793 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2724e07a-0753-44be-93f3-4ecc9696f686-ovsdbserver-sb\") pod \"2724e07a-0753-44be-93f3-4ecc9696f686\" (UID: \"2724e07a-0753-44be-93f3-4ecc9696f686\") "
Feb 03 12:30:14 crc kubenswrapper[4820]: W0203 12:30:14.168913 4820 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/2724e07a-0753-44be-93f3-4ecc9696f686/volumes/kubernetes.io~configmap/config
Feb 03 12:30:14 crc kubenswrapper[4820]: I0203 12:30:14.168952 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2724e07a-0753-44be-93f3-4ecc9696f686-config" (OuterVolumeSpecName: "config") pod "2724e07a-0753-44be-93f3-4ecc9696f686" (UID: "2724e07a-0753-44be-93f3-4ecc9696f686"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 03 12:30:14 crc kubenswrapper[4820]: W0203 12:30:14.169028 4820 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/2724e07a-0753-44be-93f3-4ecc9696f686/volumes/kubernetes.io~configmap/ovsdbserver-sb
Feb 03 12:30:14 crc kubenswrapper[4820]: I0203 12:30:14.169040 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2724e07a-0753-44be-93f3-4ecc9696f686-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "2724e07a-0753-44be-93f3-4ecc9696f686" (UID: "2724e07a-0753-44be-93f3-4ecc9696f686"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 03 12:30:14 crc kubenswrapper[4820]: I0203 12:30:14.173415 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data"
Feb 03 12:30:14 crc kubenswrapper[4820]: I0203 12:30:14.173644 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc"
Feb 03 12:30:14 crc kubenswrapper[4820]: I0203 12:30:14.179346 4820 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2724e07a-0753-44be-93f3-4ecc9696f686-config\") on node \"crc\" DevicePath \"\""
Feb 03 12:30:14 crc kubenswrapper[4820]: I0203 12:30:14.179384 4820 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2724e07a-0753-44be-93f3-4ecc9696f686-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Feb 03 12:30:14 crc kubenswrapper[4820]: I0203 12:30:14.179397 4820 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2724e07a-0753-44be-93f3-4ecc9696f686-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Feb 03 12:30:14 crc kubenswrapper[4820]: I0203 12:30:14.190206 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 03 12:30:14 crc kubenswrapper[4820]: I0203 12:30:14.231011 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2724e07a-0753-44be-93f3-4ecc9696f686-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "2724e07a-0753-44be-93f3-4ecc9696f686" (UID: "2724e07a-0753-44be-93f3-4ecc9696f686"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 03 12:30:14 crc kubenswrapper[4820]: I0203 12:30:14.306096 4820 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2724e07a-0753-44be-93f3-4ecc9696f686-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Feb 03 12:30:14 crc kubenswrapper[4820]: I0203 12:30:14.320480 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 03 12:30:14 crc kubenswrapper[4820]: I0203 12:30:14.374322 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-xsjm7"]
Feb 03 12:30:14 crc kubenswrapper[4820]: I0203 12:30:14.415672 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c5159f2-a29f-4730-8198-8a866761947c-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"1c5159f2-a29f-4730-8198-8a866761947c\") " pod="openstack/glance-default-external-api-0"
Feb 03 12:30:14 crc kubenswrapper[4820]: I0203 12:30:14.415723 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"1c5159f2-a29f-4730-8198-8a866761947c\") " pod="openstack/glance-default-external-api-0"
Feb 03 12:30:14 crc kubenswrapper[4820]: I0203 12:30:14.415823 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1c5159f2-a29f-4730-8198-8a866761947c-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"1c5159f2-a29f-4730-8198-8a866761947c\") " pod="openstack/glance-default-external-api-0"
Feb 03 12:30:14 crc kubenswrapper[4820]: I0203 12:30:14.415908 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c5159f2-a29f-4730-8198-8a866761947c-config-data\") pod \"glance-default-external-api-0\" (UID: \"1c5159f2-a29f-4730-8198-8a866761947c\") " pod="openstack/glance-default-external-api-0"
Feb 03 12:30:14 crc kubenswrapper[4820]: I0203 12:30:14.415961 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1c5159f2-a29f-4730-8198-8a866761947c-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"1c5159f2-a29f-4730-8198-8a866761947c\") " pod="openstack/glance-default-external-api-0"
Feb 03 12:30:14 crc kubenswrapper[4820]: I0203 12:30:14.415980 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-czrqz\" (UniqueName: \"kubernetes.io/projected/1c5159f2-a29f-4730-8198-8a866761947c-kube-api-access-czrqz\") pod \"glance-default-external-api-0\" (UID: \"1c5159f2-a29f-4730-8198-8a866761947c\") " pod="openstack/glance-default-external-api-0"
Feb 03 12:30:14 crc kubenswrapper[4820]: I0203 12:30:14.416012 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1c5159f2-a29f-4730-8198-8a866761947c-scripts\") pod \"glance-default-external-api-0\" (UID: \"1c5159f2-a29f-4730-8198-8a866761947c\") " pod="openstack/glance-default-external-api-0"
Feb 03 12:30:14 crc kubenswrapper[4820]: I0203 12:30:14.416306 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1c5159f2-a29f-4730-8198-8a866761947c-logs\") pod \"glance-default-external-api-0\" (UID: \"1c5159f2-a29f-4730-8198-8a866761947c\") " pod="openstack/glance-default-external-api-0"
Feb 03 12:30:14 crc kubenswrapper[4820]: I0203 12:30:14.527193 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c5159f2-a29f-4730-8198-8a866761947c-config-data\") pod \"glance-default-external-api-0\" (UID: \"1c5159f2-a29f-4730-8198-8a866761947c\") " pod="openstack/glance-default-external-api-0"
Feb 03 12:30:14 crc kubenswrapper[4820]: I0203 12:30:14.527900 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-czrqz\" (UniqueName: \"kubernetes.io/projected/1c5159f2-a29f-4730-8198-8a866761947c-kube-api-access-czrqz\") pod \"glance-default-external-api-0\" (UID: \"1c5159f2-a29f-4730-8198-8a866761947c\") " pod="openstack/glance-default-external-api-0"
Feb 03 12:30:14 crc kubenswrapper[4820]: I0203 12:30:14.528024 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1c5159f2-a29f-4730-8198-8a866761947c-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"1c5159f2-a29f-4730-8198-8a866761947c\") " pod="openstack/glance-default-external-api-0"
Feb 03 12:30:14 crc kubenswrapper[4820]: I0203 12:30:14.528119 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1c5159f2-a29f-4730-8198-8a866761947c-scripts\") pod \"glance-default-external-api-0\" (UID: \"1c5159f2-a29f-4730-8198-8a866761947c\") " pod="openstack/glance-default-external-api-0"
Feb 03 12:30:14 crc kubenswrapper[4820]: I0203 12:30:14.528403 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1c5159f2-a29f-4730-8198-8a866761947c-logs\") pod \"glance-default-external-api-0\" (UID: \"1c5159f2-a29f-4730-8198-8a866761947c\") " pod="openstack/glance-default-external-api-0"
Feb 03 12:30:14 crc kubenswrapper[4820]: I0203 12:30:14.528706 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c5159f2-a29f-4730-8198-8a866761947c-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"1c5159f2-a29f-4730-8198-8a866761947c\") " pod="openstack/glance-default-external-api-0"
Feb 03 12:30:14 crc kubenswrapper[4820]: I0203 12:30:14.528806 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"1c5159f2-a29f-4730-8198-8a866761947c\") " pod="openstack/glance-default-external-api-0"
Feb 03 12:30:14 crc kubenswrapper[4820]: I0203 12:30:14.528950 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1c5159f2-a29f-4730-8198-8a866761947c-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"1c5159f2-a29f-4730-8198-8a866761947c\") " pod="openstack/glance-default-external-api-0"
Feb 03 12:30:14 crc kubenswrapper[4820]: I0203 12:30:14.529132 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1c5159f2-a29f-4730-8198-8a866761947c-logs\") pod \"glance-default-external-api-0\" (UID: \"1c5159f2-a29f-4730-8198-8a866761947c\") " pod="openstack/glance-default-external-api-0"
Feb 03 12:30:14 crc kubenswrapper[4820]: I0203 12:30:14.530349 4820 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"1c5159f2-a29f-4730-8198-8a866761947c\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/glance-default-external-api-0"
Feb 03 12:30:14 crc kubenswrapper[4820]: I0203 12:30:14.532344 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1c5159f2-a29f-4730-8198-8a866761947c-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"1c5159f2-a29f-4730-8198-8a866761947c\") " pod="openstack/glance-default-external-api-0"
Feb 03 12:30:14 crc kubenswrapper[4820]: I0203 12:30:14.572410 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-d5vcp"]
Feb 03 12:30:14 crc kubenswrapper[4820]: I0203 12:30:14.591618 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1c5159f2-a29f-4730-8198-8a866761947c-scripts\") pod \"glance-default-external-api-0\" (UID: \"1c5159f2-a29f-4730-8198-8a866761947c\") " pod="openstack/glance-default-external-api-0"
Feb 03 12:30:14 crc kubenswrapper[4820]: I0203 12:30:14.591631 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1c5159f2-a29f-4730-8198-8a866761947c-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"1c5159f2-a29f-4730-8198-8a866761947c\") " pod="openstack/glance-default-external-api-0"
Feb 03 12:30:14 crc kubenswrapper[4820]: I0203 12:30:14.610069 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c5159f2-a29f-4730-8198-8a866761947c-config-data\") pod \"glance-default-external-api-0\" (UID: \"1c5159f2-a29f-4730-8198-8a866761947c\") " pod="openstack/glance-default-external-api-0"
Feb 03 12:30:14 crc kubenswrapper[4820]: I0203 12:30:14.610927 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-czrqz\" (UniqueName: \"kubernetes.io/projected/1c5159f2-a29f-4730-8198-8a866761947c-kube-api-access-czrqz\") pod \"glance-default-external-api-0\" (UID: \"1c5159f2-a29f-4730-8198-8a866761947c\") " pod="openstack/glance-default-external-api-0"
Feb 03 12:30:14 crc kubenswrapper[4820]: I0203 12:30:14.610989 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 03 12:30:14 crc kubenswrapper[4820]: I0203 12:30:14.675837 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c5159f2-a29f-4730-8198-8a866761947c-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"1c5159f2-a29f-4730-8198-8a866761947c\") " pod="openstack/glance-default-external-api-0"
Feb 03 12:30:14 crc kubenswrapper[4820]: W0203 12:30:14.718759 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode2cc54f2_167c_4c79_b616_2e1cd122fed2.slice/crio-2323248a6e2e02ad0a646382f15866bc79731b9692262e9b3d051f749519af92 WatchSource:0}: Error finding container 2323248a6e2e02ad0a646382f15866bc79731b9692262e9b3d051f749519af92: Status 404 returned error can't find the container with id 2323248a6e2e02ad0a646382f15866bc79731b9692262e9b3d051f749519af92
Feb 03 12:30:14 crc kubenswrapper[4820]: I0203 12:30:14.721172 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-8bv2r"]
Feb 03 12:30:14 crc kubenswrapper[4820]: I0203 12:30:14.771076 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"1c5159f2-a29f-4730-8198-8a866761947c\") " pod="openstack/glance-default-external-api-0"
Feb 03 12:30:14 crc kubenswrapper[4820]: I0203 12:30:14.790333 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-8bv2r"]
Feb 03 12:30:14 crc kubenswrapper[4820]: I0203 12:30:14.869789 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 03 12:30:14 crc kubenswrapper[4820]: I0203 12:30:14.906693 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7c85bb46f7-qz2f2"] Feb 03 12:30:14 crc kubenswrapper[4820]: I0203 12:30:14.963285 4820 generic.go:334] "Generic (PLEG): container finished" podID="4d60ff7f-22cd-4518-a7d4-77dd49b9df10" containerID="14eb35352d3dae5a07d5c40a5167a41c7163d56b04f0b090afc1bc598ff53cb7" exitCode=0 Feb 03 12:30:14 crc kubenswrapper[4820]: I0203 12:30:14.973993 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-847c4cc679-vzphk" event={"ID":"4d60ff7f-22cd-4518-a7d4-77dd49b9df10","Type":"ContainerDied","Data":"14eb35352d3dae5a07d5c40a5167a41c7163d56b04f0b090afc1bc598ff53cb7"} Feb 03 12:30:15 crc kubenswrapper[4820]: I0203 12:30:15.041852 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e2cc54f2-167c-4c79-b616-2e1cd122fed2","Type":"ContainerStarted","Data":"2323248a6e2e02ad0a646382f15866bc79731b9692262e9b3d051f749519af92"} Feb 03 12:30:15 crc kubenswrapper[4820]: I0203 12:30:15.064120 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-9csj4" event={"ID":"470b8f27-2959-4890-aed3-361530b83b73","Type":"ContainerStarted","Data":"16d64648b2d81263c6456d9ab40e60ab6e37bae923c46475fe459514befa2a2c"} Feb 03 12:30:15 crc kubenswrapper[4820]: I0203 12:30:15.085093 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-xsjm7" event={"ID":"f4116aff-b63f-47f1-b4bd-5bde84226d87","Type":"ContainerStarted","Data":"f2d80ef5637a1182c47131a8136d097402b7f9eefe9a06e2288396e47c61dd0e"} Feb 03 12:30:15 crc kubenswrapper[4820]: I0203 12:30:15.129584 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"e42f59b9-8816-4e43-869b-1c6cd36b4034","Type":"ContainerStarted","Data":"353cf6b92bb5a5c5a77413dfdb49f2e679a6ea11e7cb17adc1a68cf09ac13839"} Feb 03 12:30:15 crc kubenswrapper[4820]: I0203 12:30:15.136648 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-d5vcp" event={"ID":"5f5c5f87-b592-4f5d-86bc-3069985ae61a","Type":"ContainerStarted","Data":"750c88b8ecd7a8dd72d10ae755ea665ce4fde9598e9c1761de2d40ec2ebcb182"} Feb 03 12:30:15 crc kubenswrapper[4820]: I0203 12:30:15.196917 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2724e07a-0753-44be-93f3-4ecc9696f686" path="/var/lib/kubelet/pods/2724e07a-0753-44be-93f3-4ecc9696f686/volumes" Feb 03 12:30:15 crc kubenswrapper[4820]: I0203 12:30:15.212535 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-lb2jr" podStartSLOduration=6.212513883 podStartE2EDuration="6.212513883s" podCreationTimestamp="2026-02-03 12:30:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:30:15.206305495 +0000 UTC m=+1532.729381359" watchObservedRunningTime="2026-02-03 12:30:15.212513883 +0000 UTC m=+1532.735589747" Feb 03 12:30:15 crc kubenswrapper[4820]: I0203 12:30:15.213472 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd8c517f-dae9-49ac-ad81-ef5659ce80c9" path="/var/lib/kubelet/pods/bd8c517f-dae9-49ac-ad81-ef5659ce80c9/volumes" Feb 03 12:30:15 crc kubenswrapper[4820]: I0203 12:30:15.214182 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/neutron-db-sync-lb2jr" event={"ID":"06a34b04-0b0b-41bc-bfa4-17ef6e1fbddb","Type":"ContainerStarted","Data":"61b2dcfae63a3cff65e172151526b84f759051337294f6749e1fe4da603c1bd3"} Feb 03 12:30:15 crc kubenswrapper[4820]: I0203 12:30:15.214236 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-75b875c965-2f4nl" event={"ID":"64388daf-4e84-4468-a9cb-484c0a4a8ab2","Type":"ContainerStarted","Data":"8da13125181d670cb1a47a12ced06442eeb20589d592f55f732d38b1dce6d0ae"} Feb 03 12:30:15 crc kubenswrapper[4820]: I0203 12:30:15.680903 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-847c4cc679-vzphk" Feb 03 12:30:15 crc kubenswrapper[4820]: I0203 12:30:15.773668 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4d60ff7f-22cd-4518-a7d4-77dd49b9df10-ovsdbserver-sb\") pod \"4d60ff7f-22cd-4518-a7d4-77dd49b9df10\" (UID: \"4d60ff7f-22cd-4518-a7d4-77dd49b9df10\") " Feb 03 12:30:15 crc kubenswrapper[4820]: I0203 12:30:15.773743 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4d60ff7f-22cd-4518-a7d4-77dd49b9df10-config\") pod \"4d60ff7f-22cd-4518-a7d4-77dd49b9df10\" (UID: \"4d60ff7f-22cd-4518-a7d4-77dd49b9df10\") " Feb 03 12:30:15 crc kubenswrapper[4820]: I0203 12:30:15.773809 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4d60ff7f-22cd-4518-a7d4-77dd49b9df10-dns-swift-storage-0\") pod \"4d60ff7f-22cd-4518-a7d4-77dd49b9df10\" (UID: \"4d60ff7f-22cd-4518-a7d4-77dd49b9df10\") " Feb 03 12:30:15 crc kubenswrapper[4820]: I0203 12:30:15.773958 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4d60ff7f-22cd-4518-a7d4-77dd49b9df10-dns-svc\") pod \"4d60ff7f-22cd-4518-a7d4-77dd49b9df10\" (UID: \"4d60ff7f-22cd-4518-a7d4-77dd49b9df10\") " Feb 03 12:30:15 crc kubenswrapper[4820]: I0203 12:30:15.773999 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4d60ff7f-22cd-4518-a7d4-77dd49b9df10-ovsdbserver-nb\") pod \"4d60ff7f-22cd-4518-a7d4-77dd49b9df10\" (UID: \"4d60ff7f-22cd-4518-a7d4-77dd49b9df10\") " Feb 03 12:30:15 crc kubenswrapper[4820]: I0203 12:30:15.774030 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fjp8h\" (UniqueName: \"kubernetes.io/projected/4d60ff7f-22cd-4518-a7d4-77dd49b9df10-kube-api-access-fjp8h\") pod \"4d60ff7f-22cd-4518-a7d4-77dd49b9df10\" (UID: \"4d60ff7f-22cd-4518-a7d4-77dd49b9df10\") " Feb 03 12:30:15 crc kubenswrapper[4820]: I0203 12:30:15.854677 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 03 12:30:15 crc kubenswrapper[4820]: I0203 12:30:15.866805 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d60ff7f-22cd-4518-a7d4-77dd49b9df10-kube-api-access-fjp8h" (OuterVolumeSpecName: "kube-api-access-fjp8h") pod "4d60ff7f-22cd-4518-a7d4-77dd49b9df10" (UID: "4d60ff7f-22cd-4518-a7d4-77dd49b9df10"). InnerVolumeSpecName "kube-api-access-fjp8h". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:30:15 crc kubenswrapper[4820]: I0203 12:30:15.876628 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fjp8h\" (UniqueName: \"kubernetes.io/projected/4d60ff7f-22cd-4518-a7d4-77dd49b9df10-kube-api-access-fjp8h\") on node \"crc\" DevicePath \"\"" Feb 03 12:30:15 crc kubenswrapper[4820]: I0203 12:30:15.919564 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-75b875c965-2f4nl"] Feb 03 12:30:15 crc kubenswrapper[4820]: I0203 12:30:15.947692 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-65c5474c77-d2q2r"] Feb 03 12:30:15 crc kubenswrapper[4820]: E0203 12:30:15.948362 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d60ff7f-22cd-4518-a7d4-77dd49b9df10" containerName="init" Feb 03 12:30:15 crc kubenswrapper[4820]: I0203 12:30:15.948627 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d60ff7f-22cd-4518-a7d4-77dd49b9df10" containerName="init" Feb 03 12:30:15 crc kubenswrapper[4820]: I0203 12:30:15.949299 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d60ff7f-22cd-4518-a7d4-77dd49b9df10" containerName="init" Feb 03 12:30:15 crc kubenswrapper[4820]: I0203 12:30:15.973498 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-65c5474c77-d2q2r"] Feb 03 12:30:15 crc kubenswrapper[4820]: I0203 12:30:15.975956 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-65c5474c77-d2q2r" Feb 03 12:30:16 crc kubenswrapper[4820]: I0203 12:30:16.005972 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 03 12:30:16 crc kubenswrapper[4820]: I0203 12:30:16.016178 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 03 12:30:16 crc kubenswrapper[4820]: I0203 12:30:16.087602 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4687f816-7024-4244-b049-d94441b1cef0-logs\") pod \"horizon-65c5474c77-d2q2r\" (UID: \"4687f816-7024-4244-b049-d94441b1cef0\") " pod="openstack/horizon-65c5474c77-d2q2r" Feb 03 12:30:16 crc kubenswrapper[4820]: I0203 12:30:16.087772 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4687f816-7024-4244-b049-d94441b1cef0-config-data\") pod \"horizon-65c5474c77-d2q2r\" (UID: \"4687f816-7024-4244-b049-d94441b1cef0\") " pod="openstack/horizon-65c5474c77-d2q2r" Feb 03 12:30:16 crc kubenswrapper[4820]: I0203 12:30:16.087942 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wj2nw\" (UniqueName: \"kubernetes.io/projected/4687f816-7024-4244-b049-d94441b1cef0-kube-api-access-wj2nw\") pod \"horizon-65c5474c77-d2q2r\" (UID: \"4687f816-7024-4244-b049-d94441b1cef0\") " pod="openstack/horizon-65c5474c77-d2q2r" Feb 03 12:30:16 crc kubenswrapper[4820]: I0203 12:30:16.087980 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4687f816-7024-4244-b049-d94441b1cef0-scripts\") pod \"horizon-65c5474c77-d2q2r\" (UID: \"4687f816-7024-4244-b049-d94441b1cef0\") " pod="openstack/horizon-65c5474c77-d2q2r" Feb 03 12:30:16 crc kubenswrapper[4820]: I0203 12:30:16.088860 4820 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/4687f816-7024-4244-b049-d94441b1cef0-horizon-secret-key\") pod \"horizon-65c5474c77-d2q2r\" (UID: \"4687f816-7024-4244-b049-d94441b1cef0\") " pod="openstack/horizon-65c5474c77-d2q2r" Feb 03 12:30:16 crc kubenswrapper[4820]: I0203 12:30:16.152715 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4d60ff7f-22cd-4518-a7d4-77dd49b9df10-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "4d60ff7f-22cd-4518-a7d4-77dd49b9df10" (UID: "4d60ff7f-22cd-4518-a7d4-77dd49b9df10"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:30:16 crc kubenswrapper[4820]: I0203 12:30:16.192562 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wj2nw\" (UniqueName: \"kubernetes.io/projected/4687f816-7024-4244-b049-d94441b1cef0-kube-api-access-wj2nw\") pod \"horizon-65c5474c77-d2q2r\" (UID: \"4687f816-7024-4244-b049-d94441b1cef0\") " pod="openstack/horizon-65c5474c77-d2q2r" Feb 03 12:30:16 crc kubenswrapper[4820]: I0203 12:30:16.192618 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4687f816-7024-4244-b049-d94441b1cef0-scripts\") pod \"horizon-65c5474c77-d2q2r\" (UID: \"4687f816-7024-4244-b049-d94441b1cef0\") " pod="openstack/horizon-65c5474c77-d2q2r" Feb 03 12:30:16 crc kubenswrapper[4820]: I0203 12:30:16.192710 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/4687f816-7024-4244-b049-d94441b1cef0-horizon-secret-key\") pod \"horizon-65c5474c77-d2q2r\" (UID: \"4687f816-7024-4244-b049-d94441b1cef0\") " pod="openstack/horizon-65c5474c77-d2q2r" Feb 03 12:30:16 crc kubenswrapper[4820]: I0203 12:30:16.192744 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4687f816-7024-4244-b049-d94441b1cef0-logs\") pod \"horizon-65c5474c77-d2q2r\" (UID: \"4687f816-7024-4244-b049-d94441b1cef0\") " pod="openstack/horizon-65c5474c77-d2q2r" Feb 03 12:30:16 crc kubenswrapper[4820]: I0203 12:30:16.192799 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4687f816-7024-4244-b049-d94441b1cef0-config-data\") pod \"horizon-65c5474c77-d2q2r\" (UID: \"4687f816-7024-4244-b049-d94441b1cef0\") " pod="openstack/horizon-65c5474c77-d2q2r" Feb 03 12:30:16 crc kubenswrapper[4820]: I0203 12:30:16.192864 4820 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4d60ff7f-22cd-4518-a7d4-77dd49b9df10-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 03 12:30:16 crc kubenswrapper[4820]: I0203 12:30:16.194422 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4687f816-7024-4244-b049-d94441b1cef0-logs\") pod \"horizon-65c5474c77-d2q2r\" (UID: \"4687f816-7024-4244-b049-d94441b1cef0\") " pod="openstack/horizon-65c5474c77-d2q2r" Feb 03 12:30:16 crc kubenswrapper[4820]: I0203 12:30:16.194485 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4687f816-7024-4244-b049-d94441b1cef0-scripts\") pod \"horizon-65c5474c77-d2q2r\" (UID: 
\"4687f816-7024-4244-b049-d94441b1cef0\") " pod="openstack/horizon-65c5474c77-d2q2r" Feb 03 12:30:16 crc kubenswrapper[4820]: I0203 12:30:16.194581 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4687f816-7024-4244-b049-d94441b1cef0-config-data\") pod \"horizon-65c5474c77-d2q2r\" (UID: \"4687f816-7024-4244-b049-d94441b1cef0\") " pod="openstack/horizon-65c5474c77-d2q2r" Feb 03 12:30:16 crc kubenswrapper[4820]: I0203 12:30:16.207912 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-847c4cc679-vzphk" event={"ID":"4d60ff7f-22cd-4518-a7d4-77dd49b9df10","Type":"ContainerDied","Data":"b74fea519164049da04c9dbe373c4d12068ca1d54c35d5ddca66edd43f9d5093"} Feb 03 12:30:16 crc kubenswrapper[4820]: I0203 12:30:16.207999 4820 scope.go:117] "RemoveContainer" containerID="14eb35352d3dae5a07d5c40a5167a41c7163d56b04f0b090afc1bc598ff53cb7" Feb 03 12:30:16 crc kubenswrapper[4820]: I0203 12:30:16.208216 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-847c4cc679-vzphk" Feb 03 12:30:16 crc kubenswrapper[4820]: I0203 12:30:16.217738 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wj2nw\" (UniqueName: \"kubernetes.io/projected/4687f816-7024-4244-b049-d94441b1cef0-kube-api-access-wj2nw\") pod \"horizon-65c5474c77-d2q2r\" (UID: \"4687f816-7024-4244-b049-d94441b1cef0\") " pod="openstack/horizon-65c5474c77-d2q2r" Feb 03 12:30:16 crc kubenswrapper[4820]: I0203 12:30:16.225296 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-lb2jr" event={"ID":"06a34b04-0b0b-41bc-bfa4-17ef6e1fbddb","Type":"ContainerStarted","Data":"34d519c618eba6d21f8bdc59e5fbc6e2f30a0da9b52b4c66f1dcbedd3137aa91"} Feb 03 12:30:16 crc kubenswrapper[4820]: I0203 12:30:16.249004 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7c85bb46f7-qz2f2" event={"ID":"bb707be0-2a48-4886-894b-cc7554a1be6f","Type":"ContainerStarted","Data":"e2561cf8b09da23cdd2bdea3431515c80c9178800bad09a88ad729d6fabe8c7a"} Feb 03 12:30:16 crc kubenswrapper[4820]: I0203 12:30:16.254270 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/4687f816-7024-4244-b049-d94441b1cef0-horizon-secret-key\") pod \"horizon-65c5474c77-d2q2r\" (UID: \"4687f816-7024-4244-b049-d94441b1cef0\") " pod="openstack/horizon-65c5474c77-d2q2r" Feb 03 12:30:16 crc kubenswrapper[4820]: I0203 12:30:16.332545 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 03 12:30:16 crc kubenswrapper[4820]: I0203 12:30:16.334334 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4d60ff7f-22cd-4518-a7d4-77dd49b9df10-config" (OuterVolumeSpecName: "config") pod "4d60ff7f-22cd-4518-a7d4-77dd49b9df10" (UID: "4d60ff7f-22cd-4518-a7d4-77dd49b9df10"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:30:16 crc kubenswrapper[4820]: I0203 12:30:16.353694 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-65c5474c77-d2q2r" Feb 03 12:30:16 crc kubenswrapper[4820]: I0203 12:30:16.393995 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4d60ff7f-22cd-4518-a7d4-77dd49b9df10-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "4d60ff7f-22cd-4518-a7d4-77dd49b9df10" (UID: "4d60ff7f-22cd-4518-a7d4-77dd49b9df10"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:30:16 crc kubenswrapper[4820]: I0203 12:30:16.398124 4820 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4d60ff7f-22cd-4518-a7d4-77dd49b9df10-config\") on node \"crc\" DevicePath \"\"" Feb 03 12:30:16 crc kubenswrapper[4820]: I0203 12:30:16.398164 4820 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4d60ff7f-22cd-4518-a7d4-77dd49b9df10-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 03 12:30:16 crc kubenswrapper[4820]: I0203 12:30:16.482683 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4d60ff7f-22cd-4518-a7d4-77dd49b9df10-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "4d60ff7f-22cd-4518-a7d4-77dd49b9df10" (UID: "4d60ff7f-22cd-4518-a7d4-77dd49b9df10"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:30:16 crc kubenswrapper[4820]: I0203 12:30:16.499755 4820 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4d60ff7f-22cd-4518-a7d4-77dd49b9df10-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 03 12:30:16 crc kubenswrapper[4820]: W0203 12:30:16.519786 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1c5159f2_a29f_4730_8198_8a866761947c.slice/crio-94df5edb9b14d259fdcd9f75c243a760bb2d6db14bc4a73a9976025723a3e5cd WatchSource:0}: Error finding container 94df5edb9b14d259fdcd9f75c243a760bb2d6db14bc4a73a9976025723a3e5cd: Status 404 returned error can't find the container with id 94df5edb9b14d259fdcd9f75c243a760bb2d6db14bc4a73a9976025723a3e5cd Feb 03 12:30:17 crc kubenswrapper[4820]: W0203 12:30:17.216694 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4687f816_7024_4244_b049_d94441b1cef0.slice/crio-2d764cdc247f5ac76fe7a9e979ffad16f363e157daa1f91ed7d8763300cb0a66 WatchSource:0}: Error finding container 2d764cdc247f5ac76fe7a9e979ffad16f363e157daa1f91ed7d8763300cb0a66: Status 404 returned error can't find the container with id 2d764cdc247f5ac76fe7a9e979ffad16f363e157daa1f91ed7d8763300cb0a66 Feb 03 12:30:17 crc kubenswrapper[4820]: I0203 12:30:17.273381 4820 generic.go:334] "Generic (PLEG): container finished" podID="b594ebbd-4a60-46ca-92f6-0e4869499849" containerID="8df804dfd8e904c3d0861dd203d9e73de473141a0420f782c3aa592df09a484d" exitCode=0 Feb 03 12:30:17 crc kubenswrapper[4820]: I0203 12:30:17.288339 4820 generic.go:334] "Generic (PLEG): container finished" podID="5f5c5f87-b592-4f5d-86bc-3069985ae61a" containerID="c9e01280b550b17d3a841e1154b3fff577135e72b1cb6b49445fd84ecc47fac5" exitCode=0 Feb 03 12:30:17 crc kubenswrapper[4820]: I0203 12:30:17.709260 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4d60ff7f-22cd-4518-a7d4-77dd49b9df10-ovsdbserver-sb" 
(OuterVolumeSpecName: "ovsdbserver-sb") pod "4d60ff7f-22cd-4518-a7d4-77dd49b9df10" (UID: "4d60ff7f-22cd-4518-a7d4-77dd49b9df10"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:30:17 crc kubenswrapper[4820]: I0203 12:30:17.738465 4820 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4d60ff7f-22cd-4518-a7d4-77dd49b9df10-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 03 12:30:17 crc kubenswrapper[4820]: I0203 12:30:17.939122 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-65c5474c77-d2q2r"] Feb 03 12:30:17 crc kubenswrapper[4820]: I0203 12:30:17.939161 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-sync-g8wq4" event={"ID":"b594ebbd-4a60-46ca-92f6-0e4869499849","Type":"ContainerDied","Data":"8df804dfd8e904c3d0861dd203d9e73de473141a0420f782c3aa592df09a484d"} Feb 03 12:30:17 crc kubenswrapper[4820]: I0203 12:30:17.939198 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-d5vcp" event={"ID":"5f5c5f87-b592-4f5d-86bc-3069985ae61a","Type":"ContainerDied","Data":"c9e01280b550b17d3a841e1154b3fff577135e72b1cb6b49445fd84ecc47fac5"} Feb 03 12:30:17 crc kubenswrapper[4820]: I0203 12:30:17.939212 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-65c5474c77-d2q2r" event={"ID":"4687f816-7024-4244-b049-d94441b1cef0","Type":"ContainerStarted","Data":"2d764cdc247f5ac76fe7a9e979ffad16f363e157daa1f91ed7d8763300cb0a66"} Feb 03 12:30:17 crc kubenswrapper[4820]: I0203 12:30:17.939222 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1c5159f2-a29f-4730-8198-8a866761947c","Type":"ContainerStarted","Data":"94df5edb9b14d259fdcd9f75c243a760bb2d6db14bc4a73a9976025723a3e5cd"} Feb 03 12:30:17 crc kubenswrapper[4820]: I0203 12:30:17.939234 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"e42f59b9-8816-4e43-869b-1c6cd36b4034","Type":"ContainerStarted","Data":"e60cf3d68962fbcc86c06d09f1fe6ddc05184f26d23b7d38591959cd6a7268a2"} Feb 03 12:30:18 crc kubenswrapper[4820]: I0203 12:30:18.090344 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-vzphk"] Feb 03 12:30:18 crc kubenswrapper[4820]: I0203 12:30:18.114054 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-vzphk"] Feb 03 12:30:18 crc kubenswrapper[4820]: I0203 12:30:18.419554 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"0658b201-7c4e-4d71-ba2d-c2cb5dee1553","Type":"ContainerStarted","Data":"371f09d729ac39ba178842592d0e3292231fdc93369935a7b6ea07621067ede6"} Feb 03 12:30:19 crc kubenswrapper[4820]: I0203 12:30:19.175700 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4d60ff7f-22cd-4518-a7d4-77dd49b9df10" path="/var/lib/kubelet/pods/4d60ff7f-22cd-4518-a7d4-77dd49b9df10/volumes" Feb 03 12:30:19 crc kubenswrapper[4820]: I0203 12:30:19.270786 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-db-sync-g8wq4" Feb 03 12:30:19 crc kubenswrapper[4820]: I0203 12:30:19.325743 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r2zpd\" (UniqueName: \"kubernetes.io/projected/b594ebbd-4a60-46ca-92f6-0e4869499849-kube-api-access-r2zpd\") pod \"b594ebbd-4a60-46ca-92f6-0e4869499849\" (UID: \"b594ebbd-4a60-46ca-92f6-0e4869499849\") " Feb 03 12:30:19 crc kubenswrapper[4820]: I0203 12:30:19.325832 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b594ebbd-4a60-46ca-92f6-0e4869499849-combined-ca-bundle\") pod \"b594ebbd-4a60-46ca-92f6-0e4869499849\" (UID: \"b594ebbd-4a60-46ca-92f6-0e4869499849\") " Feb 03 12:30:19 crc kubenswrapper[4820]: I0203 12:30:19.326003 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b594ebbd-4a60-46ca-92f6-0e4869499849-config-data\") pod \"b594ebbd-4a60-46ca-92f6-0e4869499849\" (UID: \"b594ebbd-4a60-46ca-92f6-0e4869499849\") " Feb 03 12:30:19 crc kubenswrapper[4820]: I0203 12:30:19.326189 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b594ebbd-4a60-46ca-92f6-0e4869499849-db-sync-config-data\") pod \"b594ebbd-4a60-46ca-92f6-0e4869499849\" (UID: \"b594ebbd-4a60-46ca-92f6-0e4869499849\") " Feb 03 12:30:19 crc kubenswrapper[4820]: I0203 12:30:19.335437 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b594ebbd-4a60-46ca-92f6-0e4869499849-kube-api-access-r2zpd" (OuterVolumeSpecName: "kube-api-access-r2zpd") pod "b594ebbd-4a60-46ca-92f6-0e4869499849" (UID: "b594ebbd-4a60-46ca-92f6-0e4869499849"). InnerVolumeSpecName "kube-api-access-r2zpd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:30:19 crc kubenswrapper[4820]: I0203 12:30:19.345066 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b594ebbd-4a60-46ca-92f6-0e4869499849-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "b594ebbd-4a60-46ca-92f6-0e4869499849" (UID: "b594ebbd-4a60-46ca-92f6-0e4869499849"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:30:19 crc kubenswrapper[4820]: I0203 12:30:19.399491 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b594ebbd-4a60-46ca-92f6-0e4869499849-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b594ebbd-4a60-46ca-92f6-0e4869499849" (UID: "b594ebbd-4a60-46ca-92f6-0e4869499849"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:30:19 crc kubenswrapper[4820]: I0203 12:30:19.431067 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r2zpd\" (UniqueName: \"kubernetes.io/projected/b594ebbd-4a60-46ca-92f6-0e4869499849-kube-api-access-r2zpd\") on node \"crc\" DevicePath \"\"" Feb 03 12:30:19 crc kubenswrapper[4820]: I0203 12:30:19.432226 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b594ebbd-4a60-46ca-92f6-0e4869499849-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 12:30:19 crc kubenswrapper[4820]: I0203 12:30:19.432253 4820 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b594ebbd-4a60-46ca-92f6-0e4869499849-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 12:30:19 crc kubenswrapper[4820]: I0203 12:30:19.457007 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b594ebbd-4a60-46ca-92f6-0e4869499849-config-data" (OuterVolumeSpecName: "config-data") pod "b594ebbd-4a60-46ca-92f6-0e4869499849" (UID: "b594ebbd-4a60-46ca-92f6-0e4869499849"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:30:19 crc kubenswrapper[4820]: I0203 12:30:19.459151 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1c5159f2-a29f-4730-8198-8a866761947c","Type":"ContainerStarted","Data":"3cecb76232c927bf31e46cf5118354f1de5488fdb2715dba4177c3b82302abb0"} Feb 03 12:30:19 crc kubenswrapper[4820]: I0203 12:30:19.468314 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-sync-g8wq4" event={"ID":"b594ebbd-4a60-46ca-92f6-0e4869499849","Type":"ContainerDied","Data":"a9fa679bd0a21c32302ee6dadab9bb0146cc8b3b3a6a65fea13a40b241bc3f41"} Feb 03 12:30:19 crc kubenswrapper[4820]: I0203 12:30:19.468379 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a9fa679bd0a21c32302ee6dadab9bb0146cc8b3b3a6a65fea13a40b241bc3f41" Feb 03 12:30:19 crc kubenswrapper[4820]: I0203 12:30:19.468681 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-db-sync-g8wq4" Feb 03 12:30:19 crc kubenswrapper[4820]: I0203 12:30:19.538316 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-d5vcp" event={"ID":"5f5c5f87-b592-4f5d-86bc-3069985ae61a","Type":"ContainerStarted","Data":"60c25f427d255d56940c30e3cf98834c61454d39357bd327a50e1df367b5536f"} Feb 03 12:30:19 crc kubenswrapper[4820]: I0203 12:30:19.538498 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-785d8bcb8c-d5vcp" Feb 03 12:30:19 crc kubenswrapper[4820]: I0203 12:30:19.567568 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b594ebbd-4a60-46ca-92f6-0e4869499849-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 12:30:19 crc kubenswrapper[4820]: I0203 12:30:19.580287 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-785d8bcb8c-d5vcp" podStartSLOduration=8.580265519 podStartE2EDuration="8.580265519s" podCreationTimestamp="2026-02-03 12:30:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:30:19.577799914 +0000 UTC m=+1537.100875788" watchObservedRunningTime="2026-02-03 12:30:19.580265519 +0000 UTC m=+1537.103341383" Feb 03 12:30:19 crc kubenswrapper[4820]: I0203 12:30:19.591195 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"0658b201-7c4e-4d71-ba2d-c2cb5dee1553","Type":"ContainerStarted","Data":"20dd722d66e9625364bf54e86deeb632a5b1f627dff6e3f5f890ff2dd6b81942"} Feb 03 12:30:19 crc kubenswrapper[4820]: I0203 12:30:19.640683 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=27.64066279 podStartE2EDuration="27.64066279s" podCreationTimestamp="2026-02-03 12:29:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:30:19.632356817 +0000 UTC m=+1537.155432701" watchObservedRunningTime="2026-02-03 12:30:19.64066279 +0000 UTC m=+1537.163738654" Feb 03 12:30:20 crc kubenswrapper[4820]: I0203 12:30:20.573801 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-api-0"] Feb 03 12:30:20 crc kubenswrapper[4820]: E0203 12:30:20.575313 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b594ebbd-4a60-46ca-92f6-0e4869499849" containerName="watcher-db-sync" Feb 03 12:30:20 crc kubenswrapper[4820]: I0203 12:30:20.575449 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="b594ebbd-4a60-46ca-92f6-0e4869499849" containerName="watcher-db-sync" Feb 03 12:30:20 crc kubenswrapper[4820]: I0203 12:30:20.575798 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="b594ebbd-4a60-46ca-92f6-0e4869499849" containerName="watcher-db-sync" Feb 03 12:30:20 crc kubenswrapper[4820]: I0203 12:30:20.577242 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Feb 03 12:30:20 crc kubenswrapper[4820]: I0203 12:30:20.592952 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-decision-engine-0"] Feb 03 12:30:20 crc kubenswrapper[4820]: I0203 12:30:20.594721 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-decision-engine-0" Feb 03 12:30:20 crc kubenswrapper[4820]: I0203 12:30:20.622623 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-watcher-dockercfg-nprkm" Feb 03 12:30:20 crc kubenswrapper[4820]: I0203 12:30:20.622957 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-api-config-data" Feb 03 12:30:20 crc kubenswrapper[4820]: I0203 12:30:20.624814 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-decision-engine-config-data" Feb 03 12:30:20 crc kubenswrapper[4820]: I0203 12:30:20.689076 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-applier-0"] Feb 03 12:30:20 crc kubenswrapper[4820]: I0203 12:30:20.691149 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-applier-0" Feb 03 12:30:20 crc kubenswrapper[4820]: I0203 12:30:20.701667 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cd46da3e-bb82-4990-8d29-03f53c601f36-logs\") pod \"watcher-decision-engine-0\" (UID: \"cd46da3e-bb82-4990-8d29-03f53c601f36\") " pod="openstack/watcher-decision-engine-0" Feb 03 12:30:20 crc kubenswrapper[4820]: I0203 12:30:20.703067 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/cd46da3e-bb82-4990-8d29-03f53c601f36-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"cd46da3e-bb82-4990-8d29-03f53c601f36\") " pod="openstack/watcher-decision-engine-0" Feb 03 12:30:20 crc kubenswrapper[4820]: I0203 12:30:20.703124 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38fcd454-4d58-42a9-beb1-d8640ab7a9a7-config-data\") pod \"watcher-api-0\" (UID: \"38fcd454-4d58-42a9-beb1-d8640ab7a9a7\") " pod="openstack/watcher-api-0" Feb 03 12:30:20 crc kubenswrapper[4820]: I0203 12:30:20.703202 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-csbcd\" (UniqueName: \"kubernetes.io/projected/cd46da3e-bb82-4990-8d29-03f53c601f36-kube-api-access-csbcd\") pod \"watcher-decision-engine-0\" (UID: \"cd46da3e-bb82-4990-8d29-03f53c601f36\") " pod="openstack/watcher-decision-engine-0" Feb 03 12:30:20 crc kubenswrapper[4820]: I0203 12:30:20.703241 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd46da3e-bb82-4990-8d29-03f53c601f36-config-data\") pod \"watcher-decision-engine-0\" (UID: \"cd46da3e-bb82-4990-8d29-03f53c601f36\") " pod="openstack/watcher-decision-engine-0" Feb 03 12:30:20 crc kubenswrapper[4820]: I0203 12:30:20.703317 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38fcd454-4d58-42a9-beb1-d8640ab7a9a7-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"38fcd454-4d58-42a9-beb1-d8640ab7a9a7\") " pod="openstack/watcher-api-0" Feb 03 12:30:20 crc kubenswrapper[4820]: I0203 12:30:20.703342 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/38fcd454-4d58-42a9-beb1-d8640ab7a9a7-logs\") pod \"watcher-api-0\" (UID: 
\"38fcd454-4d58-42a9-beb1-d8640ab7a9a7\") " pod="openstack/watcher-api-0" Feb 03 12:30:20 crc kubenswrapper[4820]: I0203 12:30:20.703500 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/38fcd454-4d58-42a9-beb1-d8640ab7a9a7-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"38fcd454-4d58-42a9-beb1-d8640ab7a9a7\") " pod="openstack/watcher-api-0" Feb 03 12:30:20 crc kubenswrapper[4820]: I0203 12:30:20.703537 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cfk7j\" (UniqueName: \"kubernetes.io/projected/38fcd454-4d58-42a9-beb1-d8640ab7a9a7-kube-api-access-cfk7j\") pod \"watcher-api-0\" (UID: \"38fcd454-4d58-42a9-beb1-d8640ab7a9a7\") " pod="openstack/watcher-api-0" Feb 03 12:30:20 crc kubenswrapper[4820]: I0203 12:30:20.703562 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd46da3e-bb82-4990-8d29-03f53c601f36-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"cd46da3e-bb82-4990-8d29-03f53c601f36\") " pod="openstack/watcher-decision-engine-0" Feb 03 12:30:20 crc kubenswrapper[4820]: I0203 12:30:20.711127 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Feb 03 12:30:20 crc kubenswrapper[4820]: I0203 12:30:20.713535 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-applier-config-data" Feb 03 12:30:20 crc kubenswrapper[4820]: I0203 12:30:20.725623 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"] Feb 03 12:30:20 crc kubenswrapper[4820]: I0203 12:30:20.729975 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"e42f59b9-8816-4e43-869b-1c6cd36b4034","Type":"ContainerStarted","Data":"d0155325ce5a11833035e52807754667542050292f95bfb8d86eeb0c39ac1451"} Feb 03 12:30:20 crc kubenswrapper[4820]: I0203 12:30:20.737235 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="e42f59b9-8816-4e43-869b-1c6cd36b4034" containerName="glance-httpd" containerID="cri-o://d0155325ce5a11833035e52807754667542050292f95bfb8d86eeb0c39ac1451" gracePeriod=30 Feb 03 12:30:20 crc kubenswrapper[4820]: I0203 12:30:20.737403 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="e42f59b9-8816-4e43-869b-1c6cd36b4034" containerName="glance-log" containerID="cri-o://e60cf3d68962fbcc86c06d09f1fe6ddc05184f26d23b7d38591959cd6a7268a2" gracePeriod=30 Feb 03 12:30:20 crc kubenswrapper[4820]: I0203 12:30:20.757804 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-applier-0"] Feb 03 12:30:20 crc kubenswrapper[4820]: I0203 12:30:20.806192 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd46da3e-bb82-4990-8d29-03f53c601f36-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"cd46da3e-bb82-4990-8d29-03f53c601f36\") " pod="openstack/watcher-decision-engine-0" Feb 03 12:30:20 crc kubenswrapper[4820]: I0203 12:30:20.806248 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cfk7j\" (UniqueName: 
\"kubernetes.io/projected/38fcd454-4d58-42a9-beb1-d8640ab7a9a7-kube-api-access-cfk7j\") pod \"watcher-api-0\" (UID: \"38fcd454-4d58-42a9-beb1-d8640ab7a9a7\") " pod="openstack/watcher-api-0" Feb 03 12:30:20 crc kubenswrapper[4820]: I0203 12:30:20.806297 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ed16a73-0e39-4ac4-bd01-820e6a7a45b0-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"6ed16a73-0e39-4ac4-bd01-820e6a7a45b0\") " pod="openstack/watcher-applier-0" Feb 03 12:30:20 crc kubenswrapper[4820]: I0203 12:30:20.806396 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cd46da3e-bb82-4990-8d29-03f53c601f36-logs\") pod \"watcher-decision-engine-0\" (UID: \"cd46da3e-bb82-4990-8d29-03f53c601f36\") " pod="openstack/watcher-decision-engine-0" Feb 03 12:30:20 crc kubenswrapper[4820]: I0203 12:30:20.806424 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/cd46da3e-bb82-4990-8d29-03f53c601f36-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"cd46da3e-bb82-4990-8d29-03f53c601f36\") " pod="openstack/watcher-decision-engine-0" Feb 03 12:30:20 crc kubenswrapper[4820]: I0203 12:30:20.806444 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38fcd454-4d58-42a9-beb1-d8640ab7a9a7-config-data\") pod \"watcher-api-0\" (UID: \"38fcd454-4d58-42a9-beb1-d8640ab7a9a7\") " pod="openstack/watcher-api-0" Feb 03 12:30:20 crc kubenswrapper[4820]: I0203 12:30:20.806478 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-csbcd\" (UniqueName: \"kubernetes.io/projected/cd46da3e-bb82-4990-8d29-03f53c601f36-kube-api-access-csbcd\") pod \"watcher-decision-engine-0\" (UID: \"cd46da3e-bb82-4990-8d29-03f53c601f36\") " pod="openstack/watcher-decision-engine-0" Feb 03 12:30:20 crc kubenswrapper[4820]: I0203 12:30:20.806507 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd46da3e-bb82-4990-8d29-03f53c601f36-config-data\") pod \"watcher-decision-engine-0\" (UID: \"cd46da3e-bb82-4990-8d29-03f53c601f36\") " pod="openstack/watcher-decision-engine-0" Feb 03 12:30:20 crc kubenswrapper[4820]: I0203 12:30:20.806536 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38fcd454-4d58-42a9-beb1-d8640ab7a9a7-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"38fcd454-4d58-42a9-beb1-d8640ab7a9a7\") " pod="openstack/watcher-api-0" Feb 03 12:30:20 crc kubenswrapper[4820]: I0203 12:30:20.806554 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/38fcd454-4d58-42a9-beb1-d8640ab7a9a7-logs\") pod \"watcher-api-0\" (UID: \"38fcd454-4d58-42a9-beb1-d8640ab7a9a7\") " pod="openstack/watcher-api-0" Feb 03 12:30:20 crc kubenswrapper[4820]: I0203 12:30:20.806589 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5hpcx\" (UniqueName: \"kubernetes.io/projected/6ed16a73-0e39-4ac4-bd01-820e6a7a45b0-kube-api-access-5hpcx\") pod \"watcher-applier-0\" (UID: \"6ed16a73-0e39-4ac4-bd01-820e6a7a45b0\") " 
pod="openstack/watcher-applier-0" Feb 03 12:30:20 crc kubenswrapper[4820]: I0203 12:30:20.806623 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6ed16a73-0e39-4ac4-bd01-820e6a7a45b0-logs\") pod \"watcher-applier-0\" (UID: \"6ed16a73-0e39-4ac4-bd01-820e6a7a45b0\") " pod="openstack/watcher-applier-0" Feb 03 12:30:20 crc kubenswrapper[4820]: I0203 12:30:20.806655 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ed16a73-0e39-4ac4-bd01-820e6a7a45b0-config-data\") pod \"watcher-applier-0\" (UID: \"6ed16a73-0e39-4ac4-bd01-820e6a7a45b0\") " pod="openstack/watcher-applier-0" Feb 03 12:30:20 crc kubenswrapper[4820]: I0203 12:30:20.806687 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/38fcd454-4d58-42a9-beb1-d8640ab7a9a7-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"38fcd454-4d58-42a9-beb1-d8640ab7a9a7\") " pod="openstack/watcher-api-0" Feb 03 12:30:20 crc kubenswrapper[4820]: I0203 12:30:20.819357 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cd46da3e-bb82-4990-8d29-03f53c601f36-logs\") pod \"watcher-decision-engine-0\" (UID: \"cd46da3e-bb82-4990-8d29-03f53c601f36\") " pod="openstack/watcher-decision-engine-0" Feb 03 12:30:20 crc kubenswrapper[4820]: I0203 12:30:20.819712 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/38fcd454-4d58-42a9-beb1-d8640ab7a9a7-logs\") pod \"watcher-api-0\" (UID: \"38fcd454-4d58-42a9-beb1-d8640ab7a9a7\") " pod="openstack/watcher-api-0" Feb 03 12:30:20 crc kubenswrapper[4820]: I0203 12:30:20.838086 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/cd46da3e-bb82-4990-8d29-03f53c601f36-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"cd46da3e-bb82-4990-8d29-03f53c601f36\") " pod="openstack/watcher-decision-engine-0" Feb 03 12:30:20 crc kubenswrapper[4820]: I0203 12:30:20.839114 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/38fcd454-4d58-42a9-beb1-d8640ab7a9a7-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"38fcd454-4d58-42a9-beb1-d8640ab7a9a7\") " pod="openstack/watcher-api-0" Feb 03 12:30:20 crc kubenswrapper[4820]: I0203 12:30:20.937676 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5hpcx\" (UniqueName: \"kubernetes.io/projected/6ed16a73-0e39-4ac4-bd01-820e6a7a45b0-kube-api-access-5hpcx\") pod \"watcher-applier-0\" (UID: \"6ed16a73-0e39-4ac4-bd01-820e6a7a45b0\") " pod="openstack/watcher-applier-0" Feb 03 12:30:20 crc kubenswrapper[4820]: I0203 12:30:20.938140 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6ed16a73-0e39-4ac4-bd01-820e6a7a45b0-logs\") pod \"watcher-applier-0\" (UID: \"6ed16a73-0e39-4ac4-bd01-820e6a7a45b0\") " pod="openstack/watcher-applier-0" Feb 03 12:30:20 crc kubenswrapper[4820]: I0203 12:30:20.938291 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ed16a73-0e39-4ac4-bd01-820e6a7a45b0-config-data\") pod 
\"watcher-applier-0\" (UID: \"6ed16a73-0e39-4ac4-bd01-820e6a7a45b0\") " pod="openstack/watcher-applier-0" Feb 03 12:30:20 crc kubenswrapper[4820]: I0203 12:30:20.938448 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ed16a73-0e39-4ac4-bd01-820e6a7a45b0-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"6ed16a73-0e39-4ac4-bd01-820e6a7a45b0\") " pod="openstack/watcher-applier-0" Feb 03 12:30:20 crc kubenswrapper[4820]: I0203 12:30:20.943467 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6ed16a73-0e39-4ac4-bd01-820e6a7a45b0-logs\") pod \"watcher-applier-0\" (UID: \"6ed16a73-0e39-4ac4-bd01-820e6a7a45b0\") " pod="openstack/watcher-applier-0" Feb 03 12:30:20 crc kubenswrapper[4820]: I0203 12:30:20.955942 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ed16a73-0e39-4ac4-bd01-820e6a7a45b0-config-data\") pod \"watcher-applier-0\" (UID: \"6ed16a73-0e39-4ac4-bd01-820e6a7a45b0\") " pod="openstack/watcher-applier-0" Feb 03 12:30:20 crc kubenswrapper[4820]: I0203 12:30:20.957376 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ed16a73-0e39-4ac4-bd01-820e6a7a45b0-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"6ed16a73-0e39-4ac4-bd01-820e6a7a45b0\") " pod="openstack/watcher-applier-0" Feb 03 12:30:21 crc kubenswrapper[4820]: I0203 12:30:21.023344 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd46da3e-bb82-4990-8d29-03f53c601f36-config-data\") pod \"watcher-decision-engine-0\" (UID: \"cd46da3e-bb82-4990-8d29-03f53c601f36\") " pod="openstack/watcher-decision-engine-0" Feb 03 12:30:21 crc kubenswrapper[4820]: I0203 12:30:21.023543 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38fcd454-4d58-42a9-beb1-d8640ab7a9a7-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"38fcd454-4d58-42a9-beb1-d8640ab7a9a7\") " pod="openstack/watcher-api-0" Feb 03 12:30:21 crc kubenswrapper[4820]: I0203 12:30:21.024514 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd46da3e-bb82-4990-8d29-03f53c601f36-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"cd46da3e-bb82-4990-8d29-03f53c601f36\") " pod="openstack/watcher-decision-engine-0" Feb 03 12:30:21 crc kubenswrapper[4820]: I0203 12:30:21.025248 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38fcd454-4d58-42a9-beb1-d8640ab7a9a7-config-data\") pod \"watcher-api-0\" (UID: \"38fcd454-4d58-42a9-beb1-d8640ab7a9a7\") " pod="openstack/watcher-api-0" Feb 03 12:30:21 crc kubenswrapper[4820]: I0203 12:30:21.028964 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-csbcd\" (UniqueName: \"kubernetes.io/projected/cd46da3e-bb82-4990-8d29-03f53c601f36-kube-api-access-csbcd\") pod \"watcher-decision-engine-0\" (UID: \"cd46da3e-bb82-4990-8d29-03f53c601f36\") " pod="openstack/watcher-decision-engine-0" Feb 03 12:30:21 crc kubenswrapper[4820]: I0203 12:30:21.051208 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cfk7j\" (UniqueName: 
\"kubernetes.io/projected/38fcd454-4d58-42a9-beb1-d8640ab7a9a7-kube-api-access-cfk7j\") pod \"watcher-api-0\" (UID: \"38fcd454-4d58-42a9-beb1-d8640ab7a9a7\") " pod="openstack/watcher-api-0" Feb 03 12:30:21 crc kubenswrapper[4820]: I0203 12:30:21.077672 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5hpcx\" (UniqueName: \"kubernetes.io/projected/6ed16a73-0e39-4ac4-bd01-820e6a7a45b0-kube-api-access-5hpcx\") pod \"watcher-applier-0\" (UID: \"6ed16a73-0e39-4ac4-bd01-820e6a7a45b0\") " pod="openstack/watcher-applier-0" Feb 03 12:30:21 crc kubenswrapper[4820]: I0203 12:30:21.134653 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=11.134619663 podStartE2EDuration="11.134619663s" podCreationTimestamp="2026-02-03 12:30:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:30:21.120555553 +0000 UTC m=+1538.643631417" watchObservedRunningTime="2026-02-03 12:30:21.134619663 +0000 UTC m=+1538.657695527" Feb 03 12:30:21 crc kubenswrapper[4820]: I0203 12:30:21.242953 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Feb 03 12:30:21 crc kubenswrapper[4820]: I0203 12:30:21.255538 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-decision-engine-0" Feb 03 12:30:21 crc kubenswrapper[4820]: I0203 12:30:21.542025 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-applier-0" Feb 03 12:30:21 crc kubenswrapper[4820]: I0203 12:30:21.775351 4820 generic.go:334] "Generic (PLEG): container finished" podID="e42f59b9-8816-4e43-869b-1c6cd36b4034" containerID="e60cf3d68962fbcc86c06d09f1fe6ddc05184f26d23b7d38591959cd6a7268a2" exitCode=143 Feb 03 12:30:21 crc kubenswrapper[4820]: I0203 12:30:21.775800 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"e42f59b9-8816-4e43-869b-1c6cd36b4034","Type":"ContainerDied","Data":"e60cf3d68962fbcc86c06d09f1fe6ddc05184f26d23b7d38591959cd6a7268a2"} Feb 03 12:30:22 crc kubenswrapper[4820]: I0203 12:30:22.634097 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7c85bb46f7-qz2f2"] Feb 03 12:30:22 crc kubenswrapper[4820]: I0203 12:30:22.762304 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-5fdc8588b4-jtjr8"] Feb 03 12:30:22 crc kubenswrapper[4820]: I0203 12:30:22.764415 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5fdc8588b4-jtjr8" Feb 03 12:30:22 crc kubenswrapper[4820]: I0203 12:30:22.769344 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc" Feb 03 12:30:22 crc kubenswrapper[4820]: I0203 12:30:22.802622 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5fdc8588b4-jtjr8"] Feb 03 12:30:22 crc kubenswrapper[4820]: I0203 12:30:22.808345 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1c5159f2-a29f-4730-8198-8a866761947c","Type":"ContainerStarted","Data":"17f21a70456dca6a0b5ba66bee4e9b444cc18c640de3e1dabd95408e79fa067e"} Feb 03 12:30:22 crc kubenswrapper[4820]: I0203 12:30:22.808609 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="1c5159f2-a29f-4730-8198-8a866761947c" containerName="glance-log" containerID="cri-o://3cecb76232c927bf31e46cf5118354f1de5488fdb2715dba4177c3b82302abb0" gracePeriod=30 Feb 03 12:30:22 crc kubenswrapper[4820]: I0203 12:30:22.809312 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="1c5159f2-a29f-4730-8198-8a866761947c" containerName="glance-httpd" containerID="cri-o://17f21a70456dca6a0b5ba66bee4e9b444cc18c640de3e1dabd95408e79fa067e" gracePeriod=30 Feb 03 12:30:22 crc kubenswrapper[4820]: I0203 12:30:22.837205 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/17c371f7-f032-4444-8d4b-1183a224c7b0-scripts\") pod \"horizon-5fdc8588b4-jtjr8\" (UID: \"17c371f7-f032-4444-8d4b-1183a224c7b0\") " pod="openstack/horizon-5fdc8588b4-jtjr8" Feb 03 12:30:22 crc kubenswrapper[4820]: I0203 12:30:22.837329 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/17c371f7-f032-4444-8d4b-1183a224c7b0-horizon-secret-key\") pod \"horizon-5fdc8588b4-jtjr8\" (UID: \"17c371f7-f032-4444-8d4b-1183a224c7b0\") " pod="openstack/horizon-5fdc8588b4-jtjr8" Feb 03 12:30:22 crc kubenswrapper[4820]: I0203 12:30:22.837370 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/17c371f7-f032-4444-8d4b-1183a224c7b0-logs\") pod \"horizon-5fdc8588b4-jtjr8\" (UID: \"17c371f7-f032-4444-8d4b-1183a224c7b0\") " pod="openstack/horizon-5fdc8588b4-jtjr8" Feb 03 12:30:22 crc kubenswrapper[4820]: I0203 12:30:22.837425 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/17c371f7-f032-4444-8d4b-1183a224c7b0-horizon-tls-certs\") pod \"horizon-5fdc8588b4-jtjr8\" (UID: \"17c371f7-f032-4444-8d4b-1183a224c7b0\") " pod="openstack/horizon-5fdc8588b4-jtjr8" Feb 03 12:30:22 crc kubenswrapper[4820]: I0203 12:30:22.837443 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/17c371f7-f032-4444-8d4b-1183a224c7b0-config-data\") pod \"horizon-5fdc8588b4-jtjr8\" (UID: \"17c371f7-f032-4444-8d4b-1183a224c7b0\") " pod="openstack/horizon-5fdc8588b4-jtjr8" Feb 03 12:30:22 crc kubenswrapper[4820]: I0203 12:30:22.837498 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-td9pm\" (UniqueName: \"kubernetes.io/projected/17c371f7-f032-4444-8d4b-1183a224c7b0-kube-api-access-td9pm\") pod \"horizon-5fdc8588b4-jtjr8\" (UID: \"17c371f7-f032-4444-8d4b-1183a224c7b0\") " pod="openstack/horizon-5fdc8588b4-jtjr8" Feb 03 12:30:22 crc kubenswrapper[4820]: I0203 12:30:22.837531 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17c371f7-f032-4444-8d4b-1183a224c7b0-combined-ca-bundle\") pod \"horizon-5fdc8588b4-jtjr8\" (UID: \"17c371f7-f032-4444-8d4b-1183a224c7b0\") " pod="openstack/horizon-5fdc8588b4-jtjr8" Feb 03 12:30:22 crc kubenswrapper[4820]: I0203 12:30:22.851392 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-65c5474c77-d2q2r"] Feb 03 12:30:22 crc kubenswrapper[4820]: I0203 12:30:22.853459 4820 generic.go:334] "Generic (PLEG): container finished" podID="e42f59b9-8816-4e43-869b-1c6cd36b4034" containerID="d0155325ce5a11833035e52807754667542050292f95bfb8d86eeb0c39ac1451" exitCode=0 Feb 03 12:30:22 crc kubenswrapper[4820]: I0203 12:30:22.853508 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"e42f59b9-8816-4e43-869b-1c6cd36b4034","Type":"ContainerDied","Data":"d0155325ce5a11833035e52807754667542050292f95bfb8d86eeb0c39ac1451"} Feb 03 12:30:22 crc kubenswrapper[4820]: I0203 12:30:22.927035 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=9.927006716 podStartE2EDuration="9.927006716s" podCreationTimestamp="2026-02-03 12:30:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:30:22.872519004 +0000 UTC m=+1540.395594878" watchObservedRunningTime="2026-02-03 12:30:22.927006716 +0000 UTC m=+1540.450082580" Feb 03 12:30:22 crc kubenswrapper[4820]: I0203 12:30:22.940164 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/17c371f7-f032-4444-8d4b-1183a224c7b0-scripts\") pod \"horizon-5fdc8588b4-jtjr8\" (UID: \"17c371f7-f032-4444-8d4b-1183a224c7b0\") " pod="openstack/horizon-5fdc8588b4-jtjr8" Feb 03 12:30:22 crc kubenswrapper[4820]: I0203 12:30:22.941679 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/17c371f7-f032-4444-8d4b-1183a224c7b0-scripts\") pod \"horizon-5fdc8588b4-jtjr8\" (UID: \"17c371f7-f032-4444-8d4b-1183a224c7b0\") " pod="openstack/horizon-5fdc8588b4-jtjr8" Feb 03 12:30:22 crc kubenswrapper[4820]: I0203 12:30:22.943644 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/17c371f7-f032-4444-8d4b-1183a224c7b0-horizon-secret-key\") pod \"horizon-5fdc8588b4-jtjr8\" (UID: \"17c371f7-f032-4444-8d4b-1183a224c7b0\") " pod="openstack/horizon-5fdc8588b4-jtjr8" Feb 03 12:30:22 crc kubenswrapper[4820]: I0203 12:30:22.944081 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/17c371f7-f032-4444-8d4b-1183a224c7b0-logs\") pod \"horizon-5fdc8588b4-jtjr8\" (UID: \"17c371f7-f032-4444-8d4b-1183a224c7b0\") " pod="openstack/horizon-5fdc8588b4-jtjr8" Feb 03 12:30:22 crc kubenswrapper[4820]: I0203 12:30:22.944506 4820 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/17c371f7-f032-4444-8d4b-1183a224c7b0-horizon-tls-certs\") pod \"horizon-5fdc8588b4-jtjr8\" (UID: \"17c371f7-f032-4444-8d4b-1183a224c7b0\") " pod="openstack/horizon-5fdc8588b4-jtjr8" Feb 03 12:30:22 crc kubenswrapper[4820]: I0203 12:30:22.944664 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/17c371f7-f032-4444-8d4b-1183a224c7b0-config-data\") pod \"horizon-5fdc8588b4-jtjr8\" (UID: \"17c371f7-f032-4444-8d4b-1183a224c7b0\") " pod="openstack/horizon-5fdc8588b4-jtjr8" Feb 03 12:30:22 crc kubenswrapper[4820]: I0203 12:30:22.945175 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-td9pm\" (UniqueName: \"kubernetes.io/projected/17c371f7-f032-4444-8d4b-1183a224c7b0-kube-api-access-td9pm\") pod \"horizon-5fdc8588b4-jtjr8\" (UID: \"17c371f7-f032-4444-8d4b-1183a224c7b0\") " pod="openstack/horizon-5fdc8588b4-jtjr8" Feb 03 12:30:22 crc kubenswrapper[4820]: I0203 12:30:22.945389 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17c371f7-f032-4444-8d4b-1183a224c7b0-combined-ca-bundle\") pod \"horizon-5fdc8588b4-jtjr8\" (UID: \"17c371f7-f032-4444-8d4b-1183a224c7b0\") " pod="openstack/horizon-5fdc8588b4-jtjr8" Feb 03 12:30:22 crc kubenswrapper[4820]: I0203 12:30:22.945598 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/17c371f7-f032-4444-8d4b-1183a224c7b0-logs\") pod \"horizon-5fdc8588b4-jtjr8\" (UID: \"17c371f7-f032-4444-8d4b-1183a224c7b0\") " pod="openstack/horizon-5fdc8588b4-jtjr8" Feb 03 12:30:22 crc kubenswrapper[4820]: I0203 12:30:22.953584 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/17c371f7-f032-4444-8d4b-1183a224c7b0-config-data\") pod \"horizon-5fdc8588b4-jtjr8\" (UID: \"17c371f7-f032-4444-8d4b-1183a224c7b0\") " pod="openstack/horizon-5fdc8588b4-jtjr8" Feb 03 12:30:22 crc kubenswrapper[4820]: I0203 12:30:22.978486 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/17c371f7-f032-4444-8d4b-1183a224c7b0-horizon-tls-certs\") pod \"horizon-5fdc8588b4-jtjr8\" (UID: \"17c371f7-f032-4444-8d4b-1183a224c7b0\") " pod="openstack/horizon-5fdc8588b4-jtjr8" Feb 03 12:30:23 crc kubenswrapper[4820]: I0203 12:30:22.991876 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/17c371f7-f032-4444-8d4b-1183a224c7b0-horizon-secret-key\") pod \"horizon-5fdc8588b4-jtjr8\" (UID: \"17c371f7-f032-4444-8d4b-1183a224c7b0\") " pod="openstack/horizon-5fdc8588b4-jtjr8" Feb 03 12:30:23 crc kubenswrapper[4820]: I0203 12:30:22.993224 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17c371f7-f032-4444-8d4b-1183a224c7b0-combined-ca-bundle\") pod \"horizon-5fdc8588b4-jtjr8\" (UID: \"17c371f7-f032-4444-8d4b-1183a224c7b0\") " pod="openstack/horizon-5fdc8588b4-jtjr8" Feb 03 12:30:23 crc kubenswrapper[4820]: I0203 12:30:23.041067 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-68b4df5bdd-tdb9h"] Feb 03 12:30:23 crc kubenswrapper[4820]: I0203 12:30:23.043663 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-68b4df5bdd-tdb9h" Feb 03 12:30:23 crc kubenswrapper[4820]: I0203 12:30:23.052081 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-td9pm\" (UniqueName: \"kubernetes.io/projected/17c371f7-f032-4444-8d4b-1183a224c7b0-kube-api-access-td9pm\") pod \"horizon-5fdc8588b4-jtjr8\" (UID: \"17c371f7-f032-4444-8d4b-1183a224c7b0\") " pod="openstack/horizon-5fdc8588b4-jtjr8" Feb 03 12:30:23 crc kubenswrapper[4820]: I0203 12:30:23.062447 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-68b4df5bdd-tdb9h"] Feb 03 12:30:23 crc kubenswrapper[4820]: I0203 12:30:23.078908 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Feb 03 12:30:23 crc kubenswrapper[4820]: I0203 12:30:23.126464 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5fdc8588b4-jtjr8" Feb 03 12:30:23 crc kubenswrapper[4820]: I0203 12:30:23.163575 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/308562dd-6078-4c1c-a4e0-c01a60a2d81d-horizon-tls-certs\") pod \"horizon-68b4df5bdd-tdb9h\" (UID: \"308562dd-6078-4c1c-a4e0-c01a60a2d81d\") " pod="openstack/horizon-68b4df5bdd-tdb9h" Feb 03 12:30:23 crc kubenswrapper[4820]: I0203 12:30:23.163672 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/308562dd-6078-4c1c-a4e0-c01a60a2d81d-combined-ca-bundle\") pod \"horizon-68b4df5bdd-tdb9h\" (UID: \"308562dd-6078-4c1c-a4e0-c01a60a2d81d\") " pod="openstack/horizon-68b4df5bdd-tdb9h" Feb 03 12:30:23 crc kubenswrapper[4820]: I0203 12:30:23.163821 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-45m28\" (UniqueName: \"kubernetes.io/projected/308562dd-6078-4c1c-a4e0-c01a60a2d81d-kube-api-access-45m28\") pod \"horizon-68b4df5bdd-tdb9h\" (UID: \"308562dd-6078-4c1c-a4e0-c01a60a2d81d\") " pod="openstack/horizon-68b4df5bdd-tdb9h" Feb 03 12:30:23 crc kubenswrapper[4820]: I0203 12:30:23.163873 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/308562dd-6078-4c1c-a4e0-c01a60a2d81d-scripts\") pod \"horizon-68b4df5bdd-tdb9h\" (UID: \"308562dd-6078-4c1c-a4e0-c01a60a2d81d\") " pod="openstack/horizon-68b4df5bdd-tdb9h" Feb 03 12:30:23 crc kubenswrapper[4820]: I0203 12:30:23.164196 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/308562dd-6078-4c1c-a4e0-c01a60a2d81d-horizon-secret-key\") pod \"horizon-68b4df5bdd-tdb9h\" (UID: \"308562dd-6078-4c1c-a4e0-c01a60a2d81d\") " pod="openstack/horizon-68b4df5bdd-tdb9h" Feb 03 12:30:23 crc kubenswrapper[4820]: I0203 12:30:23.164325 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/308562dd-6078-4c1c-a4e0-c01a60a2d81d-config-data\") pod \"horizon-68b4df5bdd-tdb9h\" (UID: \"308562dd-6078-4c1c-a4e0-c01a60a2d81d\") " pod="openstack/horizon-68b4df5bdd-tdb9h" Feb 03 12:30:23 crc kubenswrapper[4820]: I0203 12:30:23.164402 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/308562dd-6078-4c1c-a4e0-c01a60a2d81d-logs\") pod \"horizon-68b4df5bdd-tdb9h\" (UID: \"308562dd-6078-4c1c-a4e0-c01a60a2d81d\") " pod="openstack/horizon-68b4df5bdd-tdb9h" Feb 03 12:30:23 crc kubenswrapper[4820]: I0203 12:30:23.266362 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/308562dd-6078-4c1c-a4e0-c01a60a2d81d-horizon-tls-certs\") pod \"horizon-68b4df5bdd-tdb9h\" (UID: \"308562dd-6078-4c1c-a4e0-c01a60a2d81d\") " pod="openstack/horizon-68b4df5bdd-tdb9h" Feb 03 12:30:23 crc kubenswrapper[4820]: I0203 12:30:23.269012 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/308562dd-6078-4c1c-a4e0-c01a60a2d81d-combined-ca-bundle\") pod \"horizon-68b4df5bdd-tdb9h\" (UID: \"308562dd-6078-4c1c-a4e0-c01a60a2d81d\") " pod="openstack/horizon-68b4df5bdd-tdb9h" Feb 03 12:30:23 crc kubenswrapper[4820]: I0203 12:30:23.269499 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-45m28\" (UniqueName: \"kubernetes.io/projected/308562dd-6078-4c1c-a4e0-c01a60a2d81d-kube-api-access-45m28\") pod \"horizon-68b4df5bdd-tdb9h\" (UID: \"308562dd-6078-4c1c-a4e0-c01a60a2d81d\") " pod="openstack/horizon-68b4df5bdd-tdb9h" Feb 03 12:30:23 crc kubenswrapper[4820]: I0203 12:30:23.270268 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/308562dd-6078-4c1c-a4e0-c01a60a2d81d-scripts\") pod \"horizon-68b4df5bdd-tdb9h\" (UID: \"308562dd-6078-4c1c-a4e0-c01a60a2d81d\") " pod="openstack/horizon-68b4df5bdd-tdb9h" Feb 03 12:30:23 crc kubenswrapper[4820]: I0203 12:30:23.275576 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/308562dd-6078-4c1c-a4e0-c01a60a2d81d-scripts\") pod \"horizon-68b4df5bdd-tdb9h\" (UID: \"308562dd-6078-4c1c-a4e0-c01a60a2d81d\") " pod="openstack/horizon-68b4df5bdd-tdb9h" Feb 03 12:30:23 crc kubenswrapper[4820]: I0203 12:30:23.266525 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"] Feb 03 12:30:23 crc kubenswrapper[4820]: I0203 12:30:23.282257 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/308562dd-6078-4c1c-a4e0-c01a60a2d81d-horizon-secret-key\") pod \"horizon-68b4df5bdd-tdb9h\" (UID: \"308562dd-6078-4c1c-a4e0-c01a60a2d81d\") " pod="openstack/horizon-68b4df5bdd-tdb9h" Feb 03 12:30:23 crc kubenswrapper[4820]: I0203 12:30:23.282578 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/308562dd-6078-4c1c-a4e0-c01a60a2d81d-config-data\") pod \"horizon-68b4df5bdd-tdb9h\" (UID: \"308562dd-6078-4c1c-a4e0-c01a60a2d81d\") " pod="openstack/horizon-68b4df5bdd-tdb9h" Feb 03 12:30:23 crc kubenswrapper[4820]: I0203 12:30:23.283170 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/308562dd-6078-4c1c-a4e0-c01a60a2d81d-logs\") pod \"horizon-68b4df5bdd-tdb9h\" (UID: \"308562dd-6078-4c1c-a4e0-c01a60a2d81d\") " pod="openstack/horizon-68b4df5bdd-tdb9h" Feb 03 12:30:23 crc kubenswrapper[4820]: I0203 12:30:23.284042 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/308562dd-6078-4c1c-a4e0-c01a60a2d81d-logs\") pod \"horizon-68b4df5bdd-tdb9h\" (UID: \"308562dd-6078-4c1c-a4e0-c01a60a2d81d\") " pod="openstack/horizon-68b4df5bdd-tdb9h" Feb 03 12:30:23 crc kubenswrapper[4820]: I0203 12:30:23.292678 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/308562dd-6078-4c1c-a4e0-c01a60a2d81d-config-data\") pod \"horizon-68b4df5bdd-tdb9h\" (UID: \"308562dd-6078-4c1c-a4e0-c01a60a2d81d\") " pod="openstack/horizon-68b4df5bdd-tdb9h" Feb 03 12:30:23 crc kubenswrapper[4820]: I0203 12:30:23.303257 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/308562dd-6078-4c1c-a4e0-c01a60a2d81d-combined-ca-bundle\") pod \"horizon-68b4df5bdd-tdb9h\" (UID: \"308562dd-6078-4c1c-a4e0-c01a60a2d81d\") " pod="openstack/horizon-68b4df5bdd-tdb9h" Feb 03 12:30:23 crc kubenswrapper[4820]: I0203 12:30:23.310602 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/308562dd-6078-4c1c-a4e0-c01a60a2d81d-horizon-secret-key\") pod \"horizon-68b4df5bdd-tdb9h\" (UID: \"308562dd-6078-4c1c-a4e0-c01a60a2d81d\") " pod="openstack/horizon-68b4df5bdd-tdb9h" Feb 03 12:30:23 crc kubenswrapper[4820]: I0203 12:30:23.328350 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/308562dd-6078-4c1c-a4e0-c01a60a2d81d-horizon-tls-certs\") pod \"horizon-68b4df5bdd-tdb9h\" (UID: \"308562dd-6078-4c1c-a4e0-c01a60a2d81d\") " pod="openstack/horizon-68b4df5bdd-tdb9h" Feb 03 12:30:23 crc kubenswrapper[4820]: I0203 12:30:23.330175 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-45m28\" (UniqueName: \"kubernetes.io/projected/308562dd-6078-4c1c-a4e0-c01a60a2d81d-kube-api-access-45m28\") pod \"horizon-68b4df5bdd-tdb9h\" (UID: \"308562dd-6078-4c1c-a4e0-c01a60a2d81d\") " pod="openstack/horizon-68b4df5bdd-tdb9h" Feb 03 12:30:23 crc kubenswrapper[4820]: I0203 12:30:23.620544 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-68b4df5bdd-tdb9h" Feb 03 12:30:23 crc kubenswrapper[4820]: I0203 12:30:23.720057 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-applier-0"] Feb 03 12:30:23 crc kubenswrapper[4820]: W0203 12:30:23.780195 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6ed16a73_0e39_4ac4_bd01_820e6a7a45b0.slice/crio-ece86826671abd091aff63c7aa8a3f5c3c033d8d6ce5355eefe51a8a30e39687 WatchSource:0}: Error finding container ece86826671abd091aff63c7aa8a3f5c3c033d8d6ce5355eefe51a8a30e39687: Status 404 returned error can't find the container with id ece86826671abd091aff63c7aa8a3f5c3c033d8d6ce5355eefe51a8a30e39687 Feb 03 12:30:24 crc kubenswrapper[4820]: I0203 12:30:24.215975 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Feb 03 12:30:24 crc kubenswrapper[4820]: I0203 12:30:24.218201 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Feb 03 12:30:24 crc kubenswrapper[4820]: I0203 12:30:24.312274 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Feb 03 12:30:25 crc kubenswrapper[4820]: I0203 12:30:25.245446 4820 generic.go:334] "Generic (PLEG): container finished" podID="1c5159f2-a29f-4730-8198-8a866761947c" containerID="17f21a70456dca6a0b5ba66bee4e9b444cc18c640de3e1dabd95408e79fa067e" exitCode=143 Feb 03 12:30:25 crc kubenswrapper[4820]: I0203 12:30:25.245850 4820 generic.go:334] "Generic (PLEG): container finished" podID="1c5159f2-a29f-4730-8198-8a866761947c" containerID="3cecb76232c927bf31e46cf5118354f1de5488fdb2715dba4177c3b82302abb0" exitCode=143 Feb 03 12:30:25 crc kubenswrapper[4820]: I0203 12:30:25.281489 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1c5159f2-a29f-4730-8198-8a866761947c","Type":"ContainerDied","Data":"17f21a70456dca6a0b5ba66bee4e9b444cc18c640de3e1dabd95408e79fa067e"} Feb 03 12:30:25 crc kubenswrapper[4820]: I0203 12:30:25.298868 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1c5159f2-a29f-4730-8198-8a866761947c","Type":"ContainerDied","Data":"3cecb76232c927bf31e46cf5118354f1de5488fdb2715dba4177c3b82302abb0"} Feb 03 12:30:25 crc kubenswrapper[4820]: I0203 12:30:25.299333 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"6ed16a73-0e39-4ac4-bd01-820e6a7a45b0","Type":"ContainerStarted","Data":"ece86826671abd091aff63c7aa8a3f5c3c033d8d6ce5355eefe51a8a30e39687"} Feb 03 12:30:25 crc kubenswrapper[4820]: I0203 12:30:25.299436 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5fdc8588b4-jtjr8"] Feb 03 12:30:25 crc kubenswrapper[4820]: I0203 12:30:25.307855 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"38fcd454-4d58-42a9-beb1-d8640ab7a9a7","Type":"ContainerStarted","Data":"c674587267136284603eaf6c68dd4cefb215b289c038866ed6b4e1f2b5173adf"} Feb 03 12:30:25 crc kubenswrapper[4820]: I0203 12:30:25.337485 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"cd46da3e-bb82-4990-8d29-03f53c601f36","Type":"ContainerStarted","Data":"34f192d25c3849d5570abba516d21ab74bfe6f71f38d2b880eed43669213e3e1"} Feb 03 12:30:25 crc 
kubenswrapper[4820]: I0203 12:30:25.371246 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Feb 03 12:30:25 crc kubenswrapper[4820]: W0203 12:30:25.395001 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod17c371f7_f032_4444_8d4b_1183a224c7b0.slice/crio-151bf3ee50bda7e46e0b38adbbd029a641e063485a5895b000d842c8672d576d WatchSource:0}: Error finding container 151bf3ee50bda7e46e0b38adbbd029a641e063485a5895b000d842c8672d576d: Status 404 returned error can't find the container with id 151bf3ee50bda7e46e0b38adbbd029a641e063485a5895b000d842c8672d576d Feb 03 12:30:25 crc kubenswrapper[4820]: I0203 12:30:25.513490 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 03 12:30:25 crc kubenswrapper[4820]: I0203 12:30:25.579919 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-68b4df5bdd-tdb9h"] Feb 03 12:30:25 crc kubenswrapper[4820]: I0203 12:30:25.672671 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fp2w6\" (UniqueName: \"kubernetes.io/projected/e42f59b9-8816-4e43-869b-1c6cd36b4034-kube-api-access-fp2w6\") pod \"e42f59b9-8816-4e43-869b-1c6cd36b4034\" (UID: \"e42f59b9-8816-4e43-869b-1c6cd36b4034\") " Feb 03 12:30:25 crc kubenswrapper[4820]: I0203 12:30:25.672783 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e42f59b9-8816-4e43-869b-1c6cd36b4034-logs\") pod \"e42f59b9-8816-4e43-869b-1c6cd36b4034\" (UID: \"e42f59b9-8816-4e43-869b-1c6cd36b4034\") " Feb 03 12:30:25 crc kubenswrapper[4820]: I0203 12:30:25.672865 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e42f59b9-8816-4e43-869b-1c6cd36b4034-combined-ca-bundle\") pod \"e42f59b9-8816-4e43-869b-1c6cd36b4034\" (UID: \"e42f59b9-8816-4e43-869b-1c6cd36b4034\") " Feb 03 12:30:25 crc kubenswrapper[4820]: I0203 12:30:25.672966 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e42f59b9-8816-4e43-869b-1c6cd36b4034-scripts\") pod \"e42f59b9-8816-4e43-869b-1c6cd36b4034\" (UID: \"e42f59b9-8816-4e43-869b-1c6cd36b4034\") " Feb 03 12:30:25 crc kubenswrapper[4820]: I0203 12:30:25.673007 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e42f59b9-8816-4e43-869b-1c6cd36b4034-httpd-run\") pod \"e42f59b9-8816-4e43-869b-1c6cd36b4034\" (UID: \"e42f59b9-8816-4e43-869b-1c6cd36b4034\") " Feb 03 12:30:25 crc kubenswrapper[4820]: I0203 12:30:25.673068 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"e42f59b9-8816-4e43-869b-1c6cd36b4034\" (UID: \"e42f59b9-8816-4e43-869b-1c6cd36b4034\") " Feb 03 12:30:25 crc kubenswrapper[4820]: I0203 12:30:25.673143 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e42f59b9-8816-4e43-869b-1c6cd36b4034-config-data\") pod \"e42f59b9-8816-4e43-869b-1c6cd36b4034\" (UID: \"e42f59b9-8816-4e43-869b-1c6cd36b4034\") " Feb 03 12:30:25 crc kubenswrapper[4820]: I0203 12:30:25.673184 4820 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e42f59b9-8816-4e43-869b-1c6cd36b4034-internal-tls-certs\") pod \"e42f59b9-8816-4e43-869b-1c6cd36b4034\" (UID: \"e42f59b9-8816-4e43-869b-1c6cd36b4034\") " Feb 03 12:30:25 crc kubenswrapper[4820]: I0203 12:30:25.699069 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e42f59b9-8816-4e43-869b-1c6cd36b4034-logs" (OuterVolumeSpecName: "logs") pod "e42f59b9-8816-4e43-869b-1c6cd36b4034" (UID: "e42f59b9-8816-4e43-869b-1c6cd36b4034"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 12:30:25 crc kubenswrapper[4820]: I0203 12:30:25.699101 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e42f59b9-8816-4e43-869b-1c6cd36b4034-scripts" (OuterVolumeSpecName: "scripts") pod "e42f59b9-8816-4e43-869b-1c6cd36b4034" (UID: "e42f59b9-8816-4e43-869b-1c6cd36b4034"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:30:25 crc kubenswrapper[4820]: I0203 12:30:25.699473 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e42f59b9-8816-4e43-869b-1c6cd36b4034-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "e42f59b9-8816-4e43-869b-1c6cd36b4034" (UID: "e42f59b9-8816-4e43-869b-1c6cd36b4034"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 12:30:25 crc kubenswrapper[4820]: I0203 12:30:25.761842 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "glance") pod "e42f59b9-8816-4e43-869b-1c6cd36b4034" (UID: "e42f59b9-8816-4e43-869b-1c6cd36b4034"). InnerVolumeSpecName "local-storage03-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 03 12:30:25 crc kubenswrapper[4820]: I0203 12:30:25.762494 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e42f59b9-8816-4e43-869b-1c6cd36b4034-kube-api-access-fp2w6" (OuterVolumeSpecName: "kube-api-access-fp2w6") pod "e42f59b9-8816-4e43-869b-1c6cd36b4034" (UID: "e42f59b9-8816-4e43-869b-1c6cd36b4034"). InnerVolumeSpecName "kube-api-access-fp2w6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:30:25 crc kubenswrapper[4820]: I0203 12:30:25.786120 4820 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e42f59b9-8816-4e43-869b-1c6cd36b4034-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 12:30:25 crc kubenswrapper[4820]: I0203 12:30:25.786194 4820 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e42f59b9-8816-4e43-869b-1c6cd36b4034-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 03 12:30:25 crc kubenswrapper[4820]: I0203 12:30:25.786265 4820 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" " Feb 03 12:30:25 crc kubenswrapper[4820]: I0203 12:30:25.786282 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fp2w6\" (UniqueName: \"kubernetes.io/projected/e42f59b9-8816-4e43-869b-1c6cd36b4034-kube-api-access-fp2w6\") on node \"crc\" DevicePath \"\"" Feb 03 12:30:25 crc kubenswrapper[4820]: I0203 12:30:25.786301 4820 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e42f59b9-8816-4e43-869b-1c6cd36b4034-logs\") on node \"crc\" DevicePath \"\"" Feb 03 12:30:25 crc kubenswrapper[4820]: I0203 12:30:25.850215 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e42f59b9-8816-4e43-869b-1c6cd36b4034-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "e42f59b9-8816-4e43-869b-1c6cd36b4034" (UID: "e42f59b9-8816-4e43-869b-1c6cd36b4034"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:30:25 crc kubenswrapper[4820]: I0203 12:30:25.872496 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e42f59b9-8816-4e43-869b-1c6cd36b4034-config-data" (OuterVolumeSpecName: "config-data") pod "e42f59b9-8816-4e43-869b-1c6cd36b4034" (UID: "e42f59b9-8816-4e43-869b-1c6cd36b4034"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:30:25 crc kubenswrapper[4820]: I0203 12:30:25.888978 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e42f59b9-8816-4e43-869b-1c6cd36b4034-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 12:30:25 crc kubenswrapper[4820]: I0203 12:30:25.889274 4820 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e42f59b9-8816-4e43-869b-1c6cd36b4034-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 03 12:30:25 crc kubenswrapper[4820]: I0203 12:30:25.893440 4820 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc" Feb 03 12:30:25 crc kubenswrapper[4820]: I0203 12:30:25.900487 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e42f59b9-8816-4e43-869b-1c6cd36b4034-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e42f59b9-8816-4e43-869b-1c6cd36b4034" (UID: "e42f59b9-8816-4e43-869b-1c6cd36b4034"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:30:25 crc kubenswrapper[4820]: I0203 12:30:25.992374 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e42f59b9-8816-4e43-869b-1c6cd36b4034-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 12:30:25 crc kubenswrapper[4820]: I0203 12:30:25.992533 4820 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\"" Feb 03 12:30:26 crc kubenswrapper[4820]: I0203 12:30:26.000750 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 03 12:30:26 crc kubenswrapper[4820]: I0203 12:30:26.098649 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-czrqz\" (UniqueName: \"kubernetes.io/projected/1c5159f2-a29f-4730-8198-8a866761947c-kube-api-access-czrqz\") pod \"1c5159f2-a29f-4730-8198-8a866761947c\" (UID: \"1c5159f2-a29f-4730-8198-8a866761947c\") " Feb 03 12:30:26 crc kubenswrapper[4820]: I0203 12:30:26.098827 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"1c5159f2-a29f-4730-8198-8a866761947c\" (UID: \"1c5159f2-a29f-4730-8198-8a866761947c\") " Feb 03 12:30:26 crc kubenswrapper[4820]: I0203 12:30:26.098885 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1c5159f2-a29f-4730-8198-8a866761947c-httpd-run\") pod \"1c5159f2-a29f-4730-8198-8a866761947c\" (UID: \"1c5159f2-a29f-4730-8198-8a866761947c\") " Feb 03 12:30:26 crc kubenswrapper[4820]: I0203 12:30:26.098938 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1c5159f2-a29f-4730-8198-8a866761947c-public-tls-certs\") pod \"1c5159f2-a29f-4730-8198-8a866761947c\" (UID: \"1c5159f2-a29f-4730-8198-8a866761947c\") " Feb 03 12:30:26 crc kubenswrapper[4820]: I0203 12:30:26.098978 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c5159f2-a29f-4730-8198-8a866761947c-config-data\") pod \"1c5159f2-a29f-4730-8198-8a866761947c\" (UID: \"1c5159f2-a29f-4730-8198-8a866761947c\") " Feb 03 12:30:26 crc kubenswrapper[4820]: I0203 12:30:26.099029 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1c5159f2-a29f-4730-8198-8a866761947c-scripts\") pod \"1c5159f2-a29f-4730-8198-8a866761947c\" (UID: \"1c5159f2-a29f-4730-8198-8a866761947c\") " Feb 03 12:30:26 crc kubenswrapper[4820]: I0203 12:30:26.099119 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1c5159f2-a29f-4730-8198-8a866761947c-logs\") pod \"1c5159f2-a29f-4730-8198-8a866761947c\" (UID: \"1c5159f2-a29f-4730-8198-8a866761947c\") " Feb 03 12:30:26 crc kubenswrapper[4820]: I0203 12:30:26.099156 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c5159f2-a29f-4730-8198-8a866761947c-combined-ca-bundle\") pod \"1c5159f2-a29f-4730-8198-8a866761947c\" (UID: \"1c5159f2-a29f-4730-8198-8a866761947c\") " Feb 03 12:30:26 crc 
kubenswrapper[4820]: I0203 12:30:26.138209 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1c5159f2-a29f-4730-8198-8a866761947c-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "1c5159f2-a29f-4730-8198-8a866761947c" (UID: "1c5159f2-a29f-4730-8198-8a866761947c"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 12:30:26 crc kubenswrapper[4820]: I0203 12:30:26.192179 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "glance") pod "1c5159f2-a29f-4730-8198-8a866761947c" (UID: "1c5159f2-a29f-4730-8198-8a866761947c"). InnerVolumeSpecName "local-storage08-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 03 12:30:26 crc kubenswrapper[4820]: I0203 12:30:26.199654 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c5159f2-a29f-4730-8198-8a866761947c-kube-api-access-czrqz" (OuterVolumeSpecName: "kube-api-access-czrqz") pod "1c5159f2-a29f-4730-8198-8a866761947c" (UID: "1c5159f2-a29f-4730-8198-8a866761947c"). InnerVolumeSpecName "kube-api-access-czrqz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:30:26 crc kubenswrapper[4820]: I0203 12:30:26.202864 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-czrqz\" (UniqueName: \"kubernetes.io/projected/1c5159f2-a29f-4730-8198-8a866761947c-kube-api-access-czrqz\") on node \"crc\" DevicePath \"\"" Feb 03 12:30:26 crc kubenswrapper[4820]: I0203 12:30:26.203165 4820 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Feb 03 12:30:26 crc kubenswrapper[4820]: I0203 12:30:26.203262 4820 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1c5159f2-a29f-4730-8198-8a866761947c-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 03 12:30:26 crc kubenswrapper[4820]: I0203 12:30:26.202858 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1c5159f2-a29f-4730-8198-8a866761947c-logs" (OuterVolumeSpecName: "logs") pod "1c5159f2-a29f-4730-8198-8a866761947c" (UID: "1c5159f2-a29f-4730-8198-8a866761947c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 12:30:26 crc kubenswrapper[4820]: I0203 12:30:26.214840 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c5159f2-a29f-4730-8198-8a866761947c-scripts" (OuterVolumeSpecName: "scripts") pod "1c5159f2-a29f-4730-8198-8a866761947c" (UID: "1c5159f2-a29f-4730-8198-8a866761947c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:30:26 crc kubenswrapper[4820]: I0203 12:30:26.214977 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c5159f2-a29f-4730-8198-8a866761947c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1c5159f2-a29f-4730-8198-8a866761947c" (UID: "1c5159f2-a29f-4730-8198-8a866761947c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:30:26 crc kubenswrapper[4820]: I0203 12:30:26.252496 4820 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Feb 03 12:30:26 crc kubenswrapper[4820]: I0203 12:30:26.307579 4820 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1c5159f2-a29f-4730-8198-8a866761947c-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 12:30:26 crc kubenswrapper[4820]: I0203 12:30:26.307632 4820 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1c5159f2-a29f-4730-8198-8a866761947c-logs\") on node \"crc\" DevicePath \"\"" Feb 03 12:30:26 crc kubenswrapper[4820]: I0203 12:30:26.307647 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c5159f2-a29f-4730-8198-8a866761947c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 12:30:26 crc kubenswrapper[4820]: I0203 12:30:26.307661 4820 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Feb 03 12:30:26 crc kubenswrapper[4820]: I0203 12:30:26.347109 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c5159f2-a29f-4730-8198-8a866761947c-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "1c5159f2-a29f-4730-8198-8a866761947c" (UID: "1c5159f2-a29f-4730-8198-8a866761947c"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:30:26 crc kubenswrapper[4820]: I0203 12:30:26.357065 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c5159f2-a29f-4730-8198-8a866761947c-config-data" (OuterVolumeSpecName: "config-data") pod "1c5159f2-a29f-4730-8198-8a866761947c" (UID: "1c5159f2-a29f-4730-8198-8a866761947c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:30:26 crc kubenswrapper[4820]: I0203 12:30:26.362678 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5fdc8588b4-jtjr8" event={"ID":"17c371f7-f032-4444-8d4b-1183a224c7b0","Type":"ContainerStarted","Data":"151bf3ee50bda7e46e0b38adbbd029a641e063485a5895b000d842c8672d576d"} Feb 03 12:30:26 crc kubenswrapper[4820]: I0203 12:30:26.375413 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-68b4df5bdd-tdb9h" event={"ID":"308562dd-6078-4c1c-a4e0-c01a60a2d81d","Type":"ContainerStarted","Data":"418363ceb89e48595e7308e8498cce7faacffc8dc6a82c040532af5a73e7f069"} Feb 03 12:30:26 crc kubenswrapper[4820]: I0203 12:30:26.380594 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"e42f59b9-8816-4e43-869b-1c6cd36b4034","Type":"ContainerDied","Data":"353cf6b92bb5a5c5a77413dfdb49f2e679a6ea11e7cb17adc1a68cf09ac13839"} Feb 03 12:30:26 crc kubenswrapper[4820]: I0203 12:30:26.380659 4820 scope.go:117] "RemoveContainer" containerID="d0155325ce5a11833035e52807754667542050292f95bfb8d86eeb0c39ac1451" Feb 03 12:30:26 crc kubenswrapper[4820]: I0203 12:30:26.380796 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 03 12:30:26 crc kubenswrapper[4820]: I0203 12:30:26.387496 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"38fcd454-4d58-42a9-beb1-d8640ab7a9a7","Type":"ContainerStarted","Data":"787a1b81b7bb69681d5b9957147401714226f456d02afe437024d5c4f7a745b3"} Feb 03 12:30:26 crc kubenswrapper[4820]: I0203 12:30:26.397035 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 03 12:30:26 crc kubenswrapper[4820]: I0203 12:30:26.397176 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1c5159f2-a29f-4730-8198-8a866761947c","Type":"ContainerDied","Data":"94df5edb9b14d259fdcd9f75c243a760bb2d6db14bc4a73a9976025723a3e5cd"} Feb 03 12:30:26 crc kubenswrapper[4820]: I0203 12:30:26.409924 4820 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1c5159f2-a29f-4730-8198-8a866761947c-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 03 12:30:26 crc kubenswrapper[4820]: I0203 12:30:26.409954 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c5159f2-a29f-4730-8198-8a866761947c-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 12:30:26 crc kubenswrapper[4820]: I0203 12:30:26.494826 4820 scope.go:117] "RemoveContainer" containerID="e60cf3d68962fbcc86c06d09f1fe6ddc05184f26d23b7d38591959cd6a7268a2" Feb 03 12:30:26 crc kubenswrapper[4820]: I0203 12:30:26.512485 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 03 12:30:26 crc kubenswrapper[4820]: I0203 12:30:26.532362 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 03 12:30:26 crc kubenswrapper[4820]: I0203 12:30:26.562993 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 03 12:30:26 crc kubenswrapper[4820]: E0203 12:30:26.563646 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e42f59b9-8816-4e43-869b-1c6cd36b4034" containerName="glance-log" Feb 03 12:30:26 crc kubenswrapper[4820]: I0203 12:30:26.563665 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="e42f59b9-8816-4e43-869b-1c6cd36b4034" containerName="glance-log" Feb 03 12:30:26 crc kubenswrapper[4820]: E0203 12:30:26.563690 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e42f59b9-8816-4e43-869b-1c6cd36b4034" containerName="glance-httpd" Feb 03 12:30:26 crc kubenswrapper[4820]: I0203 12:30:26.563698 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="e42f59b9-8816-4e43-869b-1c6cd36b4034" containerName="glance-httpd" Feb 03 12:30:26 crc kubenswrapper[4820]: E0203 12:30:26.563719 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c5159f2-a29f-4730-8198-8a866761947c" containerName="glance-httpd" Feb 03 12:30:26 crc kubenswrapper[4820]: I0203 12:30:26.563727 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c5159f2-a29f-4730-8198-8a866761947c" containerName="glance-httpd" Feb 03 12:30:26 crc kubenswrapper[4820]: E0203 12:30:26.563739 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c5159f2-a29f-4730-8198-8a866761947c" containerName="glance-log" Feb 03 12:30:26 crc kubenswrapper[4820]: I0203 12:30:26.563745 4820 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="1c5159f2-a29f-4730-8198-8a866761947c" containerName="glance-log" Feb 03 12:30:26 crc kubenswrapper[4820]: I0203 12:30:26.564047 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="e42f59b9-8816-4e43-869b-1c6cd36b4034" containerName="glance-log" Feb 03 12:30:26 crc kubenswrapper[4820]: I0203 12:30:26.564081 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="e42f59b9-8816-4e43-869b-1c6cd36b4034" containerName="glance-httpd" Feb 03 12:30:26 crc kubenswrapper[4820]: I0203 12:30:26.564101 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c5159f2-a29f-4730-8198-8a866761947c" containerName="glance-httpd" Feb 03 12:30:26 crc kubenswrapper[4820]: I0203 12:30:26.564116 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c5159f2-a29f-4730-8198-8a866761947c" containerName="glance-log" Feb 03 12:30:26 crc kubenswrapper[4820]: I0203 12:30:26.565761 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 03 12:30:26 crc kubenswrapper[4820]: I0203 12:30:26.576008 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Feb 03 12:30:26 crc kubenswrapper[4820]: I0203 12:30:26.576251 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 03 12:30:26 crc kubenswrapper[4820]: I0203 12:30:26.576511 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-rp7hq" Feb 03 12:30:26 crc kubenswrapper[4820]: I0203 12:30:26.584713 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 03 12:30:26 crc kubenswrapper[4820]: I0203 12:30:26.589093 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 03 12:30:26 crc kubenswrapper[4820]: I0203 12:30:26.607097 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 03 12:30:26 crc kubenswrapper[4820]: I0203 12:30:26.639153 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 03 12:30:26 crc kubenswrapper[4820]: I0203 12:30:26.689601 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 03 12:30:26 crc kubenswrapper[4820]: I0203 12:30:26.705135 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 03 12:30:26 crc kubenswrapper[4820]: I0203 12:30:26.714687 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 03 12:30:26 crc kubenswrapper[4820]: I0203 12:30:26.714761 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 03 12:30:26 crc kubenswrapper[4820]: I0203 12:30:26.722695 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/457bfab7-1523-4ef8-b7f1-a6d0d54351e4-config-data\") pod \"glance-default-internal-api-0\" (UID: \"457bfab7-1523-4ef8-b7f1-a6d0d54351e4\") " pod="openstack/glance-default-internal-api-0" Feb 03 12:30:26 crc kubenswrapper[4820]: I0203 12:30:26.722787 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/457bfab7-1523-4ef8-b7f1-a6d0d54351e4-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"457bfab7-1523-4ef8-b7f1-a6d0d54351e4\") " pod="openstack/glance-default-internal-api-0" Feb 03 12:30:26 crc kubenswrapper[4820]: I0203 12:30:26.722826 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/457bfab7-1523-4ef8-b7f1-a6d0d54351e4-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"457bfab7-1523-4ef8-b7f1-a6d0d54351e4\") " pod="openstack/glance-default-internal-api-0" Feb 03 12:30:26 crc kubenswrapper[4820]: I0203 12:30:26.726732 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"457bfab7-1523-4ef8-b7f1-a6d0d54351e4\") " pod="openstack/glance-default-internal-api-0" Feb 03 12:30:26 crc kubenswrapper[4820]: I0203 12:30:26.726824 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/457bfab7-1523-4ef8-b7f1-a6d0d54351e4-logs\") pod \"glance-default-internal-api-0\" (UID: \"457bfab7-1523-4ef8-b7f1-a6d0d54351e4\") " pod="openstack/glance-default-internal-api-0" Feb 03 12:30:26 crc kubenswrapper[4820]: I0203 12:30:26.726856 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/457bfab7-1523-4ef8-b7f1-a6d0d54351e4-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"457bfab7-1523-4ef8-b7f1-a6d0d54351e4\") " pod="openstack/glance-default-internal-api-0" Feb 03 12:30:26 crc kubenswrapper[4820]: I0203 12:30:26.726941 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/457bfab7-1523-4ef8-b7f1-a6d0d54351e4-scripts\") pod \"glance-default-internal-api-0\" (UID: \"457bfab7-1523-4ef8-b7f1-a6d0d54351e4\") " pod="openstack/glance-default-internal-api-0" Feb 03 12:30:26 crc kubenswrapper[4820]: I0203 12:30:26.727204 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 03 12:30:26 crc kubenswrapper[4820]: I0203 12:30:26.727247 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-q2nf9\" (UniqueName: \"kubernetes.io/projected/457bfab7-1523-4ef8-b7f1-a6d0d54351e4-kube-api-access-q2nf9\") pod \"glance-default-internal-api-0\" (UID: \"457bfab7-1523-4ef8-b7f1-a6d0d54351e4\") " pod="openstack/glance-default-internal-api-0" Feb 03 12:30:26 crc kubenswrapper[4820]: I0203 12:30:26.787777 4820 scope.go:117] "RemoveContainer" containerID="17f21a70456dca6a0b5ba66bee4e9b444cc18c640de3e1dabd95408e79fa067e" Feb 03 12:30:26 crc kubenswrapper[4820]: I0203 12:30:26.829670 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/edffb607-1bfe-4aa0-a39a-2f65dbd5077b-logs\") pod \"glance-default-external-api-0\" (UID: \"edffb607-1bfe-4aa0-a39a-2f65dbd5077b\") " pod="openstack/glance-default-external-api-0" Feb 03 12:30:26 crc kubenswrapper[4820]: I0203 12:30:26.829804 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"457bfab7-1523-4ef8-b7f1-a6d0d54351e4\") " pod="openstack/glance-default-internal-api-0" Feb 03 12:30:26 crc kubenswrapper[4820]: I0203 12:30:26.829843 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/457bfab7-1523-4ef8-b7f1-a6d0d54351e4-logs\") pod \"glance-default-internal-api-0\" (UID: \"457bfab7-1523-4ef8-b7f1-a6d0d54351e4\") " pod="openstack/glance-default-internal-api-0" Feb 03 12:30:26 crc kubenswrapper[4820]: I0203 12:30:26.829872 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/457bfab7-1523-4ef8-b7f1-a6d0d54351e4-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"457bfab7-1523-4ef8-b7f1-a6d0d54351e4\") " pod="openstack/glance-default-internal-api-0" Feb 03 12:30:26 crc kubenswrapper[4820]: I0203 12:30:26.829928 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/457bfab7-1523-4ef8-b7f1-a6d0d54351e4-scripts\") pod \"glance-default-internal-api-0\" (UID: \"457bfab7-1523-4ef8-b7f1-a6d0d54351e4\") " pod="openstack/glance-default-internal-api-0" Feb 03 12:30:26 crc kubenswrapper[4820]: I0203 12:30:26.829978 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/edffb607-1bfe-4aa0-a39a-2f65dbd5077b-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"edffb607-1bfe-4aa0-a39a-2f65dbd5077b\") " pod="openstack/glance-default-external-api-0" Feb 03 12:30:26 crc kubenswrapper[4820]: I0203 12:30:26.830046 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"edffb607-1bfe-4aa0-a39a-2f65dbd5077b\") " pod="openstack/glance-default-external-api-0" Feb 03 12:30:26 crc kubenswrapper[4820]: I0203 12:30:26.830075 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q2nf9\" (UniqueName: \"kubernetes.io/projected/457bfab7-1523-4ef8-b7f1-a6d0d54351e4-kube-api-access-q2nf9\") pod \"glance-default-internal-api-0\" (UID: \"457bfab7-1523-4ef8-b7f1-a6d0d54351e4\") " pod="openstack/glance-default-internal-api-0" Feb 03 12:30:26 crc 
kubenswrapper[4820]: I0203 12:30:26.830123 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/edffb607-1bfe-4aa0-a39a-2f65dbd5077b-config-data\") pod \"glance-default-external-api-0\" (UID: \"edffb607-1bfe-4aa0-a39a-2f65dbd5077b\") " pod="openstack/glance-default-external-api-0" Feb 03 12:30:26 crc kubenswrapper[4820]: I0203 12:30:26.830163 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/edffb607-1bfe-4aa0-a39a-2f65dbd5077b-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"edffb607-1bfe-4aa0-a39a-2f65dbd5077b\") " pod="openstack/glance-default-external-api-0" Feb 03 12:30:26 crc kubenswrapper[4820]: I0203 12:30:26.830185 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edffb607-1bfe-4aa0-a39a-2f65dbd5077b-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"edffb607-1bfe-4aa0-a39a-2f65dbd5077b\") " pod="openstack/glance-default-external-api-0" Feb 03 12:30:26 crc kubenswrapper[4820]: I0203 12:30:26.830213 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q4bf4\" (UniqueName: \"kubernetes.io/projected/edffb607-1bfe-4aa0-a39a-2f65dbd5077b-kube-api-access-q4bf4\") pod \"glance-default-external-api-0\" (UID: \"edffb607-1bfe-4aa0-a39a-2f65dbd5077b\") " pod="openstack/glance-default-external-api-0" Feb 03 12:30:26 crc kubenswrapper[4820]: I0203 12:30:26.830246 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/457bfab7-1523-4ef8-b7f1-a6d0d54351e4-config-data\") pod \"glance-default-internal-api-0\" (UID: \"457bfab7-1523-4ef8-b7f1-a6d0d54351e4\") " pod="openstack/glance-default-internal-api-0" Feb 03 12:30:26 crc kubenswrapper[4820]: I0203 12:30:26.830271 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/edffb607-1bfe-4aa0-a39a-2f65dbd5077b-scripts\") pod \"glance-default-external-api-0\" (UID: \"edffb607-1bfe-4aa0-a39a-2f65dbd5077b\") " pod="openstack/glance-default-external-api-0" Feb 03 12:30:26 crc kubenswrapper[4820]: I0203 12:30:26.830291 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/457bfab7-1523-4ef8-b7f1-a6d0d54351e4-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"457bfab7-1523-4ef8-b7f1-a6d0d54351e4\") " pod="openstack/glance-default-internal-api-0" Feb 03 12:30:26 crc kubenswrapper[4820]: I0203 12:30:26.830312 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/457bfab7-1523-4ef8-b7f1-a6d0d54351e4-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"457bfab7-1523-4ef8-b7f1-a6d0d54351e4\") " pod="openstack/glance-default-internal-api-0" Feb 03 12:30:26 crc kubenswrapper[4820]: I0203 12:30:26.832278 4820 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"457bfab7-1523-4ef8-b7f1-a6d0d54351e4\") device mount path \"/mnt/openstack/pv03\"" 
pod="openstack/glance-default-internal-api-0" Feb 03 12:30:26 crc kubenswrapper[4820]: I0203 12:30:26.835240 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/457bfab7-1523-4ef8-b7f1-a6d0d54351e4-logs\") pod \"glance-default-internal-api-0\" (UID: \"457bfab7-1523-4ef8-b7f1-a6d0d54351e4\") " pod="openstack/glance-default-internal-api-0" Feb 03 12:30:26 crc kubenswrapper[4820]: I0203 12:30:26.838459 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/457bfab7-1523-4ef8-b7f1-a6d0d54351e4-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"457bfab7-1523-4ef8-b7f1-a6d0d54351e4\") " pod="openstack/glance-default-internal-api-0" Feb 03 12:30:26 crc kubenswrapper[4820]: I0203 12:30:26.841731 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/457bfab7-1523-4ef8-b7f1-a6d0d54351e4-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"457bfab7-1523-4ef8-b7f1-a6d0d54351e4\") " pod="openstack/glance-default-internal-api-0" Feb 03 12:30:26 crc kubenswrapper[4820]: I0203 12:30:26.841831 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/457bfab7-1523-4ef8-b7f1-a6d0d54351e4-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"457bfab7-1523-4ef8-b7f1-a6d0d54351e4\") " pod="openstack/glance-default-internal-api-0" Feb 03 12:30:26 crc kubenswrapper[4820]: I0203 12:30:26.845655 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/457bfab7-1523-4ef8-b7f1-a6d0d54351e4-scripts\") pod \"glance-default-internal-api-0\" (UID: \"457bfab7-1523-4ef8-b7f1-a6d0d54351e4\") " pod="openstack/glance-default-internal-api-0" Feb 03 12:30:26 crc kubenswrapper[4820]: I0203 12:30:26.846342 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/457bfab7-1523-4ef8-b7f1-a6d0d54351e4-config-data\") pod \"glance-default-internal-api-0\" (UID: \"457bfab7-1523-4ef8-b7f1-a6d0d54351e4\") " pod="openstack/glance-default-internal-api-0" Feb 03 12:30:26 crc kubenswrapper[4820]: I0203 12:30:26.864132 4820 scope.go:117] "RemoveContainer" containerID="3cecb76232c927bf31e46cf5118354f1de5488fdb2715dba4177c3b82302abb0" Feb 03 12:30:26 crc kubenswrapper[4820]: I0203 12:30:26.867491 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q2nf9\" (UniqueName: \"kubernetes.io/projected/457bfab7-1523-4ef8-b7f1-a6d0d54351e4-kube-api-access-q2nf9\") pod \"glance-default-internal-api-0\" (UID: \"457bfab7-1523-4ef8-b7f1-a6d0d54351e4\") " pod="openstack/glance-default-internal-api-0" Feb 03 12:30:26 crc kubenswrapper[4820]: I0203 12:30:26.889206 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"457bfab7-1523-4ef8-b7f1-a6d0d54351e4\") " pod="openstack/glance-default-internal-api-0" Feb 03 12:30:26 crc kubenswrapper[4820]: I0203 12:30:26.939981 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/edffb607-1bfe-4aa0-a39a-2f65dbd5077b-logs\") pod \"glance-default-external-api-0\" (UID: 
\"edffb607-1bfe-4aa0-a39a-2f65dbd5077b\") " pod="openstack/glance-default-external-api-0" Feb 03 12:30:26 crc kubenswrapper[4820]: I0203 12:30:26.940658 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/edffb607-1bfe-4aa0-a39a-2f65dbd5077b-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"edffb607-1bfe-4aa0-a39a-2f65dbd5077b\") " pod="openstack/glance-default-external-api-0" Feb 03 12:30:26 crc kubenswrapper[4820]: I0203 12:30:26.940802 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"edffb607-1bfe-4aa0-a39a-2f65dbd5077b\") " pod="openstack/glance-default-external-api-0" Feb 03 12:30:26 crc kubenswrapper[4820]: I0203 12:30:26.940955 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/edffb607-1bfe-4aa0-a39a-2f65dbd5077b-config-data\") pod \"glance-default-external-api-0\" (UID: \"edffb607-1bfe-4aa0-a39a-2f65dbd5077b\") " pod="openstack/glance-default-external-api-0" Feb 03 12:30:26 crc kubenswrapper[4820]: I0203 12:30:26.941129 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/edffb607-1bfe-4aa0-a39a-2f65dbd5077b-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"edffb607-1bfe-4aa0-a39a-2f65dbd5077b\") " pod="openstack/glance-default-external-api-0" Feb 03 12:30:26 crc kubenswrapper[4820]: I0203 12:30:26.941247 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edffb607-1bfe-4aa0-a39a-2f65dbd5077b-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"edffb607-1bfe-4aa0-a39a-2f65dbd5077b\") " pod="openstack/glance-default-external-api-0" Feb 03 12:30:26 crc kubenswrapper[4820]: I0203 12:30:26.941385 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q4bf4\" (UniqueName: \"kubernetes.io/projected/edffb607-1bfe-4aa0-a39a-2f65dbd5077b-kube-api-access-q4bf4\") pod \"glance-default-external-api-0\" (UID: \"edffb607-1bfe-4aa0-a39a-2f65dbd5077b\") " pod="openstack/glance-default-external-api-0" Feb 03 12:30:26 crc kubenswrapper[4820]: I0203 12:30:26.941554 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/edffb607-1bfe-4aa0-a39a-2f65dbd5077b-scripts\") pod \"glance-default-external-api-0\" (UID: \"edffb607-1bfe-4aa0-a39a-2f65dbd5077b\") " pod="openstack/glance-default-external-api-0" Feb 03 12:30:26 crc kubenswrapper[4820]: I0203 12:30:26.942274 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/edffb607-1bfe-4aa0-a39a-2f65dbd5077b-logs\") pod \"glance-default-external-api-0\" (UID: \"edffb607-1bfe-4aa0-a39a-2f65dbd5077b\") " pod="openstack/glance-default-external-api-0" Feb 03 12:30:26 crc kubenswrapper[4820]: I0203 12:30:26.942804 4820 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"edffb607-1bfe-4aa0-a39a-2f65dbd5077b\") device mount path \"/mnt/openstack/pv08\"" 
pod="openstack/glance-default-external-api-0" Feb 03 12:30:26 crc kubenswrapper[4820]: I0203 12:30:26.949416 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/edffb607-1bfe-4aa0-a39a-2f65dbd5077b-scripts\") pod \"glance-default-external-api-0\" (UID: \"edffb607-1bfe-4aa0-a39a-2f65dbd5077b\") " pod="openstack/glance-default-external-api-0" Feb 03 12:30:26 crc kubenswrapper[4820]: I0203 12:30:26.949993 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/edffb607-1bfe-4aa0-a39a-2f65dbd5077b-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"edffb607-1bfe-4aa0-a39a-2f65dbd5077b\") " pod="openstack/glance-default-external-api-0" Feb 03 12:30:26 crc kubenswrapper[4820]: I0203 12:30:26.950853 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edffb607-1bfe-4aa0-a39a-2f65dbd5077b-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"edffb607-1bfe-4aa0-a39a-2f65dbd5077b\") " pod="openstack/glance-default-external-api-0" Feb 03 12:30:26 crc kubenswrapper[4820]: I0203 12:30:26.956205 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/edffb607-1bfe-4aa0-a39a-2f65dbd5077b-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"edffb607-1bfe-4aa0-a39a-2f65dbd5077b\") " pod="openstack/glance-default-external-api-0" Feb 03 12:30:26 crc kubenswrapper[4820]: I0203 12:30:26.970686 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q4bf4\" (UniqueName: \"kubernetes.io/projected/edffb607-1bfe-4aa0-a39a-2f65dbd5077b-kube-api-access-q4bf4\") pod \"glance-default-external-api-0\" (UID: \"edffb607-1bfe-4aa0-a39a-2f65dbd5077b\") " pod="openstack/glance-default-external-api-0" Feb 03 12:30:26 crc kubenswrapper[4820]: I0203 12:30:26.984742 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"edffb607-1bfe-4aa0-a39a-2f65dbd5077b\") " pod="openstack/glance-default-external-api-0" Feb 03 12:30:26 crc kubenswrapper[4820]: I0203 12:30:26.986409 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/edffb607-1bfe-4aa0-a39a-2f65dbd5077b-config-data\") pod \"glance-default-external-api-0\" (UID: \"edffb607-1bfe-4aa0-a39a-2f65dbd5077b\") " pod="openstack/glance-default-external-api-0" Feb 03 12:30:27 crc kubenswrapper[4820]: I0203 12:30:27.049670 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 03 12:30:27 crc kubenswrapper[4820]: I0203 12:30:27.208775 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1c5159f2-a29f-4730-8198-8a866761947c" path="/var/lib/kubelet/pods/1c5159f2-a29f-4730-8198-8a866761947c/volumes" Feb 03 12:30:27 crc kubenswrapper[4820]: I0203 12:30:27.209670 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e42f59b9-8816-4e43-869b-1c6cd36b4034" path="/var/lib/kubelet/pods/e42f59b9-8816-4e43-869b-1c6cd36b4034/volumes" Feb 03 12:30:27 crc kubenswrapper[4820]: I0203 12:30:27.223414 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 03 12:30:27 crc kubenswrapper[4820]: I0203 12:30:27.491700 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"38fcd454-4d58-42a9-beb1-d8640ab7a9a7","Type":"ContainerStarted","Data":"360a324625000b7a0475ad7525e796f30c7043aacaa84aa615bc8a7ff9641dd4"} Feb 03 12:30:27 crc kubenswrapper[4820]: I0203 12:30:27.493312 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Feb 03 12:30:27 crc kubenswrapper[4820]: I0203 12:30:27.543377 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-api-0" podStartSLOduration=7.543352276 podStartE2EDuration="7.543352276s" podCreationTimestamp="2026-02-03 12:30:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:30:27.521638099 +0000 UTC m=+1545.044713983" watchObservedRunningTime="2026-02-03 12:30:27.543352276 +0000 UTC m=+1545.066428150" Feb 03 12:30:27 crc kubenswrapper[4820]: I0203 12:30:27.792185 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-785d8bcb8c-d5vcp" Feb 03 12:30:27 crc kubenswrapper[4820]: I0203 12:30:27.886595 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-zbjrj"] Feb 03 12:30:27 crc kubenswrapper[4820]: I0203 12:30:27.886847 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-698758b865-zbjrj" podUID="60e67a4a-d840-4bc2-9f74-4a5fbb36a829" containerName="dnsmasq-dns" containerID="cri-o://698433907a92a5ca9104a622432656cc6950a420156cd457563a40aab7b4ca99" gracePeriod=10 Feb 03 12:30:28 crc kubenswrapper[4820]: I0203 12:30:28.192566 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 03 12:30:28 crc kubenswrapper[4820]: I0203 12:30:28.531250 4820 generic.go:334] "Generic (PLEG): container finished" podID="60e67a4a-d840-4bc2-9f74-4a5fbb36a829" containerID="698433907a92a5ca9104a622432656cc6950a420156cd457563a40aab7b4ca99" exitCode=0 Feb 03 12:30:28 crc kubenswrapper[4820]: I0203 12:30:28.531346 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-zbjrj" event={"ID":"60e67a4a-d840-4bc2-9f74-4a5fbb36a829","Type":"ContainerDied","Data":"698433907a92a5ca9104a622432656cc6950a420156cd457563a40aab7b4ca99"} Feb 03 12:30:28 crc kubenswrapper[4820]: I0203 12:30:28.538002 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 03 12:30:28 crc kubenswrapper[4820]: I0203 12:30:28.542576 4820 generic.go:334] "Generic (PLEG): container finished" podID="2c88a5c9-941c-40ef-a3c8-7ff304ea0517" containerID="a1ff861ad6ee50e7673d412707100050bf5dc95a1a5eef2f3c9d1d19ec15a594" exitCode=0 Feb 03 12:30:28 crc kubenswrapper[4820]: I0203 12:30:28.542856 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-gjsm4" event={"ID":"2c88a5c9-941c-40ef-a3c8-7ff304ea0517","Type":"ContainerDied","Data":"a1ff861ad6ee50e7673d412707100050bf5dc95a1a5eef2f3c9d1d19ec15a594"} Feb 03 12:30:29 crc kubenswrapper[4820]: W0203 12:30:29.533164 4820 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod457bfab7_1523_4ef8_b7f1_a6d0d54351e4.slice/crio-d45bdb4770b445ffc97f0ec96c8b9a5d9365d92d7fbbf9cea253be97efaec3ea WatchSource:0}: Error finding container d45bdb4770b445ffc97f0ec96c8b9a5d9365d92d7fbbf9cea253be97efaec3ea: Status 404 returned error can't find the container with id d45bdb4770b445ffc97f0ec96c8b9a5d9365d92d7fbbf9cea253be97efaec3ea Feb 03 12:30:29 crc kubenswrapper[4820]: I0203 12:30:29.573987 4820 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 03 12:30:29 crc kubenswrapper[4820]: I0203 12:30:29.574009 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"457bfab7-1523-4ef8-b7f1-a6d0d54351e4","Type":"ContainerStarted","Data":"d45bdb4770b445ffc97f0ec96c8b9a5d9365d92d7fbbf9cea253be97efaec3ea"} Feb 03 12:30:30 crc kubenswrapper[4820]: I0203 12:30:30.605477 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"edffb607-1bfe-4aa0-a39a-2f65dbd5077b","Type":"ContainerStarted","Data":"a761c4033520fcbb4f178b0c08839803c438a9ab1c9025509095c44e3052b6f2"} Feb 03 12:30:31 crc kubenswrapper[4820]: I0203 12:30:31.243422 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Feb 03 12:30:31 crc kubenswrapper[4820]: I0203 12:30:31.243479 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-api-0" Feb 03 12:30:31 crc kubenswrapper[4820]: I0203 12:30:31.243571 4820 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 03 12:30:31 crc kubenswrapper[4820]: I0203 12:30:31.366116 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qj7xr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 12:30:31 crc kubenswrapper[4820]: I0203 12:30:31.366222 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 12:30:31 crc kubenswrapper[4820]: I0203 12:30:31.653643 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-api-0" Feb 03 12:30:31 crc kubenswrapper[4820]: I0203 12:30:31.655590 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-api-0" Feb 03 12:30:32 crc kubenswrapper[4820]: I0203 12:30:32.676331 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-api-0" Feb 03 12:30:35 crc kubenswrapper[4820]: I0203 12:30:35.754189 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-zbjrj" podUID="60e67a4a-d840-4bc2-9f74-4a5fbb36a829" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.119:5353: i/o timeout" Feb 03 12:30:36 crc kubenswrapper[4820]: I0203 12:30:36.723257 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-api-0"] Feb 03 12:30:36 crc kubenswrapper[4820]: I0203 12:30:36.724028 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-api-0" 
podUID="38fcd454-4d58-42a9-beb1-d8640ab7a9a7" containerName="watcher-api" containerID="cri-o://360a324625000b7a0475ad7525e796f30c7043aacaa84aa615bc8a7ff9641dd4" gracePeriod=30 Feb 03 12:30:36 crc kubenswrapper[4820]: I0203 12:30:36.723796 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-api-0" podUID="38fcd454-4d58-42a9-beb1-d8640ab7a9a7" containerName="watcher-api-log" containerID="cri-o://787a1b81b7bb69681d5b9957147401714226f456d02afe437024d5c4f7a745b3" gracePeriod=30 Feb 03 12:30:37 crc kubenswrapper[4820]: I0203 12:30:37.254305 4820 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","pod973038d9-ba67-4fbe-8239-ed6e47f3cf90"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort pod973038d9-ba67-4fbe-8239-ed6e47f3cf90] : Timed out while waiting for systemd to remove kubepods-besteffort-pod973038d9_ba67_4fbe_8239_ed6e47f3cf90.slice" Feb 03 12:30:37 crc kubenswrapper[4820]: E0203 12:30:37.254834 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to delete cgroup paths for [kubepods besteffort pod973038d9-ba67-4fbe-8239-ed6e47f3cf90] : unable to destroy cgroup paths for cgroup [kubepods besteffort pod973038d9-ba67-4fbe-8239-ed6e47f3cf90] : Timed out while waiting for systemd to remove kubepods-besteffort-pod973038d9_ba67_4fbe_8239_ed6e47f3cf90.slice" pod="openstack/dnsmasq-dns-5b946c75cc-vbrpj" podUID="973038d9-ba67-4fbe-8239-ed6e47f3cf90" Feb 03 12:30:37 crc kubenswrapper[4820]: I0203 12:30:37.763472 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"38fcd454-4d58-42a9-beb1-d8640ab7a9a7","Type":"ContainerDied","Data":"787a1b81b7bb69681d5b9957147401714226f456d02afe437024d5c4f7a745b3"} Feb 03 12:30:37 crc kubenswrapper[4820]: I0203 12:30:37.763413 4820 generic.go:334] "Generic (PLEG): container finished" podID="38fcd454-4d58-42a9-beb1-d8640ab7a9a7" containerID="787a1b81b7bb69681d5b9957147401714226f456d02afe437024d5c4f7a745b3" exitCode=143 Feb 03 12:30:37 crc kubenswrapper[4820]: I0203 12:30:37.763645 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b946c75cc-vbrpj" Feb 03 12:30:37 crc kubenswrapper[4820]: I0203 12:30:37.822658 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b946c75cc-vbrpj"] Feb 03 12:30:37 crc kubenswrapper[4820]: I0203 12:30:37.836204 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5b946c75cc-vbrpj"] Feb 03 12:30:38 crc kubenswrapper[4820]: E0203 12:30:38.387933 4820 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-placement-api:current-podified" Feb 03 12:30:38 crc kubenswrapper[4820]: E0203 12:30:38.388449 4820 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:placement-db-sync,Image:quay.io/podified-antelope-centos9/openstack-placement-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/placement,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:placement-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9xfqb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42482,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-db-sync-9csj4_openstack(470b8f27-2959-4890-aed3-361530b83b73): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 03 12:30:38 crc kubenswrapper[4820]: E0203 12:30:38.389635 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/placement-db-sync-9csj4" podUID="470b8f27-2959-4890-aed3-361530b83b73" Feb 03 12:30:38 crc kubenswrapper[4820]: E0203 12:30:38.777846 4820 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-placement-api:current-podified\\\"\"" pod="openstack/placement-db-sync-9csj4" podUID="470b8f27-2959-4890-aed3-361530b83b73" Feb 03 12:30:38 crc kubenswrapper[4820]: E0203 12:30:38.890465 4820 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.50:5001/podified-epoxy-centos9/openstack-watcher-decision-engine:watcher_latest" Feb 03 12:30:38 crc kubenswrapper[4820]: E0203 12:30:38.890839 4820 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.50:5001/podified-epoxy-centos9/openstack-watcher-decision-engine:watcher_latest" Feb 03 12:30:38 crc kubenswrapper[4820]: E0203 12:30:38.891170 4820 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:watcher-decision-engine,Image:38.102.83.50:5001/podified-epoxy-centos9/openstack-watcher-decision-engine:watcher_latest,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nd4h88hf8h567h594h674h648h559h85h5dfh56hcch5d8hc6hf4h58h7dh659h68h77h54dh67ch5d8h7dh6dhd7h5d7h9fh555hc4h5dbh5dq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:watcher-decision-engine-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/watcher,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:custom-prometheus-ca,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/prometheus/ca.crt,SubPath:ca.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-csbcd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/pgrep -f -r DRST watcher-decision-engine],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/pgrep -f -r DRST 
watcher-decision-engine],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42451,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/pgrep -f -r DRST watcher-decision-engine],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-decision-engine-0_openstack(cd46da3e-bb82-4990-8d29-03f53c601f36): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 03 12:30:38 crc kubenswrapper[4820]: E0203 12:30:38.892538 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/watcher-decision-engine-0" podUID="cd46da3e-bb82-4990-8d29-03f53c601f36" Feb 03 12:30:39 crc kubenswrapper[4820]: I0203 12:30:39.157545 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="973038d9-ba67-4fbe-8239-ed6e47f3cf90" path="/var/lib/kubelet/pods/973038d9-ba67-4fbe-8239-ed6e47f3cf90/volumes" Feb 03 12:30:39 crc kubenswrapper[4820]: E0203 12:30:39.846556 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.50:5001/podified-epoxy-centos9/openstack-watcher-decision-engine:watcher_latest\\\"\"" pod="openstack/watcher-decision-engine-0" podUID="cd46da3e-bb82-4990-8d29-03f53c601f36" Feb 03 12:30:41 crc kubenswrapper[4820]: I0203 12:30:41.302530 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-zbjrj" podUID="60e67a4a-d840-4bc2-9f74-4a5fbb36a829" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.119:5353: i/o timeout" Feb 03 12:30:41 crc kubenswrapper[4820]: I0203 12:30:41.319168 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="38fcd454-4d58-42a9-beb1-d8640ab7a9a7" containerName="watcher-api-log" probeResult="failure" output="Get \"http://10.217.0.162:9322/\": dial tcp 10.217.0.162:9322: connect: connection refused" Feb 03 12:30:41 crc kubenswrapper[4820]: I0203 12:30:41.320708 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="38fcd454-4d58-42a9-beb1-d8640ab7a9a7" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.162:9322/\": dial tcp 10.217.0.162:9322: connect: connection refused" Feb 03 12:30:41 crc kubenswrapper[4820]: I0203 12:30:41.371201 4820 generic.go:334] "Generic (PLEG): container finished" podID="38fcd454-4d58-42a9-beb1-d8640ab7a9a7" containerID="360a324625000b7a0475ad7525e796f30c7043aacaa84aa615bc8a7ff9641dd4" exitCode=0 
Feb 03 12:30:41 crc kubenswrapper[4820]: I0203 12:30:41.371251 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"38fcd454-4d58-42a9-beb1-d8640ab7a9a7","Type":"ContainerDied","Data":"360a324625000b7a0475ad7525e796f30c7043aacaa84aa615bc8a7ff9641dd4"} Feb 03 12:30:44 crc kubenswrapper[4820]: E0203 12:30:44.201014 4820 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified" Feb 03 12:30:44 crc kubenswrapper[4820]: E0203 12:30:44.201774 4820 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ncbh5c5h94h69h5d6h5d8hfchd6h78h8dh564h7fhcfh64fh54h98h5dh68dh546h64dh59h5b7h5f4h88h56ch689h56bh674h664h5fdhf6hfq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fftdb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(e2cc54f2-167c-4c79-b616-2e1cd122fed2): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 03 12:30:46 crc kubenswrapper[4820]: I0203 12:30:46.244048 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="38fcd454-4d58-42a9-beb1-d8640ab7a9a7" containerName="watcher-api-log" probeResult="failure" output="Get \"http://10.217.0.162:9322/\": dial tcp 10.217.0.162:9322: 
connect: connection refused" Feb 03 12:30:46 crc kubenswrapper[4820]: I0203 12:30:46.244049 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="38fcd454-4d58-42a9-beb1-d8640ab7a9a7" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.162:9322/\": dial tcp 10.217.0.162:9322: connect: connection refused" Feb 03 12:30:46 crc kubenswrapper[4820]: I0203 12:30:46.302904 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-zbjrj" podUID="60e67a4a-d840-4bc2-9f74-4a5fbb36a829" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.119:5353: i/o timeout" Feb 03 12:30:46 crc kubenswrapper[4820]: I0203 12:30:46.303620 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-698758b865-zbjrj" Feb 03 12:30:51 crc kubenswrapper[4820]: I0203 12:30:51.244449 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="38fcd454-4d58-42a9-beb1-d8640ab7a9a7" containerName="watcher-api-log" probeResult="failure" output="Get \"http://10.217.0.162:9322/\": dial tcp 10.217.0.162:9322: connect: connection refused" Feb 03 12:30:51 crc kubenswrapper[4820]: I0203 12:30:51.245100 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Feb 03 12:30:51 crc kubenswrapper[4820]: I0203 12:30:51.246165 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="38fcd454-4d58-42a9-beb1-d8640ab7a9a7" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.162:9322/\": dial tcp 10.217.0.162:9322: connect: connection refused" Feb 03 12:30:51 crc kubenswrapper[4820]: I0203 12:30:51.246242 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Feb 03 12:30:51 crc kubenswrapper[4820]: I0203 12:30:51.306283 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-zbjrj" podUID="60e67a4a-d840-4bc2-9f74-4a5fbb36a829" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.119:5353: i/o timeout" Feb 03 12:30:56 crc kubenswrapper[4820]: I0203 12:30:56.243574 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="38fcd454-4d58-42a9-beb1-d8640ab7a9a7" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.162:9322/\": dial tcp 10.217.0.162:9322: connect: connection refused" Feb 03 12:30:56 crc kubenswrapper[4820]: I0203 12:30:56.243633 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="38fcd454-4d58-42a9-beb1-d8640ab7a9a7" containerName="watcher-api-log" probeResult="failure" output="Get \"http://10.217.0.162:9322/\": dial tcp 10.217.0.162:9322: connect: connection refused" Feb 03 12:30:56 crc kubenswrapper[4820]: I0203 12:30:56.557632 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-zbjrj" podUID="60e67a4a-d840-4bc2-9f74-4a5fbb36a829" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.119:5353: i/o timeout" Feb 03 12:30:57 crc kubenswrapper[4820]: I0203 12:30:57.387838 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-gjsm4" Feb 03 12:30:57 crc kubenswrapper[4820]: I0203 12:30:57.557688 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c88a5c9-941c-40ef-a3c8-7ff304ea0517-config-data\") pod \"2c88a5c9-941c-40ef-a3c8-7ff304ea0517\" (UID: \"2c88a5c9-941c-40ef-a3c8-7ff304ea0517\") " Feb 03 12:30:57 crc kubenswrapper[4820]: I0203 12:30:57.557764 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c88a5c9-941c-40ef-a3c8-7ff304ea0517-combined-ca-bundle\") pod \"2c88a5c9-941c-40ef-a3c8-7ff304ea0517\" (UID: \"2c88a5c9-941c-40ef-a3c8-7ff304ea0517\") " Feb 03 12:30:57 crc kubenswrapper[4820]: I0203 12:30:57.557882 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ssvrz\" (UniqueName: \"kubernetes.io/projected/2c88a5c9-941c-40ef-a3c8-7ff304ea0517-kube-api-access-ssvrz\") pod \"2c88a5c9-941c-40ef-a3c8-7ff304ea0517\" (UID: \"2c88a5c9-941c-40ef-a3c8-7ff304ea0517\") " Feb 03 12:30:57 crc kubenswrapper[4820]: I0203 12:30:57.558100 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/2c88a5c9-941c-40ef-a3c8-7ff304ea0517-credential-keys\") pod \"2c88a5c9-941c-40ef-a3c8-7ff304ea0517\" (UID: \"2c88a5c9-941c-40ef-a3c8-7ff304ea0517\") " Feb 03 12:30:57 crc kubenswrapper[4820]: I0203 12:30:57.558189 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2c88a5c9-941c-40ef-a3c8-7ff304ea0517-scripts\") pod \"2c88a5c9-941c-40ef-a3c8-7ff304ea0517\" (UID: \"2c88a5c9-941c-40ef-a3c8-7ff304ea0517\") " Feb 03 12:30:57 crc kubenswrapper[4820]: I0203 12:30:57.558221 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2c88a5c9-941c-40ef-a3c8-7ff304ea0517-fernet-keys\") pod \"2c88a5c9-941c-40ef-a3c8-7ff304ea0517\" (UID: \"2c88a5c9-941c-40ef-a3c8-7ff304ea0517\") " Feb 03 12:30:57 crc kubenswrapper[4820]: I0203 12:30:57.565629 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c88a5c9-941c-40ef-a3c8-7ff304ea0517-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "2c88a5c9-941c-40ef-a3c8-7ff304ea0517" (UID: "2c88a5c9-941c-40ef-a3c8-7ff304ea0517"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:30:57 crc kubenswrapper[4820]: I0203 12:30:57.568490 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c88a5c9-941c-40ef-a3c8-7ff304ea0517-kube-api-access-ssvrz" (OuterVolumeSpecName: "kube-api-access-ssvrz") pod "2c88a5c9-941c-40ef-a3c8-7ff304ea0517" (UID: "2c88a5c9-941c-40ef-a3c8-7ff304ea0517"). InnerVolumeSpecName "kube-api-access-ssvrz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:30:57 crc kubenswrapper[4820]: I0203 12:30:57.570232 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c88a5c9-941c-40ef-a3c8-7ff304ea0517-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "2c88a5c9-941c-40ef-a3c8-7ff304ea0517" (UID: "2c88a5c9-941c-40ef-a3c8-7ff304ea0517"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:30:57 crc kubenswrapper[4820]: I0203 12:30:57.570781 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c88a5c9-941c-40ef-a3c8-7ff304ea0517-scripts" (OuterVolumeSpecName: "scripts") pod "2c88a5c9-941c-40ef-a3c8-7ff304ea0517" (UID: "2c88a5c9-941c-40ef-a3c8-7ff304ea0517"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:30:57 crc kubenswrapper[4820]: I0203 12:30:57.594798 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c88a5c9-941c-40ef-a3c8-7ff304ea0517-config-data" (OuterVolumeSpecName: "config-data") pod "2c88a5c9-941c-40ef-a3c8-7ff304ea0517" (UID: "2c88a5c9-941c-40ef-a3c8-7ff304ea0517"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:30:57 crc kubenswrapper[4820]: I0203 12:30:57.598283 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c88a5c9-941c-40ef-a3c8-7ff304ea0517-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2c88a5c9-941c-40ef-a3c8-7ff304ea0517" (UID: "2c88a5c9-941c-40ef-a3c8-7ff304ea0517"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:30:57 crc kubenswrapper[4820]: I0203 12:30:57.662592 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c88a5c9-941c-40ef-a3c8-7ff304ea0517-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 12:30:57 crc kubenswrapper[4820]: I0203 12:30:57.662801 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c88a5c9-941c-40ef-a3c8-7ff304ea0517-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 12:30:57 crc kubenswrapper[4820]: I0203 12:30:57.662919 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ssvrz\" (UniqueName: \"kubernetes.io/projected/2c88a5c9-941c-40ef-a3c8-7ff304ea0517-kube-api-access-ssvrz\") on node \"crc\" DevicePath \"\"" Feb 03 12:30:57 crc kubenswrapper[4820]: I0203 12:30:57.663307 4820 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/2c88a5c9-941c-40ef-a3c8-7ff304ea0517-credential-keys\") on node \"crc\" DevicePath \"\"" Feb 03 12:30:57 crc kubenswrapper[4820]: I0203 12:30:57.663436 4820 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2c88a5c9-941c-40ef-a3c8-7ff304ea0517-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 12:30:57 crc kubenswrapper[4820]: I0203 12:30:57.663537 4820 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2c88a5c9-941c-40ef-a3c8-7ff304ea0517-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 03 12:30:57 crc kubenswrapper[4820]: E0203 12:30:57.825338 4820 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Feb 03 12:30:57 crc kubenswrapper[4820]: E0203 12:30:57.825530 4820 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n685h79h686h5dch57fhbdhf5hb9h558h57dh557h68ch689h5b7h59bh5f4hf6h68h6ch678hd9h76hb4h59ch5cfh5bbh67ch675h678h574h56fh7dq,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:yes,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wj2nw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-65c5474c77-d2q2r_openstack(4687f816-7024-4244-b049-d94441b1cef0): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 03 12:30:57 crc kubenswrapper[4820]: E0203 12:30:57.829218 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-65c5474c77-d2q2r" podUID="4687f816-7024-4244-b049-d94441b1cef0" Feb 03 12:30:57 crc kubenswrapper[4820]: E0203 12:30:57.840172 4820 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Feb 03 12:30:57 crc kubenswrapper[4820]: E0203 12:30:57.840440 4820 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5bfh68bh557h557h659h5c4hbh5bh59dh78h67bh56fh5bdh664h5d7h647hcch75h5bh668hdch5f4h9bh5bbh54bh77hf9hdchc9h98h4hcfq,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:yes,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-45m28,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-68b4df5bdd-tdb9h_openstack(308562dd-6078-4c1c-a4e0-c01a60a2d81d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 03 12:30:57 crc kubenswrapper[4820]: E0203 12:30:57.843646 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-68b4df5bdd-tdb9h" podUID="308562dd-6078-4c1c-a4e0-c01a60a2d81d" Feb 03 12:30:57 crc kubenswrapper[4820]: E0203 12:30:57.894722 4820 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Feb 03 12:30:57 crc kubenswrapper[4820]: E0203 12:30:57.895123 4820 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5f6h566hdch5fch668h645hddh695h7fh79hfch5cdh54bh78h7fh5chc8h5c9h695hcbh85h68fh55fh5fh675h547h67dh5c9h647h564hc6h59q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:yes,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2hmfz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-7c85bb46f7-qz2f2_openstack(bb707be0-2a48-4886-894b-cc7554a1be6f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 03 12:30:57 crc kubenswrapper[4820]: E0203 12:30:57.898312 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-7c85bb46f7-qz2f2" podUID="bb707be0-2a48-4886-894b-cc7554a1be6f" Feb 03 12:30:57 crc kubenswrapper[4820]: E0203 12:30:57.926100 4820 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Feb 03 12:30:57 crc kubenswrapper[4820]: E0203 12:30:57.926305 4820 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n548hb8h574h657h594h65h56fh666h86hcfhf6h5dfh7h88h667hdfhd6h5f7h664h68fh596h648h659h67h58dh6fhb9h686h9fhcbh64ch559q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:yes,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-p4mm6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-75b875c965-2f4nl_openstack(64388daf-4e84-4468-a9cb-484c0a4a8ab2): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 03 12:30:57 crc kubenswrapper[4820]: E0203 12:30:57.929598 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-75b875c965-2f4nl" podUID="64388daf-4e84-4468-a9cb-484c0a4a8ab2" Feb 03 12:30:58 crc kubenswrapper[4820]: I0203 12:30:58.323803 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-gjsm4" event={"ID":"2c88a5c9-941c-40ef-a3c8-7ff304ea0517","Type":"ContainerDied","Data":"55658c94bb3910517c0386f573788ae49cf5f61ff7f3f996ad20f8fe0100e516"} Feb 03 12:30:58 crc kubenswrapper[4820]: I0203 12:30:58.323899 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="55658c94bb3910517c0386f573788ae49cf5f61ff7f3f996ad20f8fe0100e516" Feb 03 12:30:58 crc kubenswrapper[4820]: I0203 12:30:58.324041 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-gjsm4" Feb 03 12:30:58 crc kubenswrapper[4820]: E0203 12:30:58.330467 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-68b4df5bdd-tdb9h" podUID="308562dd-6078-4c1c-a4e0-c01a60a2d81d" Feb 03 12:30:58 crc kubenswrapper[4820]: I0203 12:30:58.597994 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-gjsm4"] Feb 03 12:30:58 crc kubenswrapper[4820]: I0203 12:30:58.889478 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-gjsm4"] Feb 03 12:30:59 crc kubenswrapper[4820]: I0203 12:30:59.016817 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-t4pzw"] Feb 03 12:30:59 crc kubenswrapper[4820]: E0203 12:30:59.017616 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c88a5c9-941c-40ef-a3c8-7ff304ea0517" containerName="keystone-bootstrap" Feb 03 12:30:59 crc kubenswrapper[4820]: I0203 12:30:59.017648 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c88a5c9-941c-40ef-a3c8-7ff304ea0517" containerName="keystone-bootstrap" Feb 03 12:30:59 crc kubenswrapper[4820]: I0203 12:30:59.018042 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c88a5c9-941c-40ef-a3c8-7ff304ea0517" containerName="keystone-bootstrap" Feb 03 12:30:59 crc kubenswrapper[4820]: I0203 12:30:59.019073 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-t4pzw" Feb 03 12:30:59 crc kubenswrapper[4820]: I0203 12:30:59.023590 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 03 12:30:59 crc kubenswrapper[4820]: I0203 12:30:59.023852 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-rvkjd" Feb 03 12:30:59 crc kubenswrapper[4820]: I0203 12:30:59.023962 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 03 12:30:59 crc kubenswrapper[4820]: I0203 12:30:59.023879 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 03 12:30:59 crc kubenswrapper[4820]: I0203 12:30:59.024158 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 03 12:30:59 crc kubenswrapper[4820]: I0203 12:30:59.034333 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-t4pzw"] Feb 03 12:30:59 crc kubenswrapper[4820]: I0203 12:30:59.290195 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k8jhw\" (UniqueName: \"kubernetes.io/projected/d6da87e1-3451-48c6-b2ad-368bf3139a57-kube-api-access-k8jhw\") pod \"keystone-bootstrap-t4pzw\" (UID: \"d6da87e1-3451-48c6-b2ad-368bf3139a57\") " pod="openstack/keystone-bootstrap-t4pzw" Feb 03 12:30:59 crc kubenswrapper[4820]: I0203 12:30:59.292032 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d6da87e1-3451-48c6-b2ad-368bf3139a57-fernet-keys\") pod \"keystone-bootstrap-t4pzw\" (UID: \"d6da87e1-3451-48c6-b2ad-368bf3139a57\") " pod="openstack/keystone-bootstrap-t4pzw" Feb 03 12:30:59 crc kubenswrapper[4820]: I0203 12:30:59.292109 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d6da87e1-3451-48c6-b2ad-368bf3139a57-scripts\") pod \"keystone-bootstrap-t4pzw\" (UID: \"d6da87e1-3451-48c6-b2ad-368bf3139a57\") " pod="openstack/keystone-bootstrap-t4pzw" Feb 03 12:30:59 crc kubenswrapper[4820]: I0203 12:30:59.292336 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d6da87e1-3451-48c6-b2ad-368bf3139a57-config-data\") pod \"keystone-bootstrap-t4pzw\" (UID: \"d6da87e1-3451-48c6-b2ad-368bf3139a57\") " pod="openstack/keystone-bootstrap-t4pzw" Feb 03 12:30:59 crc kubenswrapper[4820]: I0203 12:30:59.292407 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6da87e1-3451-48c6-b2ad-368bf3139a57-combined-ca-bundle\") pod \"keystone-bootstrap-t4pzw\" (UID: \"d6da87e1-3451-48c6-b2ad-368bf3139a57\") " pod="openstack/keystone-bootstrap-t4pzw" Feb 03 12:30:59 crc kubenswrapper[4820]: I0203 12:30:59.292509 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d6da87e1-3451-48c6-b2ad-368bf3139a57-credential-keys\") pod \"keystone-bootstrap-t4pzw\" (UID: \"d6da87e1-3451-48c6-b2ad-368bf3139a57\") " pod="openstack/keystone-bootstrap-t4pzw" Feb 03 12:30:59 crc kubenswrapper[4820]: I0203 12:30:59.345109 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="2c88a5c9-941c-40ef-a3c8-7ff304ea0517" path="/var/lib/kubelet/pods/2c88a5c9-941c-40ef-a3c8-7ff304ea0517/volumes" Feb 03 12:30:59 crc kubenswrapper[4820]: I0203 12:30:59.394954 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k8jhw\" (UniqueName: \"kubernetes.io/projected/d6da87e1-3451-48c6-b2ad-368bf3139a57-kube-api-access-k8jhw\") pod \"keystone-bootstrap-t4pzw\" (UID: \"d6da87e1-3451-48c6-b2ad-368bf3139a57\") " pod="openstack/keystone-bootstrap-t4pzw" Feb 03 12:30:59 crc kubenswrapper[4820]: I0203 12:30:59.395075 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d6da87e1-3451-48c6-b2ad-368bf3139a57-fernet-keys\") pod \"keystone-bootstrap-t4pzw\" (UID: \"d6da87e1-3451-48c6-b2ad-368bf3139a57\") " pod="openstack/keystone-bootstrap-t4pzw" Feb 03 12:30:59 crc kubenswrapper[4820]: I0203 12:30:59.395125 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d6da87e1-3451-48c6-b2ad-368bf3139a57-scripts\") pod \"keystone-bootstrap-t4pzw\" (UID: \"d6da87e1-3451-48c6-b2ad-368bf3139a57\") " pod="openstack/keystone-bootstrap-t4pzw" Feb 03 12:30:59 crc kubenswrapper[4820]: I0203 12:30:59.395525 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d6da87e1-3451-48c6-b2ad-368bf3139a57-config-data\") pod \"keystone-bootstrap-t4pzw\" (UID: \"d6da87e1-3451-48c6-b2ad-368bf3139a57\") " pod="openstack/keystone-bootstrap-t4pzw" Feb 03 12:30:59 crc kubenswrapper[4820]: I0203 12:30:59.395579 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6da87e1-3451-48c6-b2ad-368bf3139a57-combined-ca-bundle\") pod \"keystone-bootstrap-t4pzw\" (UID: \"d6da87e1-3451-48c6-b2ad-368bf3139a57\") " pod="openstack/keystone-bootstrap-t4pzw" Feb 03 12:30:59 crc kubenswrapper[4820]: I0203 12:30:59.395670 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d6da87e1-3451-48c6-b2ad-368bf3139a57-credential-keys\") pod \"keystone-bootstrap-t4pzw\" (UID: \"d6da87e1-3451-48c6-b2ad-368bf3139a57\") " pod="openstack/keystone-bootstrap-t4pzw" Feb 03 12:30:59 crc kubenswrapper[4820]: I0203 12:30:59.407446 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d6da87e1-3451-48c6-b2ad-368bf3139a57-config-data\") pod \"keystone-bootstrap-t4pzw\" (UID: \"d6da87e1-3451-48c6-b2ad-368bf3139a57\") " pod="openstack/keystone-bootstrap-t4pzw" Feb 03 12:30:59 crc kubenswrapper[4820]: I0203 12:30:59.414366 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d6da87e1-3451-48c6-b2ad-368bf3139a57-scripts\") pod \"keystone-bootstrap-t4pzw\" (UID: \"d6da87e1-3451-48c6-b2ad-368bf3139a57\") " pod="openstack/keystone-bootstrap-t4pzw" Feb 03 12:30:59 crc kubenswrapper[4820]: I0203 12:30:59.416433 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d6da87e1-3451-48c6-b2ad-368bf3139a57-credential-keys\") pod \"keystone-bootstrap-t4pzw\" (UID: \"d6da87e1-3451-48c6-b2ad-368bf3139a57\") " pod="openstack/keystone-bootstrap-t4pzw" Feb 03 12:30:59 crc kubenswrapper[4820]: I0203 12:30:59.417741 4820 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d6da87e1-3451-48c6-b2ad-368bf3139a57-fernet-keys\") pod \"keystone-bootstrap-t4pzw\" (UID: \"d6da87e1-3451-48c6-b2ad-368bf3139a57\") " pod="openstack/keystone-bootstrap-t4pzw" Feb 03 12:30:59 crc kubenswrapper[4820]: I0203 12:30:59.418240 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6da87e1-3451-48c6-b2ad-368bf3139a57-combined-ca-bundle\") pod \"keystone-bootstrap-t4pzw\" (UID: \"d6da87e1-3451-48c6-b2ad-368bf3139a57\") " pod="openstack/keystone-bootstrap-t4pzw" Feb 03 12:30:59 crc kubenswrapper[4820]: I0203 12:30:59.419609 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k8jhw\" (UniqueName: \"kubernetes.io/projected/d6da87e1-3451-48c6-b2ad-368bf3139a57-kube-api-access-k8jhw\") pod \"keystone-bootstrap-t4pzw\" (UID: \"d6da87e1-3451-48c6-b2ad-368bf3139a57\") " pod="openstack/keystone-bootstrap-t4pzw" Feb 03 12:30:59 crc kubenswrapper[4820]: I0203 12:30:59.648329 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-t4pzw" Feb 03 12:30:59 crc kubenswrapper[4820]: E0203 12:30:59.735540 4820 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified" Feb 03 12:30:59 crc kubenswrapper[4820]: E0203 12:30:59.735783 4820 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,Command:[/bin/bash],Args:[-c barbican-manage db upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hhp8l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-xsjm7_openstack(f4116aff-b63f-47f1-b4bd-5bde84226d87): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 03 12:30:59 crc kubenswrapper[4820]: E0203 12:30:59.737048 4820 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-xsjm7" podUID="f4116aff-b63f-47f1-b4bd-5bde84226d87" Feb 03 12:30:59 crc kubenswrapper[4820]: I0203 12:30:59.812939 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-zbjrj" Feb 03 12:30:59 crc kubenswrapper[4820]: E0203 12:30:59.813625 4820 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Feb 03 12:30:59 crc kubenswrapper[4820]: E0203 12:30:59.814137 4820 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F /var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n86h685h565h75h9dh5d6h567h5bchcch664h55dhdfh64h88h5ch5cfh5b4h4h5cch677h65fhc4h78hfchcfh5bh5c7h688h8bh5b4h96hc9q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:yes,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-td9pm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-5fdc8588b4-jtjr8_openstack(17c371f7-f032-4444-8d4b-1183a224c7b0): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 03 12:30:59 crc kubenswrapper[4820]: E0203 12:30:59.817976 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-5fdc8588b4-jtjr8" podUID="17c371f7-f032-4444-8d4b-1183a224c7b0" Feb 03 12:31:00 crc kubenswrapper[4820]: I0203 12:31:00.009020 4820 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/60e67a4a-d840-4bc2-9f74-4a5fbb36a829-config\") pod \"60e67a4a-d840-4bc2-9f74-4a5fbb36a829\" (UID: \"60e67a4a-d840-4bc2-9f74-4a5fbb36a829\") " Feb 03 12:31:00 crc kubenswrapper[4820]: I0203 12:31:00.009091 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/60e67a4a-d840-4bc2-9f74-4a5fbb36a829-ovsdbserver-sb\") pod \"60e67a4a-d840-4bc2-9f74-4a5fbb36a829\" (UID: \"60e67a4a-d840-4bc2-9f74-4a5fbb36a829\") " Feb 03 12:31:00 crc kubenswrapper[4820]: I0203 12:31:00.011210 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/60e67a4a-d840-4bc2-9f74-4a5fbb36a829-ovsdbserver-nb\") pod \"60e67a4a-d840-4bc2-9f74-4a5fbb36a829\" (UID: \"60e67a4a-d840-4bc2-9f74-4a5fbb36a829\") " Feb 03 12:31:00 crc kubenswrapper[4820]: I0203 12:31:00.011472 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x8hx8\" (UniqueName: \"kubernetes.io/projected/60e67a4a-d840-4bc2-9f74-4a5fbb36a829-kube-api-access-x8hx8\") pod \"60e67a4a-d840-4bc2-9f74-4a5fbb36a829\" (UID: \"60e67a4a-d840-4bc2-9f74-4a5fbb36a829\") " Feb 03 12:31:00 crc kubenswrapper[4820]: I0203 12:31:00.011506 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/60e67a4a-d840-4bc2-9f74-4a5fbb36a829-dns-svc\") pod \"60e67a4a-d840-4bc2-9f74-4a5fbb36a829\" (UID: \"60e67a4a-d840-4bc2-9f74-4a5fbb36a829\") " Feb 03 12:31:00 crc kubenswrapper[4820]: I0203 12:31:00.021236 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/60e67a4a-d840-4bc2-9f74-4a5fbb36a829-kube-api-access-x8hx8" (OuterVolumeSpecName: "kube-api-access-x8hx8") pod "60e67a4a-d840-4bc2-9f74-4a5fbb36a829" (UID: "60e67a4a-d840-4bc2-9f74-4a5fbb36a829"). InnerVolumeSpecName "kube-api-access-x8hx8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:31:00 crc kubenswrapper[4820]: I0203 12:31:00.261381 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x8hx8\" (UniqueName: \"kubernetes.io/projected/60e67a4a-d840-4bc2-9f74-4a5fbb36a829-kube-api-access-x8hx8\") on node \"crc\" DevicePath \"\"" Feb 03 12:31:00 crc kubenswrapper[4820]: I0203 12:31:00.296510 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/60e67a4a-d840-4bc2-9f74-4a5fbb36a829-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "60e67a4a-d840-4bc2-9f74-4a5fbb36a829" (UID: "60e67a4a-d840-4bc2-9f74-4a5fbb36a829"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:31:00 crc kubenswrapper[4820]: I0203 12:31:00.302789 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/60e67a4a-d840-4bc2-9f74-4a5fbb36a829-config" (OuterVolumeSpecName: "config") pod "60e67a4a-d840-4bc2-9f74-4a5fbb36a829" (UID: "60e67a4a-d840-4bc2-9f74-4a5fbb36a829"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:31:00 crc kubenswrapper[4820]: I0203 12:31:00.312400 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/60e67a4a-d840-4bc2-9f74-4a5fbb36a829-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "60e67a4a-d840-4bc2-9f74-4a5fbb36a829" (UID: "60e67a4a-d840-4bc2-9f74-4a5fbb36a829"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:31:00 crc kubenswrapper[4820]: I0203 12:31:00.326996 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/60e67a4a-d840-4bc2-9f74-4a5fbb36a829-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "60e67a4a-d840-4bc2-9f74-4a5fbb36a829" (UID: "60e67a4a-d840-4bc2-9f74-4a5fbb36a829"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:31:00 crc kubenswrapper[4820]: I0203 12:31:00.363659 4820 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/60e67a4a-d840-4bc2-9f74-4a5fbb36a829-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 03 12:31:00 crc kubenswrapper[4820]: I0203 12:31:00.364011 4820 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/60e67a4a-d840-4bc2-9f74-4a5fbb36a829-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 03 12:31:00 crc kubenswrapper[4820]: I0203 12:31:00.364111 4820 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/60e67a4a-d840-4bc2-9f74-4a5fbb36a829-config\") on node \"crc\" DevicePath \"\"" Feb 03 12:31:00 crc kubenswrapper[4820]: I0203 12:31:00.364192 4820 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/60e67a4a-d840-4bc2-9f74-4a5fbb36a829-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 03 12:31:00 crc kubenswrapper[4820]: I0203 12:31:00.378525 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-zbjrj" Feb 03 12:31:00 crc kubenswrapper[4820]: I0203 12:31:00.379046 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-zbjrj" event={"ID":"60e67a4a-d840-4bc2-9f74-4a5fbb36a829","Type":"ContainerDied","Data":"cf8c6890c857e094df065f12558cdcbf428549b66f887e8b566ee0b472b3cc06"} Feb 03 12:31:00 crc kubenswrapper[4820]: I0203 12:31:00.379133 4820 scope.go:117] "RemoveContainer" containerID="698433907a92a5ca9104a622432656cc6950a420156cd457563a40aab7b4ca99" Feb 03 12:31:00 crc kubenswrapper[4820]: E0203 12:31:00.382184 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-5fdc8588b4-jtjr8" podUID="17c371f7-f032-4444-8d4b-1183a224c7b0" Feb 03 12:31:00 crc kubenswrapper[4820]: E0203 12:31:00.385149 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified\\\"\"" pod="openstack/barbican-db-sync-xsjm7" podUID="f4116aff-b63f-47f1-b4bd-5bde84226d87" Feb 03 12:31:00 crc kubenswrapper[4820]: I0203 12:31:00.468364 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-zbjrj"] Feb 03 12:31:00 crc kubenswrapper[4820]: I0203 12:31:00.477570 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-698758b865-zbjrj"] Feb 03 12:31:01 crc kubenswrapper[4820]: I0203 12:31:01.157867 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="60e67a4a-d840-4bc2-9f74-4a5fbb36a829" path="/var/lib/kubelet/pods/60e67a4a-d840-4bc2-9f74-4a5fbb36a829/volumes" Feb 03 12:31:01 crc kubenswrapper[4820]: I0203 12:31:01.243779 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="38fcd454-4d58-42a9-beb1-d8640ab7a9a7" containerName="watcher-api-log" probeResult="failure" output="Get \"http://10.217.0.162:9322/\": dial tcp 10.217.0.162:9322: connect: connection refused" Feb 03 12:31:01 crc kubenswrapper[4820]: I0203 12:31:01.243795 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="38fcd454-4d58-42a9-beb1-d8640ab7a9a7" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.162:9322/\": dial tcp 10.217.0.162:9322: connect: connection refused" Feb 03 12:31:01 crc kubenswrapper[4820]: I0203 12:31:01.366204 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qj7xr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 12:31:01 crc kubenswrapper[4820]: I0203 12:31:01.366579 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 
12:31:01 crc kubenswrapper[4820]: I0203 12:31:01.740175 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-zbjrj" podUID="60e67a4a-d840-4bc2-9f74-4a5fbb36a829" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.119:5353: i/o timeout" Feb 03 12:31:06 crc kubenswrapper[4820]: I0203 12:31:06.247541 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="38fcd454-4d58-42a9-beb1-d8640ab7a9a7" containerName="watcher-api-log" probeResult="failure" output="Get \"http://10.217.0.162:9322/\": dial tcp 10.217.0.162:9322: connect: connection refused" Feb 03 12:31:06 crc kubenswrapper[4820]: I0203 12:31:06.248517 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="38fcd454-4d58-42a9-beb1-d8640ab7a9a7" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.162:9322/\": dial tcp 10.217.0.162:9322: connect: connection refused" Feb 03 12:31:08 crc kubenswrapper[4820]: I0203 12:31:08.922758 4820 generic.go:334] "Generic (PLEG): container finished" podID="06a34b04-0b0b-41bc-bfa4-17ef6e1fbddb" containerID="34d519c618eba6d21f8bdc59e5fbc6e2f30a0da9b52b4c66f1dcbedd3137aa91" exitCode=0 Feb 03 12:31:08 crc kubenswrapper[4820]: I0203 12:31:08.922822 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-lb2jr" event={"ID":"06a34b04-0b0b-41bc-bfa4-17ef6e1fbddb","Type":"ContainerDied","Data":"34d519c618eba6d21f8bdc59e5fbc6e2f30a0da9b52b4c66f1dcbedd3137aa91"} Feb 03 12:31:11 crc kubenswrapper[4820]: I0203 12:31:11.244353 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="38fcd454-4d58-42a9-beb1-d8640ab7a9a7" containerName="watcher-api-log" probeResult="failure" output="Get \"http://10.217.0.162:9322/\": dial tcp 10.217.0.162:9322: connect: connection refused" Feb 03 12:31:11 crc kubenswrapper[4820]: I0203 12:31:11.244349 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="38fcd454-4d58-42a9-beb1-d8640ab7a9a7" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.162:9322/\": dial tcp 10.217.0.162:9322: connect: connection refused" Feb 03 12:31:11 crc kubenswrapper[4820]: E0203 12:31:11.499430 4820 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-placement-api:current-podified" Feb 03 12:31:11 crc kubenswrapper[4820]: E0203 12:31:11.499833 4820 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:placement-db-sync,Image:quay.io/podified-antelope-centos9/openstack-placement-api:current-podified,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/placement,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:placement-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9xfqb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42482,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-db-sync-9csj4_openstack(470b8f27-2959-4890-aed3-361530b83b73): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 03 12:31:11 crc kubenswrapper[4820]: E0203 12:31:11.501020 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/placement-db-sync-9csj4" podUID="470b8f27-2959-4890-aed3-361530b83b73" Feb 03 12:31:11 crc kubenswrapper[4820]: E0203 12:31:11.569589 4820 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.50:5001/podified-epoxy-centos9/openstack-watcher-decision-engine:watcher_latest" Feb 03 12:31:11 crc kubenswrapper[4820]: E0203 12:31:11.569658 4820 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.50:5001/podified-epoxy-centos9/openstack-watcher-decision-engine:watcher_latest" Feb 03 12:31:11 crc kubenswrapper[4820]: E0203 12:31:11.569872 4820 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:watcher-decision-engine,Image:38.102.83.50:5001/podified-epoxy-centos9/openstack-watcher-decision-engine:watcher_latest,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nd4h88hf8h567h594h674h648h559h85h5dfh56hcch5d8hc6hf4h58h7dh659h68h77h54dh67ch5d8h7dh6dhd7h5d7h9fh555hc4h5dbh5dq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:watcher-decision-engine-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/watcher,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:custom-prometheus-ca,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/prometheus/ca.crt,SubPath:ca.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-csbcd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/pgrep -f -r DRST watcher-decision-engine],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/pgrep -f -r DRST watcher-decision-engine],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42451,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/pgrep -f -r DRST watcher-decision-engine],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-decision-engine-0_openstack(cd46da3e-bb82-4990-8d29-03f53c601f36): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 03 12:31:11 crc kubenswrapper[4820]: E0203 12:31:11.571108 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack/watcher-decision-engine-0" podUID="cd46da3e-bb82-4990-8d29-03f53c601f36" Feb 03 12:31:11 crc kubenswrapper[4820]: I0203 12:31:11.703940 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-65c5474c77-d2q2r" Feb 03 12:31:11 crc kubenswrapper[4820]: I0203 12:31:11.712328 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7c85bb46f7-qz2f2" Feb 03 12:31:11 crc kubenswrapper[4820]: I0203 12:31:11.735730 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-75b875c965-2f4nl" Feb 03 12:31:11 crc kubenswrapper[4820]: I0203 12:31:11.744076 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-lb2jr" Feb 03 12:31:11 crc kubenswrapper[4820]: I0203 12:31:11.822291 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2hmfz\" (UniqueName: \"kubernetes.io/projected/bb707be0-2a48-4886-894b-cc7554a1be6f-kube-api-access-2hmfz\") pod \"bb707be0-2a48-4886-894b-cc7554a1be6f\" (UID: \"bb707be0-2a48-4886-894b-cc7554a1be6f\") " Feb 03 12:31:11 crc kubenswrapper[4820]: I0203 12:31:11.822749 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bb707be0-2a48-4886-894b-cc7554a1be6f-config-data\") pod \"bb707be0-2a48-4886-894b-cc7554a1be6f\" (UID: \"bb707be0-2a48-4886-894b-cc7554a1be6f\") " Feb 03 12:31:11 crc kubenswrapper[4820]: I0203 12:31:11.822844 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/4687f816-7024-4244-b049-d94441b1cef0-horizon-secret-key\") pod \"4687f816-7024-4244-b049-d94441b1cef0\" (UID: \"4687f816-7024-4244-b049-d94441b1cef0\") " Feb 03 12:31:11 crc kubenswrapper[4820]: I0203 12:31:11.823988 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4687f816-7024-4244-b049-d94441b1cef0-scripts\") pod \"4687f816-7024-4244-b049-d94441b1cef0\" (UID: \"4687f816-7024-4244-b049-d94441b1cef0\") " Feb 03 12:31:11 crc kubenswrapper[4820]: I0203 12:31:11.824006 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bb707be0-2a48-4886-894b-cc7554a1be6f-config-data" (OuterVolumeSpecName: "config-data") pod "bb707be0-2a48-4886-894b-cc7554a1be6f" (UID: "bb707be0-2a48-4886-894b-cc7554a1be6f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:31:11 crc kubenswrapper[4820]: I0203 12:31:11.824030 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bb707be0-2a48-4886-894b-cc7554a1be6f-logs\") pod \"bb707be0-2a48-4886-894b-cc7554a1be6f\" (UID: \"bb707be0-2a48-4886-894b-cc7554a1be6f\") " Feb 03 12:31:11 crc kubenswrapper[4820]: I0203 12:31:11.824128 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4687f816-7024-4244-b049-d94441b1cef0-config-data\") pod \"4687f816-7024-4244-b049-d94441b1cef0\" (UID: \"4687f816-7024-4244-b049-d94441b1cef0\") " Feb 03 12:31:11 crc kubenswrapper[4820]: I0203 12:31:11.824179 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/bb707be0-2a48-4886-894b-cc7554a1be6f-horizon-secret-key\") pod \"bb707be0-2a48-4886-894b-cc7554a1be6f\" (UID: \"bb707be0-2a48-4886-894b-cc7554a1be6f\") " Feb 03 12:31:11 crc kubenswrapper[4820]: I0203 12:31:11.824262 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wj2nw\" (UniqueName: \"kubernetes.io/projected/4687f816-7024-4244-b049-d94441b1cef0-kube-api-access-wj2nw\") pod \"4687f816-7024-4244-b049-d94441b1cef0\" (UID: \"4687f816-7024-4244-b049-d94441b1cef0\") " Feb 03 12:31:11 crc kubenswrapper[4820]: I0203 12:31:11.824491 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bb707be0-2a48-4886-894b-cc7554a1be6f-scripts\") pod \"bb707be0-2a48-4886-894b-cc7554a1be6f\" (UID: \"bb707be0-2a48-4886-894b-cc7554a1be6f\") " Feb 03 12:31:11 crc kubenswrapper[4820]: I0203 12:31:11.824539 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4687f816-7024-4244-b049-d94441b1cef0-logs\") pod \"4687f816-7024-4244-b049-d94441b1cef0\" (UID: \"4687f816-7024-4244-b049-d94441b1cef0\") " Feb 03 12:31:11 crc kubenswrapper[4820]: I0203 12:31:11.825320 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4687f816-7024-4244-b049-d94441b1cef0-logs" (OuterVolumeSpecName: "logs") pod "4687f816-7024-4244-b049-d94441b1cef0" (UID: "4687f816-7024-4244-b049-d94441b1cef0"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 12:31:11 crc kubenswrapper[4820]: I0203 12:31:11.825652 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4687f816-7024-4244-b049-d94441b1cef0-scripts" (OuterVolumeSpecName: "scripts") pod "4687f816-7024-4244-b049-d94441b1cef0" (UID: "4687f816-7024-4244-b049-d94441b1cef0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:31:11 crc kubenswrapper[4820]: I0203 12:31:11.825776 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bb707be0-2a48-4886-894b-cc7554a1be6f-scripts" (OuterVolumeSpecName: "scripts") pod "bb707be0-2a48-4886-894b-cc7554a1be6f" (UID: "bb707be0-2a48-4886-894b-cc7554a1be6f"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:31:11 crc kubenswrapper[4820]: I0203 12:31:11.826053 4820 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4687f816-7024-4244-b049-d94441b1cef0-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 12:31:11 crc kubenswrapper[4820]: I0203 12:31:11.826097 4820 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bb707be0-2a48-4886-894b-cc7554a1be6f-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 12:31:11 crc kubenswrapper[4820]: I0203 12:31:11.826110 4820 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4687f816-7024-4244-b049-d94441b1cef0-logs\") on node \"crc\" DevicePath \"\"" Feb 03 12:31:11 crc kubenswrapper[4820]: I0203 12:31:11.826121 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bb707be0-2a48-4886-894b-cc7554a1be6f-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 12:31:11 crc kubenswrapper[4820]: I0203 12:31:11.826861 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4687f816-7024-4244-b049-d94441b1cef0-config-data" (OuterVolumeSpecName: "config-data") pod "4687f816-7024-4244-b049-d94441b1cef0" (UID: "4687f816-7024-4244-b049-d94441b1cef0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:31:11 crc kubenswrapper[4820]: I0203 12:31:11.828447 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bb707be0-2a48-4886-894b-cc7554a1be6f-logs" (OuterVolumeSpecName: "logs") pod "bb707be0-2a48-4886-894b-cc7554a1be6f" (UID: "bb707be0-2a48-4886-894b-cc7554a1be6f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 12:31:11 crc kubenswrapper[4820]: I0203 12:31:11.830179 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4687f816-7024-4244-b049-d94441b1cef0-kube-api-access-wj2nw" (OuterVolumeSpecName: "kube-api-access-wj2nw") pod "4687f816-7024-4244-b049-d94441b1cef0" (UID: "4687f816-7024-4244-b049-d94441b1cef0"). InnerVolumeSpecName "kube-api-access-wj2nw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:31:11 crc kubenswrapper[4820]: I0203 12:31:11.831166 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb707be0-2a48-4886-894b-cc7554a1be6f-kube-api-access-2hmfz" (OuterVolumeSpecName: "kube-api-access-2hmfz") pod "bb707be0-2a48-4886-894b-cc7554a1be6f" (UID: "bb707be0-2a48-4886-894b-cc7554a1be6f"). InnerVolumeSpecName "kube-api-access-2hmfz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:31:11 crc kubenswrapper[4820]: I0203 12:31:11.835081 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4687f816-7024-4244-b049-d94441b1cef0-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "4687f816-7024-4244-b049-d94441b1cef0" (UID: "4687f816-7024-4244-b049-d94441b1cef0"). InnerVolumeSpecName "horizon-secret-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:31:11 crc kubenswrapper[4820]: I0203 12:31:11.967202 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06a34b04-0b0b-41bc-bfa4-17ef6e1fbddb-combined-ca-bundle\") pod \"06a34b04-0b0b-41bc-bfa4-17ef6e1fbddb\" (UID: \"06a34b04-0b0b-41bc-bfa4-17ef6e1fbddb\") " Feb 03 12:31:11 crc kubenswrapper[4820]: I0203 12:31:11.967362 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/06a34b04-0b0b-41bc-bfa4-17ef6e1fbddb-config\") pod \"06a34b04-0b0b-41bc-bfa4-17ef6e1fbddb\" (UID: \"06a34b04-0b0b-41bc-bfa4-17ef6e1fbddb\") " Feb 03 12:31:11 crc kubenswrapper[4820]: I0203 12:31:11.967406 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/64388daf-4e84-4468-a9cb-484c0a4a8ab2-scripts\") pod \"64388daf-4e84-4468-a9cb-484c0a4a8ab2\" (UID: \"64388daf-4e84-4468-a9cb-484c0a4a8ab2\") " Feb 03 12:31:11 crc kubenswrapper[4820]: I0203 12:31:11.967493 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/64388daf-4e84-4468-a9cb-484c0a4a8ab2-config-data\") pod \"64388daf-4e84-4468-a9cb-484c0a4a8ab2\" (UID: \"64388daf-4e84-4468-a9cb-484c0a4a8ab2\") " Feb 03 12:31:11 crc kubenswrapper[4820]: I0203 12:31:11.967520 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/64388daf-4e84-4468-a9cb-484c0a4a8ab2-horizon-secret-key\") pod \"64388daf-4e84-4468-a9cb-484c0a4a8ab2\" (UID: \"64388daf-4e84-4468-a9cb-484c0a4a8ab2\") " Feb 03 12:31:11 crc kubenswrapper[4820]: I0203 12:31:11.967574 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/64388daf-4e84-4468-a9cb-484c0a4a8ab2-logs\") pod \"64388daf-4e84-4468-a9cb-484c0a4a8ab2\" (UID: \"64388daf-4e84-4468-a9cb-484c0a4a8ab2\") " Feb 03 12:31:11 crc kubenswrapper[4820]: I0203 12:31:11.967698 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6slrc\" (UniqueName: \"kubernetes.io/projected/06a34b04-0b0b-41bc-bfa4-17ef6e1fbddb-kube-api-access-6slrc\") pod \"06a34b04-0b0b-41bc-bfa4-17ef6e1fbddb\" (UID: \"06a34b04-0b0b-41bc-bfa4-17ef6e1fbddb\") " Feb 03 12:31:11 crc kubenswrapper[4820]: I0203 12:31:11.967759 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p4mm6\" (UniqueName: \"kubernetes.io/projected/64388daf-4e84-4468-a9cb-484c0a4a8ab2-kube-api-access-p4mm6\") pod \"64388daf-4e84-4468-a9cb-484c0a4a8ab2\" (UID: \"64388daf-4e84-4468-a9cb-484c0a4a8ab2\") " Feb 03 12:31:11 crc kubenswrapper[4820]: I0203 12:31:11.968460 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4687f816-7024-4244-b049-d94441b1cef0-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 12:31:11 crc kubenswrapper[4820]: I0203 12:31:11.968482 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wj2nw\" (UniqueName: \"kubernetes.io/projected/4687f816-7024-4244-b049-d94441b1cef0-kube-api-access-wj2nw\") on node \"crc\" DevicePath \"\"" Feb 03 12:31:11 crc kubenswrapper[4820]: I0203 12:31:11.968495 4820 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-2hmfz\" (UniqueName: \"kubernetes.io/projected/bb707be0-2a48-4886-894b-cc7554a1be6f-kube-api-access-2hmfz\") on node \"crc\" DevicePath \"\"" Feb 03 12:31:11 crc kubenswrapper[4820]: I0203 12:31:11.968514 4820 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/4687f816-7024-4244-b049-d94441b1cef0-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Feb 03 12:31:11 crc kubenswrapper[4820]: I0203 12:31:11.968523 4820 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bb707be0-2a48-4886-894b-cc7554a1be6f-logs\") on node \"crc\" DevicePath \"\"" Feb 03 12:31:11 crc kubenswrapper[4820]: I0203 12:31:11.968446 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/64388daf-4e84-4468-a9cb-484c0a4a8ab2-logs" (OuterVolumeSpecName: "logs") pod "64388daf-4e84-4468-a9cb-484c0a4a8ab2" (UID: "64388daf-4e84-4468-a9cb-484c0a4a8ab2"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 12:31:11 crc kubenswrapper[4820]: I0203 12:31:11.969194 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/64388daf-4e84-4468-a9cb-484c0a4a8ab2-config-data" (OuterVolumeSpecName: "config-data") pod "64388daf-4e84-4468-a9cb-484c0a4a8ab2" (UID: "64388daf-4e84-4468-a9cb-484c0a4a8ab2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:31:11 crc kubenswrapper[4820]: I0203 12:31:11.969932 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/64388daf-4e84-4468-a9cb-484c0a4a8ab2-scripts" (OuterVolumeSpecName: "scripts") pod "64388daf-4e84-4468-a9cb-484c0a4a8ab2" (UID: "64388daf-4e84-4468-a9cb-484c0a4a8ab2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:31:11 crc kubenswrapper[4820]: I0203 12:31:11.974674 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64388daf-4e84-4468-a9cb-484c0a4a8ab2-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "64388daf-4e84-4468-a9cb-484c0a4a8ab2" (UID: "64388daf-4e84-4468-a9cb-484c0a4a8ab2"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:31:11 crc kubenswrapper[4820]: I0203 12:31:11.977067 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb707be0-2a48-4886-894b-cc7554a1be6f-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "bb707be0-2a48-4886-894b-cc7554a1be6f" (UID: "bb707be0-2a48-4886-894b-cc7554a1be6f"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:31:11 crc kubenswrapper[4820]: I0203 12:31:11.983303 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/64388daf-4e84-4468-a9cb-484c0a4a8ab2-kube-api-access-p4mm6" (OuterVolumeSpecName: "kube-api-access-p4mm6") pod "64388daf-4e84-4468-a9cb-484c0a4a8ab2" (UID: "64388daf-4e84-4468-a9cb-484c0a4a8ab2"). InnerVolumeSpecName "kube-api-access-p4mm6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:31:11 crc kubenswrapper[4820]: I0203 12:31:11.989465 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/06a34b04-0b0b-41bc-bfa4-17ef6e1fbddb-kube-api-access-6slrc" (OuterVolumeSpecName: "kube-api-access-6slrc") pod "06a34b04-0b0b-41bc-bfa4-17ef6e1fbddb" (UID: "06a34b04-0b0b-41bc-bfa4-17ef6e1fbddb"). InnerVolumeSpecName "kube-api-access-6slrc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:31:12 crc kubenswrapper[4820]: I0203 12:31:12.007060 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06a34b04-0b0b-41bc-bfa4-17ef6e1fbddb-config" (OuterVolumeSpecName: "config") pod "06a34b04-0b0b-41bc-bfa4-17ef6e1fbddb" (UID: "06a34b04-0b0b-41bc-bfa4-17ef6e1fbddb"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:31:12 crc kubenswrapper[4820]: I0203 12:31:12.040926 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06a34b04-0b0b-41bc-bfa4-17ef6e1fbddb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "06a34b04-0b0b-41bc-bfa4-17ef6e1fbddb" (UID: "06a34b04-0b0b-41bc-bfa4-17ef6e1fbddb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:31:12 crc kubenswrapper[4820]: I0203 12:31:12.070374 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/64388daf-4e84-4468-a9cb-484c0a4a8ab2-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 12:31:12 crc kubenswrapper[4820]: I0203 12:31:12.070671 4820 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/64388daf-4e84-4468-a9cb-484c0a4a8ab2-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Feb 03 12:31:12 crc kubenswrapper[4820]: I0203 12:31:12.070761 4820 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/64388daf-4e84-4468-a9cb-484c0a4a8ab2-logs\") on node \"crc\" DevicePath \"\"" Feb 03 12:31:12 crc kubenswrapper[4820]: I0203 12:31:12.070902 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6slrc\" (UniqueName: \"kubernetes.io/projected/06a34b04-0b0b-41bc-bfa4-17ef6e1fbddb-kube-api-access-6slrc\") on node \"crc\" DevicePath \"\"" Feb 03 12:31:12 crc kubenswrapper[4820]: I0203 12:31:12.070989 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p4mm6\" (UniqueName: \"kubernetes.io/projected/64388daf-4e84-4468-a9cb-484c0a4a8ab2-kube-api-access-p4mm6\") on node \"crc\" DevicePath \"\"" Feb 03 12:31:12 crc kubenswrapper[4820]: I0203 12:31:12.071066 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06a34b04-0b0b-41bc-bfa4-17ef6e1fbddb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 12:31:12 crc kubenswrapper[4820]: I0203 12:31:12.071133 4820 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/bb707be0-2a48-4886-894b-cc7554a1be6f-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Feb 03 12:31:12 crc kubenswrapper[4820]: I0203 12:31:12.071222 4820 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/06a34b04-0b0b-41bc-bfa4-17ef6e1fbddb-config\") on node \"crc\" DevicePath \"\"" Feb 03 12:31:12 crc 
kubenswrapper[4820]: I0203 12:31:12.071297 4820 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/64388daf-4e84-4468-a9cb-484c0a4a8ab2-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 12:31:12 crc kubenswrapper[4820]: I0203 12:31:12.150086 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-75b875c965-2f4nl" Feb 03 12:31:12 crc kubenswrapper[4820]: I0203 12:31:12.150081 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-75b875c965-2f4nl" event={"ID":"64388daf-4e84-4468-a9cb-484c0a4a8ab2","Type":"ContainerDied","Data":"8da13125181d670cb1a47a12ced06442eeb20589d592f55f732d38b1dce6d0ae"} Feb 03 12:31:12 crc kubenswrapper[4820]: I0203 12:31:12.152478 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7c85bb46f7-qz2f2" event={"ID":"bb707be0-2a48-4886-894b-cc7554a1be6f","Type":"ContainerDied","Data":"e2561cf8b09da23cdd2bdea3431515c80c9178800bad09a88ad729d6fabe8c7a"} Feb 03 12:31:12 crc kubenswrapper[4820]: I0203 12:31:12.152516 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7c85bb46f7-qz2f2" Feb 03 12:31:12 crc kubenswrapper[4820]: I0203 12:31:12.153571 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-65c5474c77-d2q2r" event={"ID":"4687f816-7024-4244-b049-d94441b1cef0","Type":"ContainerDied","Data":"2d764cdc247f5ac76fe7a9e979ffad16f363e157daa1f91ed7d8763300cb0a66"} Feb 03 12:31:12 crc kubenswrapper[4820]: I0203 12:31:12.153585 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-65c5474c77-d2q2r" Feb 03 12:31:12 crc kubenswrapper[4820]: I0203 12:31:12.156565 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-lb2jr" event={"ID":"06a34b04-0b0b-41bc-bfa4-17ef6e1fbddb","Type":"ContainerDied","Data":"61b2dcfae63a3cff65e172151526b84f759051337294f6749e1fe4da603c1bd3"} Feb 03 12:31:12 crc kubenswrapper[4820]: I0203 12:31:12.156612 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="61b2dcfae63a3cff65e172151526b84f759051337294f6749e1fe4da603c1bd3" Feb 03 12:31:12 crc kubenswrapper[4820]: I0203 12:31:12.156671 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-lb2jr" Feb 03 12:31:12 crc kubenswrapper[4820]: I0203 12:31:12.253063 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-65c5474c77-d2q2r"] Feb 03 12:31:12 crc kubenswrapper[4820]: I0203 12:31:12.305343 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-65c5474c77-d2q2r"] Feb 03 12:31:12 crc kubenswrapper[4820]: I0203 12:31:12.380434 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7c85bb46f7-qz2f2"] Feb 03 12:31:12 crc kubenswrapper[4820]: I0203 12:31:12.390860 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-7c85bb46f7-qz2f2"] Feb 03 12:31:12 crc kubenswrapper[4820]: I0203 12:31:12.415378 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-75b875c965-2f4nl"] Feb 03 12:31:12 crc kubenswrapper[4820]: I0203 12:31:12.428170 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-75b875c965-2f4nl"] Feb 03 12:31:13 crc kubenswrapper[4820]: I0203 12:31:13.136663 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-hmvhn"] Feb 03 12:31:13 crc kubenswrapper[4820]: E0203 12:31:13.137545 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60e67a4a-d840-4bc2-9f74-4a5fbb36a829" containerName="dnsmasq-dns" Feb 03 12:31:13 crc kubenswrapper[4820]: I0203 12:31:13.137576 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="60e67a4a-d840-4bc2-9f74-4a5fbb36a829" containerName="dnsmasq-dns" Feb 03 12:31:13 crc kubenswrapper[4820]: E0203 12:31:13.137611 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06a34b04-0b0b-41bc-bfa4-17ef6e1fbddb" containerName="neutron-db-sync" Feb 03 12:31:13 crc kubenswrapper[4820]: I0203 12:31:13.137620 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="06a34b04-0b0b-41bc-bfa4-17ef6e1fbddb" containerName="neutron-db-sync" Feb 03 12:31:13 crc kubenswrapper[4820]: E0203 12:31:13.137631 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60e67a4a-d840-4bc2-9f74-4a5fbb36a829" containerName="init" Feb 03 12:31:13 crc kubenswrapper[4820]: I0203 12:31:13.137640 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="60e67a4a-d840-4bc2-9f74-4a5fbb36a829" containerName="init" Feb 03 12:31:13 crc kubenswrapper[4820]: I0203 12:31:13.137910 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="06a34b04-0b0b-41bc-bfa4-17ef6e1fbddb" containerName="neutron-db-sync" Feb 03 12:31:13 crc kubenswrapper[4820]: I0203 12:31:13.137937 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="60e67a4a-d840-4bc2-9f74-4a5fbb36a829" containerName="dnsmasq-dns" Feb 03 12:31:13 crc kubenswrapper[4820]: I0203 12:31:13.139722 4820 util.go:30] "No sandbox for pod can be found. 
Feb 03 12:31:13 crc kubenswrapper[4820]: I0203 12:31:13.183217 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4687f816-7024-4244-b049-d94441b1cef0" path="/var/lib/kubelet/pods/4687f816-7024-4244-b049-d94441b1cef0/volumes"
Feb 03 12:31:13 crc kubenswrapper[4820]: I0203 12:31:13.184461 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="64388daf-4e84-4468-a9cb-484c0a4a8ab2" path="/var/lib/kubelet/pods/64388daf-4e84-4468-a9cb-484c0a4a8ab2/volumes"
Feb 03 12:31:13 crc kubenswrapper[4820]: I0203 12:31:13.185129 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb707be0-2a48-4886-894b-cc7554a1be6f" path="/var/lib/kubelet/pods/bb707be0-2a48-4886-894b-cc7554a1be6f/volumes"
Feb 03 12:31:13 crc kubenswrapper[4820]: I0203 12:31:13.185699 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-hmvhn"]
Feb 03 12:31:13 crc kubenswrapper[4820]: I0203 12:31:13.185745 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-86c8ddbf74-xsj66"]
Feb 03 12:31:13 crc kubenswrapper[4820]: I0203 12:31:13.196316 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-86c8ddbf74-xsj66"
Feb 03 12:31:13 crc kubenswrapper[4820]: I0203 12:31:13.423616 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09cdfd30-121c-4d95-9a12-515eda5d3ba3-config\") pod \"dnsmasq-dns-55f844cf75-hmvhn\" (UID: \"09cdfd30-121c-4d95-9a12-515eda5d3ba3\") " pod="openstack/dnsmasq-dns-55f844cf75-hmvhn"
Feb 03 12:31:13 crc kubenswrapper[4820]: I0203 12:31:13.423820 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qvnlr\" (UniqueName: \"kubernetes.io/projected/09cdfd30-121c-4d95-9a12-515eda5d3ba3-kube-api-access-qvnlr\") pod \"dnsmasq-dns-55f844cf75-hmvhn\" (UID: \"09cdfd30-121c-4d95-9a12-515eda5d3ba3\") " pod="openstack/dnsmasq-dns-55f844cf75-hmvhn"
Feb 03 12:31:13 crc kubenswrapper[4820]: I0203 12:31:13.423848 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/09cdfd30-121c-4d95-9a12-515eda5d3ba3-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-hmvhn\" (UID: \"09cdfd30-121c-4d95-9a12-515eda5d3ba3\") " pod="openstack/dnsmasq-dns-55f844cf75-hmvhn"
Feb 03 12:31:13 crc kubenswrapper[4820]: I0203 12:31:13.423913 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/09cdfd30-121c-4d95-9a12-515eda5d3ba3-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-hmvhn\" (UID: \"09cdfd30-121c-4d95-9a12-515eda5d3ba3\") " pod="openstack/dnsmasq-dns-55f844cf75-hmvhn"
Feb 03 12:31:13 crc kubenswrapper[4820]: I0203 12:31:13.423933 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/09cdfd30-121c-4d95-9a12-515eda5d3ba3-dns-svc\") pod \"dnsmasq-dns-55f844cf75-hmvhn\" (UID: \"09cdfd30-121c-4d95-9a12-515eda5d3ba3\") " pod="openstack/dnsmasq-dns-55f844cf75-hmvhn"
Feb 03 12:31:13 crc kubenswrapper[4820]: I0203 12:31:13.423990 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/09cdfd30-121c-4d95-9a12-515eda5d3ba3-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-hmvhn\" (UID: \"09cdfd30-121c-4d95-9a12-515eda5d3ba3\") " pod="openstack/dnsmasq-dns-55f844cf75-hmvhn" Feb 03 12:31:13 crc kubenswrapper[4820]: I0203 12:31:13.432848 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Feb 03 12:31:13 crc kubenswrapper[4820]: I0203 12:31:13.433545 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Feb 03 12:31:13 crc kubenswrapper[4820]: I0203 12:31:13.433844 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-nn75w" Feb 03 12:31:13 crc kubenswrapper[4820]: I0203 12:31:13.449520 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Feb 03 12:31:13 crc kubenswrapper[4820]: I0203 12:31:13.475630 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-86c8ddbf74-xsj66"] Feb 03 12:31:13 crc kubenswrapper[4820]: I0203 12:31:13.527039 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qvnlr\" (UniqueName: \"kubernetes.io/projected/09cdfd30-121c-4d95-9a12-515eda5d3ba3-kube-api-access-qvnlr\") pod \"dnsmasq-dns-55f844cf75-hmvhn\" (UID: \"09cdfd30-121c-4d95-9a12-515eda5d3ba3\") " pod="openstack/dnsmasq-dns-55f844cf75-hmvhn" Feb 03 12:31:13 crc kubenswrapper[4820]: I0203 12:31:13.527123 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/09cdfd30-121c-4d95-9a12-515eda5d3ba3-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-hmvhn\" (UID: \"09cdfd30-121c-4d95-9a12-515eda5d3ba3\") " pod="openstack/dnsmasq-dns-55f844cf75-hmvhn" Feb 03 12:31:13 crc kubenswrapper[4820]: I0203 12:31:13.527191 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zkf2r\" (UniqueName: \"kubernetes.io/projected/a09e4336-af9d-4231-b744-1373af8ddfba-kube-api-access-zkf2r\") pod \"neutron-86c8ddbf74-xsj66\" (UID: \"a09e4336-af9d-4231-b744-1373af8ddfba\") " pod="openstack/neutron-86c8ddbf74-xsj66" Feb 03 12:31:13 crc kubenswrapper[4820]: I0203 12:31:13.527246 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a09e4336-af9d-4231-b744-1373af8ddfba-combined-ca-bundle\") pod \"neutron-86c8ddbf74-xsj66\" (UID: \"a09e4336-af9d-4231-b744-1373af8ddfba\") " pod="openstack/neutron-86c8ddbf74-xsj66" Feb 03 12:31:13 crc kubenswrapper[4820]: I0203 12:31:13.527301 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/09cdfd30-121c-4d95-9a12-515eda5d3ba3-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-hmvhn\" (UID: \"09cdfd30-121c-4d95-9a12-515eda5d3ba3\") " pod="openstack/dnsmasq-dns-55f844cf75-hmvhn" Feb 03 12:31:13 crc kubenswrapper[4820]: I0203 12:31:13.527331 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/09cdfd30-121c-4d95-9a12-515eda5d3ba3-dns-svc\") pod \"dnsmasq-dns-55f844cf75-hmvhn\" (UID: \"09cdfd30-121c-4d95-9a12-515eda5d3ba3\") " pod="openstack/dnsmasq-dns-55f844cf75-hmvhn" Feb 03 12:31:13 crc kubenswrapper[4820]: I0203 12:31:13.527428 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/09cdfd30-121c-4d95-9a12-515eda5d3ba3-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-hmvhn\" (UID: \"09cdfd30-121c-4d95-9a12-515eda5d3ba3\") " pod="openstack/dnsmasq-dns-55f844cf75-hmvhn" Feb 03 12:31:13 crc kubenswrapper[4820]: I0203 12:31:13.527539 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a09e4336-af9d-4231-b744-1373af8ddfba-ovndb-tls-certs\") pod \"neutron-86c8ddbf74-xsj66\" (UID: \"a09e4336-af9d-4231-b744-1373af8ddfba\") " pod="openstack/neutron-86c8ddbf74-xsj66" Feb 03 12:31:13 crc kubenswrapper[4820]: I0203 12:31:13.527666 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/a09e4336-af9d-4231-b744-1373af8ddfba-config\") pod \"neutron-86c8ddbf74-xsj66\" (UID: \"a09e4336-af9d-4231-b744-1373af8ddfba\") " pod="openstack/neutron-86c8ddbf74-xsj66" Feb 03 12:31:13 crc kubenswrapper[4820]: I0203 12:31:13.527764 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09cdfd30-121c-4d95-9a12-515eda5d3ba3-config\") pod \"dnsmasq-dns-55f844cf75-hmvhn\" (UID: \"09cdfd30-121c-4d95-9a12-515eda5d3ba3\") " pod="openstack/dnsmasq-dns-55f844cf75-hmvhn" Feb 03 12:31:13 crc kubenswrapper[4820]: I0203 12:31:13.528221 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/a09e4336-af9d-4231-b744-1373af8ddfba-httpd-config\") pod \"neutron-86c8ddbf74-xsj66\" (UID: \"a09e4336-af9d-4231-b744-1373af8ddfba\") " pod="openstack/neutron-86c8ddbf74-xsj66" Feb 03 12:31:13 crc kubenswrapper[4820]: I0203 12:31:13.529230 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/09cdfd30-121c-4d95-9a12-515eda5d3ba3-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-hmvhn\" (UID: \"09cdfd30-121c-4d95-9a12-515eda5d3ba3\") " pod="openstack/dnsmasq-dns-55f844cf75-hmvhn" Feb 03 12:31:13 crc kubenswrapper[4820]: I0203 12:31:13.529609 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/09cdfd30-121c-4d95-9a12-515eda5d3ba3-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-hmvhn\" (UID: \"09cdfd30-121c-4d95-9a12-515eda5d3ba3\") " pod="openstack/dnsmasq-dns-55f844cf75-hmvhn" Feb 03 12:31:13 crc kubenswrapper[4820]: I0203 12:31:13.529752 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/09cdfd30-121c-4d95-9a12-515eda5d3ba3-dns-svc\") pod \"dnsmasq-dns-55f844cf75-hmvhn\" (UID: \"09cdfd30-121c-4d95-9a12-515eda5d3ba3\") " pod="openstack/dnsmasq-dns-55f844cf75-hmvhn" Feb 03 12:31:13 crc kubenswrapper[4820]: I0203 12:31:13.529763 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/09cdfd30-121c-4d95-9a12-515eda5d3ba3-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-hmvhn\" (UID: \"09cdfd30-121c-4d95-9a12-515eda5d3ba3\") " pod="openstack/dnsmasq-dns-55f844cf75-hmvhn" Feb 03 12:31:13 crc kubenswrapper[4820]: I0203 12:31:13.530005 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/09cdfd30-121c-4d95-9a12-515eda5d3ba3-config\") pod \"dnsmasq-dns-55f844cf75-hmvhn\" (UID: \"09cdfd30-121c-4d95-9a12-515eda5d3ba3\") " pod="openstack/dnsmasq-dns-55f844cf75-hmvhn" Feb 03 12:31:13 crc kubenswrapper[4820]: I0203 12:31:13.555341 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qvnlr\" (UniqueName: \"kubernetes.io/projected/09cdfd30-121c-4d95-9a12-515eda5d3ba3-kube-api-access-qvnlr\") pod \"dnsmasq-dns-55f844cf75-hmvhn\" (UID: \"09cdfd30-121c-4d95-9a12-515eda5d3ba3\") " pod="openstack/dnsmasq-dns-55f844cf75-hmvhn" Feb 03 12:31:13 crc kubenswrapper[4820]: I0203 12:31:13.630583 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zkf2r\" (UniqueName: \"kubernetes.io/projected/a09e4336-af9d-4231-b744-1373af8ddfba-kube-api-access-zkf2r\") pod \"neutron-86c8ddbf74-xsj66\" (UID: \"a09e4336-af9d-4231-b744-1373af8ddfba\") " pod="openstack/neutron-86c8ddbf74-xsj66" Feb 03 12:31:13 crc kubenswrapper[4820]: I0203 12:31:13.630635 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a09e4336-af9d-4231-b744-1373af8ddfba-combined-ca-bundle\") pod \"neutron-86c8ddbf74-xsj66\" (UID: \"a09e4336-af9d-4231-b744-1373af8ddfba\") " pod="openstack/neutron-86c8ddbf74-xsj66" Feb 03 12:31:13 crc kubenswrapper[4820]: I0203 12:31:13.630713 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a09e4336-af9d-4231-b744-1373af8ddfba-ovndb-tls-certs\") pod \"neutron-86c8ddbf74-xsj66\" (UID: \"a09e4336-af9d-4231-b744-1373af8ddfba\") " pod="openstack/neutron-86c8ddbf74-xsj66" Feb 03 12:31:13 crc kubenswrapper[4820]: I0203 12:31:13.630750 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/a09e4336-af9d-4231-b744-1373af8ddfba-config\") pod \"neutron-86c8ddbf74-xsj66\" (UID: \"a09e4336-af9d-4231-b744-1373af8ddfba\") " pod="openstack/neutron-86c8ddbf74-xsj66" Feb 03 12:31:13 crc kubenswrapper[4820]: I0203 12:31:13.630846 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/a09e4336-af9d-4231-b744-1373af8ddfba-httpd-config\") pod \"neutron-86c8ddbf74-xsj66\" (UID: \"a09e4336-af9d-4231-b744-1373af8ddfba\") " pod="openstack/neutron-86c8ddbf74-xsj66" Feb 03 12:31:13 crc kubenswrapper[4820]: I0203 12:31:13.636099 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/a09e4336-af9d-4231-b744-1373af8ddfba-httpd-config\") pod \"neutron-86c8ddbf74-xsj66\" (UID: \"a09e4336-af9d-4231-b744-1373af8ddfba\") " pod="openstack/neutron-86c8ddbf74-xsj66" Feb 03 12:31:13 crc kubenswrapper[4820]: I0203 12:31:13.636397 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/a09e4336-af9d-4231-b744-1373af8ddfba-config\") pod \"neutron-86c8ddbf74-xsj66\" (UID: \"a09e4336-af9d-4231-b744-1373af8ddfba\") " pod="openstack/neutron-86c8ddbf74-xsj66" Feb 03 12:31:13 crc kubenswrapper[4820]: I0203 12:31:13.637680 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a09e4336-af9d-4231-b744-1373af8ddfba-combined-ca-bundle\") pod \"neutron-86c8ddbf74-xsj66\" (UID: \"a09e4336-af9d-4231-b744-1373af8ddfba\") " 
pod="openstack/neutron-86c8ddbf74-xsj66" Feb 03 12:31:13 crc kubenswrapper[4820]: I0203 12:31:13.638296 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a09e4336-af9d-4231-b744-1373af8ddfba-ovndb-tls-certs\") pod \"neutron-86c8ddbf74-xsj66\" (UID: \"a09e4336-af9d-4231-b744-1373af8ddfba\") " pod="openstack/neutron-86c8ddbf74-xsj66" Feb 03 12:31:13 crc kubenswrapper[4820]: I0203 12:31:13.657647 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zkf2r\" (UniqueName: \"kubernetes.io/projected/a09e4336-af9d-4231-b744-1373af8ddfba-kube-api-access-zkf2r\") pod \"neutron-86c8ddbf74-xsj66\" (UID: \"a09e4336-af9d-4231-b744-1373af8ddfba\") " pod="openstack/neutron-86c8ddbf74-xsj66" Feb 03 12:31:13 crc kubenswrapper[4820]: I0203 12:31:13.768914 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-86c8ddbf74-xsj66" Feb 03 12:31:13 crc kubenswrapper[4820]: I0203 12:31:13.779867 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-hmvhn" Feb 03 12:31:16 crc kubenswrapper[4820]: E0203 12:31:16.211983 4820 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Feb 03 12:31:16 crc kubenswrapper[4820]: E0203 12:31:16.212782 4820 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lrjrf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:n
il,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-b4rms_openstack(4b8f1b47-829d-4da3-a6a9-b73bbe7d20c0): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 03 12:31:16 crc kubenswrapper[4820]: E0203 12:31:16.214286 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-b4rms" podUID="4b8f1b47-829d-4da3-a6a9-b73bbe7d20c0" Feb 03 12:31:16 crc kubenswrapper[4820]: E0203 12:31:16.801991 4820 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ceilometer-notification:current-podified" Feb 03 12:31:16 crc kubenswrapper[4820]: E0203 12:31:16.802498 4820 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-notification-agent,Image:quay.io/podified-antelope-centos9/openstack-ceilometer-notification:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ncbh5c5h94h69h5d6h5d8hfchd6h78h8dh564h7fhcfh64fh54h98h5dh68dh546h64dh59h5b7h5f4h88h56ch689h56bh674h664h5fdhf6hfq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-notification-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fftdb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/notificationhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(e2cc54f2-167c-4c79-b616-2e1cd122fed2): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Feb 03 12:31:16 crc kubenswrapper[4820]: I0203 12:31:16.811305 4820 scope.go:117] "RemoveContainer" containerID="247e3c9da2b66de7933df2e610563093cec9ab654304f0bbf0826f3a6039c4e8"
Feb 03 12:31:16 crc kubenswrapper[4820]: I0203 12:31:16.830835 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"38fcd454-4d58-42a9-beb1-d8640ab7a9a7","Type":"ContainerDied","Data":"c674587267136284603eaf6c68dd4cefb215b289c038866ed6b4e1f2b5173adf"}
Feb 03 12:31:16 crc kubenswrapper[4820]: I0203 12:31:16.830917 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c674587267136284603eaf6c68dd4cefb215b289c038866ed6b4e1f2b5173adf"
Feb 03 12:31:16 crc kubenswrapper[4820]: E0203 12:31:16.875633 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-b4rms" podUID="4b8f1b47-829d-4da3-a6a9-b73bbe7d20c0"
Feb 03 12:31:16 crc kubenswrapper[4820]: I0203 12:31:16.925273 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0"
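The ErrImagePull entries at 12:31:16.21 and the ImagePullBackOff entry at 12:31:16.875 for cinder-db-sync-b4rms show the usual progression: a failed (here, canceled) pull is recorded, and subsequent pod syncs are gated by a grow-and-cap backoff instead of re-pulling immediately. A sketch of that gating, with assumed illustrative delays rather than kubelet's exact parameters:

    package main

    import (
    	"fmt"
    	"time"
    )

    // After a pull fails, retries wait for an exponentially growing,
    // capped delay; syncs inside the window report ImagePullBackOff.
    // Illustrative only; not kubelet's actual backoff code.
    type backoff struct {
    	next  time.Time
    	delay time.Duration
    }

    func (b *backoff) shouldPull(now time.Time) bool { return !now.Before(b.next) }

    func (b *backoff) recordFailure(now time.Time) {
    	if b.delay == 0 {
    		b.delay = 10 * time.Second
    	} else if b.delay < 5*time.Minute {
    		b.delay *= 2 // 10s, 20s, 40s, ... capped at 5m
    	}
    	b.next = now.Add(b.delay)
    }

    func main() {
    	var b backoff
    	now := time.Now()
    	b.recordFailure(now)                                 // ErrImagePull observed
    	fmt.Println(b.shouldPull(now))                       // false -> ImagePullBackOff
    	fmt.Println(b.shouldPull(now.Add(11 * time.Second))) // true -> retry the pull
    }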
Feb 03 12:31:17 crc kubenswrapper[4820]: I0203 12:31:17.012752 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38fcd454-4d58-42a9-beb1-d8640ab7a9a7-combined-ca-bundle\") pod \"38fcd454-4d58-42a9-beb1-d8640ab7a9a7\" (UID: \"38fcd454-4d58-42a9-beb1-d8640ab7a9a7\") "
Feb 03 12:31:17 crc kubenswrapper[4820]: I0203 12:31:17.015608 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/38fcd454-4d58-42a9-beb1-d8640ab7a9a7-logs\") pod \"38fcd454-4d58-42a9-beb1-d8640ab7a9a7\" (UID: \"38fcd454-4d58-42a9-beb1-d8640ab7a9a7\") "
Feb 03 12:31:17 crc kubenswrapper[4820]: I0203 12:31:17.015682 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38fcd454-4d58-42a9-beb1-d8640ab7a9a7-config-data\") pod \"38fcd454-4d58-42a9-beb1-d8640ab7a9a7\" (UID: \"38fcd454-4d58-42a9-beb1-d8640ab7a9a7\") "
Feb 03 12:31:17 crc kubenswrapper[4820]: I0203 12:31:17.015740 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfk7j\" (UniqueName: \"kubernetes.io/projected/38fcd454-4d58-42a9-beb1-d8640ab7a9a7-kube-api-access-cfk7j\") pod \"38fcd454-4d58-42a9-beb1-d8640ab7a9a7\" (UID: \"38fcd454-4d58-42a9-beb1-d8640ab7a9a7\") "
Feb 03 12:31:17 crc kubenswrapper[4820]: I0203 12:31:17.015844 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/38fcd454-4d58-42a9-beb1-d8640ab7a9a7-custom-prometheus-ca\") pod \"38fcd454-4d58-42a9-beb1-d8640ab7a9a7\" (UID: \"38fcd454-4d58-42a9-beb1-d8640ab7a9a7\") "
Feb 03 12:31:17 crc kubenswrapper[4820]: I0203 12:31:17.016335 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/38fcd454-4d58-42a9-beb1-d8640ab7a9a7-logs" (OuterVolumeSpecName: "logs") pod "38fcd454-4d58-42a9-beb1-d8640ab7a9a7" (UID: "38fcd454-4d58-42a9-beb1-d8640ab7a9a7"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 03 12:31:17 crc kubenswrapper[4820]: I0203 12:31:17.017055 4820 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/38fcd454-4d58-42a9-beb1-d8640ab7a9a7-logs\") on node \"crc\" DevicePath \"\""
Feb 03 12:31:17 crc kubenswrapper[4820]: I0203 12:31:17.049694 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38fcd454-4d58-42a9-beb1-d8640ab7a9a7-kube-api-access-cfk7j" (OuterVolumeSpecName: "kube-api-access-cfk7j") pod "38fcd454-4d58-42a9-beb1-d8640ab7a9a7" (UID: "38fcd454-4d58-42a9-beb1-d8640ab7a9a7"). InnerVolumeSpecName "kube-api-access-cfk7j". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 12:31:17 crc kubenswrapper[4820]: I0203 12:31:17.498708 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfk7j\" (UniqueName: \"kubernetes.io/projected/38fcd454-4d58-42a9-beb1-d8640ab7a9a7-kube-api-access-cfk7j\") on node \"crc\" DevicePath \"\""
Feb 03 12:31:17 crc kubenswrapper[4820]: I0203 12:31:17.525767 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38fcd454-4d58-42a9-beb1-d8640ab7a9a7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "38fcd454-4d58-42a9-beb1-d8640ab7a9a7" (UID: "38fcd454-4d58-42a9-beb1-d8640ab7a9a7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 12:31:17 crc kubenswrapper[4820]: I0203 12:31:17.530345 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38fcd454-4d58-42a9-beb1-d8640ab7a9a7-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "38fcd454-4d58-42a9-beb1-d8640ab7a9a7" (UID: "38fcd454-4d58-42a9-beb1-d8640ab7a9a7"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 12:31:17 crc kubenswrapper[4820]: I0203 12:31:17.570681 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38fcd454-4d58-42a9-beb1-d8640ab7a9a7-config-data" (OuterVolumeSpecName: "config-data") pod "38fcd454-4d58-42a9-beb1-d8640ab7a9a7" (UID: "38fcd454-4d58-42a9-beb1-d8640ab7a9a7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 12:31:17 crc kubenswrapper[4820]: I0203 12:31:17.601912 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38fcd454-4d58-42a9-beb1-d8640ab7a9a7-config-data\") on node \"crc\" DevicePath \"\""
Feb 03 12:31:17 crc kubenswrapper[4820]: I0203 12:31:17.601973 4820 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/38fcd454-4d58-42a9-beb1-d8640ab7a9a7-custom-prometheus-ca\") on node \"crc\" DevicePath \"\""
Feb 03 12:31:17 crc kubenswrapper[4820]: I0203 12:31:17.601989 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38fcd454-4d58-42a9-beb1-d8640ab7a9a7-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 03 12:31:17 crc kubenswrapper[4820]: I0203 12:31:17.887196 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0"
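This teardown sequence (UnmountVolume started, TearDown succeeded, then "Volume detached") is the unmount half of kubelet's volume manager reconciling actual state against desired state; the mount half is the VerifyControllerAttachedVolume / MountVolume / "SetUp succeeded" chain visible at 12:31:13 above. A toy sketch of that reconciliation loop, with simplified illustrative types rather than the real volume manager:

    package main

    import "fmt"

    // Volumes present in actual state but absent from desired state are
    // unmounted, and vice versa. Simplified sketch, not kubelet's code.
    func reconcile(desired, actual map[string]bool) {
    	for vol := range actual {
    		if !desired[vol] {
    			fmt.Printf("operationExecutor.UnmountVolume started for volume %q\n", vol)
    			delete(actual, vol) // "Volume detached" once TearDown succeeds
    		}
    	}
    	for vol := range desired {
    		if !actual[vol] {
    			fmt.Printf("operationExecutor.MountVolume started for volume %q\n", vol)
    			actual[vol] = true // "MountVolume.SetUp succeeded"
    		}
    	}
    }

    func main() {
    	desired := map[string]bool{"config-data": true}
    	actual := map[string]bool{"logs": true}
    	reconcile(desired, actual) // unmounts "logs", mounts "config-data"
    }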
Feb 03 12:31:17 crc kubenswrapper[4820]: E0203 12:31:17.900165 4820 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod38fcd454_4d58_42a9_beb1_d8640ab7a9a7.slice/crio-c674587267136284603eaf6c68dd4cefb215b289c038866ed6b4e1f2b5173adf\": RecentStats: unable to find data in memory cache]"
Feb 03 12:31:17 crc kubenswrapper[4820]: I0203 12:31:17.917535 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-api-0"]
Feb 03 12:31:17 crc kubenswrapper[4820]: I0203 12:31:17.939177 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-api-0"]
Feb 03 12:31:17 crc kubenswrapper[4820]: I0203 12:31:17.967118 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-api-0"]
Feb 03 12:31:17 crc kubenswrapper[4820]: E0203 12:31:17.967715 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38fcd454-4d58-42a9-beb1-d8640ab7a9a7" containerName="watcher-api-log"
Feb 03 12:31:17 crc kubenswrapper[4820]: I0203 12:31:17.967729 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="38fcd454-4d58-42a9-beb1-d8640ab7a9a7" containerName="watcher-api-log"
Feb 03 12:31:17 crc kubenswrapper[4820]: E0203 12:31:17.967746 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38fcd454-4d58-42a9-beb1-d8640ab7a9a7" containerName="watcher-api"
Feb 03 12:31:17 crc kubenswrapper[4820]: I0203 12:31:17.967752 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="38fcd454-4d58-42a9-beb1-d8640ab7a9a7" containerName="watcher-api"
Feb 03 12:31:17 crc kubenswrapper[4820]: I0203 12:31:17.968014 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="38fcd454-4d58-42a9-beb1-d8640ab7a9a7" containerName="watcher-api"
Feb 03 12:31:17 crc kubenswrapper[4820]: I0203 12:31:17.968049 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="38fcd454-4d58-42a9-beb1-d8640ab7a9a7" containerName="watcher-api-log"
Feb 03 12:31:17 crc kubenswrapper[4820]: I0203 12:31:17.969281 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-api-0" Feb 03 12:31:17 crc kubenswrapper[4820]: I0203 12:31:17.972223 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-api-config-data" Feb 03 12:31:17 crc kubenswrapper[4820]: I0203 12:31:17.972483 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-watcher-public-svc" Feb 03 12:31:17 crc kubenswrapper[4820]: I0203 12:31:17.973992 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-watcher-internal-svc" Feb 03 12:31:17 crc kubenswrapper[4820]: I0203 12:31:17.983233 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7cd4de1e-997d-4df1-9ad5-2049937ab135-public-tls-certs\") pod \"watcher-api-0\" (UID: \"7cd4de1e-997d-4df1-9ad5-2049937ab135\") " pod="openstack/watcher-api-0" Feb 03 12:31:17 crc kubenswrapper[4820]: I0203 12:31:17.984179 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cd4de1e-997d-4df1-9ad5-2049937ab135-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"7cd4de1e-997d-4df1-9ad5-2049937ab135\") " pod="openstack/watcher-api-0" Feb 03 12:31:17 crc kubenswrapper[4820]: I0203 12:31:17.984254 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k565g\" (UniqueName: \"kubernetes.io/projected/7cd4de1e-997d-4df1-9ad5-2049937ab135-kube-api-access-k565g\") pod \"watcher-api-0\" (UID: \"7cd4de1e-997d-4df1-9ad5-2049937ab135\") " pod="openstack/watcher-api-0" Feb 03 12:31:17 crc kubenswrapper[4820]: I0203 12:31:17.984480 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7cd4de1e-997d-4df1-9ad5-2049937ab135-config-data\") pod \"watcher-api-0\" (UID: \"7cd4de1e-997d-4df1-9ad5-2049937ab135\") " pod="openstack/watcher-api-0" Feb 03 12:31:17 crc kubenswrapper[4820]: I0203 12:31:17.984568 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7cd4de1e-997d-4df1-9ad5-2049937ab135-logs\") pod \"watcher-api-0\" (UID: \"7cd4de1e-997d-4df1-9ad5-2049937ab135\") " pod="openstack/watcher-api-0" Feb 03 12:31:17 crc kubenswrapper[4820]: I0203 12:31:17.984663 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/7cd4de1e-997d-4df1-9ad5-2049937ab135-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"7cd4de1e-997d-4df1-9ad5-2049937ab135\") " pod="openstack/watcher-api-0" Feb 03 12:31:17 crc kubenswrapper[4820]: I0203 12:31:17.984689 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7cd4de1e-997d-4df1-9ad5-2049937ab135-internal-tls-certs\") pod \"watcher-api-0\" (UID: \"7cd4de1e-997d-4df1-9ad5-2049937ab135\") " pod="openstack/watcher-api-0" Feb 03 12:31:17 crc kubenswrapper[4820]: I0203 12:31:17.985398 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-t4pzw"] Feb 03 12:31:18 crc kubenswrapper[4820]: I0203 12:31:18.007067 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Feb 03 12:31:18 crc 
kubenswrapper[4820]: I0203 12:31:18.093769 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/7cd4de1e-997d-4df1-9ad5-2049937ab135-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"7cd4de1e-997d-4df1-9ad5-2049937ab135\") " pod="openstack/watcher-api-0" Feb 03 12:31:18 crc kubenswrapper[4820]: I0203 12:31:18.093828 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7cd4de1e-997d-4df1-9ad5-2049937ab135-internal-tls-certs\") pod \"watcher-api-0\" (UID: \"7cd4de1e-997d-4df1-9ad5-2049937ab135\") " pod="openstack/watcher-api-0" Feb 03 12:31:18 crc kubenswrapper[4820]: I0203 12:31:18.093911 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7cd4de1e-997d-4df1-9ad5-2049937ab135-public-tls-certs\") pod \"watcher-api-0\" (UID: \"7cd4de1e-997d-4df1-9ad5-2049937ab135\") " pod="openstack/watcher-api-0" Feb 03 12:31:18 crc kubenswrapper[4820]: I0203 12:31:18.093949 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cd4de1e-997d-4df1-9ad5-2049937ab135-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"7cd4de1e-997d-4df1-9ad5-2049937ab135\") " pod="openstack/watcher-api-0" Feb 03 12:31:18 crc kubenswrapper[4820]: I0203 12:31:18.093977 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k565g\" (UniqueName: \"kubernetes.io/projected/7cd4de1e-997d-4df1-9ad5-2049937ab135-kube-api-access-k565g\") pod \"watcher-api-0\" (UID: \"7cd4de1e-997d-4df1-9ad5-2049937ab135\") " pod="openstack/watcher-api-0" Feb 03 12:31:18 crc kubenswrapper[4820]: I0203 12:31:18.094042 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7cd4de1e-997d-4df1-9ad5-2049937ab135-config-data\") pod \"watcher-api-0\" (UID: \"7cd4de1e-997d-4df1-9ad5-2049937ab135\") " pod="openstack/watcher-api-0" Feb 03 12:31:18 crc kubenswrapper[4820]: I0203 12:31:18.094082 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7cd4de1e-997d-4df1-9ad5-2049937ab135-logs\") pod \"watcher-api-0\" (UID: \"7cd4de1e-997d-4df1-9ad5-2049937ab135\") " pod="openstack/watcher-api-0" Feb 03 12:31:18 crc kubenswrapper[4820]: I0203 12:31:18.094744 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7cd4de1e-997d-4df1-9ad5-2049937ab135-logs\") pod \"watcher-api-0\" (UID: \"7cd4de1e-997d-4df1-9ad5-2049937ab135\") " pod="openstack/watcher-api-0" Feb 03 12:31:18 crc kubenswrapper[4820]: I0203 12:31:18.108079 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7cd4de1e-997d-4df1-9ad5-2049937ab135-config-data\") pod \"watcher-api-0\" (UID: \"7cd4de1e-997d-4df1-9ad5-2049937ab135\") " pod="openstack/watcher-api-0" Feb 03 12:31:18 crc kubenswrapper[4820]: I0203 12:31:18.108249 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7cd4de1e-997d-4df1-9ad5-2049937ab135-public-tls-certs\") pod \"watcher-api-0\" (UID: \"7cd4de1e-997d-4df1-9ad5-2049937ab135\") " pod="openstack/watcher-api-0" Feb 03 12:31:18 crc 
kubenswrapper[4820]: I0203 12:31:18.108733 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7cd4de1e-997d-4df1-9ad5-2049937ab135-internal-tls-certs\") pod \"watcher-api-0\" (UID: \"7cd4de1e-997d-4df1-9ad5-2049937ab135\") " pod="openstack/watcher-api-0" Feb 03 12:31:18 crc kubenswrapper[4820]: I0203 12:31:18.109649 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cd4de1e-997d-4df1-9ad5-2049937ab135-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"7cd4de1e-997d-4df1-9ad5-2049937ab135\") " pod="openstack/watcher-api-0" Feb 03 12:31:18 crc kubenswrapper[4820]: I0203 12:31:18.116148 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k565g\" (UniqueName: \"kubernetes.io/projected/7cd4de1e-997d-4df1-9ad5-2049937ab135-kube-api-access-k565g\") pod \"watcher-api-0\" (UID: \"7cd4de1e-997d-4df1-9ad5-2049937ab135\") " pod="openstack/watcher-api-0" Feb 03 12:31:18 crc kubenswrapper[4820]: I0203 12:31:18.116700 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/7cd4de1e-997d-4df1-9ad5-2049937ab135-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"7cd4de1e-997d-4df1-9ad5-2049937ab135\") " pod="openstack/watcher-api-0" Feb 03 12:31:18 crc kubenswrapper[4820]: I0203 12:31:18.307300 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Feb 03 12:31:18 crc kubenswrapper[4820]: I0203 12:31:18.340620 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-8ff956445-pzzpk"] Feb 03 12:31:18 crc kubenswrapper[4820]: I0203 12:31:18.344815 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-8ff956445-pzzpk" Feb 03 12:31:18 crc kubenswrapper[4820]: I0203 12:31:18.367419 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Feb 03 12:31:18 crc kubenswrapper[4820]: I0203 12:31:18.367603 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Feb 03 12:31:18 crc kubenswrapper[4820]: I0203 12:31:18.368035 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-8ff956445-pzzpk"] Feb 03 12:31:18 crc kubenswrapper[4820]: I0203 12:31:18.492452 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-hmvhn"] Feb 03 12:31:18 crc kubenswrapper[4820]: I0203 12:31:18.503610 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/156cf9db-e6bb-486e-b3b5-e72d4f99e684-internal-tls-certs\") pod \"neutron-8ff956445-pzzpk\" (UID: \"156cf9db-e6bb-486e-b3b5-e72d4f99e684\") " pod="openstack/neutron-8ff956445-pzzpk" Feb 03 12:31:18 crc kubenswrapper[4820]: I0203 12:31:18.503990 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/156cf9db-e6bb-486e-b3b5-e72d4f99e684-ovndb-tls-certs\") pod \"neutron-8ff956445-pzzpk\" (UID: \"156cf9db-e6bb-486e-b3b5-e72d4f99e684\") " pod="openstack/neutron-8ff956445-pzzpk" Feb 03 12:31:18 crc kubenswrapper[4820]: I0203 12:31:18.504066 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/156cf9db-e6bb-486e-b3b5-e72d4f99e684-public-tls-certs\") pod \"neutron-8ff956445-pzzpk\" (UID: \"156cf9db-e6bb-486e-b3b5-e72d4f99e684\") " pod="openstack/neutron-8ff956445-pzzpk" Feb 03 12:31:18 crc kubenswrapper[4820]: I0203 12:31:18.504104 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/156cf9db-e6bb-486e-b3b5-e72d4f99e684-httpd-config\") pod \"neutron-8ff956445-pzzpk\" (UID: \"156cf9db-e6bb-486e-b3b5-e72d4f99e684\") " pod="openstack/neutron-8ff956445-pzzpk" Feb 03 12:31:18 crc kubenswrapper[4820]: I0203 12:31:18.504215 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/156cf9db-e6bb-486e-b3b5-e72d4f99e684-config\") pod \"neutron-8ff956445-pzzpk\" (UID: \"156cf9db-e6bb-486e-b3b5-e72d4f99e684\") " pod="openstack/neutron-8ff956445-pzzpk" Feb 03 12:31:18 crc kubenswrapper[4820]: I0203 12:31:18.504256 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rqz5x\" (UniqueName: \"kubernetes.io/projected/156cf9db-e6bb-486e-b3b5-e72d4f99e684-kube-api-access-rqz5x\") pod \"neutron-8ff956445-pzzpk\" (UID: \"156cf9db-e6bb-486e-b3b5-e72d4f99e684\") " pod="openstack/neutron-8ff956445-pzzpk" Feb 03 12:31:18 crc kubenswrapper[4820]: I0203 12:31:18.504342 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/156cf9db-e6bb-486e-b3b5-e72d4f99e684-combined-ca-bundle\") pod \"neutron-8ff956445-pzzpk\" (UID: \"156cf9db-e6bb-486e-b3b5-e72d4f99e684\") " pod="openstack/neutron-8ff956445-pzzpk" Feb 03 12:31:18 crc kubenswrapper[4820]: 
W0203 12:31:18.505457 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod09cdfd30_121c_4d95_9a12_515eda5d3ba3.slice/crio-0ba75ed4e3a00a5013be5595a1e32aa89c0cc94ed25311dbc535b976aeb5e2ec WatchSource:0}: Error finding container 0ba75ed4e3a00a5013be5595a1e32aa89c0cc94ed25311dbc535b976aeb5e2ec: Status 404 returned error can't find the container with id 0ba75ed4e3a00a5013be5595a1e32aa89c0cc94ed25311dbc535b976aeb5e2ec Feb 03 12:31:18 crc kubenswrapper[4820]: I0203 12:31:18.609286 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/156cf9db-e6bb-486e-b3b5-e72d4f99e684-config\") pod \"neutron-8ff956445-pzzpk\" (UID: \"156cf9db-e6bb-486e-b3b5-e72d4f99e684\") " pod="openstack/neutron-8ff956445-pzzpk" Feb 03 12:31:18 crc kubenswrapper[4820]: I0203 12:31:18.609370 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rqz5x\" (UniqueName: \"kubernetes.io/projected/156cf9db-e6bb-486e-b3b5-e72d4f99e684-kube-api-access-rqz5x\") pod \"neutron-8ff956445-pzzpk\" (UID: \"156cf9db-e6bb-486e-b3b5-e72d4f99e684\") " pod="openstack/neutron-8ff956445-pzzpk" Feb 03 12:31:18 crc kubenswrapper[4820]: I0203 12:31:18.609471 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/156cf9db-e6bb-486e-b3b5-e72d4f99e684-combined-ca-bundle\") pod \"neutron-8ff956445-pzzpk\" (UID: \"156cf9db-e6bb-486e-b3b5-e72d4f99e684\") " pod="openstack/neutron-8ff956445-pzzpk" Feb 03 12:31:18 crc kubenswrapper[4820]: I0203 12:31:18.609559 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/156cf9db-e6bb-486e-b3b5-e72d4f99e684-internal-tls-certs\") pod \"neutron-8ff956445-pzzpk\" (UID: \"156cf9db-e6bb-486e-b3b5-e72d4f99e684\") " pod="openstack/neutron-8ff956445-pzzpk" Feb 03 12:31:18 crc kubenswrapper[4820]: I0203 12:31:18.609589 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/156cf9db-e6bb-486e-b3b5-e72d4f99e684-ovndb-tls-certs\") pod \"neutron-8ff956445-pzzpk\" (UID: \"156cf9db-e6bb-486e-b3b5-e72d4f99e684\") " pod="openstack/neutron-8ff956445-pzzpk" Feb 03 12:31:18 crc kubenswrapper[4820]: I0203 12:31:18.609649 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/156cf9db-e6bb-486e-b3b5-e72d4f99e684-public-tls-certs\") pod \"neutron-8ff956445-pzzpk\" (UID: \"156cf9db-e6bb-486e-b3b5-e72d4f99e684\") " pod="openstack/neutron-8ff956445-pzzpk" Feb 03 12:31:18 crc kubenswrapper[4820]: I0203 12:31:18.609684 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/156cf9db-e6bb-486e-b3b5-e72d4f99e684-httpd-config\") pod \"neutron-8ff956445-pzzpk\" (UID: \"156cf9db-e6bb-486e-b3b5-e72d4f99e684\") " pod="openstack/neutron-8ff956445-pzzpk" Feb 03 12:31:18 crc kubenswrapper[4820]: I0203 12:31:18.617092 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/156cf9db-e6bb-486e-b3b5-e72d4f99e684-config\") pod \"neutron-8ff956445-pzzpk\" (UID: \"156cf9db-e6bb-486e-b3b5-e72d4f99e684\") " pod="openstack/neutron-8ff956445-pzzpk" Feb 03 12:31:18 crc kubenswrapper[4820]: I0203 
12:31:18.626128 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/156cf9db-e6bb-486e-b3b5-e72d4f99e684-public-tls-certs\") pod \"neutron-8ff956445-pzzpk\" (UID: \"156cf9db-e6bb-486e-b3b5-e72d4f99e684\") " pod="openstack/neutron-8ff956445-pzzpk" Feb 03 12:31:18 crc kubenswrapper[4820]: I0203 12:31:18.629771 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/156cf9db-e6bb-486e-b3b5-e72d4f99e684-combined-ca-bundle\") pod \"neutron-8ff956445-pzzpk\" (UID: \"156cf9db-e6bb-486e-b3b5-e72d4f99e684\") " pod="openstack/neutron-8ff956445-pzzpk" Feb 03 12:31:18 crc kubenswrapper[4820]: I0203 12:31:18.634582 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/156cf9db-e6bb-486e-b3b5-e72d4f99e684-ovndb-tls-certs\") pod \"neutron-8ff956445-pzzpk\" (UID: \"156cf9db-e6bb-486e-b3b5-e72d4f99e684\") " pod="openstack/neutron-8ff956445-pzzpk" Feb 03 12:31:18 crc kubenswrapper[4820]: I0203 12:31:18.643759 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/156cf9db-e6bb-486e-b3b5-e72d4f99e684-httpd-config\") pod \"neutron-8ff956445-pzzpk\" (UID: \"156cf9db-e6bb-486e-b3b5-e72d4f99e684\") " pod="openstack/neutron-8ff956445-pzzpk" Feb 03 12:31:18 crc kubenswrapper[4820]: I0203 12:31:18.666474 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/156cf9db-e6bb-486e-b3b5-e72d4f99e684-internal-tls-certs\") pod \"neutron-8ff956445-pzzpk\" (UID: \"156cf9db-e6bb-486e-b3b5-e72d4f99e684\") " pod="openstack/neutron-8ff956445-pzzpk" Feb 03 12:31:18 crc kubenswrapper[4820]: I0203 12:31:18.692652 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rqz5x\" (UniqueName: \"kubernetes.io/projected/156cf9db-e6bb-486e-b3b5-e72d4f99e684-kube-api-access-rqz5x\") pod \"neutron-8ff956445-pzzpk\" (UID: \"156cf9db-e6bb-486e-b3b5-e72d4f99e684\") " pod="openstack/neutron-8ff956445-pzzpk" Feb 03 12:31:18 crc kubenswrapper[4820]: I0203 12:31:18.729953 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-8ff956445-pzzpk" Feb 03 12:31:18 crc kubenswrapper[4820]: I0203 12:31:18.828224 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-86c8ddbf74-xsj66"] Feb 03 12:31:18 crc kubenswrapper[4820]: I0203 12:31:18.937664 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"edffb607-1bfe-4aa0-a39a-2f65dbd5077b","Type":"ContainerStarted","Data":"0c62c8d40e2374efec58a1d6200883c3886262bc971d0dddd6c3071627c32404"} Feb 03 12:31:18 crc kubenswrapper[4820]: I0203 12:31:18.942091 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-68b4df5bdd-tdb9h" event={"ID":"308562dd-6078-4c1c-a4e0-c01a60a2d81d","Type":"ContainerStarted","Data":"7bedb750f024810451533efbf25b920a0f4ca140ee2ebfa47692437c76dc2542"} Feb 03 12:31:18 crc kubenswrapper[4820]: I0203 12:31:18.945591 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-t4pzw" event={"ID":"d6da87e1-3451-48c6-b2ad-368bf3139a57","Type":"ContainerStarted","Data":"6af213d8f362ef87845889f66d36bd8215aa0184e66434cb42d8b8540181c65b"} Feb 03 12:31:18 crc kubenswrapper[4820]: I0203 12:31:18.945644 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-t4pzw" event={"ID":"d6da87e1-3451-48c6-b2ad-368bf3139a57","Type":"ContainerStarted","Data":"cad7f5e9e402becf9a4b98092b675964479ac8a906e7d74a281b6c1069d45259"} Feb 03 12:31:18 crc kubenswrapper[4820]: I0203 12:31:18.949137 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"457bfab7-1523-4ef8-b7f1-a6d0d54351e4","Type":"ContainerStarted","Data":"99cc1a5e327a494a909e2431d2d79b83ea5e05bb06dacc098d8eee0beae4562d"} Feb 03 12:31:18 crc kubenswrapper[4820]: I0203 12:31:18.953188 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"6ed16a73-0e39-4ac4-bd01-820e6a7a45b0","Type":"ContainerStarted","Data":"56018e66d83607544161f0ff0ec0dd0d50f79a8e1277be5c976fb17f3c8532db"} Feb 03 12:31:18 crc kubenswrapper[4820]: I0203 12:31:18.963677 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-hmvhn" event={"ID":"09cdfd30-121c-4d95-9a12-515eda5d3ba3","Type":"ContainerStarted","Data":"0ba75ed4e3a00a5013be5595a1e32aa89c0cc94ed25311dbc535b976aeb5e2ec"} Feb 03 12:31:19 crc kubenswrapper[4820]: I0203 12:31:18.984449 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-t4pzw" podStartSLOduration=20.9844007 podStartE2EDuration="20.9844007s" podCreationTimestamp="2026-02-03 12:30:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:31:18.970509985 +0000 UTC m=+1596.493585859" watchObservedRunningTime="2026-02-03 12:31:18.9844007 +0000 UTC m=+1596.507502944" Feb 03 12:31:19 crc kubenswrapper[4820]: I0203 12:31:18.987632 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-86c8ddbf74-xsj66" event={"ID":"a09e4336-af9d-4231-b744-1373af8ddfba","Type":"ContainerStarted","Data":"69fa5e1d7ca61f93515e493577c9b40364231ce7157c94ffb4e22f8f09cc0248"} Feb 03 12:31:19 crc kubenswrapper[4820]: I0203 12:31:19.008759 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-applier-0" podStartSLOduration=11.588612223 podStartE2EDuration="59.008735846s" podCreationTimestamp="2026-02-03 
12:30:20 +0000 UTC" firstStartedPulling="2026-02-03 12:30:24.178091579 +0000 UTC m=+1541.701167443" lastFinishedPulling="2026-02-03 12:31:11.598215202 +0000 UTC m=+1589.121291066" observedRunningTime="2026-02-03 12:31:19.000165855 +0000 UTC m=+1596.523241739" watchObservedRunningTime="2026-02-03 12:31:19.008735846 +0000 UTC m=+1596.531811720" Feb 03 12:31:19 crc kubenswrapper[4820]: I0203 12:31:19.118928 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Feb 03 12:31:19 crc kubenswrapper[4820]: I0203 12:31:19.174220 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="38fcd454-4d58-42a9-beb1-d8640ab7a9a7" path="/var/lib/kubelet/pods/38fcd454-4d58-42a9-beb1-d8640ab7a9a7/volumes" Feb 03 12:31:19 crc kubenswrapper[4820]: W0203 12:31:19.206790 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7cd4de1e_997d_4df1_9ad5_2049937ab135.slice/crio-e75f9b07f8e626cf2bbab19aa6136f906623df3595571d33f999cfcf62d36045 WatchSource:0}: Error finding container e75f9b07f8e626cf2bbab19aa6136f906623df3595571d33f999cfcf62d36045: Status 404 returned error can't find the container with id e75f9b07f8e626cf2bbab19aa6136f906623df3595571d33f999cfcf62d36045 Feb 03 12:31:19 crc kubenswrapper[4820]: I0203 12:31:19.559792 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-8ff956445-pzzpk"] Feb 03 12:31:20 crc kubenswrapper[4820]: I0203 12:31:20.021911 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8ff956445-pzzpk" event={"ID":"156cf9db-e6bb-486e-b3b5-e72d4f99e684","Type":"ContainerStarted","Data":"a3ee154e29360350211342ac6f930ab89dadcf3959e8c6f158c9b81400f57034"} Feb 03 12:31:20 crc kubenswrapper[4820]: I0203 12:31:20.058931 4820 generic.go:334] "Generic (PLEG): container finished" podID="09cdfd30-121c-4d95-9a12-515eda5d3ba3" containerID="0b6ab62c7e4f3f1e72035ba2efe6dd41845452b2781b1fcae30d6cc43eb978ab" exitCode=0 Feb 03 12:31:20 crc kubenswrapper[4820]: I0203 12:31:20.059055 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-hmvhn" event={"ID":"09cdfd30-121c-4d95-9a12-515eda5d3ba3","Type":"ContainerDied","Data":"0b6ab62c7e4f3f1e72035ba2efe6dd41845452b2781b1fcae30d6cc43eb978ab"} Feb 03 12:31:20 crc kubenswrapper[4820]: I0203 12:31:20.080467 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-86c8ddbf74-xsj66" event={"ID":"a09e4336-af9d-4231-b744-1373af8ddfba","Type":"ContainerStarted","Data":"5730a5f27ede87ff81d58383f5a5c3644ae00f7510ba2b6e5765006eddf383e2"} Feb 03 12:31:20 crc kubenswrapper[4820]: I0203 12:31:20.112025 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5fdc8588b4-jtjr8" event={"ID":"17c371f7-f032-4444-8d4b-1183a224c7b0","Type":"ContainerStarted","Data":"6e25521b0d495326fa22bb05386fb22e76c170fdbbec9bdbeb0b2eb340a1829a"} Feb 03 12:31:20 crc kubenswrapper[4820]: I0203 12:31:20.218395 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-xsjm7" event={"ID":"f4116aff-b63f-47f1-b4bd-5bde84226d87","Type":"ContainerStarted","Data":"b9c654f89c5faf8645b86bb21d48eed1f5fc4a23ad64e36037645d99f54d6462"} Feb 03 12:31:20 crc kubenswrapper[4820]: I0203 12:31:20.252324 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-68b4df5bdd-tdb9h" 
event={"ID":"308562dd-6078-4c1c-a4e0-c01a60a2d81d","Type":"ContainerStarted","Data":"a17b9aafc2fe0ed01eea3ac2324d99cc7383038c9d824a7384a0dcac0217d20f"}
Feb 03 12:31:20 crc kubenswrapper[4820]: I0203 12:31:20.267786 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-xsjm7" podStartSLOduration=5.517282907 podStartE2EDuration="1m9.267755875s" podCreationTimestamp="2026-02-03 12:30:11 +0000 UTC" firstStartedPulling="2026-02-03 12:30:14.289690863 +0000 UTC m=+1531.812766727" lastFinishedPulling="2026-02-03 12:31:18.040163831 +0000 UTC m=+1595.563239695" observedRunningTime="2026-02-03 12:31:20.241566537 +0000 UTC m=+1597.764642411" watchObservedRunningTime="2026-02-03 12:31:20.267755875 +0000 UTC m=+1597.790831739"
Feb 03 12:31:20 crc kubenswrapper[4820]: I0203 12:31:20.279780 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"7cd4de1e-997d-4df1-9ad5-2049937ab135","Type":"ContainerStarted","Data":"e75f9b07f8e626cf2bbab19aa6136f906623df3595571d33f999cfcf62d36045"}
Feb 03 12:31:20 crc kubenswrapper[4820]: I0203 12:31:20.288716 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-68b4df5bdd-tdb9h" podStartSLOduration=5.967264052 podStartE2EDuration="58.28869142s" podCreationTimestamp="2026-02-03 12:30:22 +0000 UTC" firstStartedPulling="2026-02-03 12:30:25.586681577 +0000 UTC m=+1543.109757441" lastFinishedPulling="2026-02-03 12:31:17.908108945 +0000 UTC m=+1595.431184809" observedRunningTime="2026-02-03 12:31:20.282198975 +0000 UTC m=+1597.805274849" watchObservedRunningTime="2026-02-03 12:31:20.28869142 +0000 UTC m=+1597.811767284"
Feb 03 12:31:21 crc kubenswrapper[4820]: I0203 12:31:21.244187 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="38fcd454-4d58-42a9-beb1-d8640ab7a9a7" containerName="watcher-api-log" probeResult="failure" output="Get \"http://10.217.0.162:9322/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 03 12:31:21 crc kubenswrapper[4820]: I0203 12:31:21.253827 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="38fcd454-4d58-42a9-beb1-d8640ab7a9a7" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.162:9322/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 03 12:31:21 crc kubenswrapper[4820]: I0203 12:31:21.319724 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"457bfab7-1523-4ef8-b7f1-a6d0d54351e4","Type":"ContainerStarted","Data":"13abe95444ae1d3f04630b66ccf57bc5376461c6a05fb41beb32df66659ee407"}
Feb 03 12:31:21 crc kubenswrapper[4820]: I0203 12:31:21.323072 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5fdc8588b4-jtjr8" event={"ID":"17c371f7-f032-4444-8d4b-1183a224c7b0","Type":"ContainerStarted","Data":"c22975ba3ab084f0050a4631f4a0020a8a20e782596e29e66b6e36290ae66cee"}
Feb 03 12:31:21 crc kubenswrapper[4820]: I0203 12:31:21.339412 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"edffb607-1bfe-4aa0-a39a-2f65dbd5077b","Type":"ContainerStarted","Data":"f57517145c69ff58a37429d1d6da6b1320da910fa8eec9a97f24e58f0c83b1bd"}
Feb 03 12:31:21 crc kubenswrapper[4820]: I0203 12:31:21.347187 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"7cd4de1e-997d-4df1-9ad5-2049937ab135","Type":"ContainerStarted","Data":"63a2d00b75da184006256be4faf0ebd14faa99fe71717dff28a055c1dd9726d8"}
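The two "Probe failed" lines are HTTP readiness probes timing out against the pod IP; note they still carry the UID of the watcher-api-0 instance deleted at 12:31:17 (38fcd454-...), while the surrounding PLEG lines already report its replacement (7cd4de1e-...). A minimal sketch of an HTTP probe with a client deadline, assuming an illustrative URL and timeout:

    package main

    import (
    	"context"
    	"fmt"
    	"net/http"
    	"time"
    )

    // A GET with a context deadline; a slow or unreachable endpoint
    // surfaces as "context deadline exceeded", counted as a failure.
    func probe(url string, timeout time.Duration) error {
    	ctx, cancel := context.WithTimeout(context.Background(), timeout)
    	defer cancel()
    	req, err := http.NewRequestWithContext(ctx, http.MethodGet, url, nil)
    	if err != nil {
    		return err
    	}
    	resp, err := http.DefaultClient.Do(req)
    	if err != nil {
    		return fmt.Errorf("probe failed: %w", err) // timeout lands here
    	}
    	defer resp.Body.Close()
    	if resp.StatusCode < 200 || resp.StatusCode >= 400 {
    		return fmt.Errorf("probe failed: status %d", resp.StatusCode)
    	}
    	return nil
    }

    func main() {
    	// URL taken from the log line; normally unreachable outside the node.
    	fmt.Println(probe("http://10.217.0.162:9322/", time.Second))
    }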
event={"ID":"7cd4de1e-997d-4df1-9ad5-2049937ab135","Type":"ContainerStarted","Data":"63a2d00b75da184006256be4faf0ebd14faa99fe71717dff28a055c1dd9726d8"} Feb 03 12:31:21 crc kubenswrapper[4820]: I0203 12:31:21.347247 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"7cd4de1e-997d-4df1-9ad5-2049937ab135","Type":"ContainerStarted","Data":"90f11a74aa87dd6f9722e9aa4ffcf5d633820c7997d09b8eabdf2f95d87efd23"} Feb 03 12:31:21 crc kubenswrapper[4820]: I0203 12:31:21.347833 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Feb 03 12:31:21 crc kubenswrapper[4820]: I0203 12:31:21.365214 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8ff956445-pzzpk" event={"ID":"156cf9db-e6bb-486e-b3b5-e72d4f99e684","Type":"ContainerStarted","Data":"07f53837417fd62c651b388f2bebfa14c05e93cfa77e736364a7637ed8644b12"} Feb 03 12:31:21 crc kubenswrapper[4820]: I0203 12:31:21.365786 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8ff956445-pzzpk" event={"ID":"156cf9db-e6bb-486e-b3b5-e72d4f99e684","Type":"ContainerStarted","Data":"3e5e5ed8899e8013708b2eab378ba7fdc5527de4f0a8305f9da9e2f6237a1f91"} Feb 03 12:31:21 crc kubenswrapper[4820]: I0203 12:31:21.365914 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-8ff956445-pzzpk" Feb 03 12:31:21 crc kubenswrapper[4820]: I0203 12:31:21.365544 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=55.365520249 podStartE2EDuration="55.365520249s" podCreationTimestamp="2026-02-03 12:30:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:31:21.359324202 +0000 UTC m=+1598.882400066" watchObservedRunningTime="2026-02-03 12:31:21.365520249 +0000 UTC m=+1598.888596113" Feb 03 12:31:21 crc kubenswrapper[4820]: I0203 12:31:21.411007 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-hmvhn" event={"ID":"09cdfd30-121c-4d95-9a12-515eda5d3ba3","Type":"ContainerStarted","Data":"3c3f7a455f66a4864807db849da6c1d029e46ad404c854baa1cb1c0c7a26cfa9"} Feb 03 12:31:21 crc kubenswrapper[4820]: I0203 12:31:21.412151 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-55f844cf75-hmvhn" Feb 03 12:31:21 crc kubenswrapper[4820]: I0203 12:31:21.416411 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=55.416382352 podStartE2EDuration="55.416382352s" podCreationTimestamp="2026-02-03 12:30:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:31:21.408473229 +0000 UTC m=+1598.931549113" watchObservedRunningTime="2026-02-03 12:31:21.416382352 +0000 UTC m=+1598.939458216" Feb 03 12:31:21 crc kubenswrapper[4820]: I0203 12:31:21.442289 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-api-0" podStartSLOduration=4.442265762 podStartE2EDuration="4.442265762s" podCreationTimestamp="2026-02-03 12:31:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:31:21.441263644 +0000 UTC m=+1598.964339508" watchObservedRunningTime="2026-02-03 
12:31:21.442265762 +0000 UTC m=+1598.965341626"
Feb 03 12:31:21 crc kubenswrapper[4820]: I0203 12:31:21.454604 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-86c8ddbf74-xsj66" event={"ID":"a09e4336-af9d-4231-b744-1373af8ddfba","Type":"ContainerStarted","Data":"8584dcb6e35d2654738d30b489990524fc48d4394b4f6fa96273c3d2fe56ca13"}
Feb 03 12:31:21 crc kubenswrapper[4820]: I0203 12:31:21.454658 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-86c8ddbf74-xsj66"
Feb 03 12:31:21 crc kubenswrapper[4820]: I0203 12:31:21.929339 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-applier-0"
Feb 03 12:31:21 crc kubenswrapper[4820]: I0203 12:31:21.929382 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-applier-0"
Feb 03 12:31:21 crc kubenswrapper[4820]: I0203 12:31:21.944566 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-5fdc8588b4-jtjr8" podStartSLOduration=7.406890798 podStartE2EDuration="59.944537975s" podCreationTimestamp="2026-02-03 12:30:22 +0000 UTC" firstStartedPulling="2026-02-03 12:30:25.494477807 +0000 UTC m=+1543.017553671" lastFinishedPulling="2026-02-03 12:31:18.032124954 +0000 UTC m=+1595.555200848" observedRunningTime="2026-02-03 12:31:21.476387363 +0000 UTC m=+1598.999463227" watchObservedRunningTime="2026-02-03 12:31:21.944537975 +0000 UTC m=+1599.467613839"
Feb 03 12:31:22 crc kubenswrapper[4820]: I0203 12:31:21.979863 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-55f844cf75-hmvhn" podStartSLOduration=8.979839348 podStartE2EDuration="8.979839348s" podCreationTimestamp="2026-02-03 12:31:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:31:21.520037821 +0000 UTC m=+1599.043113685" watchObservedRunningTime="2026-02-03 12:31:21.979839348 +0000 UTC m=+1599.502915212"
Feb 03 12:31:22 crc kubenswrapper[4820]: I0203 12:31:22.093571 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-applier-0"
Feb 03 12:31:22 crc kubenswrapper[4820]: I0203 12:31:22.359520 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-86c8ddbf74-xsj66" podStartSLOduration=9.35949659 podStartE2EDuration="9.35949659s" podCreationTimestamp="2026-02-03 12:31:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:31:21.977254718 +0000 UTC m=+1599.500330582" watchObservedRunningTime="2026-02-03 12:31:22.35949659 +0000 UTC m=+1599.882572454"
Feb 03 12:31:22 crc kubenswrapper[4820]: I0203 12:31:22.368083 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-8ff956445-pzzpk" podStartSLOduration=4.368061101 podStartE2EDuration="4.368061101s" podCreationTimestamp="2026-02-03 12:31:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:31:22.068936634 +0000 UTC m=+1599.592012498" watchObservedRunningTime="2026-02-03 12:31:22.368061101 +0000 UTC m=+1599.891136965"
Feb 03 12:31:22 crc kubenswrapper[4820]: I0203 12:31:22.843749 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-applier-0"
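The pod_startup_latency_tracker records above report two durations per pod, and the arithmetic is recoverable from the fields themselves: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, while podStartSLOduration is the same interval with the image-pull window (lastFinishedPulling minus firstStartedPulling) subtracted, which is why pods that pulled nothing (the zero "0001-01-01" timestamps) report the two values as identical. A minimal Go sketch reconstructing the calculation from these fields rather than from the kubelet source:

    package main

    import (
        "fmt"
        "time"
    )

    // startupDurations mirrors the relationship visible in the log fields:
    // E2E is running-time minus creation; the SLO duration excludes pull time.
    func startupDurations(created, firstPull, lastPull, running time.Time) (slo, e2e time.Duration) {
        e2e = running.Sub(created)
        slo = e2e
        if !firstPull.IsZero() {
            slo -= lastPull.Sub(firstPull) // image-pull window is not counted against the SLO
        }
        return slo, e2e
    }

    func main() {
        parse := func(s string) time.Time {
            t, _ := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
            return t
        }
        // Values from the barbican-db-sync-xsjm7 record earlier in this log.
        created := parse("2026-02-03 12:30:11 +0000 UTC")
        firstPull := parse("2026-02-03 12:30:14.289690863 +0000 UTC")
        lastPull := parse("2026-02-03 12:31:18.040163831 +0000 UTC")
        running := parse("2026-02-03 12:31:20.267755875 +0000 UTC")
        slo, e2e := startupDurations(created, firstPull, lastPull, running)
        fmt.Println(slo, e2e) // 5.517282907s and 1m9.267755875s, matching the record
    }

Run against the barbican-db-sync-xsjm7 record, this reproduces podStartSLOduration=5.517282907 and podStartE2EDuration="1m9.267755875s" exactly; the choice of watchObservedRunningTime as the "running" anchor is an inference from that arithmetic, not a claim about the kubelet source.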
Feb 03 12:31:23 crc kubenswrapper[4820]: I0203 12:31:23.127767 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-5fdc8588b4-jtjr8"
Feb 03 12:31:23 crc kubenswrapper[4820]: I0203 12:31:23.127849 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-5fdc8588b4-jtjr8"
Feb 03 12:31:23 crc kubenswrapper[4820]: I0203 12:31:23.615497 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0"
Feb 03 12:31:23 crc kubenswrapper[4820]: I0203 12:31:23.621879 4820 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 03 12:31:23 crc kubenswrapper[4820]: I0203 12:31:23.622498 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-68b4df5bdd-tdb9h"
Feb 03 12:31:23 crc kubenswrapper[4820]: I0203 12:31:23.624990 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-68b4df5bdd-tdb9h"
Feb 03 12:31:23 crc kubenswrapper[4820]: E0203 12:31:23.628632 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-placement-api:current-podified\\\"\"" pod="openstack/placement-db-sync-9csj4" podUID="470b8f27-2959-4890-aed3-361530b83b73"
Feb 03 12:31:26 crc kubenswrapper[4820]: I0203 12:31:26.379902 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="7cd4de1e-997d-4df1-9ad5-2049937ab135" containerName="watcher-api" probeResult="failure" output="Get \"https://10.217.0.172:9322/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 03 12:31:26 crc kubenswrapper[4820]: E0203 12:31:26.382442 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.50:5001/podified-epoxy-centos9/openstack-watcher-decision-engine:watcher_latest\\\"\"" pod="openstack/watcher-decision-engine-0" podUID="cd46da3e-bb82-4990-8d29-03f53c601f36"
Feb 03 12:31:27 crc kubenswrapper[4820]: I0203 12:31:27.345484 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0"
Feb 03 12:31:27 crc kubenswrapper[4820]: I0203 12:31:27.345526 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0"
Feb 03 12:31:27 crc kubenswrapper[4820]: I0203 12:31:27.345536 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0"
Feb 03 12:31:27 crc kubenswrapper[4820]: I0203 12:31:27.345544 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0"
Feb 03 12:31:27 crc kubenswrapper[4820]: I0203 12:31:27.345556 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0"
Feb 03 12:31:27 crc kubenswrapper[4820]: I0203 12:31:27.345569 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0"
Feb 03 12:31:27 crc kubenswrapper[4820]: I0203 12:31:27.345585 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0"
Feb 03 12:31:27 crc kubenswrapper[4820]: I0203 12:31:27.345604 4820 kubelet.go:2542] "SyncLoop (probe)"
probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 03 12:31:27 crc kubenswrapper[4820]: I0203 12:31:27.424255 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 03 12:31:27 crc kubenswrapper[4820]: I0203 12:31:27.435878 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 03 12:31:27 crc kubenswrapper[4820]: I0203 12:31:27.437971 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 03 12:31:27 crc kubenswrapper[4820]: I0203 12:31:27.549369 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 03 12:31:28 crc kubenswrapper[4820]: I0203 12:31:28.308794 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-api-0" Feb 03 12:31:28 crc kubenswrapper[4820]: I0203 12:31:28.964198 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-55f844cf75-hmvhn" Feb 03 12:31:28 crc kubenswrapper[4820]: I0203 12:31:28.978239 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/watcher-api-0" podUID="7cd4de1e-997d-4df1-9ad5-2049937ab135" containerName="watcher-api" probeResult="failure" output="Get \"https://10.217.0.172:9322/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 03 12:31:29 crc kubenswrapper[4820]: I0203 12:31:29.206111 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-d5vcp"] Feb 03 12:31:29 crc kubenswrapper[4820]: I0203 12:31:29.206579 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-785d8bcb8c-d5vcp" podUID="5f5c5f87-b592-4f5d-86bc-3069985ae61a" containerName="dnsmasq-dns" containerID="cri-o://60c25f427d255d56940c30e3cf98834c61454d39357bd327a50e1df367b5536f" gracePeriod=10 Feb 03 12:31:29 crc kubenswrapper[4820]: I0203 12:31:29.313173 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/watcher-api-0" podUID="7cd4de1e-997d-4df1-9ad5-2049937ab135" containerName="watcher-api-log" probeResult="failure" output="Get \"https://10.217.0.172:9322/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 03 12:31:30 crc kubenswrapper[4820]: I0203 12:31:30.112828 4820 generic.go:334] "Generic (PLEG): container finished" podID="5f5c5f87-b592-4f5d-86bc-3069985ae61a" containerID="60c25f427d255d56940c30e3cf98834c61454d39357bd327a50e1df367b5536f" exitCode=0 Feb 03 12:31:30 crc kubenswrapper[4820]: I0203 12:31:30.112909 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-d5vcp" event={"ID":"5f5c5f87-b592-4f5d-86bc-3069985ae61a","Type":"ContainerDied","Data":"60c25f427d255d56940c30e3cf98834c61454d39357bd327a50e1df367b5536f"} Feb 03 12:31:31 crc kubenswrapper[4820]: I0203 12:31:31.366364 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qj7xr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 12:31:31 crc kubenswrapper[4820]: I0203 12:31:31.367049 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" 
podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 12:31:31 crc kubenswrapper[4820]: I0203 12:31:31.367136 4820 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" Feb 03 12:31:31 crc kubenswrapper[4820]: I0203 12:31:31.368512 4820 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3cd22393026f08f47789b9666f306edb2325c4bd0fbfd405ee1636bfdfa661d3"} pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 03 12:31:31 crc kubenswrapper[4820]: I0203 12:31:31.368614 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" containerName="machine-config-daemon" containerID="cri-o://3cd22393026f08f47789b9666f306edb2325c4bd0fbfd405ee1636bfdfa661d3" gracePeriod=600 Feb 03 12:31:31 crc kubenswrapper[4820]: I0203 12:31:31.385390 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="7cd4de1e-997d-4df1-9ad5-2049937ab135" containerName="watcher-api" probeResult="failure" output="Get \"https://10.217.0.172:9322/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 03 12:31:32 crc kubenswrapper[4820]: I0203 12:31:32.143712 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-api-0" Feb 03 12:31:32 crc kubenswrapper[4820]: I0203 12:31:32.170220 4820 generic.go:334] "Generic (PLEG): container finished" podID="2c02def6-29f2-448e-80ec-0c8ee290f053" containerID="3cd22393026f08f47789b9666f306edb2325c4bd0fbfd405ee1636bfdfa661d3" exitCode=0 Feb 03 12:31:32 crc kubenswrapper[4820]: I0203 12:31:32.170344 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" event={"ID":"2c02def6-29f2-448e-80ec-0c8ee290f053","Type":"ContainerDied","Data":"3cd22393026f08f47789b9666f306edb2325c4bd0fbfd405ee1636bfdfa661d3"} Feb 03 12:31:32 crc kubenswrapper[4820]: I0203 12:31:32.170709 4820 scope.go:117] "RemoveContainer" containerID="f5b6fb38e3a772864bd8a30bd0acd2c8340ca496b3ae218013d45718e5286b56" Feb 03 12:31:33 crc kubenswrapper[4820]: I0203 12:31:33.250828 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-5fdc8588b4-jtjr8" podUID="17c371f7-f032-4444-8d4b-1183a224c7b0" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.165:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.165:8443: connect: connection refused" Feb 03 12:31:33 crc kubenswrapper[4820]: I0203 12:31:33.873672 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-68b4df5bdd-tdb9h" podUID="308562dd-6078-4c1c-a4e0-c01a60a2d81d" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.166:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.166:8443: connect: connection refused" Feb 03 12:31:34 crc kubenswrapper[4820]: I0203 12:31:34.581966 4820 generic.go:334] "Generic (PLEG): container finished" podID="d6da87e1-3451-48c6-b2ad-368bf3139a57" 
containerID="6af213d8f362ef87845889f66d36bd8215aa0184e66434cb42d8b8540181c65b" exitCode=0 Feb 03 12:31:34 crc kubenswrapper[4820]: I0203 12:31:34.582012 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-t4pzw" event={"ID":"d6da87e1-3451-48c6-b2ad-368bf3139a57","Type":"ContainerDied","Data":"6af213d8f362ef87845889f66d36bd8215aa0184e66434cb42d8b8540181c65b"} Feb 03 12:31:35 crc kubenswrapper[4820]: I0203 12:31:35.230958 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 03 12:31:35 crc kubenswrapper[4820]: I0203 12:31:35.231143 4820 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 03 12:31:35 crc kubenswrapper[4820]: I0203 12:31:35.240433 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 03 12:31:36 crc kubenswrapper[4820]: I0203 12:31:36.178554 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 03 12:31:36 crc kubenswrapper[4820]: I0203 12:31:36.179589 4820 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 03 12:31:36 crc kubenswrapper[4820]: I0203 12:31:36.187078 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 03 12:31:36 crc kubenswrapper[4820]: I0203 12:31:36.921822 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-qjmdv"] Feb 03 12:31:36 crc kubenswrapper[4820]: I0203 12:31:36.929543 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qjmdv" Feb 03 12:31:36 crc kubenswrapper[4820]: I0203 12:31:36.974476 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qjmdv"] Feb 03 12:31:37 crc kubenswrapper[4820]: I0203 12:31:37.054208 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hfzw8\" (UniqueName: \"kubernetes.io/projected/ad9b0bbe-7f17-4347-bda3-5f0a843b3997-kube-api-access-hfzw8\") pod \"redhat-operators-qjmdv\" (UID: \"ad9b0bbe-7f17-4347-bda3-5f0a843b3997\") " pod="openshift-marketplace/redhat-operators-qjmdv" Feb 03 12:31:37 crc kubenswrapper[4820]: I0203 12:31:37.056083 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad9b0bbe-7f17-4347-bda3-5f0a843b3997-catalog-content\") pod \"redhat-operators-qjmdv\" (UID: \"ad9b0bbe-7f17-4347-bda3-5f0a843b3997\") " pod="openshift-marketplace/redhat-operators-qjmdv" Feb 03 12:31:37 crc kubenswrapper[4820]: I0203 12:31:37.056365 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad9b0bbe-7f17-4347-bda3-5f0a843b3997-utilities\") pod \"redhat-operators-qjmdv\" (UID: \"ad9b0bbe-7f17-4347-bda3-5f0a843b3997\") " pod="openshift-marketplace/redhat-operators-qjmdv" Feb 03 12:31:37 crc kubenswrapper[4820]: I0203 12:31:37.159144 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad9b0bbe-7f17-4347-bda3-5f0a843b3997-utilities\") pod \"redhat-operators-qjmdv\" (UID: \"ad9b0bbe-7f17-4347-bda3-5f0a843b3997\") " pod="openshift-marketplace/redhat-operators-qjmdv" Feb 
Feb 03 12:31:37 crc kubenswrapper[4820]: I0203 12:31:37.159373 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hfzw8\" (UniqueName: \"kubernetes.io/projected/ad9b0bbe-7f17-4347-bda3-5f0a843b3997-kube-api-access-hfzw8\") pod \"redhat-operators-qjmdv\" (UID: \"ad9b0bbe-7f17-4347-bda3-5f0a843b3997\") " pod="openshift-marketplace/redhat-operators-qjmdv"
Feb 03 12:31:37 crc kubenswrapper[4820]: I0203 12:31:37.159540 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad9b0bbe-7f17-4347-bda3-5f0a843b3997-catalog-content\") pod \"redhat-operators-qjmdv\" (UID: \"ad9b0bbe-7f17-4347-bda3-5f0a843b3997\") " pod="openshift-marketplace/redhat-operators-qjmdv"
Feb 03 12:31:37 crc kubenswrapper[4820]: I0203 12:31:37.160299 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad9b0bbe-7f17-4347-bda3-5f0a843b3997-catalog-content\") pod \"redhat-operators-qjmdv\" (UID: \"ad9b0bbe-7f17-4347-bda3-5f0a843b3997\") " pod="openshift-marketplace/redhat-operators-qjmdv"
Feb 03 12:31:37 crc kubenswrapper[4820]: I0203 12:31:37.160646 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad9b0bbe-7f17-4347-bda3-5f0a843b3997-utilities\") pod \"redhat-operators-qjmdv\" (UID: \"ad9b0bbe-7f17-4347-bda3-5f0a843b3997\") " pod="openshift-marketplace/redhat-operators-qjmdv"
Feb 03 12:31:37 crc kubenswrapper[4820]: I0203 12:31:37.190738 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hfzw8\" (UniqueName: \"kubernetes.io/projected/ad9b0bbe-7f17-4347-bda3-5f0a843b3997-kube-api-access-hfzw8\") pod \"redhat-operators-qjmdv\" (UID: \"ad9b0bbe-7f17-4347-bda3-5f0a843b3997\") " pod="openshift-marketplace/redhat-operators-qjmdv"
Feb 03 12:31:37 crc kubenswrapper[4820]: I0203 12:31:37.269782 4820 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/redhat-operators-qjmdv" Feb 03 12:31:37 crc kubenswrapper[4820]: I0203 12:31:37.779393 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-785d8bcb8c-d5vcp" podUID="5f5c5f87-b592-4f5d-86bc-3069985ae61a" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.155:5353: i/o timeout" Feb 03 12:31:37 crc kubenswrapper[4820]: I0203 12:31:37.907857 4820 generic.go:334] "Generic (PLEG): container finished" podID="f4116aff-b63f-47f1-b4bd-5bde84226d87" containerID="b9c654f89c5faf8645b86bb21d48eed1f5fc4a23ad64e36037645d99f54d6462" exitCode=0 Feb 03 12:31:37 crc kubenswrapper[4820]: I0203 12:31:37.907948 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-xsjm7" event={"ID":"f4116aff-b63f-47f1-b4bd-5bde84226d87","Type":"ContainerDied","Data":"b9c654f89c5faf8645b86bb21d48eed1f5fc4a23ad64e36037645d99f54d6462"} Feb 03 12:31:38 crc kubenswrapper[4820]: I0203 12:31:38.730138 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-api-0" Feb 03 12:31:38 crc kubenswrapper[4820]: I0203 12:31:38.750570 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-api-0" Feb 03 12:31:42 crc kubenswrapper[4820]: I0203 12:31:42.780358 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-785d8bcb8c-d5vcp" podUID="5f5c5f87-b592-4f5d-86bc-3069985ae61a" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.155:5353: i/o timeout" Feb 03 12:31:43 crc kubenswrapper[4820]: I0203 12:31:43.150315 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-5fdc8588b4-jtjr8" podUID="17c371f7-f032-4444-8d4b-1183a224c7b0" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.165:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.165:8443: connect: connection refused" Feb 03 12:31:43 crc kubenswrapper[4820]: E0203 12:31:43.171575 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qj7xr_openshift-machine-config-operator(2c02def6-29f2-448e-80ec-0c8ee290f053)\"" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" Feb 03 12:31:43 crc kubenswrapper[4820]: E0203 12:31:43.214995 4820 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/sg-core:latest" Feb 03 12:31:43 crc kubenswrapper[4820]: E0203 12:31:43.215851 4820 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:sg-core,Image:quay.io/openstack-k8s-operators/sg-core:latest,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:sg-core-conf-yaml,ReadOnly:false,MountPath:/etc/sg-core.conf.yaml,SubPath:sg-core.conf.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fftdb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(e2cc54f2-167c-4c79-b616-2e1cd122fed2): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 03 12:31:43 crc kubenswrapper[4820]: I0203 12:31:43.304308 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-t4pzw" Feb 03 12:31:43 crc kubenswrapper[4820]: I0203 12:31:43.323133 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-xsjm7" Feb 03 12:31:43 crc kubenswrapper[4820]: I0203 12:31:43.350415 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-t4pzw" Feb 03 12:31:43 crc kubenswrapper[4820]: I0203 12:31:43.352018 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-t4pzw" event={"ID":"d6da87e1-3451-48c6-b2ad-368bf3139a57","Type":"ContainerDied","Data":"cad7f5e9e402becf9a4b98092b675964479ac8a906e7d74a281b6c1069d45259"} Feb 03 12:31:43 crc kubenswrapper[4820]: I0203 12:31:43.352081 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cad7f5e9e402becf9a4b98092b675964479ac8a906e7d74a281b6c1069d45259" Feb 03 12:31:43 crc kubenswrapper[4820]: I0203 12:31:43.359462 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-d5vcp" Feb 03 12:31:43 crc kubenswrapper[4820]: I0203 12:31:43.360277 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-d5vcp" event={"ID":"5f5c5f87-b592-4f5d-86bc-3069985ae61a","Type":"ContainerDied","Data":"750c88b8ecd7a8dd72d10ae755ea665ce4fde9598e9c1761de2d40ec2ebcb182"} Feb 03 12:31:43 crc kubenswrapper[4820]: I0203 12:31:43.381437 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-xsjm7" event={"ID":"f4116aff-b63f-47f1-b4bd-5bde84226d87","Type":"ContainerDied","Data":"f2d80ef5637a1182c47131a8136d097402b7f9eefe9a06e2288396e47c61dd0e"} Feb 03 12:31:43 crc kubenswrapper[4820]: I0203 12:31:43.381483 4820 util.go:48] "No ready sandbox for pod can be found. 
Feb 03 12:31:43 crc kubenswrapper[4820]: I0203 12:31:43.381483 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-xsjm7"
Feb 03 12:31:43 crc kubenswrapper[4820]: I0203 12:31:43.381494 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f2d80ef5637a1182c47131a8136d097402b7f9eefe9a06e2288396e47c61dd0e"
Feb 03 12:31:43 crc kubenswrapper[4820]: I0203 12:31:43.382331 4820 scope.go:117] "RemoveContainer" containerID="3cd22393026f08f47789b9666f306edb2325c4bd0fbfd405ee1636bfdfa661d3"
Feb 03 12:31:43 crc kubenswrapper[4820]: I0203 12:31:43.385394 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6da87e1-3451-48c6-b2ad-368bf3139a57-combined-ca-bundle\") pod \"d6da87e1-3451-48c6-b2ad-368bf3139a57\" (UID: \"d6da87e1-3451-48c6-b2ad-368bf3139a57\") "
Feb 03 12:31:43 crc kubenswrapper[4820]: I0203 12:31:43.385522 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k8jhw\" (UniqueName: \"kubernetes.io/projected/d6da87e1-3451-48c6-b2ad-368bf3139a57-kube-api-access-k8jhw\") pod \"d6da87e1-3451-48c6-b2ad-368bf3139a57\" (UID: \"d6da87e1-3451-48c6-b2ad-368bf3139a57\") "
Feb 03 12:31:43 crc kubenswrapper[4820]: I0203 12:31:43.385628 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d6da87e1-3451-48c6-b2ad-368bf3139a57-scripts\") pod \"d6da87e1-3451-48c6-b2ad-368bf3139a57\" (UID: \"d6da87e1-3451-48c6-b2ad-368bf3139a57\") "
Feb 03 12:31:43 crc kubenswrapper[4820]: I0203 12:31:43.385699 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d6da87e1-3451-48c6-b2ad-368bf3139a57-credential-keys\") pod \"d6da87e1-3451-48c6-b2ad-368bf3139a57\" (UID: \"d6da87e1-3451-48c6-b2ad-368bf3139a57\") "
Feb 03 12:31:43 crc kubenswrapper[4820]: I0203 12:31:43.385749 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d6da87e1-3451-48c6-b2ad-368bf3139a57-config-data\") pod \"d6da87e1-3451-48c6-b2ad-368bf3139a57\" (UID: \"d6da87e1-3451-48c6-b2ad-368bf3139a57\") "
Feb 03 12:31:43 crc kubenswrapper[4820]: I0203 12:31:43.385797 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d6da87e1-3451-48c6-b2ad-368bf3139a57-fernet-keys\") pod \"d6da87e1-3451-48c6-b2ad-368bf3139a57\" (UID: \"d6da87e1-3451-48c6-b2ad-368bf3139a57\") "
Feb 03 12:31:43 crc kubenswrapper[4820]: E0203 12:31:43.387372 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qj7xr_openshift-machine-config-operator(2c02def6-29f2-448e-80ec-0c8ee290f053)\"" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053"
Feb 03 12:31:43 crc kubenswrapper[4820]: I0203 12:31:43.439236 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d6da87e1-3451-48c6-b2ad-368bf3139a57-scripts" (OuterVolumeSpecName: "scripts") pod "d6da87e1-3451-48c6-b2ad-368bf3139a57" (UID: "d6da87e1-3451-48c6-b2ad-368bf3139a57"). InnerVolumeSpecName "scripts".
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:31:43 crc kubenswrapper[4820]: I0203 12:31:43.444181 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d6da87e1-3451-48c6-b2ad-368bf3139a57-kube-api-access-k8jhw" (OuterVolumeSpecName: "kube-api-access-k8jhw") pod "d6da87e1-3451-48c6-b2ad-368bf3139a57" (UID: "d6da87e1-3451-48c6-b2ad-368bf3139a57"). InnerVolumeSpecName "kube-api-access-k8jhw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:31:43 crc kubenswrapper[4820]: I0203 12:31:43.447212 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d6da87e1-3451-48c6-b2ad-368bf3139a57-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "d6da87e1-3451-48c6-b2ad-368bf3139a57" (UID: "d6da87e1-3451-48c6-b2ad-368bf3139a57"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:31:43 crc kubenswrapper[4820]: I0203 12:31:43.456742 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d6da87e1-3451-48c6-b2ad-368bf3139a57-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "d6da87e1-3451-48c6-b2ad-368bf3139a57" (UID: "d6da87e1-3451-48c6-b2ad-368bf3139a57"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:31:43 crc kubenswrapper[4820]: I0203 12:31:43.472310 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d6da87e1-3451-48c6-b2ad-368bf3139a57-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d6da87e1-3451-48c6-b2ad-368bf3139a57" (UID: "d6da87e1-3451-48c6-b2ad-368bf3139a57"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:31:43 crc kubenswrapper[4820]: I0203 12:31:43.484025 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d6da87e1-3451-48c6-b2ad-368bf3139a57-config-data" (OuterVolumeSpecName: "config-data") pod "d6da87e1-3451-48c6-b2ad-368bf3139a57" (UID: "d6da87e1-3451-48c6-b2ad-368bf3139a57"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:31:43 crc kubenswrapper[4820]: I0203 12:31:43.490695 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5f5c5f87-b592-4f5d-86bc-3069985ae61a-ovsdbserver-sb\") pod \"5f5c5f87-b592-4f5d-86bc-3069985ae61a\" (UID: \"5f5c5f87-b592-4f5d-86bc-3069985ae61a\") " Feb 03 12:31:43 crc kubenswrapper[4820]: I0203 12:31:43.490741 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hhp8l\" (UniqueName: \"kubernetes.io/projected/f4116aff-b63f-47f1-b4bd-5bde84226d87-kube-api-access-hhp8l\") pod \"f4116aff-b63f-47f1-b4bd-5bde84226d87\" (UID: \"f4116aff-b63f-47f1-b4bd-5bde84226d87\") " Feb 03 12:31:43 crc kubenswrapper[4820]: I0203 12:31:43.490803 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5f5c5f87-b592-4f5d-86bc-3069985ae61a-ovsdbserver-nb\") pod \"5f5c5f87-b592-4f5d-86bc-3069985ae61a\" (UID: \"5f5c5f87-b592-4f5d-86bc-3069985ae61a\") " Feb 03 12:31:43 crc kubenswrapper[4820]: I0203 12:31:43.490837 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4116aff-b63f-47f1-b4bd-5bde84226d87-combined-ca-bundle\") pod \"f4116aff-b63f-47f1-b4bd-5bde84226d87\" (UID: \"f4116aff-b63f-47f1-b4bd-5bde84226d87\") " Feb 03 12:31:43 crc kubenswrapper[4820]: I0203 12:31:43.490983 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5k59s\" (UniqueName: \"kubernetes.io/projected/5f5c5f87-b592-4f5d-86bc-3069985ae61a-kube-api-access-5k59s\") pod \"5f5c5f87-b592-4f5d-86bc-3069985ae61a\" (UID: \"5f5c5f87-b592-4f5d-86bc-3069985ae61a\") " Feb 03 12:31:43 crc kubenswrapper[4820]: I0203 12:31:43.491037 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f5c5f87-b592-4f5d-86bc-3069985ae61a-config\") pod \"5f5c5f87-b592-4f5d-86bc-3069985ae61a\" (UID: \"5f5c5f87-b592-4f5d-86bc-3069985ae61a\") " Feb 03 12:31:43 crc kubenswrapper[4820]: I0203 12:31:43.491104 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f4116aff-b63f-47f1-b4bd-5bde84226d87-db-sync-config-data\") pod \"f4116aff-b63f-47f1-b4bd-5bde84226d87\" (UID: \"f4116aff-b63f-47f1-b4bd-5bde84226d87\") " Feb 03 12:31:43 crc kubenswrapper[4820]: I0203 12:31:43.491183 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5f5c5f87-b592-4f5d-86bc-3069985ae61a-dns-swift-storage-0\") pod \"5f5c5f87-b592-4f5d-86bc-3069985ae61a\" (UID: \"5f5c5f87-b592-4f5d-86bc-3069985ae61a\") " Feb 03 12:31:43 crc kubenswrapper[4820]: I0203 12:31:43.491229 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5f5c5f87-b592-4f5d-86bc-3069985ae61a-dns-svc\") pod \"5f5c5f87-b592-4f5d-86bc-3069985ae61a\" (UID: \"5f5c5f87-b592-4f5d-86bc-3069985ae61a\") " Feb 03 12:31:43 crc kubenswrapper[4820]: I0203 12:31:43.492135 4820 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d6da87e1-3451-48c6-b2ad-368bf3139a57-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 12:31:43 crc kubenswrapper[4820]: 
Feb 03 12:31:43 crc kubenswrapper[4820]: I0203 12:31:43.492175 4820 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d6da87e1-3451-48c6-b2ad-368bf3139a57-credential-keys\") on node \"crc\" DevicePath \"\""
Feb 03 12:31:43 crc kubenswrapper[4820]: I0203 12:31:43.492187 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d6da87e1-3451-48c6-b2ad-368bf3139a57-config-data\") on node \"crc\" DevicePath \"\""
Feb 03 12:31:43 crc kubenswrapper[4820]: I0203 12:31:43.492197 4820 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d6da87e1-3451-48c6-b2ad-368bf3139a57-fernet-keys\") on node \"crc\" DevicePath \"\""
Feb 03 12:31:43 crc kubenswrapper[4820]: I0203 12:31:43.492207 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6da87e1-3451-48c6-b2ad-368bf3139a57-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 03 12:31:43 crc kubenswrapper[4820]: I0203 12:31:43.492235 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k8jhw\" (UniqueName: \"kubernetes.io/projected/d6da87e1-3451-48c6-b2ad-368bf3139a57-kube-api-access-k8jhw\") on node \"crc\" DevicePath \"\""
Feb 03 12:31:43 crc kubenswrapper[4820]: I0203 12:31:43.505180 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f4116aff-b63f-47f1-b4bd-5bde84226d87-kube-api-access-hhp8l" (OuterVolumeSpecName: "kube-api-access-hhp8l") pod "f4116aff-b63f-47f1-b4bd-5bde84226d87" (UID: "f4116aff-b63f-47f1-b4bd-5bde84226d87"). InnerVolumeSpecName "kube-api-access-hhp8l". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 12:31:43 crc kubenswrapper[4820]: I0203 12:31:43.545282 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4116aff-b63f-47f1-b4bd-5bde84226d87-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "f4116aff-b63f-47f1-b4bd-5bde84226d87" (UID: "f4116aff-b63f-47f1-b4bd-5bde84226d87"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 12:31:43 crc kubenswrapper[4820]: I0203 12:31:43.545718 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f5c5f87-b592-4f5d-86bc-3069985ae61a-kube-api-access-5k59s" (OuterVolumeSpecName: "kube-api-access-5k59s") pod "5f5c5f87-b592-4f5d-86bc-3069985ae61a" (UID: "5f5c5f87-b592-4f5d-86bc-3069985ae61a"). InnerVolumeSpecName "kube-api-access-5k59s". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 12:31:43 crc kubenswrapper[4820]: I0203 12:31:43.589182 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4116aff-b63f-47f1-b4bd-5bde84226d87-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f4116aff-b63f-47f1-b4bd-5bde84226d87" (UID: "f4116aff-b63f-47f1-b4bd-5bde84226d87"). InnerVolumeSpecName "combined-ca-bundle".
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:31:43 crc kubenswrapper[4820]: I0203 12:31:43.596377 4820 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f4116aff-b63f-47f1-b4bd-5bde84226d87-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 12:31:43 crc kubenswrapper[4820]: I0203 12:31:43.596416 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hhp8l\" (UniqueName: \"kubernetes.io/projected/f4116aff-b63f-47f1-b4bd-5bde84226d87-kube-api-access-hhp8l\") on node \"crc\" DevicePath \"\"" Feb 03 12:31:43 crc kubenswrapper[4820]: I0203 12:31:43.596429 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4116aff-b63f-47f1-b4bd-5bde84226d87-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 12:31:43 crc kubenswrapper[4820]: I0203 12:31:43.596438 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5k59s\" (UniqueName: \"kubernetes.io/projected/5f5c5f87-b592-4f5d-86bc-3069985ae61a-kube-api-access-5k59s\") on node \"crc\" DevicePath \"\"" Feb 03 12:31:43 crc kubenswrapper[4820]: I0203 12:31:43.605863 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5f5c5f87-b592-4f5d-86bc-3069985ae61a-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "5f5c5f87-b592-4f5d-86bc-3069985ae61a" (UID: "5f5c5f87-b592-4f5d-86bc-3069985ae61a"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:31:43 crc kubenswrapper[4820]: I0203 12:31:43.613657 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5f5c5f87-b592-4f5d-86bc-3069985ae61a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "5f5c5f87-b592-4f5d-86bc-3069985ae61a" (UID: "5f5c5f87-b592-4f5d-86bc-3069985ae61a"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:31:43 crc kubenswrapper[4820]: I0203 12:31:43.632178 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-68b4df5bdd-tdb9h" podUID="308562dd-6078-4c1c-a4e0-c01a60a2d81d" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.166:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.166:8443: connect: connection refused" Feb 03 12:31:43 crc kubenswrapper[4820]: I0203 12:31:43.632799 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5f5c5f87-b592-4f5d-86bc-3069985ae61a-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "5f5c5f87-b592-4f5d-86bc-3069985ae61a" (UID: "5f5c5f87-b592-4f5d-86bc-3069985ae61a"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:31:43 crc kubenswrapper[4820]: I0203 12:31:43.641790 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5f5c5f87-b592-4f5d-86bc-3069985ae61a-config" (OuterVolumeSpecName: "config") pod "5f5c5f87-b592-4f5d-86bc-3069985ae61a" (UID: "5f5c5f87-b592-4f5d-86bc-3069985ae61a"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:31:43 crc kubenswrapper[4820]: I0203 12:31:43.654522 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5f5c5f87-b592-4f5d-86bc-3069985ae61a-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "5f5c5f87-b592-4f5d-86bc-3069985ae61a" (UID: "5f5c5f87-b592-4f5d-86bc-3069985ae61a"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:31:43 crc kubenswrapper[4820]: I0203 12:31:43.699027 4820 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5f5c5f87-b592-4f5d-86bc-3069985ae61a-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 03 12:31:43 crc kubenswrapper[4820]: I0203 12:31:43.699067 4820 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5f5c5f87-b592-4f5d-86bc-3069985ae61a-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 03 12:31:43 crc kubenswrapper[4820]: I0203 12:31:43.699080 4820 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5f5c5f87-b592-4f5d-86bc-3069985ae61a-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 03 12:31:43 crc kubenswrapper[4820]: I0203 12:31:43.699093 4820 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f5c5f87-b592-4f5d-86bc-3069985ae61a-config\") on node \"crc\" DevicePath \"\"" Feb 03 12:31:43 crc kubenswrapper[4820]: I0203 12:31:43.699105 4820 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5f5c5f87-b592-4f5d-86bc-3069985ae61a-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 03 12:31:43 crc kubenswrapper[4820]: I0203 12:31:43.764736 4820 scope.go:117] "RemoveContainer" containerID="60c25f427d255d56940c30e3cf98834c61454d39357bd327a50e1df367b5536f" Feb 03 12:31:43 crc kubenswrapper[4820]: I0203 12:31:43.778444 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/neutron-86c8ddbf74-xsj66" podUID="a09e4336-af9d-4231-b744-1373af8ddfba" containerName="neutron-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 03 12:31:43 crc kubenswrapper[4820]: I0203 12:31:43.780059 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-86c8ddbf74-xsj66" podUID="a09e4336-af9d-4231-b744-1373af8ddfba" containerName="neutron-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 03 12:31:43 crc kubenswrapper[4820]: I0203 12:31:43.780285 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/neutron-86c8ddbf74-xsj66" podUID="a09e4336-af9d-4231-b744-1373af8ddfba" containerName="neutron-api" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 03 12:31:43 crc kubenswrapper[4820]: I0203 12:31:43.801145 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qjmdv"] Feb 03 12:31:44 crc kubenswrapper[4820]: W0203 12:31:44.024368 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podad9b0bbe_7f17_4347_bda3_5f0a843b3997.slice/crio-6d177662b1accc95dfe78c3bd1a60e46e16351630635d404091eef6a5c6b5047 WatchSource:0}: Error finding container 6d177662b1accc95dfe78c3bd1a60e46e16351630635d404091eef6a5c6b5047: Status 404 returned error can't find the container with 
id 6d177662b1accc95dfe78c3bd1a60e46e16351630635d404091eef6a5c6b5047 Feb 03 12:31:44 crc kubenswrapper[4820]: I0203 12:31:44.045438 4820 scope.go:117] "RemoveContainer" containerID="c9e01280b550b17d3a841e1154b3fff577135e72b1cb6b49445fd84ecc47fac5" Feb 03 12:31:44 crc kubenswrapper[4820]: I0203 12:31:44.409223 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-d5vcp" Feb 03 12:31:44 crc kubenswrapper[4820]: I0203 12:31:44.440586 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qjmdv" event={"ID":"ad9b0bbe-7f17-4347-bda3-5f0a843b3997","Type":"ContainerStarted","Data":"6d177662b1accc95dfe78c3bd1a60e46e16351630635d404091eef6a5c6b5047"} Feb 03 12:31:44 crc kubenswrapper[4820]: I0203 12:31:44.513811 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-d5vcp"] Feb 03 12:31:44 crc kubenswrapper[4820]: I0203 12:31:44.553676 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-d5vcp"] Feb 03 12:31:44 crc kubenswrapper[4820]: I0203 12:31:44.600486 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-6ccd68b7f-9xjs9"] Feb 03 12:31:44 crc kubenswrapper[4820]: E0203 12:31:44.601353 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6da87e1-3451-48c6-b2ad-368bf3139a57" containerName="keystone-bootstrap" Feb 03 12:31:44 crc kubenswrapper[4820]: I0203 12:31:44.601506 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6da87e1-3451-48c6-b2ad-368bf3139a57" containerName="keystone-bootstrap" Feb 03 12:31:44 crc kubenswrapper[4820]: E0203 12:31:44.601618 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f5c5f87-b592-4f5d-86bc-3069985ae61a" containerName="init" Feb 03 12:31:44 crc kubenswrapper[4820]: I0203 12:31:44.601701 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f5c5f87-b592-4f5d-86bc-3069985ae61a" containerName="init" Feb 03 12:31:44 crc kubenswrapper[4820]: E0203 12:31:44.601796 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f5c5f87-b592-4f5d-86bc-3069985ae61a" containerName="dnsmasq-dns" Feb 03 12:31:44 crc kubenswrapper[4820]: I0203 12:31:44.601881 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f5c5f87-b592-4f5d-86bc-3069985ae61a" containerName="dnsmasq-dns" Feb 03 12:31:44 crc kubenswrapper[4820]: E0203 12:31:44.601989 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4116aff-b63f-47f1-b4bd-5bde84226d87" containerName="barbican-db-sync" Feb 03 12:31:44 crc kubenswrapper[4820]: I0203 12:31:44.602078 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4116aff-b63f-47f1-b4bd-5bde84226d87" containerName="barbican-db-sync" Feb 03 12:31:44 crc kubenswrapper[4820]: I0203 12:31:44.602411 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="d6da87e1-3451-48c6-b2ad-368bf3139a57" containerName="keystone-bootstrap" Feb 03 12:31:44 crc kubenswrapper[4820]: I0203 12:31:44.602507 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f5c5f87-b592-4f5d-86bc-3069985ae61a" containerName="dnsmasq-dns" Feb 03 12:31:44 crc kubenswrapper[4820]: I0203 12:31:44.602585 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4116aff-b63f-47f1-b4bd-5bde84226d87" containerName="barbican-db-sync" Feb 03 12:31:44 crc kubenswrapper[4820]: I0203 12:31:44.603530 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-6ccd68b7f-9xjs9" Feb 03 12:31:44 crc kubenswrapper[4820]: I0203 12:31:44.612414 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 03 12:31:44 crc kubenswrapper[4820]: I0203 12:31:44.612767 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-rvkjd" Feb 03 12:31:44 crc kubenswrapper[4820]: I0203 12:31:44.613059 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 03 12:31:44 crc kubenswrapper[4820]: I0203 12:31:44.613245 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Feb 03 12:31:44 crc kubenswrapper[4820]: I0203 12:31:44.613590 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Feb 03 12:31:44 crc kubenswrapper[4820]: I0203 12:31:44.613762 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 03 12:31:44 crc kubenswrapper[4820]: I0203 12:31:44.623905 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-6ccd68b7f-9xjs9"] Feb 03 12:31:44 crc kubenswrapper[4820]: I0203 12:31:44.725423 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c5d266f2-257d-4f06-9237-b34d67b51245-internal-tls-certs\") pod \"keystone-6ccd68b7f-9xjs9\" (UID: \"c5d266f2-257d-4f06-9237-b34d67b51245\") " pod="openstack/keystone-6ccd68b7f-9xjs9" Feb 03 12:31:44 crc kubenswrapper[4820]: I0203 12:31:44.725558 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c5d266f2-257d-4f06-9237-b34d67b51245-public-tls-certs\") pod \"keystone-6ccd68b7f-9xjs9\" (UID: \"c5d266f2-257d-4f06-9237-b34d67b51245\") " pod="openstack/keystone-6ccd68b7f-9xjs9" Feb 03 12:31:44 crc kubenswrapper[4820]: I0203 12:31:44.725609 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c5d266f2-257d-4f06-9237-b34d67b51245-config-data\") pod \"keystone-6ccd68b7f-9xjs9\" (UID: \"c5d266f2-257d-4f06-9237-b34d67b51245\") " pod="openstack/keystone-6ccd68b7f-9xjs9" Feb 03 12:31:44 crc kubenswrapper[4820]: I0203 12:31:44.725668 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/c5d266f2-257d-4f06-9237-b34d67b51245-credential-keys\") pod \"keystone-6ccd68b7f-9xjs9\" (UID: \"c5d266f2-257d-4f06-9237-b34d67b51245\") " pod="openstack/keystone-6ccd68b7f-9xjs9" Feb 03 12:31:44 crc kubenswrapper[4820]: I0203 12:31:44.725986 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c5d266f2-257d-4f06-9237-b34d67b51245-scripts\") pod \"keystone-6ccd68b7f-9xjs9\" (UID: \"c5d266f2-257d-4f06-9237-b34d67b51245\") " pod="openstack/keystone-6ccd68b7f-9xjs9" Feb 03 12:31:44 crc kubenswrapper[4820]: I0203 12:31:44.726057 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4cr6d\" (UniqueName: \"kubernetes.io/projected/c5d266f2-257d-4f06-9237-b34d67b51245-kube-api-access-4cr6d\") pod \"keystone-6ccd68b7f-9xjs9\" (UID: 
\"c5d266f2-257d-4f06-9237-b34d67b51245\") " pod="openstack/keystone-6ccd68b7f-9xjs9" Feb 03 12:31:44 crc kubenswrapper[4820]: I0203 12:31:44.726106 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c5d266f2-257d-4f06-9237-b34d67b51245-fernet-keys\") pod \"keystone-6ccd68b7f-9xjs9\" (UID: \"c5d266f2-257d-4f06-9237-b34d67b51245\") " pod="openstack/keystone-6ccd68b7f-9xjs9" Feb 03 12:31:44 crc kubenswrapper[4820]: I0203 12:31:44.726174 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5d266f2-257d-4f06-9237-b34d67b51245-combined-ca-bundle\") pod \"keystone-6ccd68b7f-9xjs9\" (UID: \"c5d266f2-257d-4f06-9237-b34d67b51245\") " pod="openstack/keystone-6ccd68b7f-9xjs9" Feb 03 12:31:44 crc kubenswrapper[4820]: I0203 12:31:44.780492 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-78bff7b94c-t49mw"] Feb 03 12:31:44 crc kubenswrapper[4820]: I0203 12:31:44.791606 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-78bff7b94c-t49mw" Feb 03 12:31:44 crc kubenswrapper[4820]: I0203 12:31:44.821486 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-78bff7b94c-t49mw"] Feb 03 12:31:44 crc kubenswrapper[4820]: I0203 12:31:44.828087 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Feb 03 12:31:44 crc kubenswrapper[4820]: I0203 12:31:44.828201 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Feb 03 12:31:44 crc kubenswrapper[4820]: I0203 12:31:44.828455 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-kddvd" Feb 03 12:31:44 crc kubenswrapper[4820]: I0203 12:31:44.843742 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c5d266f2-257d-4f06-9237-b34d67b51245-public-tls-certs\") pod \"keystone-6ccd68b7f-9xjs9\" (UID: \"c5d266f2-257d-4f06-9237-b34d67b51245\") " pod="openstack/keystone-6ccd68b7f-9xjs9" Feb 03 12:31:44 crc kubenswrapper[4820]: I0203 12:31:44.844102 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c5d266f2-257d-4f06-9237-b34d67b51245-config-data\") pod \"keystone-6ccd68b7f-9xjs9\" (UID: \"c5d266f2-257d-4f06-9237-b34d67b51245\") " pod="openstack/keystone-6ccd68b7f-9xjs9" Feb 03 12:31:44 crc kubenswrapper[4820]: I0203 12:31:44.844188 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/c5d266f2-257d-4f06-9237-b34d67b51245-credential-keys\") pod \"keystone-6ccd68b7f-9xjs9\" (UID: \"c5d266f2-257d-4f06-9237-b34d67b51245\") " pod="openstack/keystone-6ccd68b7f-9xjs9" Feb 03 12:31:44 crc kubenswrapper[4820]: I0203 12:31:44.844309 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c5d266f2-257d-4f06-9237-b34d67b51245-scripts\") pod \"keystone-6ccd68b7f-9xjs9\" (UID: \"c5d266f2-257d-4f06-9237-b34d67b51245\") " pod="openstack/keystone-6ccd68b7f-9xjs9" Feb 03 12:31:44 crc kubenswrapper[4820]: I0203 12:31:44.844352 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-4cr6d\" (UniqueName: \"kubernetes.io/projected/c5d266f2-257d-4f06-9237-b34d67b51245-kube-api-access-4cr6d\") pod \"keystone-6ccd68b7f-9xjs9\" (UID: \"c5d266f2-257d-4f06-9237-b34d67b51245\") " pod="openstack/keystone-6ccd68b7f-9xjs9" Feb 03 12:31:44 crc kubenswrapper[4820]: I0203 12:31:44.844384 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c5d266f2-257d-4f06-9237-b34d67b51245-fernet-keys\") pod \"keystone-6ccd68b7f-9xjs9\" (UID: \"c5d266f2-257d-4f06-9237-b34d67b51245\") " pod="openstack/keystone-6ccd68b7f-9xjs9" Feb 03 12:31:44 crc kubenswrapper[4820]: I0203 12:31:44.844423 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5d266f2-257d-4f06-9237-b34d67b51245-combined-ca-bundle\") pod \"keystone-6ccd68b7f-9xjs9\" (UID: \"c5d266f2-257d-4f06-9237-b34d67b51245\") " pod="openstack/keystone-6ccd68b7f-9xjs9" Feb 03 12:31:44 crc kubenswrapper[4820]: I0203 12:31:44.844461 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c5d266f2-257d-4f06-9237-b34d67b51245-internal-tls-certs\") pod \"keystone-6ccd68b7f-9xjs9\" (UID: \"c5d266f2-257d-4f06-9237-b34d67b51245\") " pod="openstack/keystone-6ccd68b7f-9xjs9" Feb 03 12:31:44 crc kubenswrapper[4820]: I0203 12:31:44.934208 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5d266f2-257d-4f06-9237-b34d67b51245-combined-ca-bundle\") pod \"keystone-6ccd68b7f-9xjs9\" (UID: \"c5d266f2-257d-4f06-9237-b34d67b51245\") " pod="openstack/keystone-6ccd68b7f-9xjs9" Feb 03 12:31:44 crc kubenswrapper[4820]: I0203 12:31:44.951427 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e460cf1d-b4e8-4bc6-89df-3fa68d972a33-config-data\") pod \"barbican-worker-78bff7b94c-t49mw\" (UID: \"e460cf1d-b4e8-4bc6-89df-3fa68d972a33\") " pod="openstack/barbican-worker-78bff7b94c-t49mw" Feb 03 12:31:44 crc kubenswrapper[4820]: I0203 12:31:44.951880 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e460cf1d-b4e8-4bc6-89df-3fa68d972a33-combined-ca-bundle\") pod \"barbican-worker-78bff7b94c-t49mw\" (UID: \"e460cf1d-b4e8-4bc6-89df-3fa68d972a33\") " pod="openstack/barbican-worker-78bff7b94c-t49mw" Feb 03 12:31:44 crc kubenswrapper[4820]: I0203 12:31:44.951969 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e460cf1d-b4e8-4bc6-89df-3fa68d972a33-logs\") pod \"barbican-worker-78bff7b94c-t49mw\" (UID: \"e460cf1d-b4e8-4bc6-89df-3fa68d972a33\") " pod="openstack/barbican-worker-78bff7b94c-t49mw" Feb 03 12:31:44 crc kubenswrapper[4820]: I0203 12:31:44.952174 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9x6xm\" (UniqueName: \"kubernetes.io/projected/e460cf1d-b4e8-4bc6-89df-3fa68d972a33-kube-api-access-9x6xm\") pod \"barbican-worker-78bff7b94c-t49mw\" (UID: \"e460cf1d-b4e8-4bc6-89df-3fa68d972a33\") " pod="openstack/barbican-worker-78bff7b94c-t49mw" Feb 03 12:31:44 crc kubenswrapper[4820]: I0203 12:31:44.952267 4820 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e460cf1d-b4e8-4bc6-89df-3fa68d972a33-config-data-custom\") pod \"barbican-worker-78bff7b94c-t49mw\" (UID: \"e460cf1d-b4e8-4bc6-89df-3fa68d972a33\") " pod="openstack/barbican-worker-78bff7b94c-t49mw" Feb 03 12:31:44 crc kubenswrapper[4820]: I0203 12:31:44.960398 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c5d266f2-257d-4f06-9237-b34d67b51245-config-data\") pod \"keystone-6ccd68b7f-9xjs9\" (UID: \"c5d266f2-257d-4f06-9237-b34d67b51245\") " pod="openstack/keystone-6ccd68b7f-9xjs9" Feb 03 12:31:44 crc kubenswrapper[4820]: I0203 12:31:44.960492 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-bfcd7c7c4-s6j9b"] Feb 03 12:31:44 crc kubenswrapper[4820]: I0203 12:31:44.960572 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/c5d266f2-257d-4f06-9237-b34d67b51245-credential-keys\") pod \"keystone-6ccd68b7f-9xjs9\" (UID: \"c5d266f2-257d-4f06-9237-b34d67b51245\") " pod="openstack/keystone-6ccd68b7f-9xjs9" Feb 03 12:31:44 crc kubenswrapper[4820]: I0203 12:31:44.966240 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4cr6d\" (UniqueName: \"kubernetes.io/projected/c5d266f2-257d-4f06-9237-b34d67b51245-kube-api-access-4cr6d\") pod \"keystone-6ccd68b7f-9xjs9\" (UID: \"c5d266f2-257d-4f06-9237-b34d67b51245\") " pod="openstack/keystone-6ccd68b7f-9xjs9" Feb 03 12:31:44 crc kubenswrapper[4820]: I0203 12:31:44.974694 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c5d266f2-257d-4f06-9237-b34d67b51245-public-tls-certs\") pod \"keystone-6ccd68b7f-9xjs9\" (UID: \"c5d266f2-257d-4f06-9237-b34d67b51245\") " pod="openstack/keystone-6ccd68b7f-9xjs9" Feb 03 12:31:44 crc kubenswrapper[4820]: I0203 12:31:44.975287 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c5d266f2-257d-4f06-9237-b34d67b51245-fernet-keys\") pod \"keystone-6ccd68b7f-9xjs9\" (UID: \"c5d266f2-257d-4f06-9237-b34d67b51245\") " pod="openstack/keystone-6ccd68b7f-9xjs9" Feb 03 12:31:44 crc kubenswrapper[4820]: I0203 12:31:44.975682 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c5d266f2-257d-4f06-9237-b34d67b51245-internal-tls-certs\") pod \"keystone-6ccd68b7f-9xjs9\" (UID: \"c5d266f2-257d-4f06-9237-b34d67b51245\") " pod="openstack/keystone-6ccd68b7f-9xjs9" Feb 03 12:31:44 crc kubenswrapper[4820]: I0203 12:31:44.975932 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c5d266f2-257d-4f06-9237-b34d67b51245-scripts\") pod \"keystone-6ccd68b7f-9xjs9\" (UID: \"c5d266f2-257d-4f06-9237-b34d67b51245\") " pod="openstack/keystone-6ccd68b7f-9xjs9" Feb 03 12:31:44 crc kubenswrapper[4820]: I0203 12:31:44.977013 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-6ccd68b7f-9xjs9" Feb 03 12:31:45 crc kubenswrapper[4820]: I0203 12:31:45.005813 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-bfcd7c7c4-s6j9b" Feb 03 12:31:45 crc kubenswrapper[4820]: I0203 12:31:45.015505 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Feb 03 12:31:45 crc kubenswrapper[4820]: I0203 12:31:45.022469 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-bfcd7c7c4-s6j9b"] Feb 03 12:31:45 crc kubenswrapper[4820]: I0203 12:31:45.105526 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e460cf1d-b4e8-4bc6-89df-3fa68d972a33-combined-ca-bundle\") pod \"barbican-worker-78bff7b94c-t49mw\" (UID: \"e460cf1d-b4e8-4bc6-89df-3fa68d972a33\") " pod="openstack/barbican-worker-78bff7b94c-t49mw" Feb 03 12:31:45 crc kubenswrapper[4820]: I0203 12:31:45.105607 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e460cf1d-b4e8-4bc6-89df-3fa68d972a33-logs\") pod \"barbican-worker-78bff7b94c-t49mw\" (UID: \"e460cf1d-b4e8-4bc6-89df-3fa68d972a33\") " pod="openstack/barbican-worker-78bff7b94c-t49mw" Feb 03 12:31:45 crc kubenswrapper[4820]: I0203 12:31:45.105693 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9x6xm\" (UniqueName: \"kubernetes.io/projected/e460cf1d-b4e8-4bc6-89df-3fa68d972a33-kube-api-access-9x6xm\") pod \"barbican-worker-78bff7b94c-t49mw\" (UID: \"e460cf1d-b4e8-4bc6-89df-3fa68d972a33\") " pod="openstack/barbican-worker-78bff7b94c-t49mw" Feb 03 12:31:45 crc kubenswrapper[4820]: I0203 12:31:45.105721 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e460cf1d-b4e8-4bc6-89df-3fa68d972a33-config-data-custom\") pod \"barbican-worker-78bff7b94c-t49mw\" (UID: \"e460cf1d-b4e8-4bc6-89df-3fa68d972a33\") " pod="openstack/barbican-worker-78bff7b94c-t49mw" Feb 03 12:31:45 crc kubenswrapper[4820]: I0203 12:31:45.105744 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e460cf1d-b4e8-4bc6-89df-3fa68d972a33-config-data\") pod \"barbican-worker-78bff7b94c-t49mw\" (UID: \"e460cf1d-b4e8-4bc6-89df-3fa68d972a33\") " pod="openstack/barbican-worker-78bff7b94c-t49mw" Feb 03 12:31:45 crc kubenswrapper[4820]: I0203 12:31:45.116106 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e460cf1d-b4e8-4bc6-89df-3fa68d972a33-logs\") pod \"barbican-worker-78bff7b94c-t49mw\" (UID: \"e460cf1d-b4e8-4bc6-89df-3fa68d972a33\") " pod="openstack/barbican-worker-78bff7b94c-t49mw" Feb 03 12:31:45 crc kubenswrapper[4820]: I0203 12:31:45.132795 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e460cf1d-b4e8-4bc6-89df-3fa68d972a33-config-data-custom\") pod \"barbican-worker-78bff7b94c-t49mw\" (UID: \"e460cf1d-b4e8-4bc6-89df-3fa68d972a33\") " pod="openstack/barbican-worker-78bff7b94c-t49mw" Feb 03 12:31:45 crc kubenswrapper[4820]: I0203 12:31:45.153828 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9x6xm\" (UniqueName: \"kubernetes.io/projected/e460cf1d-b4e8-4bc6-89df-3fa68d972a33-kube-api-access-9x6xm\") pod \"barbican-worker-78bff7b94c-t49mw\" (UID: \"e460cf1d-b4e8-4bc6-89df-3fa68d972a33\") " 
pod="openstack/barbican-worker-78bff7b94c-t49mw" Feb 03 12:31:45 crc kubenswrapper[4820]: I0203 12:31:45.159931 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e460cf1d-b4e8-4bc6-89df-3fa68d972a33-combined-ca-bundle\") pod \"barbican-worker-78bff7b94c-t49mw\" (UID: \"e460cf1d-b4e8-4bc6-89df-3fa68d972a33\") " pod="openstack/barbican-worker-78bff7b94c-t49mw" Feb 03 12:31:45 crc kubenswrapper[4820]: I0203 12:31:45.199210 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e460cf1d-b4e8-4bc6-89df-3fa68d972a33-config-data\") pod \"barbican-worker-78bff7b94c-t49mw\" (UID: \"e460cf1d-b4e8-4bc6-89df-3fa68d972a33\") " pod="openstack/barbican-worker-78bff7b94c-t49mw" Feb 03 12:31:45 crc kubenswrapper[4820]: I0203 12:31:45.201240 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5f5c5f87-b592-4f5d-86bc-3069985ae61a" path="/var/lib/kubelet/pods/5f5c5f87-b592-4f5d-86bc-3069985ae61a/volumes" Feb 03 12:31:45 crc kubenswrapper[4820]: I0203 12:31:45.208402 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14f188d1-7883-471d-9564-01f405548b98-config-data\") pod \"barbican-keystone-listener-bfcd7c7c4-s6j9b\" (UID: \"14f188d1-7883-471d-9564-01f405548b98\") " pod="openstack/barbican-keystone-listener-bfcd7c7c4-s6j9b" Feb 03 12:31:45 crc kubenswrapper[4820]: I0203 12:31:45.208470 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6z2rb\" (UniqueName: \"kubernetes.io/projected/14f188d1-7883-471d-9564-01f405548b98-kube-api-access-6z2rb\") pod \"barbican-keystone-listener-bfcd7c7c4-s6j9b\" (UID: \"14f188d1-7883-471d-9564-01f405548b98\") " pod="openstack/barbican-keystone-listener-bfcd7c7c4-s6j9b" Feb 03 12:31:45 crc kubenswrapper[4820]: I0203 12:31:45.208509 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14f188d1-7883-471d-9564-01f405548b98-combined-ca-bundle\") pod \"barbican-keystone-listener-bfcd7c7c4-s6j9b\" (UID: \"14f188d1-7883-471d-9564-01f405548b98\") " pod="openstack/barbican-keystone-listener-bfcd7c7c4-s6j9b" Feb 03 12:31:45 crc kubenswrapper[4820]: I0203 12:31:45.208576 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/14f188d1-7883-471d-9564-01f405548b98-config-data-custom\") pod \"barbican-keystone-listener-bfcd7c7c4-s6j9b\" (UID: \"14f188d1-7883-471d-9564-01f405548b98\") " pod="openstack/barbican-keystone-listener-bfcd7c7c4-s6j9b" Feb 03 12:31:45 crc kubenswrapper[4820]: I0203 12:31:45.208622 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/14f188d1-7883-471d-9564-01f405548b98-logs\") pod \"barbican-keystone-listener-bfcd7c7c4-s6j9b\" (UID: \"14f188d1-7883-471d-9564-01f405548b98\") " pod="openstack/barbican-keystone-listener-bfcd7c7c4-s6j9b" Feb 03 12:31:45 crc kubenswrapper[4820]: I0203 12:31:45.214143 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-dxczn"] Feb 03 12:31:45 crc kubenswrapper[4820]: I0203 12:31:45.216349 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-dxczn" Feb 03 12:31:45 crc kubenswrapper[4820]: I0203 12:31:45.269425 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-dxczn"] Feb 03 12:31:45 crc kubenswrapper[4820]: I0203 12:31:45.297968 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-78bff7b94c-t49mw" Feb 03 12:31:45 crc kubenswrapper[4820]: I0203 12:31:45.300442 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-9dfbf858-g4qlm"] Feb 03 12:31:45 crc kubenswrapper[4820]: I0203 12:31:45.302305 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-9dfbf858-g4qlm" Feb 03 12:31:45 crc kubenswrapper[4820]: I0203 12:31:45.311807 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Feb 03 12:31:45 crc kubenswrapper[4820]: I0203 12:31:45.312595 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/66a7d0be-3243-4744-898c-b87b5f91c620-dns-svc\") pod \"dnsmasq-dns-85ff748b95-dxczn\" (UID: \"66a7d0be-3243-4744-898c-b87b5f91c620\") " pod="openstack/dnsmasq-dns-85ff748b95-dxczn" Feb 03 12:31:45 crc kubenswrapper[4820]: I0203 12:31:45.312676 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/66a7d0be-3243-4744-898c-b87b5f91c620-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-dxczn\" (UID: \"66a7d0be-3243-4744-898c-b87b5f91c620\") " pod="openstack/dnsmasq-dns-85ff748b95-dxczn" Feb 03 12:31:45 crc kubenswrapper[4820]: I0203 12:31:45.312749 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14f188d1-7883-471d-9564-01f405548b98-config-data\") pod \"barbican-keystone-listener-bfcd7c7c4-s6j9b\" (UID: \"14f188d1-7883-471d-9564-01f405548b98\") " pod="openstack/barbican-keystone-listener-bfcd7c7c4-s6j9b" Feb 03 12:31:45 crc kubenswrapper[4820]: I0203 12:31:45.312789 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6z2rb\" (UniqueName: \"kubernetes.io/projected/14f188d1-7883-471d-9564-01f405548b98-kube-api-access-6z2rb\") pod \"barbican-keystone-listener-bfcd7c7c4-s6j9b\" (UID: \"14f188d1-7883-471d-9564-01f405548b98\") " pod="openstack/barbican-keystone-listener-bfcd7c7c4-s6j9b" Feb 03 12:31:45 crc kubenswrapper[4820]: I0203 12:31:45.312816 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14f188d1-7883-471d-9564-01f405548b98-combined-ca-bundle\") pod \"barbican-keystone-listener-bfcd7c7c4-s6j9b\" (UID: \"14f188d1-7883-471d-9564-01f405548b98\") " pod="openstack/barbican-keystone-listener-bfcd7c7c4-s6j9b" Feb 03 12:31:45 crc kubenswrapper[4820]: I0203 12:31:45.312850 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/66a7d0be-3243-4744-898c-b87b5f91c620-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-dxczn\" (UID: \"66a7d0be-3243-4744-898c-b87b5f91c620\") " pod="openstack/dnsmasq-dns-85ff748b95-dxczn" Feb 03 12:31:45 crc kubenswrapper[4820]: I0203 12:31:45.312913 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/66a7d0be-3243-4744-898c-b87b5f91c620-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-dxczn\" (UID: \"66a7d0be-3243-4744-898c-b87b5f91c620\") " pod="openstack/dnsmasq-dns-85ff748b95-dxczn" Feb 03 12:31:45 crc kubenswrapper[4820]: I0203 12:31:45.312942 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/66a7d0be-3243-4744-898c-b87b5f91c620-config\") pod \"dnsmasq-dns-85ff748b95-dxczn\" (UID: \"66a7d0be-3243-4744-898c-b87b5f91c620\") " pod="openstack/dnsmasq-dns-85ff748b95-dxczn" Feb 03 12:31:45 crc kubenswrapper[4820]: I0203 12:31:45.312968 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/14f188d1-7883-471d-9564-01f405548b98-config-data-custom\") pod \"barbican-keystone-listener-bfcd7c7c4-s6j9b\" (UID: \"14f188d1-7883-471d-9564-01f405548b98\") " pod="openstack/barbican-keystone-listener-bfcd7c7c4-s6j9b" Feb 03 12:31:45 crc kubenswrapper[4820]: I0203 12:31:45.312991 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c22gj\" (UniqueName: \"kubernetes.io/projected/66a7d0be-3243-4744-898c-b87b5f91c620-kube-api-access-c22gj\") pod \"dnsmasq-dns-85ff748b95-dxczn\" (UID: \"66a7d0be-3243-4744-898c-b87b5f91c620\") " pod="openstack/dnsmasq-dns-85ff748b95-dxczn" Feb 03 12:31:45 crc kubenswrapper[4820]: I0203 12:31:45.313031 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/14f188d1-7883-471d-9564-01f405548b98-logs\") pod \"barbican-keystone-listener-bfcd7c7c4-s6j9b\" (UID: \"14f188d1-7883-471d-9564-01f405548b98\") " pod="openstack/barbican-keystone-listener-bfcd7c7c4-s6j9b" Feb 03 12:31:45 crc kubenswrapper[4820]: I0203 12:31:45.313527 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/14f188d1-7883-471d-9564-01f405548b98-logs\") pod \"barbican-keystone-listener-bfcd7c7c4-s6j9b\" (UID: \"14f188d1-7883-471d-9564-01f405548b98\") " pod="openstack/barbican-keystone-listener-bfcd7c7c4-s6j9b" Feb 03 12:31:45 crc kubenswrapper[4820]: I0203 12:31:45.317734 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-9dfbf858-g4qlm"] Feb 03 12:31:45 crc kubenswrapper[4820]: I0203 12:31:45.327603 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14f188d1-7883-471d-9564-01f405548b98-config-data\") pod \"barbican-keystone-listener-bfcd7c7c4-s6j9b\" (UID: \"14f188d1-7883-471d-9564-01f405548b98\") " pod="openstack/barbican-keystone-listener-bfcd7c7c4-s6j9b" Feb 03 12:31:45 crc kubenswrapper[4820]: I0203 12:31:45.329643 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14f188d1-7883-471d-9564-01f405548b98-combined-ca-bundle\") pod \"barbican-keystone-listener-bfcd7c7c4-s6j9b\" (UID: \"14f188d1-7883-471d-9564-01f405548b98\") " pod="openstack/barbican-keystone-listener-bfcd7c7c4-s6j9b" Feb 03 12:31:45 crc kubenswrapper[4820]: I0203 12:31:45.366558 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/14f188d1-7883-471d-9564-01f405548b98-config-data-custom\") pod 
\"barbican-keystone-listener-bfcd7c7c4-s6j9b\" (UID: \"14f188d1-7883-471d-9564-01f405548b98\") " pod="openstack/barbican-keystone-listener-bfcd7c7c4-s6j9b" Feb 03 12:31:45 crc kubenswrapper[4820]: I0203 12:31:45.378199 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6z2rb\" (UniqueName: \"kubernetes.io/projected/14f188d1-7883-471d-9564-01f405548b98-kube-api-access-6z2rb\") pod \"barbican-keystone-listener-bfcd7c7c4-s6j9b\" (UID: \"14f188d1-7883-471d-9564-01f405548b98\") " pod="openstack/barbican-keystone-listener-bfcd7c7c4-s6j9b" Feb 03 12:31:45 crc kubenswrapper[4820]: I0203 12:31:45.416467 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/66a7d0be-3243-4744-898c-b87b5f91c620-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-dxczn\" (UID: \"66a7d0be-3243-4744-898c-b87b5f91c620\") " pod="openstack/dnsmasq-dns-85ff748b95-dxczn" Feb 03 12:31:45 crc kubenswrapper[4820]: I0203 12:31:45.416526 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/66a7d0be-3243-4744-898c-b87b5f91c620-config\") pod \"dnsmasq-dns-85ff748b95-dxczn\" (UID: \"66a7d0be-3243-4744-898c-b87b5f91c620\") " pod="openstack/dnsmasq-dns-85ff748b95-dxczn" Feb 03 12:31:45 crc kubenswrapper[4820]: I0203 12:31:45.416569 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c22gj\" (UniqueName: \"kubernetes.io/projected/66a7d0be-3243-4744-898c-b87b5f91c620-kube-api-access-c22gj\") pod \"dnsmasq-dns-85ff748b95-dxczn\" (UID: \"66a7d0be-3243-4744-898c-b87b5f91c620\") " pod="openstack/dnsmasq-dns-85ff748b95-dxczn" Feb 03 12:31:45 crc kubenswrapper[4820]: I0203 12:31:45.416658 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/66a7d0be-3243-4744-898c-b87b5f91c620-dns-svc\") pod \"dnsmasq-dns-85ff748b95-dxczn\" (UID: \"66a7d0be-3243-4744-898c-b87b5f91c620\") " pod="openstack/dnsmasq-dns-85ff748b95-dxczn" Feb 03 12:31:45 crc kubenswrapper[4820]: I0203 12:31:45.416693 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/71ae9703-401c-41f0-8316-9b485f9d0b29-config-data-custom\") pod \"barbican-api-9dfbf858-g4qlm\" (UID: \"71ae9703-401c-41f0-8316-9b485f9d0b29\") " pod="openstack/barbican-api-9dfbf858-g4qlm" Feb 03 12:31:45 crc kubenswrapper[4820]: I0203 12:31:45.416720 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/71ae9703-401c-41f0-8316-9b485f9d0b29-logs\") pod \"barbican-api-9dfbf858-g4qlm\" (UID: \"71ae9703-401c-41f0-8316-9b485f9d0b29\") " pod="openstack/barbican-api-9dfbf858-g4qlm" Feb 03 12:31:45 crc kubenswrapper[4820]: I0203 12:31:45.416767 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/66a7d0be-3243-4744-898c-b87b5f91c620-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-dxczn\" (UID: \"66a7d0be-3243-4744-898c-b87b5f91c620\") " pod="openstack/dnsmasq-dns-85ff748b95-dxczn" Feb 03 12:31:45 crc kubenswrapper[4820]: I0203 12:31:45.416805 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/71ae9703-401c-41f0-8316-9b485f9d0b29-combined-ca-bundle\") pod \"barbican-api-9dfbf858-g4qlm\" (UID: \"71ae9703-401c-41f0-8316-9b485f9d0b29\") " pod="openstack/barbican-api-9dfbf858-g4qlm" Feb 03 12:31:45 crc kubenswrapper[4820]: I0203 12:31:45.416843 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pmmx8\" (UniqueName: \"kubernetes.io/projected/71ae9703-401c-41f0-8316-9b485f9d0b29-kube-api-access-pmmx8\") pod \"barbican-api-9dfbf858-g4qlm\" (UID: \"71ae9703-401c-41f0-8316-9b485f9d0b29\") " pod="openstack/barbican-api-9dfbf858-g4qlm" Feb 03 12:31:45 crc kubenswrapper[4820]: I0203 12:31:45.416878 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71ae9703-401c-41f0-8316-9b485f9d0b29-config-data\") pod \"barbican-api-9dfbf858-g4qlm\" (UID: \"71ae9703-401c-41f0-8316-9b485f9d0b29\") " pod="openstack/barbican-api-9dfbf858-g4qlm" Feb 03 12:31:45 crc kubenswrapper[4820]: I0203 12:31:45.416964 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/66a7d0be-3243-4744-898c-b87b5f91c620-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-dxczn\" (UID: \"66a7d0be-3243-4744-898c-b87b5f91c620\") " pod="openstack/dnsmasq-dns-85ff748b95-dxczn" Feb 03 12:31:45 crc kubenswrapper[4820]: I0203 12:31:45.418293 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/66a7d0be-3243-4744-898c-b87b5f91c620-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-dxczn\" (UID: \"66a7d0be-3243-4744-898c-b87b5f91c620\") " pod="openstack/dnsmasq-dns-85ff748b95-dxczn" Feb 03 12:31:45 crc kubenswrapper[4820]: I0203 12:31:45.419962 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/66a7d0be-3243-4744-898c-b87b5f91c620-config\") pod \"dnsmasq-dns-85ff748b95-dxczn\" (UID: \"66a7d0be-3243-4744-898c-b87b5f91c620\") " pod="openstack/dnsmasq-dns-85ff748b95-dxczn" Feb 03 12:31:45 crc kubenswrapper[4820]: I0203 12:31:45.420991 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/66a7d0be-3243-4744-898c-b87b5f91c620-dns-svc\") pod \"dnsmasq-dns-85ff748b95-dxczn\" (UID: \"66a7d0be-3243-4744-898c-b87b5f91c620\") " pod="openstack/dnsmasq-dns-85ff748b95-dxczn" Feb 03 12:31:45 crc kubenswrapper[4820]: I0203 12:31:45.421640 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/66a7d0be-3243-4744-898c-b87b5f91c620-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-dxczn\" (UID: \"66a7d0be-3243-4744-898c-b87b5f91c620\") " pod="openstack/dnsmasq-dns-85ff748b95-dxczn" Feb 03 12:31:45 crc kubenswrapper[4820]: I0203 12:31:45.422224 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-bfcd7c7c4-s6j9b" Feb 03 12:31:45 crc kubenswrapper[4820]: I0203 12:31:45.440806 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/66a7d0be-3243-4744-898c-b87b5f91c620-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-dxczn\" (UID: \"66a7d0be-3243-4744-898c-b87b5f91c620\") " pod="openstack/dnsmasq-dns-85ff748b95-dxczn" Feb 03 12:31:45 crc kubenswrapper[4820]: I0203 12:31:45.463395 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-775b8c5454-c9g7t"] Feb 03 12:31:45 crc kubenswrapper[4820]: I0203 12:31:45.465655 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-775b8c5454-c9g7t" Feb 03 12:31:45 crc kubenswrapper[4820]: I0203 12:31:45.492659 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c22gj\" (UniqueName: \"kubernetes.io/projected/66a7d0be-3243-4744-898c-b87b5f91c620-kube-api-access-c22gj\") pod \"dnsmasq-dns-85ff748b95-dxczn\" (UID: \"66a7d0be-3243-4744-898c-b87b5f91c620\") " pod="openstack/dnsmasq-dns-85ff748b95-dxczn" Feb 03 12:31:45 crc kubenswrapper[4820]: I0203 12:31:45.503129 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-775b8c5454-c9g7t"] Feb 03 12:31:45 crc kubenswrapper[4820]: I0203 12:31:45.503222 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-dxczn" Feb 03 12:31:45 crc kubenswrapper[4820]: I0203 12:31:45.515317 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-9csj4" event={"ID":"470b8f27-2959-4890-aed3-361530b83b73","Type":"ContainerStarted","Data":"10efacc21caf698fb5a3a65a239aca041e4d8cd493b7d1a84de2d1c346e3e9a8"} Feb 03 12:31:45 crc kubenswrapper[4820]: I0203 12:31:45.518880 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h864q\" (UniqueName: \"kubernetes.io/projected/86a0d38b-74e6-4528-9dae-af9c8400555d-kube-api-access-h864q\") pod \"barbican-keystone-listener-775b8c5454-c9g7t\" (UID: \"86a0d38b-74e6-4528-9dae-af9c8400555d\") " pod="openstack/barbican-keystone-listener-775b8c5454-c9g7t" Feb 03 12:31:45 crc kubenswrapper[4820]: I0203 12:31:45.518969 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86a0d38b-74e6-4528-9dae-af9c8400555d-combined-ca-bundle\") pod \"barbican-keystone-listener-775b8c5454-c9g7t\" (UID: \"86a0d38b-74e6-4528-9dae-af9c8400555d\") " pod="openstack/barbican-keystone-listener-775b8c5454-c9g7t" Feb 03 12:31:45 crc kubenswrapper[4820]: I0203 12:31:45.519055 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86a0d38b-74e6-4528-9dae-af9c8400555d-config-data\") pod \"barbican-keystone-listener-775b8c5454-c9g7t\" (UID: \"86a0d38b-74e6-4528-9dae-af9c8400555d\") " pod="openstack/barbican-keystone-listener-775b8c5454-c9g7t" Feb 03 12:31:45 crc kubenswrapper[4820]: I0203 12:31:45.519111 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/71ae9703-401c-41f0-8316-9b485f9d0b29-config-data-custom\") pod \"barbican-api-9dfbf858-g4qlm\" (UID: 
\"71ae9703-401c-41f0-8316-9b485f9d0b29\") " pod="openstack/barbican-api-9dfbf858-g4qlm" Feb 03 12:31:45 crc kubenswrapper[4820]: I0203 12:31:45.519143 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/71ae9703-401c-41f0-8316-9b485f9d0b29-logs\") pod \"barbican-api-9dfbf858-g4qlm\" (UID: \"71ae9703-401c-41f0-8316-9b485f9d0b29\") " pod="openstack/barbican-api-9dfbf858-g4qlm" Feb 03 12:31:45 crc kubenswrapper[4820]: I0203 12:31:45.519192 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/86a0d38b-74e6-4528-9dae-af9c8400555d-config-data-custom\") pod \"barbican-keystone-listener-775b8c5454-c9g7t\" (UID: \"86a0d38b-74e6-4528-9dae-af9c8400555d\") " pod="openstack/barbican-keystone-listener-775b8c5454-c9g7t" Feb 03 12:31:45 crc kubenswrapper[4820]: I0203 12:31:45.519238 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71ae9703-401c-41f0-8316-9b485f9d0b29-combined-ca-bundle\") pod \"barbican-api-9dfbf858-g4qlm\" (UID: \"71ae9703-401c-41f0-8316-9b485f9d0b29\") " pod="openstack/barbican-api-9dfbf858-g4qlm" Feb 03 12:31:45 crc kubenswrapper[4820]: I0203 12:31:45.519263 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pmmx8\" (UniqueName: \"kubernetes.io/projected/71ae9703-401c-41f0-8316-9b485f9d0b29-kube-api-access-pmmx8\") pod \"barbican-api-9dfbf858-g4qlm\" (UID: \"71ae9703-401c-41f0-8316-9b485f9d0b29\") " pod="openstack/barbican-api-9dfbf858-g4qlm" Feb 03 12:31:45 crc kubenswrapper[4820]: I0203 12:31:45.519297 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71ae9703-401c-41f0-8316-9b485f9d0b29-config-data\") pod \"barbican-api-9dfbf858-g4qlm\" (UID: \"71ae9703-401c-41f0-8316-9b485f9d0b29\") " pod="openstack/barbican-api-9dfbf858-g4qlm" Feb 03 12:31:45 crc kubenswrapper[4820]: I0203 12:31:45.519399 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/86a0d38b-74e6-4528-9dae-af9c8400555d-logs\") pod \"barbican-keystone-listener-775b8c5454-c9g7t\" (UID: \"86a0d38b-74e6-4528-9dae-af9c8400555d\") " pod="openstack/barbican-keystone-listener-775b8c5454-c9g7t" Feb 03 12:31:45 crc kubenswrapper[4820]: I0203 12:31:45.526761 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/71ae9703-401c-41f0-8316-9b485f9d0b29-logs\") pod \"barbican-api-9dfbf858-g4qlm\" (UID: \"71ae9703-401c-41f0-8316-9b485f9d0b29\") " pod="openstack/barbican-api-9dfbf858-g4qlm" Feb 03 12:31:45 crc kubenswrapper[4820]: I0203 12:31:45.527627 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/71ae9703-401c-41f0-8316-9b485f9d0b29-config-data-custom\") pod \"barbican-api-9dfbf858-g4qlm\" (UID: \"71ae9703-401c-41f0-8316-9b485f9d0b29\") " pod="openstack/barbican-api-9dfbf858-g4qlm" Feb 03 12:31:45 crc kubenswrapper[4820]: I0203 12:31:45.539820 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71ae9703-401c-41f0-8316-9b485f9d0b29-combined-ca-bundle\") pod \"barbican-api-9dfbf858-g4qlm\" (UID: 
\"71ae9703-401c-41f0-8316-9b485f9d0b29\") " pod="openstack/barbican-api-9dfbf858-g4qlm" Feb 03 12:31:45 crc kubenswrapper[4820]: I0203 12:31:45.541089 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71ae9703-401c-41f0-8316-9b485f9d0b29-config-data\") pod \"barbican-api-9dfbf858-g4qlm\" (UID: \"71ae9703-401c-41f0-8316-9b485f9d0b29\") " pod="openstack/barbican-api-9dfbf858-g4qlm" Feb 03 12:31:45 crc kubenswrapper[4820]: I0203 12:31:45.572443 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"cd46da3e-bb82-4990-8d29-03f53c601f36","Type":"ContainerStarted","Data":"76b337ba9889fcd2d04101da32a8144cfa4c5201045390fad76182170e8b3ad0"} Feb 03 12:31:45 crc kubenswrapper[4820]: I0203 12:31:45.574599 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pmmx8\" (UniqueName: \"kubernetes.io/projected/71ae9703-401c-41f0-8316-9b485f9d0b29-kube-api-access-pmmx8\") pod \"barbican-api-9dfbf858-g4qlm\" (UID: \"71ae9703-401c-41f0-8316-9b485f9d0b29\") " pod="openstack/barbican-api-9dfbf858-g4qlm" Feb 03 12:31:45 crc kubenswrapper[4820]: I0203 12:31:45.595305 4820 generic.go:334] "Generic (PLEG): container finished" podID="ad9b0bbe-7f17-4347-bda3-5f0a843b3997" containerID="e52c95d02c9ad3f15bfa6263310451fe9e8dbe757b14fe47f62b77c9236d90a2" exitCode=0 Feb 03 12:31:45 crc kubenswrapper[4820]: I0203 12:31:45.595665 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qjmdv" event={"ID":"ad9b0bbe-7f17-4347-bda3-5f0a843b3997","Type":"ContainerDied","Data":"e52c95d02c9ad3f15bfa6263310451fe9e8dbe757b14fe47f62b77c9236d90a2"} Feb 03 12:31:45 crc kubenswrapper[4820]: I0203 12:31:45.621513 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h864q\" (UniqueName: \"kubernetes.io/projected/86a0d38b-74e6-4528-9dae-af9c8400555d-kube-api-access-h864q\") pod \"barbican-keystone-listener-775b8c5454-c9g7t\" (UID: \"86a0d38b-74e6-4528-9dae-af9c8400555d\") " pod="openstack/barbican-keystone-listener-775b8c5454-c9g7t" Feb 03 12:31:45 crc kubenswrapper[4820]: I0203 12:31:45.621859 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86a0d38b-74e6-4528-9dae-af9c8400555d-combined-ca-bundle\") pod \"barbican-keystone-listener-775b8c5454-c9g7t\" (UID: \"86a0d38b-74e6-4528-9dae-af9c8400555d\") " pod="openstack/barbican-keystone-listener-775b8c5454-c9g7t" Feb 03 12:31:45 crc kubenswrapper[4820]: I0203 12:31:45.622074 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86a0d38b-74e6-4528-9dae-af9c8400555d-config-data\") pod \"barbican-keystone-listener-775b8c5454-c9g7t\" (UID: \"86a0d38b-74e6-4528-9dae-af9c8400555d\") " pod="openstack/barbican-keystone-listener-775b8c5454-c9g7t" Feb 03 12:31:45 crc kubenswrapper[4820]: I0203 12:31:45.622350 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/86a0d38b-74e6-4528-9dae-af9c8400555d-config-data-custom\") pod \"barbican-keystone-listener-775b8c5454-c9g7t\" (UID: \"86a0d38b-74e6-4528-9dae-af9c8400555d\") " pod="openstack/barbican-keystone-listener-775b8c5454-c9g7t" Feb 03 12:31:45 crc kubenswrapper[4820]: I0203 12:31:45.622773 4820 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/86a0d38b-74e6-4528-9dae-af9c8400555d-logs\") pod \"barbican-keystone-listener-775b8c5454-c9g7t\" (UID: \"86a0d38b-74e6-4528-9dae-af9c8400555d\") " pod="openstack/barbican-keystone-listener-775b8c5454-c9g7t" Feb 03 12:31:45 crc kubenswrapper[4820]: I0203 12:31:45.623378 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/86a0d38b-74e6-4528-9dae-af9c8400555d-logs\") pod \"barbican-keystone-listener-775b8c5454-c9g7t\" (UID: \"86a0d38b-74e6-4528-9dae-af9c8400555d\") " pod="openstack/barbican-keystone-listener-775b8c5454-c9g7t" Feb 03 12:31:45 crc kubenswrapper[4820]: I0203 12:31:45.633060 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/86a0d38b-74e6-4528-9dae-af9c8400555d-config-data-custom\") pod \"barbican-keystone-listener-775b8c5454-c9g7t\" (UID: \"86a0d38b-74e6-4528-9dae-af9c8400555d\") " pod="openstack/barbican-keystone-listener-775b8c5454-c9g7t" Feb 03 12:31:45 crc kubenswrapper[4820]: I0203 12:31:45.634154 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86a0d38b-74e6-4528-9dae-af9c8400555d-combined-ca-bundle\") pod \"barbican-keystone-listener-775b8c5454-c9g7t\" (UID: \"86a0d38b-74e6-4528-9dae-af9c8400555d\") " pod="openstack/barbican-keystone-listener-775b8c5454-c9g7t" Feb 03 12:31:45 crc kubenswrapper[4820]: I0203 12:31:45.658262 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86a0d38b-74e6-4528-9dae-af9c8400555d-config-data\") pod \"barbican-keystone-listener-775b8c5454-c9g7t\" (UID: \"86a0d38b-74e6-4528-9dae-af9c8400555d\") " pod="openstack/barbican-keystone-listener-775b8c5454-c9g7t" Feb 03 12:31:45 crc kubenswrapper[4820]: I0203 12:31:45.660624 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h864q\" (UniqueName: \"kubernetes.io/projected/86a0d38b-74e6-4528-9dae-af9c8400555d-kube-api-access-h864q\") pod \"barbican-keystone-listener-775b8c5454-c9g7t\" (UID: \"86a0d38b-74e6-4528-9dae-af9c8400555d\") " pod="openstack/barbican-keystone-listener-775b8c5454-c9g7t" Feb 03 12:31:45 crc kubenswrapper[4820]: I0203 12:31:45.660717 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-659d874887-6h95b"] Feb 03 12:31:45 crc kubenswrapper[4820]: I0203 12:31:45.663175 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-659d874887-6h95b" Feb 03 12:31:45 crc kubenswrapper[4820]: I0203 12:31:45.697021 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-659d874887-6h95b"] Feb 03 12:31:45 crc kubenswrapper[4820]: I0203 12:31:45.725767 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/410ba29a-39b4-4468-837d-8b38a94d638d-config-data-custom\") pod \"barbican-worker-659d874887-6h95b\" (UID: \"410ba29a-39b4-4468-837d-8b38a94d638d\") " pod="openstack/barbican-worker-659d874887-6h95b" Feb 03 12:31:45 crc kubenswrapper[4820]: I0203 12:31:45.725836 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pjmm5\" (UniqueName: \"kubernetes.io/projected/410ba29a-39b4-4468-837d-8b38a94d638d-kube-api-access-pjmm5\") pod \"barbican-worker-659d874887-6h95b\" (UID: \"410ba29a-39b4-4468-837d-8b38a94d638d\") " pod="openstack/barbican-worker-659d874887-6h95b" Feb 03 12:31:45 crc kubenswrapper[4820]: I0203 12:31:45.725864 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/410ba29a-39b4-4468-837d-8b38a94d638d-logs\") pod \"barbican-worker-659d874887-6h95b\" (UID: \"410ba29a-39b4-4468-837d-8b38a94d638d\") " pod="openstack/barbican-worker-659d874887-6h95b" Feb 03 12:31:45 crc kubenswrapper[4820]: I0203 12:31:45.725988 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/410ba29a-39b4-4468-837d-8b38a94d638d-config-data\") pod \"barbican-worker-659d874887-6h95b\" (UID: \"410ba29a-39b4-4468-837d-8b38a94d638d\") " pod="openstack/barbican-worker-659d874887-6h95b" Feb 03 12:31:45 crc kubenswrapper[4820]: I0203 12:31:45.726011 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/410ba29a-39b4-4468-837d-8b38a94d638d-combined-ca-bundle\") pod \"barbican-worker-659d874887-6h95b\" (UID: \"410ba29a-39b4-4468-837d-8b38a94d638d\") " pod="openstack/barbican-worker-659d874887-6h95b" Feb 03 12:31:45 crc kubenswrapper[4820]: I0203 12:31:45.731986 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-7499595d8b-fm478"] Feb 03 12:31:45 crc kubenswrapper[4820]: I0203 12:31:45.733982 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-7499595d8b-fm478" Feb 03 12:31:45 crc kubenswrapper[4820]: I0203 12:31:45.734736 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-9csj4" podStartSLOduration=4.586137653 podStartE2EDuration="1m34.734710905s" podCreationTimestamp="2026-02-03 12:30:11 +0000 UTC" firstStartedPulling="2026-02-03 12:30:13.900848533 +0000 UTC m=+1531.423924387" lastFinishedPulling="2026-02-03 12:31:44.049421755 +0000 UTC m=+1621.572497639" observedRunningTime="2026-02-03 12:31:45.572380112 +0000 UTC m=+1623.095455986" watchObservedRunningTime="2026-02-03 12:31:45.734710905 +0000 UTC m=+1623.257786769" Feb 03 12:31:45 crc kubenswrapper[4820]: I0203 12:31:45.806397 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7499595d8b-fm478"] Feb 03 12:31:45 crc kubenswrapper[4820]: I0203 12:31:45.819962 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-decision-engine-0" podStartSLOduration=5.085498794 podStartE2EDuration="1m25.819933967s" podCreationTimestamp="2026-02-03 12:30:20 +0000 UTC" firstStartedPulling="2026-02-03 12:30:23.315057694 +0000 UTC m=+1540.838133558" lastFinishedPulling="2026-02-03 12:31:44.049492857 +0000 UTC m=+1621.572568731" observedRunningTime="2026-02-03 12:31:45.652047703 +0000 UTC m=+1623.175123577" watchObservedRunningTime="2026-02-03 12:31:45.819933967 +0000 UTC m=+1623.343009831" Feb 03 12:31:45 crc kubenswrapper[4820]: I0203 12:31:45.825650 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-9dfbf858-g4qlm" Feb 03 12:31:45 crc kubenswrapper[4820]: I0203 12:31:45.827484 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87c8edc4-6865-4475-9338-43e90461215a-config-data\") pod \"barbican-api-7499595d8b-fm478\" (UID: \"87c8edc4-6865-4475-9338-43e90461215a\") " pod="openstack/barbican-api-7499595d8b-fm478" Feb 03 12:31:45 crc kubenswrapper[4820]: I0203 12:31:45.827598 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/410ba29a-39b4-4468-837d-8b38a94d638d-config-data-custom\") pod \"barbican-worker-659d874887-6h95b\" (UID: \"410ba29a-39b4-4468-837d-8b38a94d638d\") " pod="openstack/barbican-worker-659d874887-6h95b" Feb 03 12:31:45 crc kubenswrapper[4820]: I0203 12:31:45.827634 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/87c8edc4-6865-4475-9338-43e90461215a-logs\") pod \"barbican-api-7499595d8b-fm478\" (UID: \"87c8edc4-6865-4475-9338-43e90461215a\") " pod="openstack/barbican-api-7499595d8b-fm478" Feb 03 12:31:45 crc kubenswrapper[4820]: I0203 12:31:45.827710 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pjmm5\" (UniqueName: \"kubernetes.io/projected/410ba29a-39b4-4468-837d-8b38a94d638d-kube-api-access-pjmm5\") pod \"barbican-worker-659d874887-6h95b\" (UID: \"410ba29a-39b4-4468-837d-8b38a94d638d\") " pod="openstack/barbican-worker-659d874887-6h95b" Feb 03 12:31:45 crc kubenswrapper[4820]: I0203 12:31:45.827745 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/410ba29a-39b4-4468-837d-8b38a94d638d-logs\") pod \"barbican-worker-659d874887-6h95b\" 
(UID: \"410ba29a-39b4-4468-837d-8b38a94d638d\") " pod="openstack/barbican-worker-659d874887-6h95b" Feb 03 12:31:45 crc kubenswrapper[4820]: I0203 12:31:45.827786 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87c8edc4-6865-4475-9338-43e90461215a-combined-ca-bundle\") pod \"barbican-api-7499595d8b-fm478\" (UID: \"87c8edc4-6865-4475-9338-43e90461215a\") " pod="openstack/barbican-api-7499595d8b-fm478" Feb 03 12:31:45 crc kubenswrapper[4820]: I0203 12:31:45.827821 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8gwgx\" (UniqueName: \"kubernetes.io/projected/87c8edc4-6865-4475-9338-43e90461215a-kube-api-access-8gwgx\") pod \"barbican-api-7499595d8b-fm478\" (UID: \"87c8edc4-6865-4475-9338-43e90461215a\") " pod="openstack/barbican-api-7499595d8b-fm478" Feb 03 12:31:45 crc kubenswrapper[4820]: I0203 12:31:45.827857 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/87c8edc4-6865-4475-9338-43e90461215a-config-data-custom\") pod \"barbican-api-7499595d8b-fm478\" (UID: \"87c8edc4-6865-4475-9338-43e90461215a\") " pod="openstack/barbican-api-7499595d8b-fm478" Feb 03 12:31:45 crc kubenswrapper[4820]: I0203 12:31:45.827916 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/410ba29a-39b4-4468-837d-8b38a94d638d-config-data\") pod \"barbican-worker-659d874887-6h95b\" (UID: \"410ba29a-39b4-4468-837d-8b38a94d638d\") " pod="openstack/barbican-worker-659d874887-6h95b" Feb 03 12:31:45 crc kubenswrapper[4820]: I0203 12:31:45.827935 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/410ba29a-39b4-4468-837d-8b38a94d638d-combined-ca-bundle\") pod \"barbican-worker-659d874887-6h95b\" (UID: \"410ba29a-39b4-4468-837d-8b38a94d638d\") " pod="openstack/barbican-worker-659d874887-6h95b" Feb 03 12:31:45 crc kubenswrapper[4820]: I0203 12:31:45.834830 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/410ba29a-39b4-4468-837d-8b38a94d638d-logs\") pod \"barbican-worker-659d874887-6h95b\" (UID: \"410ba29a-39b4-4468-837d-8b38a94d638d\") " pod="openstack/barbican-worker-659d874887-6h95b" Feb 03 12:31:45 crc kubenswrapper[4820]: I0203 12:31:45.867179 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-775b8c5454-c9g7t" Feb 03 12:31:45 crc kubenswrapper[4820]: I0203 12:31:45.919249 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/410ba29a-39b4-4468-837d-8b38a94d638d-combined-ca-bundle\") pod \"barbican-worker-659d874887-6h95b\" (UID: \"410ba29a-39b4-4468-837d-8b38a94d638d\") " pod="openstack/barbican-worker-659d874887-6h95b" Feb 03 12:31:45 crc kubenswrapper[4820]: I0203 12:31:45.929650 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/410ba29a-39b4-4468-837d-8b38a94d638d-config-data-custom\") pod \"barbican-worker-659d874887-6h95b\" (UID: \"410ba29a-39b4-4468-837d-8b38a94d638d\") " pod="openstack/barbican-worker-659d874887-6h95b" Feb 03 12:31:45 crc kubenswrapper[4820]: I0203 12:31:45.929834 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87c8edc4-6865-4475-9338-43e90461215a-combined-ca-bundle\") pod \"barbican-api-7499595d8b-fm478\" (UID: \"87c8edc4-6865-4475-9338-43e90461215a\") " pod="openstack/barbican-api-7499595d8b-fm478" Feb 03 12:31:45 crc kubenswrapper[4820]: I0203 12:31:45.929913 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8gwgx\" (UniqueName: \"kubernetes.io/projected/87c8edc4-6865-4475-9338-43e90461215a-kube-api-access-8gwgx\") pod \"barbican-api-7499595d8b-fm478\" (UID: \"87c8edc4-6865-4475-9338-43e90461215a\") " pod="openstack/barbican-api-7499595d8b-fm478" Feb 03 12:31:45 crc kubenswrapper[4820]: I0203 12:31:45.929960 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/87c8edc4-6865-4475-9338-43e90461215a-config-data-custom\") pod \"barbican-api-7499595d8b-fm478\" (UID: \"87c8edc4-6865-4475-9338-43e90461215a\") " pod="openstack/barbican-api-7499595d8b-fm478" Feb 03 12:31:45 crc kubenswrapper[4820]: I0203 12:31:45.930063 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87c8edc4-6865-4475-9338-43e90461215a-config-data\") pod \"barbican-api-7499595d8b-fm478\" (UID: \"87c8edc4-6865-4475-9338-43e90461215a\") " pod="openstack/barbican-api-7499595d8b-fm478" Feb 03 12:31:45 crc kubenswrapper[4820]: I0203 12:31:45.930149 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/87c8edc4-6865-4475-9338-43e90461215a-logs\") pod \"barbican-api-7499595d8b-fm478\" (UID: \"87c8edc4-6865-4475-9338-43e90461215a\") " pod="openstack/barbican-api-7499595d8b-fm478" Feb 03 12:31:45 crc kubenswrapper[4820]: I0203 12:31:45.930350 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/410ba29a-39b4-4468-837d-8b38a94d638d-config-data\") pod \"barbican-worker-659d874887-6h95b\" (UID: \"410ba29a-39b4-4468-837d-8b38a94d638d\") " pod="openstack/barbican-worker-659d874887-6h95b" Feb 03 12:31:45 crc kubenswrapper[4820]: I0203 12:31:45.930686 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/87c8edc4-6865-4475-9338-43e90461215a-logs\") pod \"barbican-api-7499595d8b-fm478\" (UID: \"87c8edc4-6865-4475-9338-43e90461215a\") " 
pod="openstack/barbican-api-7499595d8b-fm478" Feb 03 12:31:45 crc kubenswrapper[4820]: I0203 12:31:45.932510 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pjmm5\" (UniqueName: \"kubernetes.io/projected/410ba29a-39b4-4468-837d-8b38a94d638d-kube-api-access-pjmm5\") pod \"barbican-worker-659d874887-6h95b\" (UID: \"410ba29a-39b4-4468-837d-8b38a94d638d\") " pod="openstack/barbican-worker-659d874887-6h95b" Feb 03 12:31:45 crc kubenswrapper[4820]: I0203 12:31:45.951147 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87c8edc4-6865-4475-9338-43e90461215a-config-data\") pod \"barbican-api-7499595d8b-fm478\" (UID: \"87c8edc4-6865-4475-9338-43e90461215a\") " pod="openstack/barbican-api-7499595d8b-fm478" Feb 03 12:31:45 crc kubenswrapper[4820]: I0203 12:31:45.969385 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/87c8edc4-6865-4475-9338-43e90461215a-config-data-custom\") pod \"barbican-api-7499595d8b-fm478\" (UID: \"87c8edc4-6865-4475-9338-43e90461215a\") " pod="openstack/barbican-api-7499595d8b-fm478" Feb 03 12:31:45 crc kubenswrapper[4820]: I0203 12:31:45.969704 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8gwgx\" (UniqueName: \"kubernetes.io/projected/87c8edc4-6865-4475-9338-43e90461215a-kube-api-access-8gwgx\") pod \"barbican-api-7499595d8b-fm478\" (UID: \"87c8edc4-6865-4475-9338-43e90461215a\") " pod="openstack/barbican-api-7499595d8b-fm478" Feb 03 12:31:45 crc kubenswrapper[4820]: I0203 12:31:45.970311 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87c8edc4-6865-4475-9338-43e90461215a-combined-ca-bundle\") pod \"barbican-api-7499595d8b-fm478\" (UID: \"87c8edc4-6865-4475-9338-43e90461215a\") " pod="openstack/barbican-api-7499595d8b-fm478" Feb 03 12:31:46 crc kubenswrapper[4820]: I0203 12:31:46.060609 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-659d874887-6h95b" Feb 03 12:31:46 crc kubenswrapper[4820]: I0203 12:31:46.084620 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-7499595d8b-fm478" Feb 03 12:31:46 crc kubenswrapper[4820]: I0203 12:31:46.344169 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-78bff7b94c-t49mw"] Feb 03 12:31:46 crc kubenswrapper[4820]: I0203 12:31:46.387451 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-6ccd68b7f-9xjs9"] Feb 03 12:31:46 crc kubenswrapper[4820]: W0203 12:31:46.427062 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode460cf1d_b4e8_4bc6_89df_3fa68d972a33.slice/crio-f56dfdf8e101ae94217147a53ebaddc63716c1d85322da8e6346beb52226fa3c WatchSource:0}: Error finding container f56dfdf8e101ae94217147a53ebaddc63716c1d85322da8e6346beb52226fa3c: Status 404 returned error can't find the container with id f56dfdf8e101ae94217147a53ebaddc63716c1d85322da8e6346beb52226fa3c Feb 03 12:31:46 crc kubenswrapper[4820]: I0203 12:31:46.637714 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-6ccd68b7f-9xjs9" event={"ID":"c5d266f2-257d-4f06-9237-b34d67b51245","Type":"ContainerStarted","Data":"71001acc46399ccc16b4c2b88d9bef305d1572f5325eed7a6a21fcd86a083d92"} Feb 03 12:31:46 crc kubenswrapper[4820]: I0203 12:31:46.641064 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-78bff7b94c-t49mw" event={"ID":"e460cf1d-b4e8-4bc6-89df-3fa68d972a33","Type":"ContainerStarted","Data":"f56dfdf8e101ae94217147a53ebaddc63716c1d85322da8e6346beb52226fa3c"} Feb 03 12:31:47 crc kubenswrapper[4820]: I0203 12:31:47.293008 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-9dfbf858-g4qlm"] Feb 03 12:31:47 crc kubenswrapper[4820]: I0203 12:31:47.302817 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-bfcd7c7c4-s6j9b"] Feb 03 12:31:47 crc kubenswrapper[4820]: W0203 12:31:47.323238 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod14f188d1_7883_471d_9564_01f405548b98.slice/crio-12097bb6ec02a76825f6cb0c55953679baebdf294d5364a3de4de479172834a3 WatchSource:0}: Error finding container 12097bb6ec02a76825f6cb0c55953679baebdf294d5364a3de4de479172834a3: Status 404 returned error can't find the container with id 12097bb6ec02a76825f6cb0c55953679baebdf294d5364a3de4de479172834a3 Feb 03 12:31:47 crc kubenswrapper[4820]: I0203 12:31:47.340196 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-dxczn"] Feb 03 12:31:47 crc kubenswrapper[4820]: I0203 12:31:47.459777 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-659d874887-6h95b"] Feb 03 12:31:47 crc kubenswrapper[4820]: I0203 12:31:47.487335 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-775b8c5454-c9g7t"] Feb 03 12:31:47 crc kubenswrapper[4820]: I0203 12:31:47.625139 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7499595d8b-fm478"] Feb 03 12:31:47 crc kubenswrapper[4820]: I0203 12:31:47.679424 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-6ccd68b7f-9xjs9" event={"ID":"c5d266f2-257d-4f06-9237-b34d67b51245","Type":"ContainerStarted","Data":"286fa8e2b65b6ef3494cd439062660104bbf1c87c1aedef2c47ccae148c192e0"} Feb 03 12:31:47 crc kubenswrapper[4820]: I0203 12:31:47.679508 4820 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack/keystone-6ccd68b7f-9xjs9" Feb 03 12:31:47 crc kubenswrapper[4820]: I0203 12:31:47.684520 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-659d874887-6h95b" event={"ID":"410ba29a-39b4-4468-837d-8b38a94d638d","Type":"ContainerStarted","Data":"17e7e681fdea174390607d9894e5067f84ce10d35cbd6d801cd2db4f39edecbb"} Feb 03 12:31:47 crc kubenswrapper[4820]: I0203 12:31:47.689576 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-dxczn" event={"ID":"66a7d0be-3243-4744-898c-b87b5f91c620","Type":"ContainerStarted","Data":"39f219cb9f642e5b8ba79289c835c8c03c9604dac02b3cabb7f06d4bff441396"} Feb 03 12:31:47 crc kubenswrapper[4820]: I0203 12:31:47.691459 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-775b8c5454-c9g7t" event={"ID":"86a0d38b-74e6-4528-9dae-af9c8400555d","Type":"ContainerStarted","Data":"97b00b3016c45f4ca8f2e188a8be1bba2a7a3a31c06e061249c70a7445d725cf"} Feb 03 12:31:47 crc kubenswrapper[4820]: I0203 12:31:47.697030 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-bfcd7c7c4-s6j9b" event={"ID":"14f188d1-7883-471d-9564-01f405548b98","Type":"ContainerStarted","Data":"12097bb6ec02a76825f6cb0c55953679baebdf294d5364a3de4de479172834a3"} Feb 03 12:31:47 crc kubenswrapper[4820]: I0203 12:31:47.712988 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-6ccd68b7f-9xjs9" podStartSLOduration=3.712967086 podStartE2EDuration="3.712967086s" podCreationTimestamp="2026-02-03 12:31:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:31:47.701235719 +0000 UTC m=+1625.224311583" watchObservedRunningTime="2026-02-03 12:31:47.712967086 +0000 UTC m=+1625.236042950" Feb 03 12:31:47 crc kubenswrapper[4820]: I0203 12:31:47.713570 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qjmdv" event={"ID":"ad9b0bbe-7f17-4347-bda3-5f0a843b3997","Type":"ContainerStarted","Data":"01e6ac6cee71bdfaf782f5e5cd48f0af74d6e75badd32b53a759275905585df0"} Feb 03 12:31:47 crc kubenswrapper[4820]: I0203 12:31:47.724732 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-b4rms" event={"ID":"4b8f1b47-829d-4da3-a6a9-b73bbe7d20c0","Type":"ContainerStarted","Data":"f74d0b94426904787f61655b3b50e75153fd10c33f2fb6331a01e7bb2c173b9c"} Feb 03 12:31:47 crc kubenswrapper[4820]: I0203 12:31:47.729716 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7499595d8b-fm478" event={"ID":"87c8edc4-6865-4475-9338-43e90461215a","Type":"ContainerStarted","Data":"86c3f698def575cf72c4c634dca5b2f885f89518e403e36c218ed6c6a70be7c4"} Feb 03 12:31:47 crc kubenswrapper[4820]: I0203 12:31:47.767863 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-9dfbf858-g4qlm" event={"ID":"71ae9703-401c-41f0-8316-9b485f9d0b29","Type":"ContainerStarted","Data":"4736fc952a9474db9dd862c201b730bd2514f53d8f7aa37a6a783c5752b34696"} Feb 03 12:31:47 crc kubenswrapper[4820]: I0203 12:31:47.782064 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-785d8bcb8c-d5vcp" podUID="5f5c5f87-b592-4f5d-86bc-3069985ae61a" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.155:5353: i/o timeout" Feb 03 12:31:47 crc kubenswrapper[4820]: I0203 
12:31:47.832345 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-b4rms" podStartSLOduration=7.711140492 podStartE2EDuration="1m38.832311399s" podCreationTimestamp="2026-02-03 12:30:09 +0000 UTC" firstStartedPulling="2026-02-03 12:30:12.928228348 +0000 UTC m=+1530.451304212" lastFinishedPulling="2026-02-03 12:31:44.049399245 +0000 UTC m=+1621.572475119" observedRunningTime="2026-02-03 12:31:47.767700584 +0000 UTC m=+1625.290776458" watchObservedRunningTime="2026-02-03 12:31:47.832311399 +0000 UTC m=+1625.355387273" Feb 03 12:31:48 crc kubenswrapper[4820]: I0203 12:31:48.760384 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-8ff956445-pzzpk" podUID="156cf9db-e6bb-486e-b3b5-e72d4f99e684" containerName="neutron-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 03 12:31:48 crc kubenswrapper[4820]: I0203 12:31:48.764215 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/neutron-8ff956445-pzzpk" podUID="156cf9db-e6bb-486e-b3b5-e72d4f99e684" containerName="neutron-api" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 03 12:31:48 crc kubenswrapper[4820]: I0203 12:31:48.769829 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/neutron-8ff956445-pzzpk" podUID="156cf9db-e6bb-486e-b3b5-e72d4f99e684" containerName="neutron-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 03 12:31:48 crc kubenswrapper[4820]: I0203 12:31:48.796923 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-9dfbf858-g4qlm" event={"ID":"71ae9703-401c-41f0-8316-9b485f9d0b29","Type":"ContainerStarted","Data":"c860c1b2ef845649015318e1275bf80250d51f008b745cc276b35816b91106c3"} Feb 03 12:31:48 crc kubenswrapper[4820]: I0203 12:31:48.796986 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-9dfbf858-g4qlm" event={"ID":"71ae9703-401c-41f0-8316-9b485f9d0b29","Type":"ContainerStarted","Data":"11b065ef0fa2359f41e6b2d388300024c94ef96da0d65df324abd736fd410652"} Feb 03 12:31:48 crc kubenswrapper[4820]: I0203 12:31:48.812041 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-9dfbf858-g4qlm" Feb 03 12:31:48 crc kubenswrapper[4820]: I0203 12:31:48.812124 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-9dfbf858-g4qlm" Feb 03 12:31:48 crc kubenswrapper[4820]: I0203 12:31:48.814780 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-dxczn" event={"ID":"66a7d0be-3243-4744-898c-b87b5f91c620","Type":"ContainerStarted","Data":"025996c10336f09c8aa02b7f5465a966499fe35426a17762a34c6aba8500b55b"} Feb 03 12:31:48 crc kubenswrapper[4820]: I0203 12:31:48.837380 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7499595d8b-fm478" event={"ID":"87c8edc4-6865-4475-9338-43e90461215a","Type":"ContainerStarted","Data":"da4c6a2d82a4a4015553c9d2f744136991793d1c02a8ed2d8b66e6943630f5e2"} Feb 03 12:31:49 crc kubenswrapper[4820]: I0203 12:31:49.893145 4820 generic.go:334] "Generic (PLEG): container finished" podID="66a7d0be-3243-4744-898c-b87b5f91c620" containerID="025996c10336f09c8aa02b7f5465a966499fe35426a17762a34c6aba8500b55b" exitCode=0 Feb 03 12:31:49 crc kubenswrapper[4820]: I0203 12:31:49.893618 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-dxczn" 
event={"ID":"66a7d0be-3243-4744-898c-b87b5f91c620","Type":"ContainerDied","Data":"025996c10336f09c8aa02b7f5465a966499fe35426a17762a34c6aba8500b55b"} Feb 03 12:31:49 crc kubenswrapper[4820]: I0203 12:31:49.961683 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7499595d8b-fm478" event={"ID":"87c8edc4-6865-4475-9338-43e90461215a","Type":"ContainerStarted","Data":"cbbf79701137a7b14df3be3c1bce49dedb0b9b3453ec699fa6ccfc86e8c30b5d"} Feb 03 12:31:49 crc kubenswrapper[4820]: I0203 12:31:49.962130 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7499595d8b-fm478" Feb 03 12:31:49 crc kubenswrapper[4820]: I0203 12:31:49.962443 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7499595d8b-fm478" Feb 03 12:31:50 crc kubenswrapper[4820]: I0203 12:31:50.011259 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-9dfbf858-g4qlm" podStartSLOduration=5.011232698 podStartE2EDuration="5.011232698s" podCreationTimestamp="2026-02-03 12:31:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:31:48.883950927 +0000 UTC m=+1626.407026811" watchObservedRunningTime="2026-02-03 12:31:50.011232698 +0000 UTC m=+1627.534308552" Feb 03 12:31:50 crc kubenswrapper[4820]: I0203 12:31:50.056681 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-7499595d8b-fm478" podStartSLOduration=5.056659315 podStartE2EDuration="5.056659315s" podCreationTimestamp="2026-02-03 12:31:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:31:50.029186553 +0000 UTC m=+1627.552262417" watchObservedRunningTime="2026-02-03 12:31:50.056659315 +0000 UTC m=+1627.579735179" Feb 03 12:31:50 crc kubenswrapper[4820]: I0203 12:31:50.944838 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-7499595d8b-fm478"] Feb 03 12:31:50 crc kubenswrapper[4820]: I0203 12:31:50.995291 4820 generic.go:334] "Generic (PLEG): container finished" podID="ad9b0bbe-7f17-4347-bda3-5f0a843b3997" containerID="01e6ac6cee71bdfaf782f5e5cd48f0af74d6e75badd32b53a759275905585df0" exitCode=0 Feb 03 12:31:51 crc kubenswrapper[4820]: I0203 12:31:50.996752 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qjmdv" event={"ID":"ad9b0bbe-7f17-4347-bda3-5f0a843b3997","Type":"ContainerDied","Data":"01e6ac6cee71bdfaf782f5e5cd48f0af74d6e75badd32b53a759275905585df0"} Feb 03 12:31:51 crc kubenswrapper[4820]: I0203 12:31:51.323727 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-decision-engine-0" Feb 03 12:31:51 crc kubenswrapper[4820]: I0203 12:31:51.323765 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Feb 03 12:31:51 crc kubenswrapper[4820]: I0203 12:31:51.399193 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-decision-engine-0" Feb 03 12:31:51 crc kubenswrapper[4820]: I0203 12:31:51.448680 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-fdff74856-dfqrf"] Feb 03 12:31:51 crc kubenswrapper[4820]: I0203 12:31:51.451685 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-fdff74856-dfqrf" Feb 03 12:31:51 crc kubenswrapper[4820]: I0203 12:31:51.457630 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Feb 03 12:31:51 crc kubenswrapper[4820]: I0203 12:31:51.457959 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Feb 03 12:31:51 crc kubenswrapper[4820]: I0203 12:31:51.479704 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5229e26a-15af-47fd-bb4a-956968711984-internal-tls-certs\") pod \"barbican-api-fdff74856-dfqrf\" (UID: \"5229e26a-15af-47fd-bb4a-956968711984\") " pod="openstack/barbican-api-fdff74856-dfqrf" Feb 03 12:31:51 crc kubenswrapper[4820]: I0203 12:31:51.479814 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5229e26a-15af-47fd-bb4a-956968711984-config-data\") pod \"barbican-api-fdff74856-dfqrf\" (UID: \"5229e26a-15af-47fd-bb4a-956968711984\") " pod="openstack/barbican-api-fdff74856-dfqrf" Feb 03 12:31:51 crc kubenswrapper[4820]: I0203 12:31:51.479949 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5229e26a-15af-47fd-bb4a-956968711984-combined-ca-bundle\") pod \"barbican-api-fdff74856-dfqrf\" (UID: \"5229e26a-15af-47fd-bb4a-956968711984\") " pod="openstack/barbican-api-fdff74856-dfqrf" Feb 03 12:31:51 crc kubenswrapper[4820]: I0203 12:31:51.479986 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5229e26a-15af-47fd-bb4a-956968711984-logs\") pod \"barbican-api-fdff74856-dfqrf\" (UID: \"5229e26a-15af-47fd-bb4a-956968711984\") " pod="openstack/barbican-api-fdff74856-dfqrf" Feb 03 12:31:51 crc kubenswrapper[4820]: I0203 12:31:51.480040 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5229e26a-15af-47fd-bb4a-956968711984-config-data-custom\") pod \"barbican-api-fdff74856-dfqrf\" (UID: \"5229e26a-15af-47fd-bb4a-956968711984\") " pod="openstack/barbican-api-fdff74856-dfqrf" Feb 03 12:31:51 crc kubenswrapper[4820]: I0203 12:31:51.480100 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5229e26a-15af-47fd-bb4a-956968711984-public-tls-certs\") pod \"barbican-api-fdff74856-dfqrf\" (UID: \"5229e26a-15af-47fd-bb4a-956968711984\") " pod="openstack/barbican-api-fdff74856-dfqrf" Feb 03 12:31:51 crc kubenswrapper[4820]: I0203 12:31:51.480201 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r2vvg\" (UniqueName: \"kubernetes.io/projected/5229e26a-15af-47fd-bb4a-956968711984-kube-api-access-r2vvg\") pod \"barbican-api-fdff74856-dfqrf\" (UID: \"5229e26a-15af-47fd-bb4a-956968711984\") " pod="openstack/barbican-api-fdff74856-dfqrf" Feb 03 12:31:51 crc kubenswrapper[4820]: I0203 12:31:51.581491 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5229e26a-15af-47fd-bb4a-956968711984-combined-ca-bundle\") pod 
\"barbican-api-fdff74856-dfqrf\" (UID: \"5229e26a-15af-47fd-bb4a-956968711984\") " pod="openstack/barbican-api-fdff74856-dfqrf" Feb 03 12:31:51 crc kubenswrapper[4820]: I0203 12:31:51.581553 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5229e26a-15af-47fd-bb4a-956968711984-logs\") pod \"barbican-api-fdff74856-dfqrf\" (UID: \"5229e26a-15af-47fd-bb4a-956968711984\") " pod="openstack/barbican-api-fdff74856-dfqrf" Feb 03 12:31:51 crc kubenswrapper[4820]: I0203 12:31:51.581615 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5229e26a-15af-47fd-bb4a-956968711984-config-data-custom\") pod \"barbican-api-fdff74856-dfqrf\" (UID: \"5229e26a-15af-47fd-bb4a-956968711984\") " pod="openstack/barbican-api-fdff74856-dfqrf" Feb 03 12:31:51 crc kubenswrapper[4820]: I0203 12:31:51.581649 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5229e26a-15af-47fd-bb4a-956968711984-public-tls-certs\") pod \"barbican-api-fdff74856-dfqrf\" (UID: \"5229e26a-15af-47fd-bb4a-956968711984\") " pod="openstack/barbican-api-fdff74856-dfqrf" Feb 03 12:31:51 crc kubenswrapper[4820]: I0203 12:31:51.581718 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r2vvg\" (UniqueName: \"kubernetes.io/projected/5229e26a-15af-47fd-bb4a-956968711984-kube-api-access-r2vvg\") pod \"barbican-api-fdff74856-dfqrf\" (UID: \"5229e26a-15af-47fd-bb4a-956968711984\") " pod="openstack/barbican-api-fdff74856-dfqrf" Feb 03 12:31:51 crc kubenswrapper[4820]: I0203 12:31:51.581784 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5229e26a-15af-47fd-bb4a-956968711984-internal-tls-certs\") pod \"barbican-api-fdff74856-dfqrf\" (UID: \"5229e26a-15af-47fd-bb4a-956968711984\") " pod="openstack/barbican-api-fdff74856-dfqrf" Feb 03 12:31:51 crc kubenswrapper[4820]: I0203 12:31:51.581846 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5229e26a-15af-47fd-bb4a-956968711984-config-data\") pod \"barbican-api-fdff74856-dfqrf\" (UID: \"5229e26a-15af-47fd-bb4a-956968711984\") " pod="openstack/barbican-api-fdff74856-dfqrf" Feb 03 12:31:51 crc kubenswrapper[4820]: I0203 12:31:51.592507 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5229e26a-15af-47fd-bb4a-956968711984-logs\") pod \"barbican-api-fdff74856-dfqrf\" (UID: \"5229e26a-15af-47fd-bb4a-956968711984\") " pod="openstack/barbican-api-fdff74856-dfqrf" Feb 03 12:31:51 crc kubenswrapper[4820]: I0203 12:31:51.592583 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-fdff74856-dfqrf"] Feb 03 12:31:51 crc kubenswrapper[4820]: I0203 12:31:51.594752 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5229e26a-15af-47fd-bb4a-956968711984-internal-tls-certs\") pod \"barbican-api-fdff74856-dfqrf\" (UID: \"5229e26a-15af-47fd-bb4a-956968711984\") " pod="openstack/barbican-api-fdff74856-dfqrf" Feb 03 12:31:51 crc kubenswrapper[4820]: I0203 12:31:51.595782 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/5229e26a-15af-47fd-bb4a-956968711984-public-tls-certs\") pod \"barbican-api-fdff74856-dfqrf\" (UID: \"5229e26a-15af-47fd-bb4a-956968711984\") " pod="openstack/barbican-api-fdff74856-dfqrf" Feb 03 12:31:51 crc kubenswrapper[4820]: I0203 12:31:51.596832 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5229e26a-15af-47fd-bb4a-956968711984-config-data-custom\") pod \"barbican-api-fdff74856-dfqrf\" (UID: \"5229e26a-15af-47fd-bb4a-956968711984\") " pod="openstack/barbican-api-fdff74856-dfqrf" Feb 03 12:31:51 crc kubenswrapper[4820]: I0203 12:31:51.599208 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5229e26a-15af-47fd-bb4a-956968711984-config-data\") pod \"barbican-api-fdff74856-dfqrf\" (UID: \"5229e26a-15af-47fd-bb4a-956968711984\") " pod="openstack/barbican-api-fdff74856-dfqrf" Feb 03 12:31:51 crc kubenswrapper[4820]: I0203 12:31:51.600726 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5229e26a-15af-47fd-bb4a-956968711984-combined-ca-bundle\") pod \"barbican-api-fdff74856-dfqrf\" (UID: \"5229e26a-15af-47fd-bb4a-956968711984\") " pod="openstack/barbican-api-fdff74856-dfqrf" Feb 03 12:31:51 crc kubenswrapper[4820]: I0203 12:31:51.675773 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r2vvg\" (UniqueName: \"kubernetes.io/projected/5229e26a-15af-47fd-bb4a-956968711984-kube-api-access-r2vvg\") pod \"barbican-api-fdff74856-dfqrf\" (UID: \"5229e26a-15af-47fd-bb4a-956968711984\") " pod="openstack/barbican-api-fdff74856-dfqrf" Feb 03 12:31:51 crc kubenswrapper[4820]: I0203 12:31:51.881949 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-fdff74856-dfqrf" Feb 03 12:31:52 crc kubenswrapper[4820]: I0203 12:31:52.016218 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-7499595d8b-fm478" podUID="87c8edc4-6865-4475-9338-43e90461215a" containerName="barbican-api" containerID="cri-o://cbbf79701137a7b14df3be3c1bce49dedb0b9b3453ec699fa6ccfc86e8c30b5d" gracePeriod=30 Feb 03 12:31:52 crc kubenswrapper[4820]: I0203 12:31:52.016205 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-7499595d8b-fm478" podUID="87c8edc4-6865-4475-9338-43e90461215a" containerName="barbican-api-log" containerID="cri-o://da4c6a2d82a4a4015553c9d2f744136991793d1c02a8ed2d8b66e6943630f5e2" gracePeriod=30 Feb 03 12:31:52 crc kubenswrapper[4820]: I0203 12:31:52.018439 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-dxczn" event={"ID":"66a7d0be-3243-4744-898c-b87b5f91c620","Type":"ContainerStarted","Data":"fc188969c67459fbee4a335908c60772eb8f45e5828125a3adefab81e7450762"} Feb 03 12:31:52 crc kubenswrapper[4820]: I0203 12:31:52.018536 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-85ff748b95-dxczn" Feb 03 12:31:52 crc kubenswrapper[4820]: I0203 12:31:52.307696 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-85ff748b95-dxczn" podStartSLOduration=8.307665051 podStartE2EDuration="8.307665051s" podCreationTimestamp="2026-02-03 12:31:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:31:52.299951773 +0000 UTC m=+1629.823027647" watchObservedRunningTime="2026-02-03 12:31:52.307665051 +0000 UTC m=+1629.830740915" Feb 03 12:31:52 crc kubenswrapper[4820]: I0203 12:31:52.325416 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-decision-engine-0" Feb 03 12:31:53 crc kubenswrapper[4820]: I0203 12:31:53.108854 4820 generic.go:334] "Generic (PLEG): container finished" podID="87c8edc4-6865-4475-9338-43e90461215a" containerID="da4c6a2d82a4a4015553c9d2f744136991793d1c02a8ed2d8b66e6943630f5e2" exitCode=143 Feb 03 12:31:53 crc kubenswrapper[4820]: I0203 12:31:53.108981 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7499595d8b-fm478" event={"ID":"87c8edc4-6865-4475-9338-43e90461215a","Type":"ContainerDied","Data":"da4c6a2d82a4a4015553c9d2f744136991793d1c02a8ed2d8b66e6943630f5e2"} Feb 03 12:31:53 crc kubenswrapper[4820]: I0203 12:31:53.128293 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-5fdc8588b4-jtjr8" podUID="17c371f7-f032-4444-8d4b-1183a224c7b0" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.165:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.165:8443: connect: connection refused" Feb 03 12:31:53 crc kubenswrapper[4820]: I0203 12:31:53.128378 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-5fdc8588b4-jtjr8" Feb 03 12:31:53 crc kubenswrapper[4820]: I0203 12:31:53.129411 4820 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="horizon" containerStatusID={"Type":"cri-o","ID":"c22975ba3ab084f0050a4631f4a0020a8a20e782596e29e66b6e36290ae66cee"} pod="openstack/horizon-5fdc8588b4-jtjr8" containerMessage="Container horizon failed startup probe, will be restarted" 
Feb 03 12:31:53 crc kubenswrapper[4820]: I0203 12:31:53.129456 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-5fdc8588b4-jtjr8" podUID="17c371f7-f032-4444-8d4b-1183a224c7b0" containerName="horizon" containerID="cri-o://c22975ba3ab084f0050a4631f4a0020a8a20e782596e29e66b6e36290ae66cee" gracePeriod=30 Feb 03 12:31:53 crc kubenswrapper[4820]: I0203 12:31:53.622129 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-68b4df5bdd-tdb9h" podUID="308562dd-6078-4c1c-a4e0-c01a60a2d81d" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.166:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.166:8443: connect: connection refused" Feb 03 12:31:53 crc kubenswrapper[4820]: I0203 12:31:53.622245 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-68b4df5bdd-tdb9h" Feb 03 12:31:53 crc kubenswrapper[4820]: I0203 12:31:53.623562 4820 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="horizon" containerStatusID={"Type":"cri-o","ID":"a17b9aafc2fe0ed01eea3ac2324d99cc7383038c9d824a7384a0dcac0217d20f"} pod="openstack/horizon-68b4df5bdd-tdb9h" containerMessage="Container horizon failed startup probe, will be restarted" Feb 03 12:31:53 crc kubenswrapper[4820]: I0203 12:31:53.623630 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-68b4df5bdd-tdb9h" podUID="308562dd-6078-4c1c-a4e0-c01a60a2d81d" containerName="horizon" containerID="cri-o://a17b9aafc2fe0ed01eea3ac2324d99cc7383038c9d824a7384a0dcac0217d20f" gracePeriod=30 Feb 03 12:31:54 crc kubenswrapper[4820]: I0203 12:31:54.131280 4820 generic.go:334] "Generic (PLEG): container finished" podID="87c8edc4-6865-4475-9338-43e90461215a" containerID="cbbf79701137a7b14df3be3c1bce49dedb0b9b3453ec699fa6ccfc86e8c30b5d" exitCode=0 Feb 03 12:31:54 crc kubenswrapper[4820]: I0203 12:31:54.131372 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7499595d8b-fm478" event={"ID":"87c8edc4-6865-4475-9338-43e90461215a","Type":"ContainerDied","Data":"cbbf79701137a7b14df3be3c1bce49dedb0b9b3453ec699fa6ccfc86e8c30b5d"} Feb 03 12:31:54 crc kubenswrapper[4820]: I0203 12:31:54.946874 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-7499595d8b-fm478" Feb 03 12:31:55 crc kubenswrapper[4820]: I0203 12:31:55.042751 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/87c8edc4-6865-4475-9338-43e90461215a-config-data-custom\") pod \"87c8edc4-6865-4475-9338-43e90461215a\" (UID: \"87c8edc4-6865-4475-9338-43e90461215a\") " Feb 03 12:31:55 crc kubenswrapper[4820]: I0203 12:31:55.043931 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8gwgx\" (UniqueName: \"kubernetes.io/projected/87c8edc4-6865-4475-9338-43e90461215a-kube-api-access-8gwgx\") pod \"87c8edc4-6865-4475-9338-43e90461215a\" (UID: \"87c8edc4-6865-4475-9338-43e90461215a\") " Feb 03 12:31:55 crc kubenswrapper[4820]: I0203 12:31:55.044108 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87c8edc4-6865-4475-9338-43e90461215a-config-data\") pod \"87c8edc4-6865-4475-9338-43e90461215a\" (UID: \"87c8edc4-6865-4475-9338-43e90461215a\") " Feb 03 12:31:55 crc kubenswrapper[4820]: I0203 12:31:55.044153 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87c8edc4-6865-4475-9338-43e90461215a-combined-ca-bundle\") pod \"87c8edc4-6865-4475-9338-43e90461215a\" (UID: \"87c8edc4-6865-4475-9338-43e90461215a\") " Feb 03 12:31:55 crc kubenswrapper[4820]: I0203 12:31:55.044335 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/87c8edc4-6865-4475-9338-43e90461215a-logs\") pod \"87c8edc4-6865-4475-9338-43e90461215a\" (UID: \"87c8edc4-6865-4475-9338-43e90461215a\") " Feb 03 12:31:55 crc kubenswrapper[4820]: I0203 12:31:55.045450 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/87c8edc4-6865-4475-9338-43e90461215a-logs" (OuterVolumeSpecName: "logs") pod "87c8edc4-6865-4475-9338-43e90461215a" (UID: "87c8edc4-6865-4475-9338-43e90461215a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 12:31:55 crc kubenswrapper[4820]: I0203 12:31:55.094117 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87c8edc4-6865-4475-9338-43e90461215a-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "87c8edc4-6865-4475-9338-43e90461215a" (UID: "87c8edc4-6865-4475-9338-43e90461215a"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:31:55 crc kubenswrapper[4820]: I0203 12:31:55.103680 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87c8edc4-6865-4475-9338-43e90461215a-kube-api-access-8gwgx" (OuterVolumeSpecName: "kube-api-access-8gwgx") pod "87c8edc4-6865-4475-9338-43e90461215a" (UID: "87c8edc4-6865-4475-9338-43e90461215a"). InnerVolumeSpecName "kube-api-access-8gwgx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:31:55 crc kubenswrapper[4820]: I0203 12:31:55.141730 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87c8edc4-6865-4475-9338-43e90461215a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "87c8edc4-6865-4475-9338-43e90461215a" (UID: "87c8edc4-6865-4475-9338-43e90461215a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:31:55 crc kubenswrapper[4820]: I0203 12:31:55.149100 4820 scope.go:117] "RemoveContainer" containerID="3cd22393026f08f47789b9666f306edb2325c4bd0fbfd405ee1636bfdfa661d3" Feb 03 12:31:55 crc kubenswrapper[4820]: E0203 12:31:55.149419 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qj7xr_openshift-machine-config-operator(2c02def6-29f2-448e-80ec-0c8ee290f053)\"" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" Feb 03 12:31:55 crc kubenswrapper[4820]: I0203 12:31:55.151072 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87c8edc4-6865-4475-9338-43e90461215a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 12:31:55 crc kubenswrapper[4820]: I0203 12:31:55.151094 4820 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/87c8edc4-6865-4475-9338-43e90461215a-logs\") on node \"crc\" DevicePath \"\"" Feb 03 12:31:55 crc kubenswrapper[4820]: I0203 12:31:55.151104 4820 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/87c8edc4-6865-4475-9338-43e90461215a-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 03 12:31:55 crc kubenswrapper[4820]: I0203 12:31:55.151112 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8gwgx\" (UniqueName: \"kubernetes.io/projected/87c8edc4-6865-4475-9338-43e90461215a-kube-api-access-8gwgx\") on node \"crc\" DevicePath \"\"" Feb 03 12:31:55 crc kubenswrapper[4820]: I0203 12:31:55.162243 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-7499595d8b-fm478" Feb 03 12:31:55 crc kubenswrapper[4820]: I0203 12:31:55.180230 4820 generic.go:334] "Generic (PLEG): container finished" podID="470b8f27-2959-4890-aed3-361530b83b73" containerID="10efacc21caf698fb5a3a65a239aca041e4d8cd493b7d1a84de2d1c346e3e9a8" exitCode=0 Feb 03 12:31:55 crc kubenswrapper[4820]: I0203 12:31:55.259732 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87c8edc4-6865-4475-9338-43e90461215a-config-data" (OuterVolumeSpecName: "config-data") pod "87c8edc4-6865-4475-9338-43e90461215a" (UID: "87c8edc4-6865-4475-9338-43e90461215a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:31:55 crc kubenswrapper[4820]: W0203 12:31:55.295336 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5229e26a_15af_47fd_bb4a_956968711984.slice/crio-57c27f230f45e91a7914f9de7844839495fab1f88130fa1a9ebe9a848810730c WatchSource:0}: Error finding container 57c27f230f45e91a7914f9de7844839495fab1f88130fa1a9ebe9a848810730c: Status 404 returned error can't find the container with id 57c27f230f45e91a7914f9de7844839495fab1f88130fa1a9ebe9a848810730c Feb 03 12:31:55 crc kubenswrapper[4820]: I0203 12:31:55.350517 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7499595d8b-fm478" event={"ID":"87c8edc4-6865-4475-9338-43e90461215a","Type":"ContainerDied","Data":"86c3f698def575cf72c4c634dca5b2f885f89518e403e36c218ed6c6a70be7c4"} Feb 03 12:31:55 crc kubenswrapper[4820]: I0203 12:31:55.350924 4820 scope.go:117] "RemoveContainer" containerID="cbbf79701137a7b14df3be3c1bce49dedb0b9b3453ec699fa6ccfc86e8c30b5d" Feb 03 12:31:55 crc kubenswrapper[4820]: I0203 12:31:55.350875 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-9csj4" event={"ID":"470b8f27-2959-4890-aed3-361530b83b73","Type":"ContainerDied","Data":"10efacc21caf698fb5a3a65a239aca041e4d8cd493b7d1a84de2d1c346e3e9a8"} Feb 03 12:31:55 crc kubenswrapper[4820]: I0203 12:31:55.351125 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-fdff74856-dfqrf"] Feb 03 12:31:55 crc kubenswrapper[4820]: I0203 12:31:55.360447 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87c8edc4-6865-4475-9338-43e90461215a-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 12:31:55 crc kubenswrapper[4820]: I0203 12:31:55.402862 4820 scope.go:117] "RemoveContainer" containerID="da4c6a2d82a4a4015553c9d2f744136991793d1c02a8ed2d8b66e6943630f5e2" Feb 03 12:31:55 crc kubenswrapper[4820]: I0203 12:31:55.522807 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-7499595d8b-fm478"] Feb 03 12:31:55 crc kubenswrapper[4820]: I0203 12:31:55.534559 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-7499595d8b-fm478"] Feb 03 12:31:56 crc kubenswrapper[4820]: I0203 12:31:56.197745 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-bfcd7c7c4-s6j9b" event={"ID":"14f188d1-7883-471d-9564-01f405548b98","Type":"ContainerStarted","Data":"d9d04245ce3f9776873e9f2fe6a08b6f0a40b6b784cfb077d1e3186d748551e2"} Feb 03 12:31:56 crc kubenswrapper[4820]: I0203 12:31:56.206731 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-fdff74856-dfqrf" event={"ID":"5229e26a-15af-47fd-bb4a-956968711984","Type":"ContainerStarted","Data":"fdf11e700812c6ca48986150761d449797cd22d654b8a70ab3a020a8b12bc35a"} Feb 03 12:31:56 crc kubenswrapper[4820]: I0203 12:31:56.206814 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-fdff74856-dfqrf" event={"ID":"5229e26a-15af-47fd-bb4a-956968711984","Type":"ContainerStarted","Data":"57c27f230f45e91a7914f9de7844839495fab1f88130fa1a9ebe9a848810730c"} Feb 03 12:31:56 crc kubenswrapper[4820]: I0203 12:31:56.217826 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qjmdv" 
event={"ID":"ad9b0bbe-7f17-4347-bda3-5f0a843b3997","Type":"ContainerStarted","Data":"b3683503afb588121c1584cbdd117a260ae2352c9eaf4c990911d9c4f1fc17cf"} Feb 03 12:31:56 crc kubenswrapper[4820]: I0203 12:31:56.233978 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-659d874887-6h95b" event={"ID":"410ba29a-39b4-4468-837d-8b38a94d638d","Type":"ContainerStarted","Data":"ba6dfa013f840455e6ce85dd76c24f4e4aee0c6fab07e56838b10cbe8b005581"} Feb 03 12:31:56 crc kubenswrapper[4820]: I0203 12:31:56.234059 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-659d874887-6h95b" event={"ID":"410ba29a-39b4-4468-837d-8b38a94d638d","Type":"ContainerStarted","Data":"aa5f9398eb8fdc93afa778bf28a296da4d4fcdf5b2f1074b1e1627f5bd4c9468"} Feb 03 12:31:56 crc kubenswrapper[4820]: I0203 12:31:56.246635 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-775b8c5454-c9g7t" event={"ID":"86a0d38b-74e6-4528-9dae-af9c8400555d","Type":"ContainerStarted","Data":"6b7823e566aadea83cfdb0a80cf869a0215f349e75d541c632733c45384f5f48"} Feb 03 12:31:56 crc kubenswrapper[4820]: I0203 12:31:56.251137 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-qjmdv" podStartSLOduration=11.318274868 podStartE2EDuration="20.251115931s" podCreationTimestamp="2026-02-03 12:31:36 +0000 UTC" firstStartedPulling="2026-02-03 12:31:45.833135193 +0000 UTC m=+1623.356211057" lastFinishedPulling="2026-02-03 12:31:54.765976256 +0000 UTC m=+1632.289052120" observedRunningTime="2026-02-03 12:31:56.247000809 +0000 UTC m=+1633.770076673" watchObservedRunningTime="2026-02-03 12:31:56.251115931 +0000 UTC m=+1633.774191795" Feb 03 12:31:56 crc kubenswrapper[4820]: I0203 12:31:56.257864 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-78bff7b94c-t49mw" event={"ID":"e460cf1d-b4e8-4bc6-89df-3fa68d972a33","Type":"ContainerStarted","Data":"89bde6a3f663052b940fa07cf6f5ed3ebd395c6ab92a7fe7da910acf4464c836"} Feb 03 12:31:56 crc kubenswrapper[4820]: I0203 12:31:56.277327 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-659d874887-6h95b" podStartSLOduration=4.041243696 podStartE2EDuration="11.277302928s" podCreationTimestamp="2026-02-03 12:31:45 +0000 UTC" firstStartedPulling="2026-02-03 12:31:47.502366689 +0000 UTC m=+1625.025442553" lastFinishedPulling="2026-02-03 12:31:54.738425921 +0000 UTC m=+1632.261501785" observedRunningTime="2026-02-03 12:31:56.277139303 +0000 UTC m=+1633.800215187" watchObservedRunningTime="2026-02-03 12:31:56.277302928 +0000 UTC m=+1633.800378782" Feb 03 12:31:56 crc kubenswrapper[4820]: I0203 12:31:56.326089 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-78bff7b94c-t49mw"] Feb 03 12:31:56 crc kubenswrapper[4820]: I0203 12:31:56.848347 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-9csj4" Feb 03 12:31:57 crc kubenswrapper[4820]: I0203 12:31:57.016633 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/470b8f27-2959-4890-aed3-361530b83b73-logs\") pod \"470b8f27-2959-4890-aed3-361530b83b73\" (UID: \"470b8f27-2959-4890-aed3-361530b83b73\") " Feb 03 12:31:57 crc kubenswrapper[4820]: I0203 12:31:57.017075 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/470b8f27-2959-4890-aed3-361530b83b73-combined-ca-bundle\") pod \"470b8f27-2959-4890-aed3-361530b83b73\" (UID: \"470b8f27-2959-4890-aed3-361530b83b73\") " Feb 03 12:31:57 crc kubenswrapper[4820]: I0203 12:31:57.017227 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfqb\" (UniqueName: \"kubernetes.io/projected/470b8f27-2959-4890-aed3-361530b83b73-kube-api-access-9xfqb\") pod \"470b8f27-2959-4890-aed3-361530b83b73\" (UID: \"470b8f27-2959-4890-aed3-361530b83b73\") " Feb 03 12:31:57 crc kubenswrapper[4820]: I0203 12:31:57.017400 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/470b8f27-2959-4890-aed3-361530b83b73-config-data\") pod \"470b8f27-2959-4890-aed3-361530b83b73\" (UID: \"470b8f27-2959-4890-aed3-361530b83b73\") " Feb 03 12:31:57 crc kubenswrapper[4820]: I0203 12:31:57.017510 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/470b8f27-2959-4890-aed3-361530b83b73-scripts\") pod \"470b8f27-2959-4890-aed3-361530b83b73\" (UID: \"470b8f27-2959-4890-aed3-361530b83b73\") " Feb 03 12:31:57 crc kubenswrapper[4820]: I0203 12:31:57.017419 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/470b8f27-2959-4890-aed3-361530b83b73-logs" (OuterVolumeSpecName: "logs") pod "470b8f27-2959-4890-aed3-361530b83b73" (UID: "470b8f27-2959-4890-aed3-361530b83b73"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 12:31:57 crc kubenswrapper[4820]: I0203 12:31:57.018672 4820 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/470b8f27-2959-4890-aed3-361530b83b73-logs\") on node \"crc\" DevicePath \"\"" Feb 03 12:31:57 crc kubenswrapper[4820]: I0203 12:31:57.026845 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/470b8f27-2959-4890-aed3-361530b83b73-scripts" (OuterVolumeSpecName: "scripts") pod "470b8f27-2959-4890-aed3-361530b83b73" (UID: "470b8f27-2959-4890-aed3-361530b83b73"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:31:57 crc kubenswrapper[4820]: I0203 12:31:57.045736 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/470b8f27-2959-4890-aed3-361530b83b73-kube-api-access-9xfqb" (OuterVolumeSpecName: "kube-api-access-9xfqb") pod "470b8f27-2959-4890-aed3-361530b83b73" (UID: "470b8f27-2959-4890-aed3-361530b83b73"). InnerVolumeSpecName "kube-api-access-9xfqb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:31:57 crc kubenswrapper[4820]: I0203 12:31:57.097082 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/470b8f27-2959-4890-aed3-361530b83b73-config-data" (OuterVolumeSpecName: "config-data") pod "470b8f27-2959-4890-aed3-361530b83b73" (UID: "470b8f27-2959-4890-aed3-361530b83b73"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:31:57 crc kubenswrapper[4820]: I0203 12:31:57.097657 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/470b8f27-2959-4890-aed3-361530b83b73-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "470b8f27-2959-4890-aed3-361530b83b73" (UID: "470b8f27-2959-4890-aed3-361530b83b73"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:31:57 crc kubenswrapper[4820]: I0203 12:31:57.121266 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/470b8f27-2959-4890-aed3-361530b83b73-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 12:31:57 crc kubenswrapper[4820]: I0203 12:31:57.121301 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfqb\" (UniqueName: \"kubernetes.io/projected/470b8f27-2959-4890-aed3-361530b83b73-kube-api-access-9xfqb\") on node \"crc\" DevicePath \"\"" Feb 03 12:31:57 crc kubenswrapper[4820]: I0203 12:31:57.121316 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/470b8f27-2959-4890-aed3-361530b83b73-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 12:31:57 crc kubenswrapper[4820]: I0203 12:31:57.121324 4820 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/470b8f27-2959-4890-aed3-361530b83b73-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 12:31:57 crc kubenswrapper[4820]: I0203 12:31:57.192745 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87c8edc4-6865-4475-9338-43e90461215a" path="/var/lib/kubelet/pods/87c8edc4-6865-4475-9338-43e90461215a/volumes" Feb 03 12:31:57 crc kubenswrapper[4820]: I0203 12:31:57.270061 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-qjmdv" Feb 03 12:31:57 crc kubenswrapper[4820]: I0203 12:31:57.270114 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-qjmdv" Feb 03 12:31:57 crc kubenswrapper[4820]: I0203 12:31:57.299326 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-fdff74856-dfqrf" event={"ID":"5229e26a-15af-47fd-bb4a-956968711984","Type":"ContainerStarted","Data":"1ce9d81198ba5d9939811869a6e799c867ba25e31982e25bb9bdd509fe5ce54a"} Feb 03 12:31:57 crc kubenswrapper[4820]: I0203 12:31:57.299729 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-fdff74856-dfqrf" Feb 03 12:31:57 crc kubenswrapper[4820]: I0203 12:31:57.299753 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-fdff74856-dfqrf" Feb 03 12:31:57 crc kubenswrapper[4820]: I0203 12:31:57.306660 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-9csj4" 
event={"ID":"470b8f27-2959-4890-aed3-361530b83b73","Type":"ContainerDied","Data":"16d64648b2d81263c6456d9ab40e60ab6e37bae923c46475fe459514befa2a2c"} Feb 03 12:31:57 crc kubenswrapper[4820]: I0203 12:31:57.306711 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="16d64648b2d81263c6456d9ab40e60ab6e37bae923c46475fe459514befa2a2c" Feb 03 12:31:57 crc kubenswrapper[4820]: I0203 12:31:57.306817 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-9csj4" Feb 03 12:31:57 crc kubenswrapper[4820]: I0203 12:31:57.324193 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-775b8c5454-c9g7t" event={"ID":"86a0d38b-74e6-4528-9dae-af9c8400555d","Type":"ContainerStarted","Data":"c05e14488fa47dec32dc7622564041f1a6240e3b7d6a7092bafc9a701dea3c8e"} Feb 03 12:31:57 crc kubenswrapper[4820]: I0203 12:31:57.337136 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-fdff74856-dfqrf" podStartSLOduration=7.337115006 podStartE2EDuration="7.337115006s" podCreationTimestamp="2026-02-03 12:31:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:31:57.331383962 +0000 UTC m=+1634.854459846" watchObservedRunningTime="2026-02-03 12:31:57.337115006 +0000 UTC m=+1634.860190870" Feb 03 12:31:57 crc kubenswrapper[4820]: I0203 12:31:57.350303 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-78bff7b94c-t49mw" event={"ID":"e460cf1d-b4e8-4bc6-89df-3fa68d972a33","Type":"ContainerStarted","Data":"d00c04ba739dbc78c13a05b589079c47d3cbbd59bf2cf2388cebdc2e69a078f1"} Feb 03 12:31:57 crc kubenswrapper[4820]: I0203 12:31:57.350524 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-78bff7b94c-t49mw" podUID="e460cf1d-b4e8-4bc6-89df-3fa68d972a33" containerName="barbican-worker-log" containerID="cri-o://89bde6a3f663052b940fa07cf6f5ed3ebd395c6ab92a7fe7da910acf4464c836" gracePeriod=30 Feb 03 12:31:57 crc kubenswrapper[4820]: I0203 12:31:57.350760 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-78bff7b94c-t49mw" podUID="e460cf1d-b4e8-4bc6-89df-3fa68d972a33" containerName="barbican-worker" containerID="cri-o://d00c04ba739dbc78c13a05b589079c47d3cbbd59bf2cf2388cebdc2e69a078f1" gracePeriod=30 Feb 03 12:31:57 crc kubenswrapper[4820]: I0203 12:31:57.370280 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-775b8c5454-c9g7t" podStartSLOduration=5.122101093 podStartE2EDuration="12.370253411s" podCreationTimestamp="2026-02-03 12:31:45 +0000 UTC" firstStartedPulling="2026-02-03 12:31:47.502713659 +0000 UTC m=+1625.025789533" lastFinishedPulling="2026-02-03 12:31:54.750865987 +0000 UTC m=+1632.273941851" observedRunningTime="2026-02-03 12:31:57.360833207 +0000 UTC m=+1634.883909071" watchObservedRunningTime="2026-02-03 12:31:57.370253411 +0000 UTC m=+1634.893329275" Feb 03 12:31:57 crc kubenswrapper[4820]: I0203 12:31:57.376006 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-bfcd7c7c4-s6j9b" event={"ID":"14f188d1-7883-471d-9564-01f405548b98","Type":"ContainerStarted","Data":"da3e67a2f5780c6fcce9756b2c251c7f8184a0ca6f1d9afa680f463f6a8da537"} Feb 03 12:31:57 crc kubenswrapper[4820]: I0203 12:31:57.412767 4820 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-bfcd7c7c4-s6j9b"] Feb 03 12:31:57 crc kubenswrapper[4820]: I0203 12:31:57.443030 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-78bff7b94c-t49mw" podStartSLOduration=5.186016865 podStartE2EDuration="13.443002766s" podCreationTimestamp="2026-02-03 12:31:44 +0000 UTC" firstStartedPulling="2026-02-03 12:31:46.449191769 +0000 UTC m=+1623.972267633" lastFinishedPulling="2026-02-03 12:31:54.70617767 +0000 UTC m=+1632.229253534" observedRunningTime="2026-02-03 12:31:57.398457993 +0000 UTC m=+1634.921533877" watchObservedRunningTime="2026-02-03 12:31:57.443002766 +0000 UTC m=+1634.966078630" Feb 03 12:31:57 crc kubenswrapper[4820]: I0203 12:31:57.513709 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-bfcd7c7c4-s6j9b" podStartSLOduration=6.125973007 podStartE2EDuration="13.513684465s" podCreationTimestamp="2026-02-03 12:31:44 +0000 UTC" firstStartedPulling="2026-02-03 12:31:47.351285869 +0000 UTC m=+1624.874361733" lastFinishedPulling="2026-02-03 12:31:54.738997327 +0000 UTC m=+1632.262073191" observedRunningTime="2026-02-03 12:31:57.440773956 +0000 UTC m=+1634.963849820" watchObservedRunningTime="2026-02-03 12:31:57.513684465 +0000 UTC m=+1635.036760329" Feb 03 12:31:57 crc kubenswrapper[4820]: I0203 12:31:57.560630 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-656b464f74-h7xjt"] Feb 03 12:31:57 crc kubenswrapper[4820]: E0203 12:31:57.561261 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="470b8f27-2959-4890-aed3-361530b83b73" containerName="placement-db-sync" Feb 03 12:31:57 crc kubenswrapper[4820]: I0203 12:31:57.561286 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="470b8f27-2959-4890-aed3-361530b83b73" containerName="placement-db-sync" Feb 03 12:31:57 crc kubenswrapper[4820]: E0203 12:31:57.561314 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87c8edc4-6865-4475-9338-43e90461215a" containerName="barbican-api" Feb 03 12:31:57 crc kubenswrapper[4820]: I0203 12:31:57.561323 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="87c8edc4-6865-4475-9338-43e90461215a" containerName="barbican-api" Feb 03 12:31:57 crc kubenswrapper[4820]: E0203 12:31:57.561341 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87c8edc4-6865-4475-9338-43e90461215a" containerName="barbican-api-log" Feb 03 12:31:57 crc kubenswrapper[4820]: I0203 12:31:57.561350 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="87c8edc4-6865-4475-9338-43e90461215a" containerName="barbican-api-log" Feb 03 12:31:57 crc kubenswrapper[4820]: I0203 12:31:57.561606 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="87c8edc4-6865-4475-9338-43e90461215a" containerName="barbican-api-log" Feb 03 12:31:57 crc kubenswrapper[4820]: I0203 12:31:57.561662 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="470b8f27-2959-4890-aed3-361530b83b73" containerName="placement-db-sync" Feb 03 12:31:57 crc kubenswrapper[4820]: I0203 12:31:57.561676 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="87c8edc4-6865-4475-9338-43e90461215a" containerName="barbican-api" Feb 03 12:31:57 crc kubenswrapper[4820]: I0203 12:31:57.563164 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-656b464f74-h7xjt" Feb 03 12:31:57 crc kubenswrapper[4820]: I0203 12:31:57.569402 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Feb 03 12:31:57 crc kubenswrapper[4820]: I0203 12:31:57.569776 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Feb 03 12:31:57 crc kubenswrapper[4820]: I0203 12:31:57.570245 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Feb 03 12:31:57 crc kubenswrapper[4820]: I0203 12:31:57.570426 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Feb 03 12:31:57 crc kubenswrapper[4820]: I0203 12:31:57.581456 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-twgm4" Feb 03 12:31:57 crc kubenswrapper[4820]: I0203 12:31:57.612496 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-656b464f74-h7xjt"] Feb 03 12:31:57 crc kubenswrapper[4820]: I0203 12:31:57.652149 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43ecc5a4-8bd1-435c-8514-de23a493ee45-combined-ca-bundle\") pod \"placement-656b464f74-h7xjt\" (UID: \"43ecc5a4-8bd1-435c-8514-de23a493ee45\") " pod="openstack/placement-656b464f74-h7xjt" Feb 03 12:31:57 crc kubenswrapper[4820]: I0203 12:31:57.652235 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r4pcn\" (UniqueName: \"kubernetes.io/projected/43ecc5a4-8bd1-435c-8514-de23a493ee45-kube-api-access-r4pcn\") pod \"placement-656b464f74-h7xjt\" (UID: \"43ecc5a4-8bd1-435c-8514-de23a493ee45\") " pod="openstack/placement-656b464f74-h7xjt" Feb 03 12:31:57 crc kubenswrapper[4820]: I0203 12:31:57.652286 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/43ecc5a4-8bd1-435c-8514-de23a493ee45-public-tls-certs\") pod \"placement-656b464f74-h7xjt\" (UID: \"43ecc5a4-8bd1-435c-8514-de23a493ee45\") " pod="openstack/placement-656b464f74-h7xjt" Feb 03 12:31:57 crc kubenswrapper[4820]: I0203 12:31:57.652312 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43ecc5a4-8bd1-435c-8514-de23a493ee45-config-data\") pod \"placement-656b464f74-h7xjt\" (UID: \"43ecc5a4-8bd1-435c-8514-de23a493ee45\") " pod="openstack/placement-656b464f74-h7xjt" Feb 03 12:31:57 crc kubenswrapper[4820]: I0203 12:31:57.652381 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/43ecc5a4-8bd1-435c-8514-de23a493ee45-internal-tls-certs\") pod \"placement-656b464f74-h7xjt\" (UID: \"43ecc5a4-8bd1-435c-8514-de23a493ee45\") " pod="openstack/placement-656b464f74-h7xjt" Feb 03 12:31:57 crc kubenswrapper[4820]: I0203 12:31:57.652412 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/43ecc5a4-8bd1-435c-8514-de23a493ee45-scripts\") pod \"placement-656b464f74-h7xjt\" (UID: \"43ecc5a4-8bd1-435c-8514-de23a493ee45\") " pod="openstack/placement-656b464f74-h7xjt" Feb 03 12:31:57 crc kubenswrapper[4820]: I0203 
12:31:57.652539 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/43ecc5a4-8bd1-435c-8514-de23a493ee45-logs\") pod \"placement-656b464f74-h7xjt\" (UID: \"43ecc5a4-8bd1-435c-8514-de23a493ee45\") " pod="openstack/placement-656b464f74-h7xjt" Feb 03 12:31:57 crc kubenswrapper[4820]: I0203 12:31:57.754709 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r4pcn\" (UniqueName: \"kubernetes.io/projected/43ecc5a4-8bd1-435c-8514-de23a493ee45-kube-api-access-r4pcn\") pod \"placement-656b464f74-h7xjt\" (UID: \"43ecc5a4-8bd1-435c-8514-de23a493ee45\") " pod="openstack/placement-656b464f74-h7xjt" Feb 03 12:31:57 crc kubenswrapper[4820]: I0203 12:31:57.754807 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/43ecc5a4-8bd1-435c-8514-de23a493ee45-public-tls-certs\") pod \"placement-656b464f74-h7xjt\" (UID: \"43ecc5a4-8bd1-435c-8514-de23a493ee45\") " pod="openstack/placement-656b464f74-h7xjt" Feb 03 12:31:57 crc kubenswrapper[4820]: I0203 12:31:57.754938 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43ecc5a4-8bd1-435c-8514-de23a493ee45-config-data\") pod \"placement-656b464f74-h7xjt\" (UID: \"43ecc5a4-8bd1-435c-8514-de23a493ee45\") " pod="openstack/placement-656b464f74-h7xjt" Feb 03 12:31:57 crc kubenswrapper[4820]: I0203 12:31:57.754981 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/43ecc5a4-8bd1-435c-8514-de23a493ee45-internal-tls-certs\") pod \"placement-656b464f74-h7xjt\" (UID: \"43ecc5a4-8bd1-435c-8514-de23a493ee45\") " pod="openstack/placement-656b464f74-h7xjt" Feb 03 12:31:57 crc kubenswrapper[4820]: I0203 12:31:57.755037 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/43ecc5a4-8bd1-435c-8514-de23a493ee45-scripts\") pod \"placement-656b464f74-h7xjt\" (UID: \"43ecc5a4-8bd1-435c-8514-de23a493ee45\") " pod="openstack/placement-656b464f74-h7xjt" Feb 03 12:31:57 crc kubenswrapper[4820]: I0203 12:31:57.755156 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/43ecc5a4-8bd1-435c-8514-de23a493ee45-logs\") pod \"placement-656b464f74-h7xjt\" (UID: \"43ecc5a4-8bd1-435c-8514-de23a493ee45\") " pod="openstack/placement-656b464f74-h7xjt" Feb 03 12:31:57 crc kubenswrapper[4820]: I0203 12:31:57.755242 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43ecc5a4-8bd1-435c-8514-de23a493ee45-combined-ca-bundle\") pod \"placement-656b464f74-h7xjt\" (UID: \"43ecc5a4-8bd1-435c-8514-de23a493ee45\") " pod="openstack/placement-656b464f74-h7xjt" Feb 03 12:31:57 crc kubenswrapper[4820]: I0203 12:31:57.761559 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/43ecc5a4-8bd1-435c-8514-de23a493ee45-logs\") pod \"placement-656b464f74-h7xjt\" (UID: \"43ecc5a4-8bd1-435c-8514-de23a493ee45\") " pod="openstack/placement-656b464f74-h7xjt" Feb 03 12:31:57 crc kubenswrapper[4820]: I0203 12:31:57.764450 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/43ecc5a4-8bd1-435c-8514-de23a493ee45-config-data\") pod \"placement-656b464f74-h7xjt\" (UID: \"43ecc5a4-8bd1-435c-8514-de23a493ee45\") " pod="openstack/placement-656b464f74-h7xjt" Feb 03 12:31:57 crc kubenswrapper[4820]: I0203 12:31:57.774635 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/43ecc5a4-8bd1-435c-8514-de23a493ee45-scripts\") pod \"placement-656b464f74-h7xjt\" (UID: \"43ecc5a4-8bd1-435c-8514-de23a493ee45\") " pod="openstack/placement-656b464f74-h7xjt" Feb 03 12:31:57 crc kubenswrapper[4820]: I0203 12:31:57.775814 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43ecc5a4-8bd1-435c-8514-de23a493ee45-combined-ca-bundle\") pod \"placement-656b464f74-h7xjt\" (UID: \"43ecc5a4-8bd1-435c-8514-de23a493ee45\") " pod="openstack/placement-656b464f74-h7xjt" Feb 03 12:31:57 crc kubenswrapper[4820]: I0203 12:31:57.782863 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/43ecc5a4-8bd1-435c-8514-de23a493ee45-internal-tls-certs\") pod \"placement-656b464f74-h7xjt\" (UID: \"43ecc5a4-8bd1-435c-8514-de23a493ee45\") " pod="openstack/placement-656b464f74-h7xjt" Feb 03 12:31:57 crc kubenswrapper[4820]: I0203 12:31:57.793452 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/43ecc5a4-8bd1-435c-8514-de23a493ee45-public-tls-certs\") pod \"placement-656b464f74-h7xjt\" (UID: \"43ecc5a4-8bd1-435c-8514-de23a493ee45\") " pod="openstack/placement-656b464f74-h7xjt" Feb 03 12:31:57 crc kubenswrapper[4820]: I0203 12:31:57.804721 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r4pcn\" (UniqueName: \"kubernetes.io/projected/43ecc5a4-8bd1-435c-8514-de23a493ee45-kube-api-access-r4pcn\") pod \"placement-656b464f74-h7xjt\" (UID: \"43ecc5a4-8bd1-435c-8514-de23a493ee45\") " pod="openstack/placement-656b464f74-h7xjt" Feb 03 12:31:57 crc kubenswrapper[4820]: I0203 12:31:57.894801 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-656b464f74-h7xjt" Feb 03 12:31:58 crc kubenswrapper[4820]: I0203 12:31:58.346749 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-qjmdv" podUID="ad9b0bbe-7f17-4347-bda3-5f0a843b3997" containerName="registry-server" probeResult="failure" output=< Feb 03 12:31:58 crc kubenswrapper[4820]: timeout: failed to connect service ":50051" within 1s Feb 03 12:31:58 crc kubenswrapper[4820]: > Feb 03 12:31:58 crc kubenswrapper[4820]: I0203 12:31:58.387768 4820 generic.go:334] "Generic (PLEG): container finished" podID="e460cf1d-b4e8-4bc6-89df-3fa68d972a33" containerID="89bde6a3f663052b940fa07cf6f5ed3ebd395c6ab92a7fe7da910acf4464c836" exitCode=143 Feb 03 12:31:58 crc kubenswrapper[4820]: I0203 12:31:58.387917 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-78bff7b94c-t49mw" event={"ID":"e460cf1d-b4e8-4bc6-89df-3fa68d972a33","Type":"ContainerDied","Data":"89bde6a3f663052b940fa07cf6f5ed3ebd395c6ab92a7fe7da910acf4464c836"} Feb 03 12:31:58 crc kubenswrapper[4820]: I0203 12:31:58.431620 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-5j76d"] Feb 03 12:31:58 crc kubenswrapper[4820]: I0203 12:31:58.434731 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5j76d" Feb 03 12:31:58 crc kubenswrapper[4820]: I0203 12:31:58.451659 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5j76d"] Feb 03 12:31:58 crc kubenswrapper[4820]: I0203 12:31:58.575516 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64xzl\" (UniqueName: \"kubernetes.io/projected/c0fcb4b0-e763-4309-8097-facfd4782cfb-kube-api-access-64xzl\") pod \"community-operators-5j76d\" (UID: \"c0fcb4b0-e763-4309-8097-facfd4782cfb\") " pod="openshift-marketplace/community-operators-5j76d" Feb 03 12:31:58 crc kubenswrapper[4820]: I0203 12:31:58.575586 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c0fcb4b0-e763-4309-8097-facfd4782cfb-catalog-content\") pod \"community-operators-5j76d\" (UID: \"c0fcb4b0-e763-4309-8097-facfd4782cfb\") " pod="openshift-marketplace/community-operators-5j76d" Feb 03 12:31:58 crc kubenswrapper[4820]: I0203 12:31:58.576267 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c0fcb4b0-e763-4309-8097-facfd4782cfb-utilities\") pod \"community-operators-5j76d\" (UID: \"c0fcb4b0-e763-4309-8097-facfd4782cfb\") " pod="openshift-marketplace/community-operators-5j76d" Feb 03 12:31:58 crc kubenswrapper[4820]: I0203 12:31:58.679006 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c0fcb4b0-e763-4309-8097-facfd4782cfb-utilities\") pod \"community-operators-5j76d\" (UID: \"c0fcb4b0-e763-4309-8097-facfd4782cfb\") " pod="openshift-marketplace/community-operators-5j76d" Feb 03 12:31:58 crc kubenswrapper[4820]: I0203 12:31:58.679203 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-64xzl\" (UniqueName: \"kubernetes.io/projected/c0fcb4b0-e763-4309-8097-facfd4782cfb-kube-api-access-64xzl\") pod \"community-operators-5j76d\" (UID: \"c0fcb4b0-e763-4309-8097-facfd4782cfb\") " pod="openshift-marketplace/community-operators-5j76d" Feb 03 12:31:58 crc kubenswrapper[4820]: I0203 12:31:58.679238 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c0fcb4b0-e763-4309-8097-facfd4782cfb-catalog-content\") pod \"community-operators-5j76d\" (UID: \"c0fcb4b0-e763-4309-8097-facfd4782cfb\") " pod="openshift-marketplace/community-operators-5j76d" Feb 03 12:31:58 crc kubenswrapper[4820]: I0203 12:31:58.679601 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c0fcb4b0-e763-4309-8097-facfd4782cfb-utilities\") pod \"community-operators-5j76d\" (UID: \"c0fcb4b0-e763-4309-8097-facfd4782cfb\") " pod="openshift-marketplace/community-operators-5j76d" Feb 03 12:31:58 crc kubenswrapper[4820]: I0203 12:31:58.679829 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c0fcb4b0-e763-4309-8097-facfd4782cfb-catalog-content\") pod \"community-operators-5j76d\" (UID: \"c0fcb4b0-e763-4309-8097-facfd4782cfb\") " pod="openshift-marketplace/community-operators-5j76d" Feb 03 12:31:58 crc kubenswrapper[4820]: I0203 12:31:58.701634 4820 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-64xzl\" (UniqueName: \"kubernetes.io/projected/c0fcb4b0-e763-4309-8097-facfd4782cfb-kube-api-access-64xzl\") pod \"community-operators-5j76d\" (UID: \"c0fcb4b0-e763-4309-8097-facfd4782cfb\") " pod="openshift-marketplace/community-operators-5j76d" Feb 03 12:31:58 crc kubenswrapper[4820]: I0203 12:31:58.778580 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5j76d" Feb 03 12:31:59 crc kubenswrapper[4820]: I0203 12:31:59.402015 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-bfcd7c7c4-s6j9b" podUID="14f188d1-7883-471d-9564-01f405548b98" containerName="barbican-keystone-listener-log" containerID="cri-o://d9d04245ce3f9776873e9f2fe6a08b6f0a40b6b784cfb077d1e3186d748551e2" gracePeriod=30 Feb 03 12:31:59 crc kubenswrapper[4820]: I0203 12:31:59.402083 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-bfcd7c7c4-s6j9b" podUID="14f188d1-7883-471d-9564-01f405548b98" containerName="barbican-keystone-listener" containerID="cri-o://da3e67a2f5780c6fcce9756b2c251c7f8184a0ca6f1d9afa680f463f6a8da537" gracePeriod=30 Feb 03 12:32:00 crc kubenswrapper[4820]: I0203 12:32:00.416236 4820 generic.go:334] "Generic (PLEG): container finished" podID="14f188d1-7883-471d-9564-01f405548b98" containerID="d9d04245ce3f9776873e9f2fe6a08b6f0a40b6b784cfb077d1e3186d748551e2" exitCode=143 Feb 03 12:32:00 crc kubenswrapper[4820]: I0203 12:32:00.416297 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-bfcd7c7c4-s6j9b" event={"ID":"14f188d1-7883-471d-9564-01f405548b98","Type":"ContainerDied","Data":"d9d04245ce3f9776873e9f2fe6a08b6f0a40b6b784cfb077d1e3186d748551e2"} Feb 03 12:32:00 crc kubenswrapper[4820]: I0203 12:32:00.506149 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-85ff748b95-dxczn" Feb 03 12:32:00 crc kubenswrapper[4820]: I0203 12:32:00.587333 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-hmvhn"] Feb 03 12:32:00 crc kubenswrapper[4820]: I0203 12:32:00.587668 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-55f844cf75-hmvhn" podUID="09cdfd30-121c-4d95-9a12-515eda5d3ba3" containerName="dnsmasq-dns" containerID="cri-o://3c3f7a455f66a4864807db849da6c1d029e46ad404c854baa1cb1c0c7a26cfa9" gracePeriod=10 Feb 03 12:32:00 crc kubenswrapper[4820]: I0203 12:32:00.640332 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-9dfbf858-g4qlm" Feb 03 12:32:00 crc kubenswrapper[4820]: I0203 12:32:00.745416 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-9dfbf858-g4qlm" Feb 03 12:32:01 crc kubenswrapper[4820]: I0203 12:32:01.435386 4820 generic.go:334] "Generic (PLEG): container finished" podID="4b8f1b47-829d-4da3-a6a9-b73bbe7d20c0" containerID="f74d0b94426904787f61655b3b50e75153fd10c33f2fb6331a01e7bb2c173b9c" exitCode=0 Feb 03 12:32:01 crc kubenswrapper[4820]: I0203 12:32:01.435483 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-b4rms" event={"ID":"4b8f1b47-829d-4da3-a6a9-b73bbe7d20c0","Type":"ContainerDied","Data":"f74d0b94426904787f61655b3b50e75153fd10c33f2fb6331a01e7bb2c173b9c"} Feb 03 12:32:01 crc kubenswrapper[4820]: I0203 12:32:01.446037 
4820 generic.go:334] "Generic (PLEG): container finished" podID="09cdfd30-121c-4d95-9a12-515eda5d3ba3" containerID="3c3f7a455f66a4864807db849da6c1d029e46ad404c854baa1cb1c0c7a26cfa9" exitCode=0 Feb 03 12:32:01 crc kubenswrapper[4820]: I0203 12:32:01.447265 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-hmvhn" event={"ID":"09cdfd30-121c-4d95-9a12-515eda5d3ba3","Type":"ContainerDied","Data":"3c3f7a455f66a4864807db849da6c1d029e46ad404c854baa1cb1c0c7a26cfa9"} Feb 03 12:32:03 crc kubenswrapper[4820]: I0203 12:32:03.781182 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-55f844cf75-hmvhn" podUID="09cdfd30-121c-4d95-9a12-515eda5d3ba3" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.170:5353: connect: connection refused" Feb 03 12:32:03 crc kubenswrapper[4820]: I0203 12:32:03.916401 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-fdff74856-dfqrf" Feb 03 12:32:04 crc kubenswrapper[4820]: I0203 12:32:04.596375 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-fdff74856-dfqrf" Feb 03 12:32:04 crc kubenswrapper[4820]: I0203 12:32:04.670578 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-9dfbf858-g4qlm"] Feb 03 12:32:04 crc kubenswrapper[4820]: I0203 12:32:04.673632 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-9dfbf858-g4qlm" podUID="71ae9703-401c-41f0-8316-9b485f9d0b29" containerName="barbican-api-log" containerID="cri-o://11b065ef0fa2359f41e6b2d388300024c94ef96da0d65df324abd736fd410652" gracePeriod=30 Feb 03 12:32:04 crc kubenswrapper[4820]: I0203 12:32:04.673924 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-9dfbf858-g4qlm" podUID="71ae9703-401c-41f0-8316-9b485f9d0b29" containerName="barbican-api" containerID="cri-o://c860c1b2ef845649015318e1275bf80250d51f008b745cc276b35816b91106c3" gracePeriod=30 Feb 03 12:32:05 crc kubenswrapper[4820]: I0203 12:32:05.516674 4820 generic.go:334] "Generic (PLEG): container finished" podID="71ae9703-401c-41f0-8316-9b485f9d0b29" containerID="11b065ef0fa2359f41e6b2d388300024c94ef96da0d65df324abd736fd410652" exitCode=143 Feb 03 12:32:05 crc kubenswrapper[4820]: I0203 12:32:05.516760 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-9dfbf858-g4qlm" event={"ID":"71ae9703-401c-41f0-8316-9b485f9d0b29","Type":"ContainerDied","Data":"11b065ef0fa2359f41e6b2d388300024c94ef96da0d65df324abd736fd410652"} Feb 03 12:32:06 crc kubenswrapper[4820]: I0203 12:32:06.160207 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-b4rms" Feb 03 12:32:06 crc kubenswrapper[4820]: I0203 12:32:06.198547 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4b8f1b47-829d-4da3-a6a9-b73bbe7d20c0-etc-machine-id\") pod \"4b8f1b47-829d-4da3-a6a9-b73bbe7d20c0\" (UID: \"4b8f1b47-829d-4da3-a6a9-b73bbe7d20c0\") " Feb 03 12:32:06 crc kubenswrapper[4820]: I0203 12:32:06.198665 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4b8f1b47-829d-4da3-a6a9-b73bbe7d20c0-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "4b8f1b47-829d-4da3-a6a9-b73bbe7d20c0" (UID: "4b8f1b47-829d-4da3-a6a9-b73bbe7d20c0"). 
InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 03 12:32:06 crc kubenswrapper[4820]: I0203 12:32:06.198721 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4b8f1b47-829d-4da3-a6a9-b73bbe7d20c0-db-sync-config-data\") pod \"4b8f1b47-829d-4da3-a6a9-b73bbe7d20c0\" (UID: \"4b8f1b47-829d-4da3-a6a9-b73bbe7d20c0\") " Feb 03 12:32:06 crc kubenswrapper[4820]: I0203 12:32:06.198796 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b8f1b47-829d-4da3-a6a9-b73bbe7d20c0-config-data\") pod \"4b8f1b47-829d-4da3-a6a9-b73bbe7d20c0\" (UID: \"4b8f1b47-829d-4da3-a6a9-b73bbe7d20c0\") " Feb 03 12:32:06 crc kubenswrapper[4820]: I0203 12:32:06.198850 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lrjrf\" (UniqueName: \"kubernetes.io/projected/4b8f1b47-829d-4da3-a6a9-b73bbe7d20c0-kube-api-access-lrjrf\") pod \"4b8f1b47-829d-4da3-a6a9-b73bbe7d20c0\" (UID: \"4b8f1b47-829d-4da3-a6a9-b73bbe7d20c0\") " Feb 03 12:32:06 crc kubenswrapper[4820]: I0203 12:32:06.198982 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4b8f1b47-829d-4da3-a6a9-b73bbe7d20c0-scripts\") pod \"4b8f1b47-829d-4da3-a6a9-b73bbe7d20c0\" (UID: \"4b8f1b47-829d-4da3-a6a9-b73bbe7d20c0\") " Feb 03 12:32:06 crc kubenswrapper[4820]: I0203 12:32:06.199007 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b8f1b47-829d-4da3-a6a9-b73bbe7d20c0-combined-ca-bundle\") pod \"4b8f1b47-829d-4da3-a6a9-b73bbe7d20c0\" (UID: \"4b8f1b47-829d-4da3-a6a9-b73bbe7d20c0\") " Feb 03 12:32:06 crc kubenswrapper[4820]: I0203 12:32:06.199959 4820 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4b8f1b47-829d-4da3-a6a9-b73bbe7d20c0-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 03 12:32:06 crc kubenswrapper[4820]: I0203 12:32:06.239204 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b8f1b47-829d-4da3-a6a9-b73bbe7d20c0-scripts" (OuterVolumeSpecName: "scripts") pod "4b8f1b47-829d-4da3-a6a9-b73bbe7d20c0" (UID: "4b8f1b47-829d-4da3-a6a9-b73bbe7d20c0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:32:06 crc kubenswrapper[4820]: I0203 12:32:06.266271 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b8f1b47-829d-4da3-a6a9-b73bbe7d20c0-kube-api-access-lrjrf" (OuterVolumeSpecName: "kube-api-access-lrjrf") pod "4b8f1b47-829d-4da3-a6a9-b73bbe7d20c0" (UID: "4b8f1b47-829d-4da3-a6a9-b73bbe7d20c0"). InnerVolumeSpecName "kube-api-access-lrjrf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:32:06 crc kubenswrapper[4820]: I0203 12:32:06.302432 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lrjrf\" (UniqueName: \"kubernetes.io/projected/4b8f1b47-829d-4da3-a6a9-b73bbe7d20c0-kube-api-access-lrjrf\") on node \"crc\" DevicePath \"\"" Feb 03 12:32:06 crc kubenswrapper[4820]: I0203 12:32:06.302474 4820 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4b8f1b47-829d-4da3-a6a9-b73bbe7d20c0-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 12:32:06 crc kubenswrapper[4820]: I0203 12:32:06.310078 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b8f1b47-829d-4da3-a6a9-b73bbe7d20c0-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "4b8f1b47-829d-4da3-a6a9-b73bbe7d20c0" (UID: "4b8f1b47-829d-4da3-a6a9-b73bbe7d20c0"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:32:06 crc kubenswrapper[4820]: I0203 12:32:06.321080 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b8f1b47-829d-4da3-a6a9-b73bbe7d20c0-config-data" (OuterVolumeSpecName: "config-data") pod "4b8f1b47-829d-4da3-a6a9-b73bbe7d20c0" (UID: "4b8f1b47-829d-4da3-a6a9-b73bbe7d20c0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:32:06 crc kubenswrapper[4820]: I0203 12:32:06.390326 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b8f1b47-829d-4da3-a6a9-b73bbe7d20c0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4b8f1b47-829d-4da3-a6a9-b73bbe7d20c0" (UID: "4b8f1b47-829d-4da3-a6a9-b73bbe7d20c0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:32:06 crc kubenswrapper[4820]: I0203 12:32:06.409364 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b8f1b47-829d-4da3-a6a9-b73bbe7d20c0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 12:32:06 crc kubenswrapper[4820]: I0203 12:32:06.409407 4820 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4b8f1b47-829d-4da3-a6a9-b73bbe7d20c0-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 12:32:06 crc kubenswrapper[4820]: I0203 12:32:06.409424 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b8f1b47-829d-4da3-a6a9-b73bbe7d20c0-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 12:32:06 crc kubenswrapper[4820]: I0203 12:32:06.539146 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-b4rms" event={"ID":"4b8f1b47-829d-4da3-a6a9-b73bbe7d20c0","Type":"ContainerDied","Data":"2be3fe58abe533c005d039e6eac08a044a0d5ed04a5f4dbddca03cfcbfda2436"} Feb 03 12:32:06 crc kubenswrapper[4820]: I0203 12:32:06.539190 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2be3fe58abe533c005d039e6eac08a044a0d5ed04a5f4dbddca03cfcbfda2436" Feb 03 12:32:06 crc kubenswrapper[4820]: I0203 12:32:06.539437 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-b4rms" Feb 03 12:32:06 crc kubenswrapper[4820]: E0203 12:32:06.858626 4820 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/ubi9/httpd-24:latest" Feb 03 12:32:06 crc kubenswrapper[4820]: E0203 12:32:06.858865 4820 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:proxy-httpd,Image:registry.redhat.io/ubi9/httpd-24:latest,Command:[/usr/sbin/httpd],Args:[-DFOREGROUND],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:proxy-httpd,HostPort:0,ContainerPort:3000,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf/httpd.conf,SubPath:httpd.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf.d/ssl.conf,SubPath:ssl.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:run-httpd,ReadOnly:false,MountPath:/run/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:log-httpd,ReadOnly:false,MountPath:/var/log/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fftdb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(e2cc54f2-167c-4c79-b616-2e1cd122fed2): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 03 12:32:06 crc kubenswrapper[4820]: E0203 12:32:06.860324 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for 
\"ceilometer-notification-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"sg-core\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"proxy-httpd\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"]" pod="openstack/ceilometer-0" podUID="e2cc54f2-167c-4c79-b616-2e1cd122fed2" Feb 03 12:32:06 crc kubenswrapper[4820]: I0203 12:32:06.875123 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-hmvhn" Feb 03 12:32:07 crc kubenswrapper[4820]: I0203 12:32:07.029319 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/09cdfd30-121c-4d95-9a12-515eda5d3ba3-dns-swift-storage-0\") pod \"09cdfd30-121c-4d95-9a12-515eda5d3ba3\" (UID: \"09cdfd30-121c-4d95-9a12-515eda5d3ba3\") " Feb 03 12:32:07 crc kubenswrapper[4820]: I0203 12:32:07.029620 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/09cdfd30-121c-4d95-9a12-515eda5d3ba3-ovsdbserver-nb\") pod \"09cdfd30-121c-4d95-9a12-515eda5d3ba3\" (UID: \"09cdfd30-121c-4d95-9a12-515eda5d3ba3\") " Feb 03 12:32:07 crc kubenswrapper[4820]: I0203 12:32:07.029707 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qvnlr\" (UniqueName: \"kubernetes.io/projected/09cdfd30-121c-4d95-9a12-515eda5d3ba3-kube-api-access-qvnlr\") pod \"09cdfd30-121c-4d95-9a12-515eda5d3ba3\" (UID: \"09cdfd30-121c-4d95-9a12-515eda5d3ba3\") " Feb 03 12:32:07 crc kubenswrapper[4820]: I0203 12:32:07.029864 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/09cdfd30-121c-4d95-9a12-515eda5d3ba3-ovsdbserver-sb\") pod \"09cdfd30-121c-4d95-9a12-515eda5d3ba3\" (UID: \"09cdfd30-121c-4d95-9a12-515eda5d3ba3\") " Feb 03 12:32:07 crc kubenswrapper[4820]: I0203 12:32:07.029931 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/09cdfd30-121c-4d95-9a12-515eda5d3ba3-dns-svc\") pod \"09cdfd30-121c-4d95-9a12-515eda5d3ba3\" (UID: \"09cdfd30-121c-4d95-9a12-515eda5d3ba3\") " Feb 03 12:32:07 crc kubenswrapper[4820]: I0203 12:32:07.030079 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09cdfd30-121c-4d95-9a12-515eda5d3ba3-config\") pod \"09cdfd30-121c-4d95-9a12-515eda5d3ba3\" (UID: \"09cdfd30-121c-4d95-9a12-515eda5d3ba3\") " Feb 03 12:32:07 crc kubenswrapper[4820]: I0203 12:32:07.049188 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09cdfd30-121c-4d95-9a12-515eda5d3ba3-kube-api-access-qvnlr" (OuterVolumeSpecName: "kube-api-access-qvnlr") pod "09cdfd30-121c-4d95-9a12-515eda5d3ba3" (UID: "09cdfd30-121c-4d95-9a12-515eda5d3ba3"). InnerVolumeSpecName "kube-api-access-qvnlr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:32:07 crc kubenswrapper[4820]: I0203 12:32:07.131947 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qvnlr\" (UniqueName: \"kubernetes.io/projected/09cdfd30-121c-4d95-9a12-515eda5d3ba3-kube-api-access-qvnlr\") on node \"crc\" DevicePath \"\"" Feb 03 12:32:07 crc kubenswrapper[4820]: I0203 12:32:07.139572 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09cdfd30-121c-4d95-9a12-515eda5d3ba3-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "09cdfd30-121c-4d95-9a12-515eda5d3ba3" (UID: "09cdfd30-121c-4d95-9a12-515eda5d3ba3"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:32:07 crc kubenswrapper[4820]: I0203 12:32:07.164499 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09cdfd30-121c-4d95-9a12-515eda5d3ba3-config" (OuterVolumeSpecName: "config") pod "09cdfd30-121c-4d95-9a12-515eda5d3ba3" (UID: "09cdfd30-121c-4d95-9a12-515eda5d3ba3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:32:07 crc kubenswrapper[4820]: I0203 12:32:07.179796 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09cdfd30-121c-4d95-9a12-515eda5d3ba3-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "09cdfd30-121c-4d95-9a12-515eda5d3ba3" (UID: "09cdfd30-121c-4d95-9a12-515eda5d3ba3"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:32:07 crc kubenswrapper[4820]: E0203 12:32:07.181318 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/09cdfd30-121c-4d95-9a12-515eda5d3ba3-dns-swift-storage-0 podName:09cdfd30-121c-4d95-9a12-515eda5d3ba3 nodeName:}" failed. No retries permitted until 2026-02-03 12:32:07.681284288 +0000 UTC m=+1645.204360232 (durationBeforeRetry 500ms). Error: error cleaning subPath mounts for volume "dns-swift-storage-0" (UniqueName: "kubernetes.io/configmap/09cdfd30-121c-4d95-9a12-515eda5d3ba3-dns-swift-storage-0") pod "09cdfd30-121c-4d95-9a12-515eda5d3ba3" (UID: "09cdfd30-121c-4d95-9a12-515eda5d3ba3") : error deleting /var/lib/kubelet/pods/09cdfd30-121c-4d95-9a12-515eda5d3ba3/volume-subpaths: remove /var/lib/kubelet/pods/09cdfd30-121c-4d95-9a12-515eda5d3ba3/volume-subpaths: no such file or directory Feb 03 12:32:07 crc kubenswrapper[4820]: I0203 12:32:07.181629 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09cdfd30-121c-4d95-9a12-515eda5d3ba3-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "09cdfd30-121c-4d95-9a12-515eda5d3ba3" (UID: "09cdfd30-121c-4d95-9a12-515eda5d3ba3"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:32:07 crc kubenswrapper[4820]: I0203 12:32:07.234938 4820 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09cdfd30-121c-4d95-9a12-515eda5d3ba3-config\") on node \"crc\" DevicePath \"\"" Feb 03 12:32:07 crc kubenswrapper[4820]: I0203 12:32:07.234973 4820 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/09cdfd30-121c-4d95-9a12-515eda5d3ba3-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 03 12:32:07 crc kubenswrapper[4820]: I0203 12:32:07.234988 4820 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/09cdfd30-121c-4d95-9a12-515eda5d3ba3-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 03 12:32:07 crc kubenswrapper[4820]: I0203 12:32:07.235002 4820 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/09cdfd30-121c-4d95-9a12-515eda5d3ba3-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 03 12:32:07 crc kubenswrapper[4820]: I0203 12:32:07.505152 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-656b464f74-h7xjt"] Feb 03 12:32:07 crc kubenswrapper[4820]: I0203 12:32:07.563022 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Feb 03 12:32:07 crc kubenswrapper[4820]: E0203 12:32:07.573621 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09cdfd30-121c-4d95-9a12-515eda5d3ba3" containerName="dnsmasq-dns" Feb 03 12:32:07 crc kubenswrapper[4820]: I0203 12:32:07.573665 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="09cdfd30-121c-4d95-9a12-515eda5d3ba3" containerName="dnsmasq-dns" Feb 03 12:32:07 crc kubenswrapper[4820]: E0203 12:32:07.573709 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09cdfd30-121c-4d95-9a12-515eda5d3ba3" containerName="init" Feb 03 12:32:07 crc kubenswrapper[4820]: I0203 12:32:07.573718 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="09cdfd30-121c-4d95-9a12-515eda5d3ba3" containerName="init" Feb 03 12:32:07 crc kubenswrapper[4820]: E0203 12:32:07.573729 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b8f1b47-829d-4da3-a6a9-b73bbe7d20c0" containerName="cinder-db-sync" Feb 03 12:32:07 crc kubenswrapper[4820]: I0203 12:32:07.573736 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b8f1b47-829d-4da3-a6a9-b73bbe7d20c0" containerName="cinder-db-sync" Feb 03 12:32:07 crc kubenswrapper[4820]: I0203 12:32:07.574086 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="09cdfd30-121c-4d95-9a12-515eda5d3ba3" containerName="dnsmasq-dns" Feb 03 12:32:07 crc kubenswrapper[4820]: I0203 12:32:07.574111 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b8f1b47-829d-4da3-a6a9-b73bbe7d20c0" containerName="cinder-db-sync" Feb 03 12:32:07 crc kubenswrapper[4820]: I0203 12:32:07.575605 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 03 12:32:07 crc kubenswrapper[4820]: I0203 12:32:07.581534 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Feb 03 12:32:07 crc kubenswrapper[4820]: I0203 12:32:07.582257 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Feb 03 12:32:07 crc kubenswrapper[4820]: I0203 12:32:07.582502 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Feb 03 12:32:07 crc kubenswrapper[4820]: I0203 12:32:07.583026 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-hmvhn" Feb 03 12:32:07 crc kubenswrapper[4820]: I0203 12:32:07.583267 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-hmvhn" event={"ID":"09cdfd30-121c-4d95-9a12-515eda5d3ba3","Type":"ContainerDied","Data":"0ba75ed4e3a00a5013be5595a1e32aa89c0cc94ed25311dbc535b976aeb5e2ec"} Feb 03 12:32:07 crc kubenswrapper[4820]: I0203 12:32:07.583349 4820 scope.go:117] "RemoveContainer" containerID="3c3f7a455f66a4864807db849da6c1d029e46ad404c854baa1cb1c0c7a26cfa9" Feb 03 12:32:07 crc kubenswrapper[4820]: I0203 12:32:07.586518 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-vfvnz" Feb 03 12:32:07 crc kubenswrapper[4820]: I0203 12:32:07.589514 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-656b464f74-h7xjt" event={"ID":"43ecc5a4-8bd1-435c-8514-de23a493ee45","Type":"ContainerStarted","Data":"639605d9f517691165f50a2327875d4fba984e47bb2d6b7d21b8dbeabb8cc3b1"} Feb 03 12:32:07 crc kubenswrapper[4820]: I0203 12:32:07.624587 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 03 12:32:07 crc kubenswrapper[4820]: I0203 12:32:07.723092 4820 scope.go:117] "RemoveContainer" containerID="0b6ab62c7e4f3f1e72035ba2efe6dd41845452b2781b1fcae30d6cc43eb978ab" Feb 03 12:32:07 crc kubenswrapper[4820]: I0203 12:32:07.752954 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/09cdfd30-121c-4d95-9a12-515eda5d3ba3-dns-swift-storage-0\") pod \"09cdfd30-121c-4d95-9a12-515eda5d3ba3\" (UID: \"09cdfd30-121c-4d95-9a12-515eda5d3ba3\") " Feb 03 12:32:07 crc kubenswrapper[4820]: I0203 12:32:07.765309 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-sfhnq"] Feb 03 12:32:07 crc kubenswrapper[4820]: I0203 12:32:07.779197 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1de311c0-cca0-4f9f-8897-6c5239e71368-scripts\") pod \"cinder-scheduler-0\" (UID: \"1de311c0-cca0-4f9f-8897-6c5239e71368\") " pod="openstack/cinder-scheduler-0" Feb 03 12:32:07 crc kubenswrapper[4820]: I0203 12:32:07.779328 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1de311c0-cca0-4f9f-8897-6c5239e71368-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"1de311c0-cca0-4f9f-8897-6c5239e71368\") " pod="openstack/cinder-scheduler-0" Feb 03 12:32:07 crc kubenswrapper[4820]: I0203 12:32:07.779542 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5cvnc\" (UniqueName: 
\"kubernetes.io/projected/1de311c0-cca0-4f9f-8897-6c5239e71368-kube-api-access-5cvnc\") pod \"cinder-scheduler-0\" (UID: \"1de311c0-cca0-4f9f-8897-6c5239e71368\") " pod="openstack/cinder-scheduler-0" Feb 03 12:32:07 crc kubenswrapper[4820]: I0203 12:32:07.779867 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1de311c0-cca0-4f9f-8897-6c5239e71368-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"1de311c0-cca0-4f9f-8897-6c5239e71368\") " pod="openstack/cinder-scheduler-0" Feb 03 12:32:07 crc kubenswrapper[4820]: I0203 12:32:07.780113 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1de311c0-cca0-4f9f-8897-6c5239e71368-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"1de311c0-cca0-4f9f-8897-6c5239e71368\") " pod="openstack/cinder-scheduler-0" Feb 03 12:32:07 crc kubenswrapper[4820]: I0203 12:32:07.791474 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1de311c0-cca0-4f9f-8897-6c5239e71368-config-data\") pod \"cinder-scheduler-0\" (UID: \"1de311c0-cca0-4f9f-8897-6c5239e71368\") " pod="openstack/cinder-scheduler-0" Feb 03 12:32:07 crc kubenswrapper[4820]: I0203 12:32:07.791725 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09cdfd30-121c-4d95-9a12-515eda5d3ba3-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "09cdfd30-121c-4d95-9a12-515eda5d3ba3" (UID: "09cdfd30-121c-4d95-9a12-515eda5d3ba3"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:32:07 crc kubenswrapper[4820]: W0203 12:32:07.796841 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc0fcb4b0_e763_4309_8097_facfd4782cfb.slice/crio-340df6559d1d7752ca27b7f5ec95bcee8aad05e9f71ed615516d3ae7c876804e WatchSource:0}: Error finding container 340df6559d1d7752ca27b7f5ec95bcee8aad05e9f71ed615516d3ae7c876804e: Status 404 returned error can't find the container with id 340df6559d1d7752ca27b7f5ec95bcee8aad05e9f71ed615516d3ae7c876804e Feb 03 12:32:07 crc kubenswrapper[4820]: I0203 12:32:07.804111 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-sfhnq" Feb 03 12:32:07 crc kubenswrapper[4820]: I0203 12:32:07.833358 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-sfhnq"] Feb 03 12:32:07 crc kubenswrapper[4820]: I0203 12:32:07.893914 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0e9efb73-1fc6-4e04-9b3c-89226c1d717c-config\") pod \"dnsmasq-dns-5c9776ccc5-sfhnq\" (UID: \"0e9efb73-1fc6-4e04-9b3c-89226c1d717c\") " pod="openstack/dnsmasq-dns-5c9776ccc5-sfhnq" Feb 03 12:32:07 crc kubenswrapper[4820]: I0203 12:32:07.894369 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0e9efb73-1fc6-4e04-9b3c-89226c1d717c-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-sfhnq\" (UID: \"0e9efb73-1fc6-4e04-9b3c-89226c1d717c\") " pod="openstack/dnsmasq-dns-5c9776ccc5-sfhnq" Feb 03 12:32:07 crc kubenswrapper[4820]: I0203 12:32:07.894529 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1de311c0-cca0-4f9f-8897-6c5239e71368-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"1de311c0-cca0-4f9f-8897-6c5239e71368\") " pod="openstack/cinder-scheduler-0" Feb 03 12:32:07 crc kubenswrapper[4820]: I0203 12:32:07.894742 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0e9efb73-1fc6-4e04-9b3c-89226c1d717c-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-sfhnq\" (UID: \"0e9efb73-1fc6-4e04-9b3c-89226c1d717c\") " pod="openstack/dnsmasq-dns-5c9776ccc5-sfhnq" Feb 03 12:32:07 crc kubenswrapper[4820]: I0203 12:32:07.894882 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1de311c0-cca0-4f9f-8897-6c5239e71368-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"1de311c0-cca0-4f9f-8897-6c5239e71368\") " pod="openstack/cinder-scheduler-0" Feb 03 12:32:07 crc kubenswrapper[4820]: I0203 12:32:07.895174 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1de311c0-cca0-4f9f-8897-6c5239e71368-config-data\") pod \"cinder-scheduler-0\" (UID: \"1de311c0-cca0-4f9f-8897-6c5239e71368\") " pod="openstack/cinder-scheduler-0" Feb 03 12:32:07 crc kubenswrapper[4820]: I0203 12:32:07.895316 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0e9efb73-1fc6-4e04-9b3c-89226c1d717c-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-sfhnq\" (UID: \"0e9efb73-1fc6-4e04-9b3c-89226c1d717c\") " pod="openstack/dnsmasq-dns-5c9776ccc5-sfhnq" Feb 03 12:32:07 crc kubenswrapper[4820]: I0203 12:32:07.895457 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vp5qn\" (UniqueName: \"kubernetes.io/projected/0e9efb73-1fc6-4e04-9b3c-89226c1d717c-kube-api-access-vp5qn\") pod \"dnsmasq-dns-5c9776ccc5-sfhnq\" (UID: \"0e9efb73-1fc6-4e04-9b3c-89226c1d717c\") " pod="openstack/dnsmasq-dns-5c9776ccc5-sfhnq" Feb 03 12:32:07 crc kubenswrapper[4820]: I0203 12:32:07.895686 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/1de311c0-cca0-4f9f-8897-6c5239e71368-scripts\") pod \"cinder-scheduler-0\" (UID: \"1de311c0-cca0-4f9f-8897-6c5239e71368\") " pod="openstack/cinder-scheduler-0" Feb 03 12:32:07 crc kubenswrapper[4820]: I0203 12:32:07.895808 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1de311c0-cca0-4f9f-8897-6c5239e71368-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"1de311c0-cca0-4f9f-8897-6c5239e71368\") " pod="openstack/cinder-scheduler-0" Feb 03 12:32:07 crc kubenswrapper[4820]: I0203 12:32:07.896106 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0e9efb73-1fc6-4e04-9b3c-89226c1d717c-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-sfhnq\" (UID: \"0e9efb73-1fc6-4e04-9b3c-89226c1d717c\") " pod="openstack/dnsmasq-dns-5c9776ccc5-sfhnq" Feb 03 12:32:07 crc kubenswrapper[4820]: I0203 12:32:07.896252 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5cvnc\" (UniqueName: \"kubernetes.io/projected/1de311c0-cca0-4f9f-8897-6c5239e71368-kube-api-access-5cvnc\") pod \"cinder-scheduler-0\" (UID: \"1de311c0-cca0-4f9f-8897-6c5239e71368\") " pod="openstack/cinder-scheduler-0" Feb 03 12:32:07 crc kubenswrapper[4820]: I0203 12:32:07.896582 4820 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/09cdfd30-121c-4d95-9a12-515eda5d3ba3-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 03 12:32:07 crc kubenswrapper[4820]: I0203 12:32:07.902054 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1de311c0-cca0-4f9f-8897-6c5239e71368-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"1de311c0-cca0-4f9f-8897-6c5239e71368\") " pod="openstack/cinder-scheduler-0" Feb 03 12:32:07 crc kubenswrapper[4820]: I0203 12:32:07.929618 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1de311c0-cca0-4f9f-8897-6c5239e71368-scripts\") pod \"cinder-scheduler-0\" (UID: \"1de311c0-cca0-4f9f-8897-6c5239e71368\") " pod="openstack/cinder-scheduler-0" Feb 03 12:32:07 crc kubenswrapper[4820]: I0203 12:32:07.939688 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1de311c0-cca0-4f9f-8897-6c5239e71368-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"1de311c0-cca0-4f9f-8897-6c5239e71368\") " pod="openstack/cinder-scheduler-0" Feb 03 12:32:07 crc kubenswrapper[4820]: I0203 12:32:07.955153 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5j76d"] Feb 03 12:32:07 crc kubenswrapper[4820]: I0203 12:32:07.955433 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5cvnc\" (UniqueName: \"kubernetes.io/projected/1de311c0-cca0-4f9f-8897-6c5239e71368-kube-api-access-5cvnc\") pod \"cinder-scheduler-0\" (UID: \"1de311c0-cca0-4f9f-8897-6c5239e71368\") " pod="openstack/cinder-scheduler-0" Feb 03 12:32:07 crc kubenswrapper[4820]: I0203 12:32:07.956646 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1de311c0-cca0-4f9f-8897-6c5239e71368-config-data\") pod \"cinder-scheduler-0\" (UID: 
\"1de311c0-cca0-4f9f-8897-6c5239e71368\") " pod="openstack/cinder-scheduler-0" Feb 03 12:32:08 crc kubenswrapper[4820]: I0203 12:32:08.000701 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0e9efb73-1fc6-4e04-9b3c-89226c1d717c-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-sfhnq\" (UID: \"0e9efb73-1fc6-4e04-9b3c-89226c1d717c\") " pod="openstack/dnsmasq-dns-5c9776ccc5-sfhnq" Feb 03 12:32:08 crc kubenswrapper[4820]: I0203 12:32:08.001221 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0e9efb73-1fc6-4e04-9b3c-89226c1d717c-config\") pod \"dnsmasq-dns-5c9776ccc5-sfhnq\" (UID: \"0e9efb73-1fc6-4e04-9b3c-89226c1d717c\") " pod="openstack/dnsmasq-dns-5c9776ccc5-sfhnq" Feb 03 12:32:08 crc kubenswrapper[4820]: I0203 12:32:08.001275 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0e9efb73-1fc6-4e04-9b3c-89226c1d717c-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-sfhnq\" (UID: \"0e9efb73-1fc6-4e04-9b3c-89226c1d717c\") " pod="openstack/dnsmasq-dns-5c9776ccc5-sfhnq" Feb 03 12:32:08 crc kubenswrapper[4820]: I0203 12:32:08.001368 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0e9efb73-1fc6-4e04-9b3c-89226c1d717c-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-sfhnq\" (UID: \"0e9efb73-1fc6-4e04-9b3c-89226c1d717c\") " pod="openstack/dnsmasq-dns-5c9776ccc5-sfhnq" Feb 03 12:32:08 crc kubenswrapper[4820]: I0203 12:32:08.001494 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0e9efb73-1fc6-4e04-9b3c-89226c1d717c-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-sfhnq\" (UID: \"0e9efb73-1fc6-4e04-9b3c-89226c1d717c\") " pod="openstack/dnsmasq-dns-5c9776ccc5-sfhnq" Feb 03 12:32:08 crc kubenswrapper[4820]: I0203 12:32:08.001523 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vp5qn\" (UniqueName: \"kubernetes.io/projected/0e9efb73-1fc6-4e04-9b3c-89226c1d717c-kube-api-access-vp5qn\") pod \"dnsmasq-dns-5c9776ccc5-sfhnq\" (UID: \"0e9efb73-1fc6-4e04-9b3c-89226c1d717c\") " pod="openstack/dnsmasq-dns-5c9776ccc5-sfhnq" Feb 03 12:32:08 crc kubenswrapper[4820]: I0203 12:32:08.012372 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1de311c0-cca0-4f9f-8897-6c5239e71368-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"1de311c0-cca0-4f9f-8897-6c5239e71368\") " pod="openstack/cinder-scheduler-0" Feb 03 12:32:08 crc kubenswrapper[4820]: I0203 12:32:08.013359 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0e9efb73-1fc6-4e04-9b3c-89226c1d717c-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-sfhnq\" (UID: \"0e9efb73-1fc6-4e04-9b3c-89226c1d717c\") " pod="openstack/dnsmasq-dns-5c9776ccc5-sfhnq" Feb 03 12:32:08 crc kubenswrapper[4820]: I0203 12:32:08.014084 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0e9efb73-1fc6-4e04-9b3c-89226c1d717c-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-sfhnq\" (UID: \"0e9efb73-1fc6-4e04-9b3c-89226c1d717c\") " pod="openstack/dnsmasq-dns-5c9776ccc5-sfhnq" Feb 03 
12:32:08 crc kubenswrapper[4820]: I0203 12:32:08.014788 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0e9efb73-1fc6-4e04-9b3c-89226c1d717c-config\") pod \"dnsmasq-dns-5c9776ccc5-sfhnq\" (UID: \"0e9efb73-1fc6-4e04-9b3c-89226c1d717c\") " pod="openstack/dnsmasq-dns-5c9776ccc5-sfhnq" Feb 03 12:32:08 crc kubenswrapper[4820]: I0203 12:32:08.018708 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0e9efb73-1fc6-4e04-9b3c-89226c1d717c-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-sfhnq\" (UID: \"0e9efb73-1fc6-4e04-9b3c-89226c1d717c\") " pod="openstack/dnsmasq-dns-5c9776ccc5-sfhnq" Feb 03 12:32:08 crc kubenswrapper[4820]: I0203 12:32:08.022446 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0e9efb73-1fc6-4e04-9b3c-89226c1d717c-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-sfhnq\" (UID: \"0e9efb73-1fc6-4e04-9b3c-89226c1d717c\") " pod="openstack/dnsmasq-dns-5c9776ccc5-sfhnq" Feb 03 12:32:08 crc kubenswrapper[4820]: I0203 12:32:08.022951 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 03 12:32:08 crc kubenswrapper[4820]: I0203 12:32:08.047658 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-hmvhn"] Feb 03 12:32:08 crc kubenswrapper[4820]: I0203 12:32:08.102706 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vp5qn\" (UniqueName: \"kubernetes.io/projected/0e9efb73-1fc6-4e04-9b3c-89226c1d717c-kube-api-access-vp5qn\") pod \"dnsmasq-dns-5c9776ccc5-sfhnq\" (UID: \"0e9efb73-1fc6-4e04-9b3c-89226c1d717c\") " pod="openstack/dnsmasq-dns-5c9776ccc5-sfhnq" Feb 03 12:32:08 crc kubenswrapper[4820]: I0203 12:32:08.139221 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-hmvhn"] Feb 03 12:32:08 crc kubenswrapper[4820]: I0203 12:32:08.181839 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-sfhnq" Feb 03 12:32:08 crc kubenswrapper[4820]: I0203 12:32:08.220307 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-9dfbf858-g4qlm" podUID="71ae9703-401c-41f0-8316-9b485f9d0b29" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.179:9311/healthcheck\": read tcp 10.217.0.2:45368->10.217.0.179:9311: read: connection reset by peer" Feb 03 12:32:08 crc kubenswrapper[4820]: I0203 12:32:08.220656 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-9dfbf858-g4qlm" podUID="71ae9703-401c-41f0-8316-9b485f9d0b29" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.179:9311/healthcheck\": read tcp 10.217.0.2:45354->10.217.0.179:9311: read: connection reset by peer" Feb 03 12:32:08 crc kubenswrapper[4820]: I0203 12:32:08.237943 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Feb 03 12:32:08 crc kubenswrapper[4820]: I0203 12:32:08.241804 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 03 12:32:08 crc kubenswrapper[4820]: I0203 12:32:08.268318 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Feb 03 12:32:08 crc kubenswrapper[4820]: I0203 12:32:08.277691 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 03 12:32:08 crc kubenswrapper[4820]: I0203 12:32:08.311716 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/526ca20e-1fce-4a0e-bde8-f82c887e5d82-config-data\") pod \"cinder-api-0\" (UID: \"526ca20e-1fce-4a0e-bde8-f82c887e5d82\") " pod="openstack/cinder-api-0" Feb 03 12:32:08 crc kubenswrapper[4820]: I0203 12:32:08.311817 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/526ca20e-1fce-4a0e-bde8-f82c887e5d82-logs\") pod \"cinder-api-0\" (UID: \"526ca20e-1fce-4a0e-bde8-f82c887e5d82\") " pod="openstack/cinder-api-0" Feb 03 12:32:08 crc kubenswrapper[4820]: I0203 12:32:08.311869 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/526ca20e-1fce-4a0e-bde8-f82c887e5d82-config-data-custom\") pod \"cinder-api-0\" (UID: \"526ca20e-1fce-4a0e-bde8-f82c887e5d82\") " pod="openstack/cinder-api-0" Feb 03 12:32:08 crc kubenswrapper[4820]: I0203 12:32:08.311932 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/526ca20e-1fce-4a0e-bde8-f82c887e5d82-etc-machine-id\") pod \"cinder-api-0\" (UID: \"526ca20e-1fce-4a0e-bde8-f82c887e5d82\") " pod="openstack/cinder-api-0" Feb 03 12:32:08 crc kubenswrapper[4820]: I0203 12:32:08.311960 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/526ca20e-1fce-4a0e-bde8-f82c887e5d82-scripts\") pod \"cinder-api-0\" (UID: \"526ca20e-1fce-4a0e-bde8-f82c887e5d82\") " pod="openstack/cinder-api-0" Feb 03 12:32:08 crc kubenswrapper[4820]: I0203 12:32:08.312000 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jbsb9\" (UniqueName: \"kubernetes.io/projected/526ca20e-1fce-4a0e-bde8-f82c887e5d82-kube-api-access-jbsb9\") pod \"cinder-api-0\" (UID: \"526ca20e-1fce-4a0e-bde8-f82c887e5d82\") " pod="openstack/cinder-api-0" Feb 03 12:32:08 crc kubenswrapper[4820]: I0203 12:32:08.312057 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/526ca20e-1fce-4a0e-bde8-f82c887e5d82-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"526ca20e-1fce-4a0e-bde8-f82c887e5d82\") " pod="openstack/cinder-api-0" Feb 03 12:32:08 crc kubenswrapper[4820]: I0203 12:32:08.413182 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-qjmdv" podUID="ad9b0bbe-7f17-4347-bda3-5f0a843b3997" containerName="registry-server" probeResult="failure" output=< Feb 03 12:32:08 crc kubenswrapper[4820]: timeout: failed to connect service ":50051" within 1s Feb 03 12:32:08 crc kubenswrapper[4820]: > Feb 03 12:32:08 crc kubenswrapper[4820]: I0203 12:32:08.646370 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/526ca20e-1fce-4a0e-bde8-f82c887e5d82-etc-machine-id\") pod \"cinder-api-0\" (UID: \"526ca20e-1fce-4a0e-bde8-f82c887e5d82\") " pod="openstack/cinder-api-0" Feb 03 12:32:08 crc kubenswrapper[4820]: I0203 12:32:08.646434 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/526ca20e-1fce-4a0e-bde8-f82c887e5d82-scripts\") pod \"cinder-api-0\" (UID: \"526ca20e-1fce-4a0e-bde8-f82c887e5d82\") " pod="openstack/cinder-api-0" Feb 03 12:32:08 crc kubenswrapper[4820]: I0203 12:32:08.646483 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jbsb9\" (UniqueName: \"kubernetes.io/projected/526ca20e-1fce-4a0e-bde8-f82c887e5d82-kube-api-access-jbsb9\") pod \"cinder-api-0\" (UID: \"526ca20e-1fce-4a0e-bde8-f82c887e5d82\") " pod="openstack/cinder-api-0" Feb 03 12:32:08 crc kubenswrapper[4820]: I0203 12:32:08.646573 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/526ca20e-1fce-4a0e-bde8-f82c887e5d82-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"526ca20e-1fce-4a0e-bde8-f82c887e5d82\") " pod="openstack/cinder-api-0" Feb 03 12:32:08 crc kubenswrapper[4820]: I0203 12:32:08.646649 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/526ca20e-1fce-4a0e-bde8-f82c887e5d82-config-data\") pod \"cinder-api-0\" (UID: \"526ca20e-1fce-4a0e-bde8-f82c887e5d82\") " pod="openstack/cinder-api-0" Feb 03 12:32:08 crc kubenswrapper[4820]: I0203 12:32:08.646717 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/526ca20e-1fce-4a0e-bde8-f82c887e5d82-logs\") pod \"cinder-api-0\" (UID: \"526ca20e-1fce-4a0e-bde8-f82c887e5d82\") " pod="openstack/cinder-api-0" Feb 03 12:32:08 crc kubenswrapper[4820]: I0203 12:32:08.646784 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/526ca20e-1fce-4a0e-bde8-f82c887e5d82-config-data-custom\") pod \"cinder-api-0\" (UID: \"526ca20e-1fce-4a0e-bde8-f82c887e5d82\") " pod="openstack/cinder-api-0" Feb 03 12:32:08 crc kubenswrapper[4820]: I0203 12:32:08.648300 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/526ca20e-1fce-4a0e-bde8-f82c887e5d82-etc-machine-id\") pod \"cinder-api-0\" (UID: \"526ca20e-1fce-4a0e-bde8-f82c887e5d82\") " pod="openstack/cinder-api-0" Feb 03 12:32:08 crc kubenswrapper[4820]: I0203 12:32:08.651659 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/526ca20e-1fce-4a0e-bde8-f82c887e5d82-logs\") pod \"cinder-api-0\" (UID: \"526ca20e-1fce-4a0e-bde8-f82c887e5d82\") " pod="openstack/cinder-api-0" Feb 03 12:32:08 crc kubenswrapper[4820]: I0203 12:32:08.654641 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/526ca20e-1fce-4a0e-bde8-f82c887e5d82-config-data-custom\") pod \"cinder-api-0\" (UID: \"526ca20e-1fce-4a0e-bde8-f82c887e5d82\") " pod="openstack/cinder-api-0" Feb 03 12:32:08 crc kubenswrapper[4820]: I0203 12:32:08.657439 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/526ca20e-1fce-4a0e-bde8-f82c887e5d82-scripts\") pod \"cinder-api-0\" (UID: \"526ca20e-1fce-4a0e-bde8-f82c887e5d82\") " pod="openstack/cinder-api-0" Feb 03 12:32:08 crc kubenswrapper[4820]: I0203 12:32:08.657990 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/526ca20e-1fce-4a0e-bde8-f82c887e5d82-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"526ca20e-1fce-4a0e-bde8-f82c887e5d82\") " pod="openstack/cinder-api-0" Feb 03 12:32:08 crc kubenswrapper[4820]: I0203 12:32:08.685748 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/526ca20e-1fce-4a0e-bde8-f82c887e5d82-config-data\") pod \"cinder-api-0\" (UID: \"526ca20e-1fce-4a0e-bde8-f82c887e5d82\") " pod="openstack/cinder-api-0" Feb 03 12:32:08 crc kubenswrapper[4820]: I0203 12:32:08.694658 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jbsb9\" (UniqueName: \"kubernetes.io/projected/526ca20e-1fce-4a0e-bde8-f82c887e5d82-kube-api-access-jbsb9\") pod \"cinder-api-0\" (UID: \"526ca20e-1fce-4a0e-bde8-f82c887e5d82\") " pod="openstack/cinder-api-0" Feb 03 12:32:08 crc kubenswrapper[4820]: I0203 12:32:08.756285 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 03 12:32:08 crc kubenswrapper[4820]: I0203 12:32:08.787729 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5j76d" event={"ID":"c0fcb4b0-e763-4309-8097-facfd4782cfb","Type":"ContainerStarted","Data":"340df6559d1d7752ca27b7f5ec95bcee8aad05e9f71ed615516d3ae7c876804e"} Feb 03 12:32:08 crc kubenswrapper[4820]: I0203 12:32:08.804121 4820 generic.go:334] "Generic (PLEG): container finished" podID="71ae9703-401c-41f0-8316-9b485f9d0b29" containerID="c860c1b2ef845649015318e1275bf80250d51f008b745cc276b35816b91106c3" exitCode=0 Feb 03 12:32:08 crc kubenswrapper[4820]: I0203 12:32:08.804291 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-9dfbf858-g4qlm" event={"ID":"71ae9703-401c-41f0-8316-9b485f9d0b29","Type":"ContainerDied","Data":"c860c1b2ef845649015318e1275bf80250d51f008b745cc276b35816b91106c3"} Feb 03 12:32:09 crc kubenswrapper[4820]: I0203 12:32:09.496325 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09cdfd30-121c-4d95-9a12-515eda5d3ba3" path="/var/lib/kubelet/pods/09cdfd30-121c-4d95-9a12-515eda5d3ba3/volumes" Feb 03 12:32:09 crc kubenswrapper[4820]: I0203 12:32:09.597957 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 03 12:32:09 crc kubenswrapper[4820]: I0203 12:32:09.840376 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2cc54f2-167c-4c79-b616-2e1cd122fed2-combined-ca-bundle\") pod \"e2cc54f2-167c-4c79-b616-2e1cd122fed2\" (UID: \"e2cc54f2-167c-4c79-b616-2e1cd122fed2\") " Feb 03 12:32:09 crc kubenswrapper[4820]: I0203 12:32:09.840438 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2cc54f2-167c-4c79-b616-2e1cd122fed2-config-data\") pod \"e2cc54f2-167c-4c79-b616-2e1cd122fed2\" (UID: \"e2cc54f2-167c-4c79-b616-2e1cd122fed2\") " Feb 03 12:32:09 crc kubenswrapper[4820]: I0203 12:32:09.840500 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e2cc54f2-167c-4c79-b616-2e1cd122fed2-run-httpd\") pod \"e2cc54f2-167c-4c79-b616-2e1cd122fed2\" (UID: \"e2cc54f2-167c-4c79-b616-2e1cd122fed2\") " Feb 03 12:32:09 crc kubenswrapper[4820]: I0203 12:32:09.840590 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e2cc54f2-167c-4c79-b616-2e1cd122fed2-scripts\") pod \"e2cc54f2-167c-4c79-b616-2e1cd122fed2\" (UID: \"e2cc54f2-167c-4c79-b616-2e1cd122fed2\") " Feb 03 12:32:09 crc kubenswrapper[4820]: I0203 12:32:09.840622 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e2cc54f2-167c-4c79-b616-2e1cd122fed2-sg-core-conf-yaml\") pod \"e2cc54f2-167c-4c79-b616-2e1cd122fed2\" (UID: \"e2cc54f2-167c-4c79-b616-2e1cd122fed2\") " Feb 03 12:32:09 crc kubenswrapper[4820]: I0203 12:32:09.840658 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e2cc54f2-167c-4c79-b616-2e1cd122fed2-log-httpd\") pod \"e2cc54f2-167c-4c79-b616-2e1cd122fed2\" (UID: \"e2cc54f2-167c-4c79-b616-2e1cd122fed2\") " Feb 03 12:32:09 crc kubenswrapper[4820]: I0203 12:32:09.840724 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fftdb\" (UniqueName: \"kubernetes.io/projected/e2cc54f2-167c-4c79-b616-2e1cd122fed2-kube-api-access-fftdb\") pod \"e2cc54f2-167c-4c79-b616-2e1cd122fed2\" (UID: \"e2cc54f2-167c-4c79-b616-2e1cd122fed2\") " Feb 03 12:32:09 crc kubenswrapper[4820]: I0203 12:32:09.842919 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e2cc54f2-167c-4c79-b616-2e1cd122fed2-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "e2cc54f2-167c-4c79-b616-2e1cd122fed2" (UID: "e2cc54f2-167c-4c79-b616-2e1cd122fed2"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 12:32:09 crc kubenswrapper[4820]: I0203 12:32:09.846787 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e2cc54f2-167c-4c79-b616-2e1cd122fed2-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "e2cc54f2-167c-4c79-b616-2e1cd122fed2" (UID: "e2cc54f2-167c-4c79-b616-2e1cd122fed2"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 12:32:09 crc kubenswrapper[4820]: I0203 12:32:09.868094 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2cc54f2-167c-4c79-b616-2e1cd122fed2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e2cc54f2-167c-4c79-b616-2e1cd122fed2" (UID: "e2cc54f2-167c-4c79-b616-2e1cd122fed2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:32:09 crc kubenswrapper[4820]: I0203 12:32:09.868150 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2cc54f2-167c-4c79-b616-2e1cd122fed2-scripts" (OuterVolumeSpecName: "scripts") pod "e2cc54f2-167c-4c79-b616-2e1cd122fed2" (UID: "e2cc54f2-167c-4c79-b616-2e1cd122fed2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:32:09 crc kubenswrapper[4820]: I0203 12:32:09.869846 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2cc54f2-167c-4c79-b616-2e1cd122fed2-config-data" (OuterVolumeSpecName: "config-data") pod "e2cc54f2-167c-4c79-b616-2e1cd122fed2" (UID: "e2cc54f2-167c-4c79-b616-2e1cd122fed2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:32:09 crc kubenswrapper[4820]: I0203 12:32:09.877259 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2cc54f2-167c-4c79-b616-2e1cd122fed2-kube-api-access-fftdb" (OuterVolumeSpecName: "kube-api-access-fftdb") pod "e2cc54f2-167c-4c79-b616-2e1cd122fed2" (UID: "e2cc54f2-167c-4c79-b616-2e1cd122fed2"). InnerVolumeSpecName "kube-api-access-fftdb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:32:09 crc kubenswrapper[4820]: I0203 12:32:09.881061 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2cc54f2-167c-4c79-b616-2e1cd122fed2-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "e2cc54f2-167c-4c79-b616-2e1cd122fed2" (UID: "e2cc54f2-167c-4c79-b616-2e1cd122fed2"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:32:09 crc kubenswrapper[4820]: I0203 12:32:09.890099 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-656b464f74-h7xjt" event={"ID":"43ecc5a4-8bd1-435c-8514-de23a493ee45","Type":"ContainerStarted","Data":"ddebb023589ff446db4b7f0006fb506177845578c44db9cc3fa3462e15db3926"} Feb 03 12:32:09 crc kubenswrapper[4820]: I0203 12:32:09.903202 4820 generic.go:334] "Generic (PLEG): container finished" podID="c0fcb4b0-e763-4309-8097-facfd4782cfb" containerID="d612338d650e9185ebb3ad6d02cb11504c7cbe592261cf4c4e977d3faf21db66" exitCode=0 Feb 03 12:32:09 crc kubenswrapper[4820]: I0203 12:32:09.903302 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5j76d" event={"ID":"c0fcb4b0-e763-4309-8097-facfd4782cfb","Type":"ContainerDied","Data":"d612338d650e9185ebb3ad6d02cb11504c7cbe592261cf4c4e977d3faf21db66"} Feb 03 12:32:09 crc kubenswrapper[4820]: I0203 12:32:09.933107 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e2cc54f2-167c-4c79-b616-2e1cd122fed2","Type":"ContainerDied","Data":"2323248a6e2e02ad0a646382f15866bc79731b9692262e9b3d051f749519af92"} Feb 03 12:32:09 crc kubenswrapper[4820]: I0203 12:32:09.933290 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 03 12:32:09 crc kubenswrapper[4820]: I0203 12:32:09.943757 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2cc54f2-167c-4c79-b616-2e1cd122fed2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 12:32:09 crc kubenswrapper[4820]: I0203 12:32:09.943799 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2cc54f2-167c-4c79-b616-2e1cd122fed2-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 12:32:09 crc kubenswrapper[4820]: I0203 12:32:09.943811 4820 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e2cc54f2-167c-4c79-b616-2e1cd122fed2-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 03 12:32:09 crc kubenswrapper[4820]: I0203 12:32:09.943823 4820 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e2cc54f2-167c-4c79-b616-2e1cd122fed2-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 12:32:09 crc kubenswrapper[4820]: I0203 12:32:09.943835 4820 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e2cc54f2-167c-4c79-b616-2e1cd122fed2-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 03 12:32:09 crc kubenswrapper[4820]: I0203 12:32:09.943848 4820 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e2cc54f2-167c-4c79-b616-2e1cd122fed2-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 03 12:32:09 crc kubenswrapper[4820]: I0203 12:32:09.943860 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fftdb\" (UniqueName: \"kubernetes.io/projected/e2cc54f2-167c-4c79-b616-2e1cd122fed2-kube-api-access-fftdb\") on node \"crc\" DevicePath \"\"" Feb 03 12:32:09 crc kubenswrapper[4820]: I0203 12:32:09.944816 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-sfhnq"] Feb 03 12:32:10 crc kubenswrapper[4820]: I0203 12:32:10.145465 4820 scope.go:117] "RemoveContainer" 
containerID="3cd22393026f08f47789b9666f306edb2325c4bd0fbfd405ee1636bfdfa661d3" Feb 03 12:32:10 crc kubenswrapper[4820]: E0203 12:32:10.146032 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qj7xr_openshift-machine-config-operator(2c02def6-29f2-448e-80ec-0c8ee290f053)\"" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" Feb 03 12:32:10 crc kubenswrapper[4820]: I0203 12:32:10.534370 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 03 12:32:10 crc kubenswrapper[4820]: I0203 12:32:10.534461 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 03 12:32:10 crc kubenswrapper[4820]: I0203 12:32:10.550174 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 03 12:32:10 crc kubenswrapper[4820]: I0203 12:32:10.553883 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 03 12:32:10 crc kubenswrapper[4820]: I0203 12:32:10.572945 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 03 12:32:10 crc kubenswrapper[4820]: I0203 12:32:10.590227 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 03 12:32:10 crc kubenswrapper[4820]: I0203 12:32:10.604642 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 03 12:32:10 crc kubenswrapper[4820]: I0203 12:32:10.665844 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-9dfbf858-g4qlm" Feb 03 12:32:10 crc kubenswrapper[4820]: I0203 12:32:10.711148 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/48cc94de-c839-4f2b-82a4-afb000afefe4-log-httpd\") pod \"ceilometer-0\" (UID: \"48cc94de-c839-4f2b-82a4-afb000afefe4\") " pod="openstack/ceilometer-0" Feb 03 12:32:10 crc kubenswrapper[4820]: I0203 12:32:10.711263 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jqf5k\" (UniqueName: \"kubernetes.io/projected/48cc94de-c839-4f2b-82a4-afb000afefe4-kube-api-access-jqf5k\") pod \"ceilometer-0\" (UID: \"48cc94de-c839-4f2b-82a4-afb000afefe4\") " pod="openstack/ceilometer-0" Feb 03 12:32:10 crc kubenswrapper[4820]: I0203 12:32:10.711452 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/48cc94de-c839-4f2b-82a4-afb000afefe4-config-data\") pod \"ceilometer-0\" (UID: \"48cc94de-c839-4f2b-82a4-afb000afefe4\") " pod="openstack/ceilometer-0" Feb 03 12:32:10 crc kubenswrapper[4820]: I0203 12:32:10.711664 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/48cc94de-c839-4f2b-82a4-afb000afefe4-scripts\") pod \"ceilometer-0\" (UID: \"48cc94de-c839-4f2b-82a4-afb000afefe4\") " pod="openstack/ceilometer-0" Feb 03 12:32:10 crc kubenswrapper[4820]: I0203 12:32:10.711703 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48cc94de-c839-4f2b-82a4-afb000afefe4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"48cc94de-c839-4f2b-82a4-afb000afefe4\") " pod="openstack/ceilometer-0" Feb 03 12:32:10 crc kubenswrapper[4820]: I0203 12:32:10.711739 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/48cc94de-c839-4f2b-82a4-afb000afefe4-run-httpd\") pod \"ceilometer-0\" (UID: \"48cc94de-c839-4f2b-82a4-afb000afefe4\") " pod="openstack/ceilometer-0" Feb 03 12:32:10 crc kubenswrapper[4820]: I0203 12:32:10.711832 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/48cc94de-c839-4f2b-82a4-afb000afefe4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"48cc94de-c839-4f2b-82a4-afb000afefe4\") " pod="openstack/ceilometer-0" Feb 03 12:32:10 crc kubenswrapper[4820]: I0203 12:32:10.963168 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71ae9703-401c-41f0-8316-9b485f9d0b29-config-data\") pod \"71ae9703-401c-41f0-8316-9b485f9d0b29\" (UID: \"71ae9703-401c-41f0-8316-9b485f9d0b29\") " Feb 03 12:32:10 crc kubenswrapper[4820]: I0203 12:32:10.963230 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/71ae9703-401c-41f0-8316-9b485f9d0b29-logs\") pod \"71ae9703-401c-41f0-8316-9b485f9d0b29\" (UID: \"71ae9703-401c-41f0-8316-9b485f9d0b29\") " Feb 03 12:32:10 crc kubenswrapper[4820]: I0203 12:32:10.963285 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/71ae9703-401c-41f0-8316-9b485f9d0b29-combined-ca-bundle\") pod \"71ae9703-401c-41f0-8316-9b485f9d0b29\" (UID: \"71ae9703-401c-41f0-8316-9b485f9d0b29\") " Feb 03 12:32:10 crc kubenswrapper[4820]: I0203 12:32:10.963337 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pmmx8\" (UniqueName: \"kubernetes.io/projected/71ae9703-401c-41f0-8316-9b485f9d0b29-kube-api-access-pmmx8\") pod \"71ae9703-401c-41f0-8316-9b485f9d0b29\" (UID: \"71ae9703-401c-41f0-8316-9b485f9d0b29\") " Feb 03 12:32:10 crc kubenswrapper[4820]: I0203 12:32:10.963398 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/71ae9703-401c-41f0-8316-9b485f9d0b29-config-data-custom\") pod \"71ae9703-401c-41f0-8316-9b485f9d0b29\" (UID: \"71ae9703-401c-41f0-8316-9b485f9d0b29\") " Feb 03 12:32:10 crc kubenswrapper[4820]: I0203 12:32:10.963721 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/48cc94de-c839-4f2b-82a4-afb000afefe4-scripts\") pod \"ceilometer-0\" (UID: \"48cc94de-c839-4f2b-82a4-afb000afefe4\") " pod="openstack/ceilometer-0" Feb 03 12:32:10 crc kubenswrapper[4820]: I0203 12:32:10.963758 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48cc94de-c839-4f2b-82a4-afb000afefe4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"48cc94de-c839-4f2b-82a4-afb000afefe4\") " pod="openstack/ceilometer-0" Feb 03 12:32:10 crc kubenswrapper[4820]: I0203 12:32:10.963777 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/48cc94de-c839-4f2b-82a4-afb000afefe4-run-httpd\") pod \"ceilometer-0\" (UID: \"48cc94de-c839-4f2b-82a4-afb000afefe4\") " pod="openstack/ceilometer-0" Feb 03 12:32:10 crc kubenswrapper[4820]: I0203 12:32:10.963827 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/48cc94de-c839-4f2b-82a4-afb000afefe4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"48cc94de-c839-4f2b-82a4-afb000afefe4\") " pod="openstack/ceilometer-0" Feb 03 12:32:10 crc kubenswrapper[4820]: I0203 12:32:10.963852 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/48cc94de-c839-4f2b-82a4-afb000afefe4-log-httpd\") pod \"ceilometer-0\" (UID: \"48cc94de-c839-4f2b-82a4-afb000afefe4\") " pod="openstack/ceilometer-0" Feb 03 12:32:10 crc kubenswrapper[4820]: I0203 12:32:10.963885 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jqf5k\" (UniqueName: \"kubernetes.io/projected/48cc94de-c839-4f2b-82a4-afb000afefe4-kube-api-access-jqf5k\") pod \"ceilometer-0\" (UID: \"48cc94de-c839-4f2b-82a4-afb000afefe4\") " pod="openstack/ceilometer-0" Feb 03 12:32:10 crc kubenswrapper[4820]: I0203 12:32:10.963973 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/48cc94de-c839-4f2b-82a4-afb000afefe4-config-data\") pod \"ceilometer-0\" (UID: \"48cc94de-c839-4f2b-82a4-afb000afefe4\") " pod="openstack/ceilometer-0" Feb 03 12:32:10 crc kubenswrapper[4820]: I0203 12:32:10.973306 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/71ae9703-401c-41f0-8316-9b485f9d0b29-logs" (OuterVolumeSpecName: "logs") pod "71ae9703-401c-41f0-8316-9b485f9d0b29" (UID: "71ae9703-401c-41f0-8316-9b485f9d0b29"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 12:32:10 crc kubenswrapper[4820]: I0203 12:32:10.979311 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 03 12:32:10 crc kubenswrapper[4820]: I0203 12:32:10.982087 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/48cc94de-c839-4f2b-82a4-afb000afefe4-run-httpd\") pod \"ceilometer-0\" (UID: \"48cc94de-c839-4f2b-82a4-afb000afefe4\") " pod="openstack/ceilometer-0" Feb 03 12:32:10 crc kubenswrapper[4820]: I0203 12:32:10.982219 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/48cc94de-c839-4f2b-82a4-afb000afefe4-log-httpd\") pod \"ceilometer-0\" (UID: \"48cc94de-c839-4f2b-82a4-afb000afefe4\") " pod="openstack/ceilometer-0" Feb 03 12:32:11 crc kubenswrapper[4820]: I0203 12:32:11.025112 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71ae9703-401c-41f0-8316-9b485f9d0b29-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "71ae9703-401c-41f0-8316-9b485f9d0b29" (UID: "71ae9703-401c-41f0-8316-9b485f9d0b29"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:32:11 crc kubenswrapper[4820]: I0203 12:32:11.033348 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/48cc94de-c839-4f2b-82a4-afb000afefe4-config-data\") pod \"ceilometer-0\" (UID: \"48cc94de-c839-4f2b-82a4-afb000afefe4\") " pod="openstack/ceilometer-0" Feb 03 12:32:11 crc kubenswrapper[4820]: I0203 12:32:11.033865 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48cc94de-c839-4f2b-82a4-afb000afefe4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"48cc94de-c839-4f2b-82a4-afb000afefe4\") " pod="openstack/ceilometer-0" Feb 03 12:32:11 crc kubenswrapper[4820]: I0203 12:32:11.036172 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71ae9703-401c-41f0-8316-9b485f9d0b29-kube-api-access-pmmx8" (OuterVolumeSpecName: "kube-api-access-pmmx8") pod "71ae9703-401c-41f0-8316-9b485f9d0b29" (UID: "71ae9703-401c-41f0-8316-9b485f9d0b29"). InnerVolumeSpecName "kube-api-access-pmmx8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:32:11 crc kubenswrapper[4820]: I0203 12:32:11.058847 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/48cc94de-c839-4f2b-82a4-afb000afefe4-scripts\") pod \"ceilometer-0\" (UID: \"48cc94de-c839-4f2b-82a4-afb000afefe4\") " pod="openstack/ceilometer-0" Feb 03 12:32:11 crc kubenswrapper[4820]: I0203 12:32:11.059720 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/48cc94de-c839-4f2b-82a4-afb000afefe4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"48cc94de-c839-4f2b-82a4-afb000afefe4\") " pod="openstack/ceilometer-0" Feb 03 12:32:11 crc kubenswrapper[4820]: I0203 12:32:11.069144 4820 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/71ae9703-401c-41f0-8316-9b485f9d0b29-logs\") on node \"crc\" DevicePath \"\"" Feb 03 12:32:11 crc kubenswrapper[4820]: I0203 12:32:11.069183 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pmmx8\" (UniqueName: \"kubernetes.io/projected/71ae9703-401c-41f0-8316-9b485f9d0b29-kube-api-access-pmmx8\") on node \"crc\" DevicePath \"\"" Feb 03 12:32:11 crc kubenswrapper[4820]: I0203 12:32:11.069200 4820 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/71ae9703-401c-41f0-8316-9b485f9d0b29-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 03 12:32:11 crc kubenswrapper[4820]: I0203 12:32:11.078683 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jqf5k\" (UniqueName: \"kubernetes.io/projected/48cc94de-c839-4f2b-82a4-afb000afefe4-kube-api-access-jqf5k\") pod \"ceilometer-0\" (UID: \"48cc94de-c839-4f2b-82a4-afb000afefe4\") " pod="openstack/ceilometer-0" Feb 03 12:32:11 crc kubenswrapper[4820]: I0203 12:32:11.098398 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-9dfbf858-g4qlm" event={"ID":"71ae9703-401c-41f0-8316-9b485f9d0b29","Type":"ContainerDied","Data":"4736fc952a9474db9dd862c201b730bd2514f53d8f7aa37a6a783c5752b34696"} Feb 03 12:32:11 crc kubenswrapper[4820]: I0203 12:32:11.098451 4820 scope.go:117] "RemoveContainer" containerID="c860c1b2ef845649015318e1275bf80250d51f008b745cc276b35816b91106c3" Feb 03 12:32:11 crc kubenswrapper[4820]: I0203 12:32:11.098596 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-9dfbf858-g4qlm" Feb 03 12:32:11 crc kubenswrapper[4820]: I0203 12:32:11.107533 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-sfhnq" event={"ID":"0e9efb73-1fc6-4e04-9b3c-89226c1d717c","Type":"ContainerStarted","Data":"fbef18f8d1c741bf0c91268131e3adbc2d701f2a94ffea604f5c426830196486"} Feb 03 12:32:11 crc kubenswrapper[4820]: I0203 12:32:11.108988 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 03 12:32:11 crc kubenswrapper[4820]: I0203 12:32:11.173297 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71ae9703-401c-41f0-8316-9b485f9d0b29-config-data" (OuterVolumeSpecName: "config-data") pod "71ae9703-401c-41f0-8316-9b485f9d0b29" (UID: "71ae9703-401c-41f0-8316-9b485f9d0b29"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:32:11 crc kubenswrapper[4820]: I0203 12:32:11.175365 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71ae9703-401c-41f0-8316-9b485f9d0b29-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 12:32:11 crc kubenswrapper[4820]: I0203 12:32:11.186997 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71ae9703-401c-41f0-8316-9b485f9d0b29-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "71ae9703-401c-41f0-8316-9b485f9d0b29" (UID: "71ae9703-401c-41f0-8316-9b485f9d0b29"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:32:11 crc kubenswrapper[4820]: I0203 12:32:11.208138 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e2cc54f2-167c-4c79-b616-2e1cd122fed2" path="/var/lib/kubelet/pods/e2cc54f2-167c-4c79-b616-2e1cd122fed2/volumes" Feb 03 12:32:11 crc kubenswrapper[4820]: I0203 12:32:11.211195 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-656b464f74-h7xjt" event={"ID":"43ecc5a4-8bd1-435c-8514-de23a493ee45","Type":"ContainerStarted","Data":"6c6a712ba769e13853229f0b9d717df6b3d9623398ab2c5aa8dbf73b618bc184"} Feb 03 12:32:11 crc kubenswrapper[4820]: I0203 12:32:11.211244 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-656b464f74-h7xjt" Feb 03 12:32:11 crc kubenswrapper[4820]: I0203 12:32:11.211258 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-656b464f74-h7xjt" Feb 03 12:32:11 crc kubenswrapper[4820]: I0203 12:32:11.552141 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 03 12:32:11 crc kubenswrapper[4820]: I0203 12:32:11.558369 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-656b464f74-h7xjt" podStartSLOduration=14.558341866 podStartE2EDuration="14.558341866s" podCreationTimestamp="2026-02-03 12:31:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:32:11.229243579 +0000 UTC m=+1648.752319443" watchObservedRunningTime="2026-02-03 12:32:11.558341866 +0000 UTC m=+1649.081417730" Feb 03 12:32:11 crc kubenswrapper[4820]: I0203 12:32:11.559708 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71ae9703-401c-41f0-8316-9b485f9d0b29-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 12:32:11 crc kubenswrapper[4820]: I0203 12:32:11.635435 4820 scope.go:117] "RemoveContainer" containerID="11b065ef0fa2359f41e6b2d388300024c94ef96da0d65df324abd736fd410652" Feb 03 12:32:12 crc kubenswrapper[4820]: I0203 12:32:12.066755 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-fdff74856-dfqrf" podUID="5229e26a-15af-47fd-bb4a-956968711984" containerName="barbican-api-log" probeResult="failure" output="Get \"https://10.217.0.183:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 03 12:32:12 crc kubenswrapper[4820]: I0203 12:32:12.066752 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-fdff74856-dfqrf" podUID="5229e26a-15af-47fd-bb4a-956968711984" containerName="barbican-api-log" probeResult="failure" output="Get \"https://10.217.0.183:9311/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 03 12:32:12 crc kubenswrapper[4820]: I0203 12:32:12.221451 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"1de311c0-cca0-4f9f-8897-6c5239e71368","Type":"ContainerStarted","Data":"990183735abcc9b14666a8a68013dc64974d5d08a4cb2c8fe56c79e8fdba92b3"} Feb 03 12:32:12 crc kubenswrapper[4820]: I0203 12:32:12.232308 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"526ca20e-1fce-4a0e-bde8-f82c887e5d82","Type":"ContainerStarted","Data":"253c30751d406549ca93dd1f2a4fd527367b8f44a26978c4e3a09d390930d639"} Feb 03 12:32:12 crc kubenswrapper[4820]: I0203 12:32:12.729089 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-9dfbf858-g4qlm"] Feb 03 12:32:12 crc kubenswrapper[4820]: I0203 12:32:12.786196 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-9dfbf858-g4qlm"] Feb 03 12:32:12 crc kubenswrapper[4820]: I0203 12:32:12.821939 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Feb 03 12:32:13 crc kubenswrapper[4820]: I0203 12:32:13.219295 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="71ae9703-401c-41f0-8316-9b485f9d0b29" path="/var/lib/kubelet/pods/71ae9703-401c-41f0-8316-9b485f9d0b29/volumes" Feb 03 12:32:13 crc kubenswrapper[4820]: I0203 12:32:13.233534 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 03 12:32:13 crc kubenswrapper[4820]: I0203 12:32:13.321980 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5j76d" 
event={"ID":"c0fcb4b0-e763-4309-8097-facfd4782cfb","Type":"ContainerStarted","Data":"4a53813e41c254410fc09ac01ef8c86d93edb0f17f0e4248ee1cd9f77a8c295a"} Feb 03 12:32:13 crc kubenswrapper[4820]: I0203 12:32:13.819035 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-86c8ddbf74-xsj66" Feb 03 12:32:14 crc kubenswrapper[4820]: I0203 12:32:14.447604 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"526ca20e-1fce-4a0e-bde8-f82c887e5d82","Type":"ContainerStarted","Data":"01f33736537d1d861f9cbc69acabf2f7c348172743098ad018a815dcacf58cfe"} Feb 03 12:32:14 crc kubenswrapper[4820]: I0203 12:32:14.825051 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"48cc94de-c839-4f2b-82a4-afb000afefe4","Type":"ContainerStarted","Data":"2e9a4fca0ec4a698c4fa2b9d597021a13fa3c2102fda0ad8a3295c173a84d7bb"} Feb 03 12:32:14 crc kubenswrapper[4820]: I0203 12:32:14.918910 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-8ff956445-pzzpk"] Feb 03 12:32:14 crc kubenswrapper[4820]: I0203 12:32:14.919311 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-8ff956445-pzzpk" podUID="156cf9db-e6bb-486e-b3b5-e72d4f99e684" containerName="neutron-api" containerID="cri-o://3e5e5ed8899e8013708b2eab378ba7fdc5527de4f0a8305f9da9e2f6237a1f91" gracePeriod=30 Feb 03 12:32:14 crc kubenswrapper[4820]: I0203 12:32:14.919849 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-8ff956445-pzzpk" podUID="156cf9db-e6bb-486e-b3b5-e72d4f99e684" containerName="neutron-httpd" containerID="cri-o://07f53837417fd62c651b388f2bebfa14c05e93cfa77e736364a7637ed8644b12" gracePeriod=30 Feb 03 12:32:14 crc kubenswrapper[4820]: I0203 12:32:14.957467 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-7f9964d55c-h2clw"] Feb 03 12:32:14 crc kubenswrapper[4820]: E0203 12:32:14.958658 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71ae9703-401c-41f0-8316-9b485f9d0b29" containerName="barbican-api-log" Feb 03 12:32:14 crc kubenswrapper[4820]: I0203 12:32:14.958768 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="71ae9703-401c-41f0-8316-9b485f9d0b29" containerName="barbican-api-log" Feb 03 12:32:14 crc kubenswrapper[4820]: E0203 12:32:14.958839 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71ae9703-401c-41f0-8316-9b485f9d0b29" containerName="barbican-api" Feb 03 12:32:14 crc kubenswrapper[4820]: I0203 12:32:14.958928 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="71ae9703-401c-41f0-8316-9b485f9d0b29" containerName="barbican-api" Feb 03 12:32:14 crc kubenswrapper[4820]: I0203 12:32:14.961433 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="71ae9703-401c-41f0-8316-9b485f9d0b29" containerName="barbican-api" Feb 03 12:32:14 crc kubenswrapper[4820]: I0203 12:32:14.970344 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="71ae9703-401c-41f0-8316-9b485f9d0b29" containerName="barbican-api-log" Feb 03 12:32:14 crc kubenswrapper[4820]: I0203 12:32:14.972951 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-7f9964d55c-h2clw" Feb 03 12:32:15 crc kubenswrapper[4820]: I0203 12:32:15.081725 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7f9964d55c-h2clw"] Feb 03 12:32:15 crc kubenswrapper[4820]: I0203 12:32:15.140272 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rjgqv\" (UniqueName: \"kubernetes.io/projected/aef62020-c58e-4de0-b1b3-10fdd2b8dc8d-kube-api-access-rjgqv\") pod \"neutron-7f9964d55c-h2clw\" (UID: \"aef62020-c58e-4de0-b1b3-10fdd2b8dc8d\") " pod="openstack/neutron-7f9964d55c-h2clw" Feb 03 12:32:15 crc kubenswrapper[4820]: I0203 12:32:15.140383 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/aef62020-c58e-4de0-b1b3-10fdd2b8dc8d-config\") pod \"neutron-7f9964d55c-h2clw\" (UID: \"aef62020-c58e-4de0-b1b3-10fdd2b8dc8d\") " pod="openstack/neutron-7f9964d55c-h2clw" Feb 03 12:32:15 crc kubenswrapper[4820]: I0203 12:32:15.140461 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/aef62020-c58e-4de0-b1b3-10fdd2b8dc8d-internal-tls-certs\") pod \"neutron-7f9964d55c-h2clw\" (UID: \"aef62020-c58e-4de0-b1b3-10fdd2b8dc8d\") " pod="openstack/neutron-7f9964d55c-h2clw" Feb 03 12:32:15 crc kubenswrapper[4820]: I0203 12:32:15.140520 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/aef62020-c58e-4de0-b1b3-10fdd2b8dc8d-ovndb-tls-certs\") pod \"neutron-7f9964d55c-h2clw\" (UID: \"aef62020-c58e-4de0-b1b3-10fdd2b8dc8d\") " pod="openstack/neutron-7f9964d55c-h2clw" Feb 03 12:32:15 crc kubenswrapper[4820]: I0203 12:32:15.140572 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/aef62020-c58e-4de0-b1b3-10fdd2b8dc8d-httpd-config\") pod \"neutron-7f9964d55c-h2clw\" (UID: \"aef62020-c58e-4de0-b1b3-10fdd2b8dc8d\") " pod="openstack/neutron-7f9964d55c-h2clw" Feb 03 12:32:15 crc kubenswrapper[4820]: I0203 12:32:15.140791 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/aef62020-c58e-4de0-b1b3-10fdd2b8dc8d-public-tls-certs\") pod \"neutron-7f9964d55c-h2clw\" (UID: \"aef62020-c58e-4de0-b1b3-10fdd2b8dc8d\") " pod="openstack/neutron-7f9964d55c-h2clw" Feb 03 12:32:15 crc kubenswrapper[4820]: I0203 12:32:15.140928 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aef62020-c58e-4de0-b1b3-10fdd2b8dc8d-combined-ca-bundle\") pod \"neutron-7f9964d55c-h2clw\" (UID: \"aef62020-c58e-4de0-b1b3-10fdd2b8dc8d\") " pod="openstack/neutron-7f9964d55c-h2clw" Feb 03 12:32:15 crc kubenswrapper[4820]: I0203 12:32:15.184145 4820 generic.go:334] "Generic (PLEG): container finished" podID="0e9efb73-1fc6-4e04-9b3c-89226c1d717c" containerID="06416610bcd6f6e133e06456fbc64e9840bb2b5e012fe6593123ad78d0bef8ba" exitCode=0 Feb 03 12:32:15 crc kubenswrapper[4820]: I0203 12:32:15.440210 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/aef62020-c58e-4de0-b1b3-10fdd2b8dc8d-public-tls-certs\") pod 
\"neutron-7f9964d55c-h2clw\" (UID: \"aef62020-c58e-4de0-b1b3-10fdd2b8dc8d\") " pod="openstack/neutron-7f9964d55c-h2clw" Feb 03 12:32:15 crc kubenswrapper[4820]: I0203 12:32:15.440349 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aef62020-c58e-4de0-b1b3-10fdd2b8dc8d-combined-ca-bundle\") pod \"neutron-7f9964d55c-h2clw\" (UID: \"aef62020-c58e-4de0-b1b3-10fdd2b8dc8d\") " pod="openstack/neutron-7f9964d55c-h2clw" Feb 03 12:32:15 crc kubenswrapper[4820]: I0203 12:32:15.440482 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rjgqv\" (UniqueName: \"kubernetes.io/projected/aef62020-c58e-4de0-b1b3-10fdd2b8dc8d-kube-api-access-rjgqv\") pod \"neutron-7f9964d55c-h2clw\" (UID: \"aef62020-c58e-4de0-b1b3-10fdd2b8dc8d\") " pod="openstack/neutron-7f9964d55c-h2clw" Feb 03 12:32:15 crc kubenswrapper[4820]: I0203 12:32:15.440532 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/aef62020-c58e-4de0-b1b3-10fdd2b8dc8d-config\") pod \"neutron-7f9964d55c-h2clw\" (UID: \"aef62020-c58e-4de0-b1b3-10fdd2b8dc8d\") " pod="openstack/neutron-7f9964d55c-h2clw" Feb 03 12:32:15 crc kubenswrapper[4820]: I0203 12:32:15.440592 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/aef62020-c58e-4de0-b1b3-10fdd2b8dc8d-internal-tls-certs\") pod \"neutron-7f9964d55c-h2clw\" (UID: \"aef62020-c58e-4de0-b1b3-10fdd2b8dc8d\") " pod="openstack/neutron-7f9964d55c-h2clw" Feb 03 12:32:15 crc kubenswrapper[4820]: I0203 12:32:15.440626 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/aef62020-c58e-4de0-b1b3-10fdd2b8dc8d-ovndb-tls-certs\") pod \"neutron-7f9964d55c-h2clw\" (UID: \"aef62020-c58e-4de0-b1b3-10fdd2b8dc8d\") " pod="openstack/neutron-7f9964d55c-h2clw" Feb 03 12:32:15 crc kubenswrapper[4820]: I0203 12:32:15.440671 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/aef62020-c58e-4de0-b1b3-10fdd2b8dc8d-httpd-config\") pod \"neutron-7f9964d55c-h2clw\" (UID: \"aef62020-c58e-4de0-b1b3-10fdd2b8dc8d\") " pod="openstack/neutron-7f9964d55c-h2clw" Feb 03 12:32:15 crc kubenswrapper[4820]: I0203 12:32:15.448110 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aef62020-c58e-4de0-b1b3-10fdd2b8dc8d-combined-ca-bundle\") pod \"neutron-7f9964d55c-h2clw\" (UID: \"aef62020-c58e-4de0-b1b3-10fdd2b8dc8d\") " pod="openstack/neutron-7f9964d55c-h2clw" Feb 03 12:32:15 crc kubenswrapper[4820]: I0203 12:32:15.456801 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/aef62020-c58e-4de0-b1b3-10fdd2b8dc8d-internal-tls-certs\") pod \"neutron-7f9964d55c-h2clw\" (UID: \"aef62020-c58e-4de0-b1b3-10fdd2b8dc8d\") " pod="openstack/neutron-7f9964d55c-h2clw" Feb 03 12:32:15 crc kubenswrapper[4820]: I0203 12:32:15.466908 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/aef62020-c58e-4de0-b1b3-10fdd2b8dc8d-config\") pod \"neutron-7f9964d55c-h2clw\" (UID: \"aef62020-c58e-4de0-b1b3-10fdd2b8dc8d\") " pod="openstack/neutron-7f9964d55c-h2clw" Feb 03 12:32:15 crc kubenswrapper[4820]: 
I0203 12:32:15.481974 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/aef62020-c58e-4de0-b1b3-10fdd2b8dc8d-httpd-config\") pod \"neutron-7f9964d55c-h2clw\" (UID: \"aef62020-c58e-4de0-b1b3-10fdd2b8dc8d\") " pod="openstack/neutron-7f9964d55c-h2clw" Feb 03 12:32:15 crc kubenswrapper[4820]: I0203 12:32:15.489173 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-8ff956445-pzzpk" podUID="156cf9db-e6bb-486e-b3b5-e72d4f99e684" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.173:9696/\": read tcp 10.217.0.2:54766->10.217.0.173:9696: read: connection reset by peer" Feb 03 12:32:15 crc kubenswrapper[4820]: I0203 12:32:15.489616 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/aef62020-c58e-4de0-b1b3-10fdd2b8dc8d-public-tls-certs\") pod \"neutron-7f9964d55c-h2clw\" (UID: \"aef62020-c58e-4de0-b1b3-10fdd2b8dc8d\") " pod="openstack/neutron-7f9964d55c-h2clw" Feb 03 12:32:15 crc kubenswrapper[4820]: I0203 12:32:15.492282 4820 generic.go:334] "Generic (PLEG): container finished" podID="c0fcb4b0-e763-4309-8097-facfd4782cfb" containerID="4a53813e41c254410fc09ac01ef8c86d93edb0f17f0e4248ee1cd9f77a8c295a" exitCode=0 Feb 03 12:32:15 crc kubenswrapper[4820]: I0203 12:32:15.493004 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/aef62020-c58e-4de0-b1b3-10fdd2b8dc8d-ovndb-tls-certs\") pod \"neutron-7f9964d55c-h2clw\" (UID: \"aef62020-c58e-4de0-b1b3-10fdd2b8dc8d\") " pod="openstack/neutron-7f9964d55c-h2clw" Feb 03 12:32:15 crc kubenswrapper[4820]: I0203 12:32:15.523922 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-sfhnq" event={"ID":"0e9efb73-1fc6-4e04-9b3c-89226c1d717c","Type":"ContainerDied","Data":"06416610bcd6f6e133e06456fbc64e9840bb2b5e012fe6593123ad78d0bef8ba"} Feb 03 12:32:15 crc kubenswrapper[4820]: I0203 12:32:15.523995 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5j76d" event={"ID":"c0fcb4b0-e763-4309-8097-facfd4782cfb","Type":"ContainerDied","Data":"4a53813e41c254410fc09ac01ef8c86d93edb0f17f0e4248ee1cd9f77a8c295a"} Feb 03 12:32:15 crc kubenswrapper[4820]: I0203 12:32:15.576097 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rjgqv\" (UniqueName: \"kubernetes.io/projected/aef62020-c58e-4de0-b1b3-10fdd2b8dc8d-kube-api-access-rjgqv\") pod \"neutron-7f9964d55c-h2clw\" (UID: \"aef62020-c58e-4de0-b1b3-10fdd2b8dc8d\") " pod="openstack/neutron-7f9964d55c-h2clw" Feb 03 12:32:15 crc kubenswrapper[4820]: I0203 12:32:15.660723 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-7f9964d55c-h2clw" Feb 03 12:32:17 crc kubenswrapper[4820]: I0203 12:32:17.674980 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7f9964d55c-h2clw"] Feb 03 12:32:17 crc kubenswrapper[4820]: I0203 12:32:17.840573 4820 generic.go:334] "Generic (PLEG): container finished" podID="156cf9db-e6bb-486e-b3b5-e72d4f99e684" containerID="07f53837417fd62c651b388f2bebfa14c05e93cfa77e736364a7637ed8644b12" exitCode=0 Feb 03 12:32:17 crc kubenswrapper[4820]: I0203 12:32:17.840799 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8ff956445-pzzpk" event={"ID":"156cf9db-e6bb-486e-b3b5-e72d4f99e684","Type":"ContainerDied","Data":"07f53837417fd62c651b388f2bebfa14c05e93cfa77e736364a7637ed8644b12"} Feb 03 12:32:17 crc kubenswrapper[4820]: I0203 12:32:17.859781 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"48cc94de-c839-4f2b-82a4-afb000afefe4","Type":"ContainerStarted","Data":"2652091cd818301487f5bc148fa2be23d295b0b2d44c8c452ce75479cea9bc01"} Feb 03 12:32:17 crc kubenswrapper[4820]: I0203 12:32:17.903392 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-sfhnq" event={"ID":"0e9efb73-1fc6-4e04-9b3c-89226c1d717c","Type":"ContainerStarted","Data":"c3c54b645028b903c154b8fd418e95de43ab9aa46fb7314f0f3decedd34600c4"} Feb 03 12:32:17 crc kubenswrapper[4820]: I0203 12:32:17.905833 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5c9776ccc5-sfhnq" Feb 03 12:32:17 crc kubenswrapper[4820]: I0203 12:32:17.966074 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5j76d" event={"ID":"c0fcb4b0-e763-4309-8097-facfd4782cfb","Type":"ContainerStarted","Data":"618cfceb67ae402aadfac828e372715348e71b2bceffa2d52373046aba9a6cca"} Feb 03 12:32:17 crc kubenswrapper[4820]: I0203 12:32:17.985208 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"1de311c0-cca0-4f9f-8897-6c5239e71368","Type":"ContainerStarted","Data":"e7548deb997d2bb6fac8ebc5eb6652761de24b5ace765b2bcfca0c2354fea601"} Feb 03 12:32:18 crc kubenswrapper[4820]: I0203 12:32:18.003980 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-5j76d" podStartSLOduration=13.027734317 podStartE2EDuration="20.003956074s" podCreationTimestamp="2026-02-03 12:31:58 +0000 UTC" firstStartedPulling="2026-02-03 12:32:09.911781852 +0000 UTC m=+1647.434857716" lastFinishedPulling="2026-02-03 12:32:16.888003609 +0000 UTC m=+1654.411079473" observedRunningTime="2026-02-03 12:32:17.99528778 +0000 UTC m=+1655.518363654" watchObservedRunningTime="2026-02-03 12:32:18.003956074 +0000 UTC m=+1655.527031938" Feb 03 12:32:18 crc kubenswrapper[4820]: I0203 12:32:18.018772 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7f9964d55c-h2clw" event={"ID":"aef62020-c58e-4de0-b1b3-10fdd2b8dc8d","Type":"ContainerStarted","Data":"0b18afa4126cc56b7043d7e254b6dd0dd16a14c167bd2ba055ac93c983da66d0"} Feb 03 12:32:18 crc kubenswrapper[4820]: I0203 12:32:18.030385 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5c9776ccc5-sfhnq" podStartSLOduration=11.030352847 podStartE2EDuration="11.030352847s" podCreationTimestamp="2026-02-03 12:32:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 
+0000 UTC" observedRunningTime="2026-02-03 12:32:17.950936032 +0000 UTC m=+1655.474011916" watchObservedRunningTime="2026-02-03 12:32:18.030352847 +0000 UTC m=+1655.553428721" Feb 03 12:32:18 crc kubenswrapper[4820]: I0203 12:32:18.513095 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-qjmdv" podUID="ad9b0bbe-7f17-4347-bda3-5f0a843b3997" containerName="registry-server" probeResult="failure" output=< Feb 03 12:32:18 crc kubenswrapper[4820]: timeout: failed to connect service ":50051" within 1s Feb 03 12:32:18 crc kubenswrapper[4820]: > Feb 03 12:32:18 crc kubenswrapper[4820]: I0203 12:32:18.731548 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-8ff956445-pzzpk" podUID="156cf9db-e6bb-486e-b3b5-e72d4f99e684" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.173:9696/\": dial tcp 10.217.0.173:9696: connect: connection refused" Feb 03 12:32:18 crc kubenswrapper[4820]: I0203 12:32:18.779643 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-5j76d" Feb 03 12:32:18 crc kubenswrapper[4820]: I0203 12:32:18.779740 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-5j76d" Feb 03 12:32:19 crc kubenswrapper[4820]: I0203 12:32:19.338514 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"526ca20e-1fce-4a0e-bde8-f82c887e5d82","Type":"ContainerStarted","Data":"c00ad844e32c489174381c6526e5130ed9386fb05d2b64492830635833bef5b3"} Feb 03 12:32:19 crc kubenswrapper[4820]: I0203 12:32:19.339577 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="526ca20e-1fce-4a0e-bde8-f82c887e5d82" containerName="cinder-api-log" containerID="cri-o://01f33736537d1d861f9cbc69acabf2f7c348172743098ad018a815dcacf58cfe" gracePeriod=30 Feb 03 12:32:19 crc kubenswrapper[4820]: I0203 12:32:19.340228 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Feb 03 12:32:19 crc kubenswrapper[4820]: I0203 12:32:19.340969 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="526ca20e-1fce-4a0e-bde8-f82c887e5d82" containerName="cinder-api" containerID="cri-o://c00ad844e32c489174381c6526e5130ed9386fb05d2b64492830635833bef5b3" gracePeriod=30 Feb 03 12:32:19 crc kubenswrapper[4820]: I0203 12:32:19.433204 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7f9964d55c-h2clw" event={"ID":"aef62020-c58e-4de0-b1b3-10fdd2b8dc8d","Type":"ContainerStarted","Data":"19b6ae136ade70080c39949deea4f22ee63eeac2fa56774e5d2eb6d5995b9839"} Feb 03 12:32:19 crc kubenswrapper[4820]: I0203 12:32:19.433273 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7f9964d55c-h2clw" event={"ID":"aef62020-c58e-4de0-b1b3-10fdd2b8dc8d","Type":"ContainerStarted","Data":"f53136dfca665c4fb69eb585cfa92165c67317716a7886536a5d0d09fb5706c8"} Feb 03 12:32:19 crc kubenswrapper[4820]: I0203 12:32:19.434033 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-7f9964d55c-h2clw" Feb 03 12:32:19 crc kubenswrapper[4820]: I0203 12:32:19.465584 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"48cc94de-c839-4f2b-82a4-afb000afefe4","Type":"ContainerStarted","Data":"32a93833397a8ecfb3ecbac5d36f50e8f56e5141fe2e629779d43d46e9671f78"} Feb 03 12:32:19 crc kubenswrapper[4820]: I0203 12:32:19.494187 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"1de311c0-cca0-4f9f-8897-6c5239e71368","Type":"ContainerStarted","Data":"4eb97649a133b19e72a5d63d7c682b7aa79ce3621229f868a2c7f02ac7512201"} Feb 03 12:32:20 crc kubenswrapper[4820]: I0203 12:32:19.946267 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-5j76d" podUID="c0fcb4b0-e763-4309-8097-facfd4782cfb" containerName="registry-server" probeResult="failure" output=< Feb 03 12:32:20 crc kubenswrapper[4820]: timeout: failed to connect service ":50051" within 1s Feb 03 12:32:20 crc kubenswrapper[4820]: > Feb 03 12:32:20 crc kubenswrapper[4820]: I0203 12:32:19.947651 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-7f9964d55c-h2clw" podStartSLOduration=5.947640191 podStartE2EDuration="5.947640191s" podCreationTimestamp="2026-02-03 12:32:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:32:19.941739512 +0000 UTC m=+1657.464815366" watchObservedRunningTime="2026-02-03 12:32:19.947640191 +0000 UTC m=+1657.470716055" Feb 03 12:32:20 crc kubenswrapper[4820]: I0203 12:32:19.957247 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=11.957213441 podStartE2EDuration="11.957213441s" podCreationTimestamp="2026-02-03 12:32:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:32:19.432462709 +0000 UTC m=+1656.955538573" watchObservedRunningTime="2026-02-03 12:32:19.957213441 +0000 UTC m=+1657.480289335" Feb 03 12:32:20 crc kubenswrapper[4820]: I0203 12:32:20.122877 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=10.877670632 podStartE2EDuration="13.122844822s" podCreationTimestamp="2026-02-03 12:32:07 +0000 UTC" firstStartedPulling="2026-02-03 12:32:11.09707454 +0000 UTC m=+1648.620150404" lastFinishedPulling="2026-02-03 12:32:13.342248729 +0000 UTC m=+1650.865324594" observedRunningTime="2026-02-03 12:32:20.036993329 +0000 UTC m=+1657.560069193" watchObservedRunningTime="2026-02-03 12:32:20.122844822 +0000 UTC m=+1657.645920686" Feb 03 12:32:20 crc kubenswrapper[4820]: I0203 12:32:20.601998 4820 generic.go:334] "Generic (PLEG): container finished" podID="526ca20e-1fce-4a0e-bde8-f82c887e5d82" containerID="01f33736537d1d861f9cbc69acabf2f7c348172743098ad018a815dcacf58cfe" exitCode=143 Feb 03 12:32:20 crc kubenswrapper[4820]: I0203 12:32:20.602798 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"526ca20e-1fce-4a0e-bde8-f82c887e5d82","Type":"ContainerDied","Data":"01f33736537d1d861f9cbc69acabf2f7c348172743098ad018a815dcacf58cfe"} Feb 03 12:32:21 crc kubenswrapper[4820]: I0203 12:32:21.788659 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"48cc94de-c839-4f2b-82a4-afb000afefe4","Type":"ContainerStarted","Data":"029a5e8d401927e1b2753d9697409b68811d0dbc9da28b220102c8b50000c9a6"} Feb 03 12:32:22 crc kubenswrapper[4820]: I0203 12:32:22.989731 4820 generic.go:334] 
"Generic (PLEG): container finished" podID="526ca20e-1fce-4a0e-bde8-f82c887e5d82" containerID="c00ad844e32c489174381c6526e5130ed9386fb05d2b64492830635833bef5b3" exitCode=0 Feb 03 12:32:22 crc kubenswrapper[4820]: I0203 12:32:22.990073 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"526ca20e-1fce-4a0e-bde8-f82c887e5d82","Type":"ContainerDied","Data":"c00ad844e32c489174381c6526e5130ed9386fb05d2b64492830635833bef5b3"} Feb 03 12:32:23 crc kubenswrapper[4820]: I0203 12:32:23.025712 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 03 12:32:23 crc kubenswrapper[4820]: I0203 12:32:23.080461 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="1de311c0-cca0-4f9f-8897-6c5239e71368" containerName="cinder-scheduler" probeResult="failure" output="Get \"http://10.217.0.186:8080/\": dial tcp 10.217.0.186:8080: connect: connection refused" Feb 03 12:32:23 crc kubenswrapper[4820]: I0203 12:32:23.252174 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5c9776ccc5-sfhnq" Feb 03 12:32:24 crc kubenswrapper[4820]: I0203 12:32:24.261339 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-dxczn"] Feb 03 12:32:24 crc kubenswrapper[4820]: I0203 12:32:24.262257 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-85ff748b95-dxczn" podUID="66a7d0be-3243-4744-898c-b87b5f91c620" containerName="dnsmasq-dns" containerID="cri-o://fc188969c67459fbee4a335908c60772eb8f45e5828125a3adefab81e7450762" gracePeriod=10 Feb 03 12:32:24 crc kubenswrapper[4820]: I0203 12:32:24.296698 4820 scope.go:117] "RemoveContainer" containerID="3cd22393026f08f47789b9666f306edb2325c4bd0fbfd405ee1636bfdfa661d3" Feb 03 12:32:24 crc kubenswrapper[4820]: E0203 12:32:24.300593 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qj7xr_openshift-machine-config-operator(2c02def6-29f2-448e-80ec-0c8ee290f053)\"" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" Feb 03 12:32:24 crc kubenswrapper[4820]: I0203 12:32:24.373381 4820 generic.go:334] "Generic (PLEG): container finished" podID="17c371f7-f032-4444-8d4b-1183a224c7b0" containerID="c22975ba3ab084f0050a4631f4a0020a8a20e782596e29e66b6e36290ae66cee" exitCode=137 Feb 03 12:32:24 crc kubenswrapper[4820]: I0203 12:32:24.373446 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5fdc8588b4-jtjr8" event={"ID":"17c371f7-f032-4444-8d4b-1183a224c7b0","Type":"ContainerDied","Data":"c22975ba3ab084f0050a4631f4a0020a8a20e782596e29e66b6e36290ae66cee"} Feb 03 12:32:24 crc kubenswrapper[4820]: I0203 12:32:24.392210 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"526ca20e-1fce-4a0e-bde8-f82c887e5d82","Type":"ContainerDied","Data":"253c30751d406549ca93dd1f2a4fd527367b8f44a26978c4e3a09d390930d639"} Feb 03 12:32:24 crc kubenswrapper[4820]: I0203 12:32:24.392270 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="253c30751d406549ca93dd1f2a4fd527367b8f44a26978c4e3a09d390930d639" Feb 03 12:32:24 crc kubenswrapper[4820]: I0203 12:32:24.561371 4820 util.go:48] "No ready sandbox for 
pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 03 12:32:25 crc kubenswrapper[4820]: E0203 12:32:24.629833 4820 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod308562dd_6078_4c1c_a4e0_c01a60a2d81d.slice/crio-conmon-a17b9aafc2fe0ed01eea3ac2324d99cc7383038c9d824a7384a0dcac0217d20f.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod308562dd_6078_4c1c_a4e0_c01a60a2d81d.slice/crio-a17b9aafc2fe0ed01eea3ac2324d99cc7383038c9d824a7384a0dcac0217d20f.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod17c371f7_f032_4444_8d4b_1183a224c7b0.slice/crio-conmon-c22975ba3ab084f0050a4631f4a0020a8a20e782596e29e66b6e36290ae66cee.scope\": RecentStats: unable to find data in memory cache]" Feb 03 12:32:25 crc kubenswrapper[4820]: I0203 12:32:25.150308 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/526ca20e-1fce-4a0e-bde8-f82c887e5d82-etc-machine-id\") pod \"526ca20e-1fce-4a0e-bde8-f82c887e5d82\" (UID: \"526ca20e-1fce-4a0e-bde8-f82c887e5d82\") " Feb 03 12:32:25 crc kubenswrapper[4820]: I0203 12:32:25.150806 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/526ca20e-1fce-4a0e-bde8-f82c887e5d82-combined-ca-bundle\") pod \"526ca20e-1fce-4a0e-bde8-f82c887e5d82\" (UID: \"526ca20e-1fce-4a0e-bde8-f82c887e5d82\") " Feb 03 12:32:25 crc kubenswrapper[4820]: I0203 12:32:25.150841 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/526ca20e-1fce-4a0e-bde8-f82c887e5d82-logs\") pod \"526ca20e-1fce-4a0e-bde8-f82c887e5d82\" (UID: \"526ca20e-1fce-4a0e-bde8-f82c887e5d82\") " Feb 03 12:32:25 crc kubenswrapper[4820]: I0203 12:32:25.150941 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/526ca20e-1fce-4a0e-bde8-f82c887e5d82-config-data-custom\") pod \"526ca20e-1fce-4a0e-bde8-f82c887e5d82\" (UID: \"526ca20e-1fce-4a0e-bde8-f82c887e5d82\") " Feb 03 12:32:25 crc kubenswrapper[4820]: I0203 12:32:25.151008 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/526ca20e-1fce-4a0e-bde8-f82c887e5d82-config-data\") pod \"526ca20e-1fce-4a0e-bde8-f82c887e5d82\" (UID: \"526ca20e-1fce-4a0e-bde8-f82c887e5d82\") " Feb 03 12:32:25 crc kubenswrapper[4820]: I0203 12:32:25.151034 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jbsb9\" (UniqueName: \"kubernetes.io/projected/526ca20e-1fce-4a0e-bde8-f82c887e5d82-kube-api-access-jbsb9\") pod \"526ca20e-1fce-4a0e-bde8-f82c887e5d82\" (UID: \"526ca20e-1fce-4a0e-bde8-f82c887e5d82\") " Feb 03 12:32:25 crc kubenswrapper[4820]: I0203 12:32:25.151132 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/526ca20e-1fce-4a0e-bde8-f82c887e5d82-scripts\") pod \"526ca20e-1fce-4a0e-bde8-f82c887e5d82\" (UID: \"526ca20e-1fce-4a0e-bde8-f82c887e5d82\") " Feb 03 12:32:25 crc kubenswrapper[4820]: I0203 12:32:25.179285 4820 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/526ca20e-1fce-4a0e-bde8-f82c887e5d82-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "526ca20e-1fce-4a0e-bde8-f82c887e5d82" (UID: "526ca20e-1fce-4a0e-bde8-f82c887e5d82"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 03 12:32:25 crc kubenswrapper[4820]: I0203 12:32:25.181590 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/526ca20e-1fce-4a0e-bde8-f82c887e5d82-logs" (OuterVolumeSpecName: "logs") pod "526ca20e-1fce-4a0e-bde8-f82c887e5d82" (UID: "526ca20e-1fce-4a0e-bde8-f82c887e5d82"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 12:32:25 crc kubenswrapper[4820]: I0203 12:32:25.205343 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/526ca20e-1fce-4a0e-bde8-f82c887e5d82-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "526ca20e-1fce-4a0e-bde8-f82c887e5d82" (UID: "526ca20e-1fce-4a0e-bde8-f82c887e5d82"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:32:25 crc kubenswrapper[4820]: I0203 12:32:25.252133 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/526ca20e-1fce-4a0e-bde8-f82c887e5d82-kube-api-access-jbsb9" (OuterVolumeSpecName: "kube-api-access-jbsb9") pod "526ca20e-1fce-4a0e-bde8-f82c887e5d82" (UID: "526ca20e-1fce-4a0e-bde8-f82c887e5d82"). InnerVolumeSpecName "kube-api-access-jbsb9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:32:25 crc kubenswrapper[4820]: I0203 12:32:25.395125 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/526ca20e-1fce-4a0e-bde8-f82c887e5d82-scripts" (OuterVolumeSpecName: "scripts") pod "526ca20e-1fce-4a0e-bde8-f82c887e5d82" (UID: "526ca20e-1fce-4a0e-bde8-f82c887e5d82"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:32:25 crc kubenswrapper[4820]: I0203 12:32:25.885671 4820 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/526ca20e-1fce-4a0e-bde8-f82c887e5d82-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 12:32:25 crc kubenswrapper[4820]: I0203 12:32:25.885732 4820 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/526ca20e-1fce-4a0e-bde8-f82c887e5d82-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 03 12:32:25 crc kubenswrapper[4820]: I0203 12:32:25.885749 4820 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/526ca20e-1fce-4a0e-bde8-f82c887e5d82-logs\") on node \"crc\" DevicePath \"\"" Feb 03 12:32:25 crc kubenswrapper[4820]: I0203 12:32:25.885762 4820 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/526ca20e-1fce-4a0e-bde8-f82c887e5d82-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 03 12:32:25 crc kubenswrapper[4820]: I0203 12:32:25.885780 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jbsb9\" (UniqueName: \"kubernetes.io/projected/526ca20e-1fce-4a0e-bde8-f82c887e5d82-kube-api-access-jbsb9\") on node \"crc\" DevicePath \"\"" Feb 03 12:32:25 crc kubenswrapper[4820]: I0203 12:32:25.952050 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/526ca20e-1fce-4a0e-bde8-f82c887e5d82-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "526ca20e-1fce-4a0e-bde8-f82c887e5d82" (UID: "526ca20e-1fce-4a0e-bde8-f82c887e5d82"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:32:25 crc kubenswrapper[4820]: I0203 12:32:25.989976 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/526ca20e-1fce-4a0e-bde8-f82c887e5d82-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 12:32:26 crc kubenswrapper[4820]: I0203 12:32:26.072138 4820 generic.go:334] "Generic (PLEG): container finished" podID="308562dd-6078-4c1c-a4e0-c01a60a2d81d" containerID="a17b9aafc2fe0ed01eea3ac2324d99cc7383038c9d824a7384a0dcac0217d20f" exitCode=137 Feb 03 12:32:26 crc kubenswrapper[4820]: I0203 12:32:26.072521 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-68b4df5bdd-tdb9h" event={"ID":"308562dd-6078-4c1c-a4e0-c01a60a2d81d","Type":"ContainerDied","Data":"a17b9aafc2fe0ed01eea3ac2324d99cc7383038c9d824a7384a0dcac0217d20f"} Feb 03 12:32:26 crc kubenswrapper[4820]: I0203 12:32:26.103635 4820 generic.go:334] "Generic (PLEG): container finished" podID="66a7d0be-3243-4744-898c-b87b5f91c620" containerID="fc188969c67459fbee4a335908c60772eb8f45e5828125a3adefab81e7450762" exitCode=0 Feb 03 12:32:26 crc kubenswrapper[4820]: I0203 12:32:26.103746 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 03 12:32:26 crc kubenswrapper[4820]: I0203 12:32:26.119057 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-dxczn" event={"ID":"66a7d0be-3243-4744-898c-b87b5f91c620","Type":"ContainerDied","Data":"fc188969c67459fbee4a335908c60772eb8f45e5828125a3adefab81e7450762"} Feb 03 12:32:26 crc kubenswrapper[4820]: I0203 12:32:26.169108 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/526ca20e-1fce-4a0e-bde8-f82c887e5d82-config-data" (OuterVolumeSpecName: "config-data") pod "526ca20e-1fce-4a0e-bde8-f82c887e5d82" (UID: "526ca20e-1fce-4a0e-bde8-f82c887e5d82"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:32:26 crc kubenswrapper[4820]: I0203 12:32:26.237370 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/526ca20e-1fce-4a0e-bde8-f82c887e5d82-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 12:32:26 crc kubenswrapper[4820]: I0203 12:32:26.979027 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-dxczn" Feb 03 12:32:27 crc kubenswrapper[4820]: I0203 12:32:27.800841 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-dxczn" Feb 03 12:32:28 crc kubenswrapper[4820]: I0203 12:32:28.987332 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="526ca20e-1fce-4a0e-bde8-f82c887e5d82" containerName="cinder-api" probeResult="failure" output="Get \"http://10.217.0.188:8776/healthcheck\": dial tcp 10.217.0.188:8776: i/o timeout (Client.Timeout exceeded while awaiting headers)" Feb 03 12:32:28 crc kubenswrapper[4820]: I0203 12:32:28.995750 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-cell1-galera-0" podUID="1e865214-494f-4a49-a2e6-2b7316f30a92" containerName="galera" probeResult="failure" output="command timed out" Feb 03 12:32:29 crc kubenswrapper[4820]: I0203 12:32:29.002262 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/66a7d0be-3243-4744-898c-b87b5f91c620-ovsdbserver-nb\") pod \"66a7d0be-3243-4744-898c-b87b5f91c620\" (UID: \"66a7d0be-3243-4744-898c-b87b5f91c620\") " Feb 03 12:32:29 crc kubenswrapper[4820]: I0203 12:32:29.012357 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/66a7d0be-3243-4744-898c-b87b5f91c620-dns-swift-storage-0\") pod \"66a7d0be-3243-4744-898c-b87b5f91c620\" (UID: \"66a7d0be-3243-4744-898c-b87b5f91c620\") " Feb 03 12:32:29 crc kubenswrapper[4820]: I0203 12:32:29.012470 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/66a7d0be-3243-4744-898c-b87b5f91c620-config\") pod \"66a7d0be-3243-4744-898c-b87b5f91c620\" (UID: \"66a7d0be-3243-4744-898c-b87b5f91c620\") " Feb 03 12:32:29 crc kubenswrapper[4820]: I0203 12:32:29.012599 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/66a7d0be-3243-4744-898c-b87b5f91c620-dns-svc\") pod \"66a7d0be-3243-4744-898c-b87b5f91c620\" (UID: \"66a7d0be-3243-4744-898c-b87b5f91c620\") " Feb 03 12:32:29 crc kubenswrapper[4820]: I0203 
12:32:29.012625 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/66a7d0be-3243-4744-898c-b87b5f91c620-ovsdbserver-sb\") pod \"66a7d0be-3243-4744-898c-b87b5f91c620\" (UID: \"66a7d0be-3243-4744-898c-b87b5f91c620\") " Feb 03 12:32:29 crc kubenswrapper[4820]: I0203 12:32:29.012726 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c22gj\" (UniqueName: \"kubernetes.io/projected/66a7d0be-3243-4744-898c-b87b5f91c620-kube-api-access-c22gj\") pod \"66a7d0be-3243-4744-898c-b87b5f91c620\" (UID: \"66a7d0be-3243-4744-898c-b87b5f91c620\") " Feb 03 12:32:29 crc kubenswrapper[4820]: I0203 12:32:29.033661 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="1de311c0-cca0-4f9f-8897-6c5239e71368" containerName="cinder-scheduler" probeResult="failure" output="Get \"http://10.217.0.186:8080/\": dial tcp 10.217.0.186:8080: connect: connection refused" Feb 03 12:32:29 crc kubenswrapper[4820]: E0203 12:32:29.043306 4820 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.242s" Feb 03 12:32:29 crc kubenswrapper[4820]: I0203 12:32:29.043410 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-68b4df5bdd-tdb9h" event={"ID":"308562dd-6078-4c1c-a4e0-c01a60a2d81d","Type":"ContainerStarted","Data":"6ecd1021da966b26d0ebdc213f4c8379ce99f2bdd3ff3973594574161725d11d"} Feb 03 12:32:29 crc kubenswrapper[4820]: I0203 12:32:29.043438 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Feb 03 12:32:29 crc kubenswrapper[4820]: I0203 12:32:29.043453 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-dxczn" event={"ID":"66a7d0be-3243-4744-898c-b87b5f91c620","Type":"ContainerDied","Data":"39f219cb9f642e5b8ba79289c835c8c03c9604dac02b3cabb7f06d4bff441396"} Feb 03 12:32:29 crc kubenswrapper[4820]: I0203 12:32:29.043470 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Feb 03 12:32:29 crc kubenswrapper[4820]: I0203 12:32:29.043493 4820 scope.go:117] "RemoveContainer" containerID="fc188969c67459fbee4a335908c60772eb8f45e5828125a3adefab81e7450762" Feb 03 12:32:29 crc kubenswrapper[4820]: I0203 12:32:29.048301 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Feb 03 12:32:29 crc kubenswrapper[4820]: E0203 12:32:29.048830 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66a7d0be-3243-4744-898c-b87b5f91c620" containerName="init" Feb 03 12:32:29 crc kubenswrapper[4820]: I0203 12:32:29.048849 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="66a7d0be-3243-4744-898c-b87b5f91c620" containerName="init" Feb 03 12:32:29 crc kubenswrapper[4820]: E0203 12:32:29.048869 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="526ca20e-1fce-4a0e-bde8-f82c887e5d82" containerName="cinder-api" Feb 03 12:32:29 crc kubenswrapper[4820]: I0203 12:32:29.048877 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="526ca20e-1fce-4a0e-bde8-f82c887e5d82" containerName="cinder-api" Feb 03 12:32:29 crc kubenswrapper[4820]: E0203 12:32:29.048928 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66a7d0be-3243-4744-898c-b87b5f91c620" containerName="dnsmasq-dns" Feb 03 12:32:29 crc kubenswrapper[4820]: I0203 12:32:29.048935 4820 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="66a7d0be-3243-4744-898c-b87b5f91c620" containerName="dnsmasq-dns" Feb 03 12:32:29 crc kubenswrapper[4820]: E0203 12:32:29.048947 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="526ca20e-1fce-4a0e-bde8-f82c887e5d82" containerName="cinder-api-log" Feb 03 12:32:29 crc kubenswrapper[4820]: I0203 12:32:29.048953 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="526ca20e-1fce-4a0e-bde8-f82c887e5d82" containerName="cinder-api-log" Feb 03 12:32:29 crc kubenswrapper[4820]: I0203 12:32:29.049227 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="526ca20e-1fce-4a0e-bde8-f82c887e5d82" containerName="cinder-api-log" Feb 03 12:32:29 crc kubenswrapper[4820]: I0203 12:32:29.049257 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="66a7d0be-3243-4744-898c-b87b5f91c620" containerName="dnsmasq-dns" Feb 03 12:32:29 crc kubenswrapper[4820]: I0203 12:32:29.049267 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="526ca20e-1fce-4a0e-bde8-f82c887e5d82" containerName="cinder-api" Feb 03 12:32:29 crc kubenswrapper[4820]: I0203 12:32:29.050595 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 03 12:32:29 crc kubenswrapper[4820]: I0203 12:32:29.059693 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Feb 03 12:32:29 crc kubenswrapper[4820]: I0203 12:32:29.061250 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Feb 03 12:32:29 crc kubenswrapper[4820]: I0203 12:32:29.067200 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/66a7d0be-3243-4744-898c-b87b5f91c620-kube-api-access-c22gj" (OuterVolumeSpecName: "kube-api-access-c22gj") pod "66a7d0be-3243-4744-898c-b87b5f91c620" (UID: "66a7d0be-3243-4744-898c-b87b5f91c620"). InnerVolumeSpecName "kube-api-access-c22gj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:32:29 crc kubenswrapper[4820]: I0203 12:32:29.068004 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-qjmdv" podUID="ad9b0bbe-7f17-4347-bda3-5f0a843b3997" containerName="registry-server" probeResult="failure" output=< Feb 03 12:32:29 crc kubenswrapper[4820]: timeout: failed to connect service ":50051" within 1s Feb 03 12:32:29 crc kubenswrapper[4820]: > Feb 03 12:32:29 crc kubenswrapper[4820]: I0203 12:32:29.070290 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Feb 03 12:32:29 crc kubenswrapper[4820]: I0203 12:32:29.088647 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 03 12:32:29 crc kubenswrapper[4820]: I0203 12:32:29.118505 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c22gj\" (UniqueName: \"kubernetes.io/projected/66a7d0be-3243-4744-898c-b87b5f91c620-kube-api-access-c22gj\") on node \"crc\" DevicePath \"\"" Feb 03 12:32:29 crc kubenswrapper[4820]: I0203 12:32:29.183186 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/66a7d0be-3243-4744-898c-b87b5f91c620-config" (OuterVolumeSpecName: "config") pod "66a7d0be-3243-4744-898c-b87b5f91c620" (UID: "66a7d0be-3243-4744-898c-b87b5f91c620"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:32:29 crc kubenswrapper[4820]: I0203 12:32:29.203818 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/66a7d0be-3243-4744-898c-b87b5f91c620-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "66a7d0be-3243-4744-898c-b87b5f91c620" (UID: "66a7d0be-3243-4744-898c-b87b5f91c620"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:32:29 crc kubenswrapper[4820]: I0203 12:32:29.218403 4820 generic.go:334] "Generic (PLEG): container finished" podID="e460cf1d-b4e8-4bc6-89df-3fa68d972a33" containerID="d00c04ba739dbc78c13a05b589079c47d3cbbd59bf2cf2388cebdc2e69a078f1" exitCode=137 Feb 03 12:32:29 crc kubenswrapper[4820]: I0203 12:32:29.228592 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32b101cf-4d79-44f8-a591-dd5c74df5af6-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"32b101cf-4d79-44f8-a591-dd5c74df5af6\") " pod="openstack/cinder-api-0" Feb 03 12:32:29 crc kubenswrapper[4820]: I0203 12:32:29.228657 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9gbnz\" (UniqueName: \"kubernetes.io/projected/32b101cf-4d79-44f8-a591-dd5c74df5af6-kube-api-access-9gbnz\") pod \"cinder-api-0\" (UID: \"32b101cf-4d79-44f8-a591-dd5c74df5af6\") " pod="openstack/cinder-api-0" Feb 03 12:32:29 crc kubenswrapper[4820]: I0203 12:32:29.228779 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/32b101cf-4d79-44f8-a591-dd5c74df5af6-scripts\") pod \"cinder-api-0\" (UID: \"32b101cf-4d79-44f8-a591-dd5c74df5af6\") " pod="openstack/cinder-api-0" Feb 03 12:32:29 crc kubenswrapper[4820]: I0203 12:32:29.228919 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/32b101cf-4d79-44f8-a591-dd5c74df5af6-public-tls-certs\") pod \"cinder-api-0\" (UID: \"32b101cf-4d79-44f8-a591-dd5c74df5af6\") " pod="openstack/cinder-api-0" Feb 03 12:32:29 crc kubenswrapper[4820]: I0203 12:32:29.228948 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32b101cf-4d79-44f8-a591-dd5c74df5af6-config-data\") pod \"cinder-api-0\" (UID: \"32b101cf-4d79-44f8-a591-dd5c74df5af6\") " pod="openstack/cinder-api-0" Feb 03 12:32:29 crc kubenswrapper[4820]: I0203 12:32:29.229019 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/32b101cf-4d79-44f8-a591-dd5c74df5af6-etc-machine-id\") pod \"cinder-api-0\" (UID: \"32b101cf-4d79-44f8-a591-dd5c74df5af6\") " pod="openstack/cinder-api-0" Feb 03 12:32:29 crc kubenswrapper[4820]: I0203 12:32:29.229055 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/32b101cf-4d79-44f8-a591-dd5c74df5af6-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"32b101cf-4d79-44f8-a591-dd5c74df5af6\") " pod="openstack/cinder-api-0" Feb 03 12:32:29 crc kubenswrapper[4820]: I0203 12:32:29.229142 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" 
(UniqueName: \"kubernetes.io/secret/32b101cf-4d79-44f8-a591-dd5c74df5af6-config-data-custom\") pod \"cinder-api-0\" (UID: \"32b101cf-4d79-44f8-a591-dd5c74df5af6\") " pod="openstack/cinder-api-0" Feb 03 12:32:29 crc kubenswrapper[4820]: I0203 12:32:29.229189 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/32b101cf-4d79-44f8-a591-dd5c74df5af6-logs\") pod \"cinder-api-0\" (UID: \"32b101cf-4d79-44f8-a591-dd5c74df5af6\") " pod="openstack/cinder-api-0" Feb 03 12:32:29 crc kubenswrapper[4820]: I0203 12:32:29.230801 4820 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/66a7d0be-3243-4744-898c-b87b5f91c620-config\") on node \"crc\" DevicePath \"\"" Feb 03 12:32:29 crc kubenswrapper[4820]: I0203 12:32:29.230836 4820 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/66a7d0be-3243-4744-898c-b87b5f91c620-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 03 12:32:29 crc kubenswrapper[4820]: I0203 12:32:29.248441 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="526ca20e-1fce-4a0e-bde8-f82c887e5d82" path="/var/lib/kubelet/pods/526ca20e-1fce-4a0e-bde8-f82c887e5d82/volumes" Feb 03 12:32:29 crc kubenswrapper[4820]: I0203 12:32:29.302303 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/66a7d0be-3243-4744-898c-b87b5f91c620-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "66a7d0be-3243-4744-898c-b87b5f91c620" (UID: "66a7d0be-3243-4744-898c-b87b5f91c620"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:32:29 crc kubenswrapper[4820]: I0203 12:32:29.351267 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32b101cf-4d79-44f8-a591-dd5c74df5af6-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"32b101cf-4d79-44f8-a591-dd5c74df5af6\") " pod="openstack/cinder-api-0" Feb 03 12:32:29 crc kubenswrapper[4820]: I0203 12:32:29.351773 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9gbnz\" (UniqueName: \"kubernetes.io/projected/32b101cf-4d79-44f8-a591-dd5c74df5af6-kube-api-access-9gbnz\") pod \"cinder-api-0\" (UID: \"32b101cf-4d79-44f8-a591-dd5c74df5af6\") " pod="openstack/cinder-api-0" Feb 03 12:32:29 crc kubenswrapper[4820]: I0203 12:32:29.351933 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/32b101cf-4d79-44f8-a591-dd5c74df5af6-scripts\") pod \"cinder-api-0\" (UID: \"32b101cf-4d79-44f8-a591-dd5c74df5af6\") " pod="openstack/cinder-api-0" Feb 03 12:32:29 crc kubenswrapper[4820]: I0203 12:32:29.352155 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/32b101cf-4d79-44f8-a591-dd5c74df5af6-public-tls-certs\") pod \"cinder-api-0\" (UID: \"32b101cf-4d79-44f8-a591-dd5c74df5af6\") " pod="openstack/cinder-api-0" Feb 03 12:32:29 crc kubenswrapper[4820]: I0203 12:32:29.352196 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32b101cf-4d79-44f8-a591-dd5c74df5af6-config-data\") pod \"cinder-api-0\" (UID: \"32b101cf-4d79-44f8-a591-dd5c74df5af6\") " pod="openstack/cinder-api-0" Feb 03 12:32:29 crc 
kubenswrapper[4820]: I0203 12:32:29.352289 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/32b101cf-4d79-44f8-a591-dd5c74df5af6-etc-machine-id\") pod \"cinder-api-0\" (UID: \"32b101cf-4d79-44f8-a591-dd5c74df5af6\") " pod="openstack/cinder-api-0" Feb 03 12:32:29 crc kubenswrapper[4820]: I0203 12:32:29.352355 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/32b101cf-4d79-44f8-a591-dd5c74df5af6-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"32b101cf-4d79-44f8-a591-dd5c74df5af6\") " pod="openstack/cinder-api-0" Feb 03 12:32:29 crc kubenswrapper[4820]: I0203 12:32:29.352481 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/32b101cf-4d79-44f8-a591-dd5c74df5af6-config-data-custom\") pod \"cinder-api-0\" (UID: \"32b101cf-4d79-44f8-a591-dd5c74df5af6\") " pod="openstack/cinder-api-0" Feb 03 12:32:29 crc kubenswrapper[4820]: I0203 12:32:29.352530 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/32b101cf-4d79-44f8-a591-dd5c74df5af6-logs\") pod \"cinder-api-0\" (UID: \"32b101cf-4d79-44f8-a591-dd5c74df5af6\") " pod="openstack/cinder-api-0" Feb 03 12:32:29 crc kubenswrapper[4820]: I0203 12:32:29.358722 4820 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/66a7d0be-3243-4744-898c-b87b5f91c620-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 03 12:32:29 crc kubenswrapper[4820]: I0203 12:32:29.363083 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/32b101cf-4d79-44f8-a591-dd5c74df5af6-etc-machine-id\") pod \"cinder-api-0\" (UID: \"32b101cf-4d79-44f8-a591-dd5c74df5af6\") " pod="openstack/cinder-api-0" Feb 03 12:32:29 crc kubenswrapper[4820]: I0203 12:32:29.365735 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/66a7d0be-3243-4744-898c-b87b5f91c620-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "66a7d0be-3243-4744-898c-b87b5f91c620" (UID: "66a7d0be-3243-4744-898c-b87b5f91c620"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:32:29 crc kubenswrapper[4820]: I0203 12:32:29.464879 4820 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/66a7d0be-3243-4744-898c-b87b5f91c620-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 03 12:32:29 crc kubenswrapper[4820]: I0203 12:32:29.474840 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-6ccd68b7f-9xjs9" Feb 03 12:32:29 crc kubenswrapper[4820]: I0203 12:32:29.474904 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-78bff7b94c-t49mw" event={"ID":"e460cf1d-b4e8-4bc6-89df-3fa68d972a33","Type":"ContainerDied","Data":"d00c04ba739dbc78c13a05b589079c47d3cbbd59bf2cf2388cebdc2e69a078f1"} Feb 03 12:32:29 crc kubenswrapper[4820]: I0203 12:32:29.474941 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5fdc8588b4-jtjr8" event={"ID":"17c371f7-f032-4444-8d4b-1183a224c7b0","Type":"ContainerStarted","Data":"258627a9dac8c607d158f7b60718c41b0a56b0d1a371bcf6c8e5e827f34acb59"} Feb 03 12:32:29 crc kubenswrapper[4820]: I0203 12:32:29.475063 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-5j76d" Feb 03 12:32:29 crc kubenswrapper[4820]: I0203 12:32:29.579246 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-5j76d" Feb 03 12:32:29 crc kubenswrapper[4820]: I0203 12:32:29.705789 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5j76d"] Feb 03 12:32:29 crc kubenswrapper[4820]: I0203 12:32:29.813169 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/66a7d0be-3243-4744-898c-b87b5f91c620-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "66a7d0be-3243-4744-898c-b87b5f91c620" (UID: "66a7d0be-3243-4744-898c-b87b5f91c620"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:32:29 crc kubenswrapper[4820]: I0203 12:32:29.817649 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/32b101cf-4d79-44f8-a591-dd5c74df5af6-logs\") pod \"cinder-api-0\" (UID: \"32b101cf-4d79-44f8-a591-dd5c74df5af6\") " pod="openstack/cinder-api-0" Feb 03 12:32:29 crc kubenswrapper[4820]: I0203 12:32:29.823686 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32b101cf-4d79-44f8-a591-dd5c74df5af6-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"32b101cf-4d79-44f8-a591-dd5c74df5af6\") " pod="openstack/cinder-api-0" Feb 03 12:32:29 crc kubenswrapper[4820]: I0203 12:32:29.827825 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/32b101cf-4d79-44f8-a591-dd5c74df5af6-scripts\") pod \"cinder-api-0\" (UID: \"32b101cf-4d79-44f8-a591-dd5c74df5af6\") " pod="openstack/cinder-api-0" Feb 03 12:32:29 crc kubenswrapper[4820]: I0203 12:32:29.833376 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/32b101cf-4d79-44f8-a591-dd5c74df5af6-config-data-custom\") pod \"cinder-api-0\" (UID: \"32b101cf-4d79-44f8-a591-dd5c74df5af6\") " pod="openstack/cinder-api-0" Feb 03 12:32:29 crc kubenswrapper[4820]: I0203 12:32:29.847363 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9gbnz\" (UniqueName: \"kubernetes.io/projected/32b101cf-4d79-44f8-a591-dd5c74df5af6-kube-api-access-9gbnz\") pod \"cinder-api-0\" (UID: \"32b101cf-4d79-44f8-a591-dd5c74df5af6\") " pod="openstack/cinder-api-0" Feb 03 12:32:29 crc kubenswrapper[4820]: I0203 12:32:29.850410 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32b101cf-4d79-44f8-a591-dd5c74df5af6-config-data\") pod \"cinder-api-0\" (UID: \"32b101cf-4d79-44f8-a591-dd5c74df5af6\") " pod="openstack/cinder-api-0" Feb 03 12:32:29 crc kubenswrapper[4820]: I0203 12:32:29.855854 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/32b101cf-4d79-44f8-a591-dd5c74df5af6-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"32b101cf-4d79-44f8-a591-dd5c74df5af6\") " pod="openstack/cinder-api-0" Feb 03 12:32:29 crc kubenswrapper[4820]: I0203 12:32:29.877562 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/32b101cf-4d79-44f8-a591-dd5c74df5af6-public-tls-certs\") pod \"cinder-api-0\" (UID: \"32b101cf-4d79-44f8-a591-dd5c74df5af6\") " pod="openstack/cinder-api-0" Feb 03 12:32:29 crc kubenswrapper[4820]: I0203 12:32:29.886604 4820 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/66a7d0be-3243-4744-898c-b87b5f91c620-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 03 12:32:29 crc kubenswrapper[4820]: I0203 12:32:29.906860 4820 scope.go:117] "RemoveContainer" containerID="025996c10336f09c8aa02b7f5465a966499fe35426a17762a34c6aba8500b55b" Feb 03 12:32:29 crc kubenswrapper[4820]: I0203 12:32:29.959571 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-78bff7b94c-t49mw" Feb 03 12:32:30 crc kubenswrapper[4820]: I0203 12:32:30.020774 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e460cf1d-b4e8-4bc6-89df-3fa68d972a33-logs\") pod \"e460cf1d-b4e8-4bc6-89df-3fa68d972a33\" (UID: \"e460cf1d-b4e8-4bc6-89df-3fa68d972a33\") " Feb 03 12:32:30 crc kubenswrapper[4820]: I0203 12:32:30.021044 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e460cf1d-b4e8-4bc6-89df-3fa68d972a33-combined-ca-bundle\") pod \"e460cf1d-b4e8-4bc6-89df-3fa68d972a33\" (UID: \"e460cf1d-b4e8-4bc6-89df-3fa68d972a33\") " Feb 03 12:32:30 crc kubenswrapper[4820]: I0203 12:32:30.021108 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e460cf1d-b4e8-4bc6-89df-3fa68d972a33-config-data\") pod \"e460cf1d-b4e8-4bc6-89df-3fa68d972a33\" (UID: \"e460cf1d-b4e8-4bc6-89df-3fa68d972a33\") " Feb 03 12:32:30 crc kubenswrapper[4820]: I0203 12:32:30.021127 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e460cf1d-b4e8-4bc6-89df-3fa68d972a33-config-data-custom\") pod \"e460cf1d-b4e8-4bc6-89df-3fa68d972a33\" (UID: \"e460cf1d-b4e8-4bc6-89df-3fa68d972a33\") " Feb 03 12:32:30 crc kubenswrapper[4820]: I0203 12:32:30.021226 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9x6xm\" (UniqueName: \"kubernetes.io/projected/e460cf1d-b4e8-4bc6-89df-3fa68d972a33-kube-api-access-9x6xm\") pod \"e460cf1d-b4e8-4bc6-89df-3fa68d972a33\" (UID: \"e460cf1d-b4e8-4bc6-89df-3fa68d972a33\") " Feb 03 12:32:30 crc kubenswrapper[4820]: I0203 12:32:30.024451 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e460cf1d-b4e8-4bc6-89df-3fa68d972a33-logs" (OuterVolumeSpecName: "logs") pod "e460cf1d-b4e8-4bc6-89df-3fa68d972a33" (UID: "e460cf1d-b4e8-4bc6-89df-3fa68d972a33"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 12:32:30 crc kubenswrapper[4820]: I0203 12:32:30.034116 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e460cf1d-b4e8-4bc6-89df-3fa68d972a33-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "e460cf1d-b4e8-4bc6-89df-3fa68d972a33" (UID: "e460cf1d-b4e8-4bc6-89df-3fa68d972a33"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:32:30 crc kubenswrapper[4820]: I0203 12:32:30.044198 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e460cf1d-b4e8-4bc6-89df-3fa68d972a33-kube-api-access-9x6xm" (OuterVolumeSpecName: "kube-api-access-9x6xm") pod "e460cf1d-b4e8-4bc6-89df-3fa68d972a33" (UID: "e460cf1d-b4e8-4bc6-89df-3fa68d972a33"). InnerVolumeSpecName "kube-api-access-9x6xm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:32:30 crc kubenswrapper[4820]: I0203 12:32:30.080203 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e460cf1d-b4e8-4bc6-89df-3fa68d972a33-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e460cf1d-b4e8-4bc6-89df-3fa68d972a33" (UID: "e460cf1d-b4e8-4bc6-89df-3fa68d972a33"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:32:30 crc kubenswrapper[4820]: I0203 12:32:30.085007 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-dxczn"] Feb 03 12:32:30 crc kubenswrapper[4820]: I0203 12:32:30.112241 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-dxczn"] Feb 03 12:32:30 crc kubenswrapper[4820]: I0203 12:32:30.133705 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 03 12:32:30 crc kubenswrapper[4820]: I0203 12:32:30.134805 4820 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e460cf1d-b4e8-4bc6-89df-3fa68d972a33-logs\") on node \"crc\" DevicePath \"\"" Feb 03 12:32:30 crc kubenswrapper[4820]: I0203 12:32:30.134839 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e460cf1d-b4e8-4bc6-89df-3fa68d972a33-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 12:32:30 crc kubenswrapper[4820]: I0203 12:32:30.134854 4820 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e460cf1d-b4e8-4bc6-89df-3fa68d972a33-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 03 12:32:30 crc kubenswrapper[4820]: I0203 12:32:30.134863 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9x6xm\" (UniqueName: \"kubernetes.io/projected/e460cf1d-b4e8-4bc6-89df-3fa68d972a33-kube-api-access-9x6xm\") on node \"crc\" DevicePath \"\"" Feb 03 12:32:30 crc kubenswrapper[4820]: I0203 12:32:30.151970 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Feb 03 12:32:30 crc kubenswrapper[4820]: E0203 12:32:30.152634 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e460cf1d-b4e8-4bc6-89df-3fa68d972a33" containerName="barbican-worker" Feb 03 12:32:30 crc kubenswrapper[4820]: I0203 12:32:30.152654 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="e460cf1d-b4e8-4bc6-89df-3fa68d972a33" containerName="barbican-worker" Feb 03 12:32:30 crc kubenswrapper[4820]: E0203 12:32:30.152688 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e460cf1d-b4e8-4bc6-89df-3fa68d972a33" containerName="barbican-worker-log" Feb 03 12:32:30 crc kubenswrapper[4820]: I0203 12:32:30.152695 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="e460cf1d-b4e8-4bc6-89df-3fa68d972a33" containerName="barbican-worker-log" Feb 03 12:32:30 crc kubenswrapper[4820]: I0203 12:32:30.152941 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="e460cf1d-b4e8-4bc6-89df-3fa68d972a33" containerName="barbican-worker-log" Feb 03 12:32:30 crc kubenswrapper[4820]: I0203 12:32:30.152960 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="e460cf1d-b4e8-4bc6-89df-3fa68d972a33" containerName="barbican-worker" Feb 03 12:32:30 crc kubenswrapper[4820]: I0203 12:32:30.153854 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Feb 03 12:32:30 crc kubenswrapper[4820]: I0203 12:32:30.162645 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Feb 03 12:32:30 crc kubenswrapper[4820]: I0203 12:32:30.163463 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Feb 03 12:32:30 crc kubenswrapper[4820]: I0203 12:32:30.172213 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-mxn2s" Feb 03 12:32:30 crc kubenswrapper[4820]: I0203 12:32:30.231198 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 03 12:32:30 crc kubenswrapper[4820]: I0203 12:32:30.238037 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ca5fea5-c1a3-4788-adc6-c146025f00f3-combined-ca-bundle\") pod \"openstackclient\" (UID: \"6ca5fea5-c1a3-4788-adc6-c146025f00f3\") " pod="openstack/openstackclient" Feb 03 12:32:30 crc kubenswrapper[4820]: I0203 12:32:30.238127 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pkmfk\" (UniqueName: \"kubernetes.io/projected/6ca5fea5-c1a3-4788-adc6-c146025f00f3-kube-api-access-pkmfk\") pod \"openstackclient\" (UID: \"6ca5fea5-c1a3-4788-adc6-c146025f00f3\") " pod="openstack/openstackclient" Feb 03 12:32:30 crc kubenswrapper[4820]: I0203 12:32:30.238317 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/6ca5fea5-c1a3-4788-adc6-c146025f00f3-openstack-config-secret\") pod \"openstackclient\" (UID: \"6ca5fea5-c1a3-4788-adc6-c146025f00f3\") " pod="openstack/openstackclient" Feb 03 12:32:30 crc kubenswrapper[4820]: I0203 12:32:30.238422 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/6ca5fea5-c1a3-4788-adc6-c146025f00f3-openstack-config\") pod \"openstackclient\" (UID: \"6ca5fea5-c1a3-4788-adc6-c146025f00f3\") " pod="openstack/openstackclient" Feb 03 12:32:30 crc kubenswrapper[4820]: I0203 12:32:30.293746 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e460cf1d-b4e8-4bc6-89df-3fa68d972a33-config-data" (OuterVolumeSpecName: "config-data") pod "e460cf1d-b4e8-4bc6-89df-3fa68d972a33" (UID: "e460cf1d-b4e8-4bc6-89df-3fa68d972a33"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:32:30 crc kubenswrapper[4820]: I0203 12:32:30.351405 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/6ca5fea5-c1a3-4788-adc6-c146025f00f3-openstack-config-secret\") pod \"openstackclient\" (UID: \"6ca5fea5-c1a3-4788-adc6-c146025f00f3\") " pod="openstack/openstackclient" Feb 03 12:32:30 crc kubenswrapper[4820]: I0203 12:32:30.351526 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/6ca5fea5-c1a3-4788-adc6-c146025f00f3-openstack-config\") pod \"openstackclient\" (UID: \"6ca5fea5-c1a3-4788-adc6-c146025f00f3\") " pod="openstack/openstackclient" Feb 03 12:32:30 crc kubenswrapper[4820]: I0203 12:32:30.351617 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ca5fea5-c1a3-4788-adc6-c146025f00f3-combined-ca-bundle\") pod \"openstackclient\" (UID: \"6ca5fea5-c1a3-4788-adc6-c146025f00f3\") " pod="openstack/openstackclient" Feb 03 12:32:30 crc kubenswrapper[4820]: I0203 12:32:30.351678 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pkmfk\" (UniqueName: \"kubernetes.io/projected/6ca5fea5-c1a3-4788-adc6-c146025f00f3-kube-api-access-pkmfk\") pod \"openstackclient\" (UID: \"6ca5fea5-c1a3-4788-adc6-c146025f00f3\") " pod="openstack/openstackclient" Feb 03 12:32:30 crc kubenswrapper[4820]: I0203 12:32:30.351859 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e460cf1d-b4e8-4bc6-89df-3fa68d972a33-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 12:32:30 crc kubenswrapper[4820]: I0203 12:32:30.354753 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/6ca5fea5-c1a3-4788-adc6-c146025f00f3-openstack-config\") pod \"openstackclient\" (UID: \"6ca5fea5-c1a3-4788-adc6-c146025f00f3\") " pod="openstack/openstackclient" Feb 03 12:32:30 crc kubenswrapper[4820]: I0203 12:32:30.360730 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ca5fea5-c1a3-4788-adc6-c146025f00f3-combined-ca-bundle\") pod \"openstackclient\" (UID: \"6ca5fea5-c1a3-4788-adc6-c146025f00f3\") " pod="openstack/openstackclient" Feb 03 12:32:30 crc kubenswrapper[4820]: I0203 12:32:30.372264 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/6ca5fea5-c1a3-4788-adc6-c146025f00f3-openstack-config-secret\") pod \"openstackclient\" (UID: \"6ca5fea5-c1a3-4788-adc6-c146025f00f3\") " pod="openstack/openstackclient" Feb 03 12:32:30 crc kubenswrapper[4820]: I0203 12:32:30.403648 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pkmfk\" (UniqueName: \"kubernetes.io/projected/6ca5fea5-c1a3-4788-adc6-c146025f00f3-kube-api-access-pkmfk\") pod \"openstackclient\" (UID: \"6ca5fea5-c1a3-4788-adc6-c146025f00f3\") " pod="openstack/openstackclient" Feb 03 12:32:30 crc kubenswrapper[4820]: I0203 12:32:30.406221 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-78bff7b94c-t49mw" 
event={"ID":"e460cf1d-b4e8-4bc6-89df-3fa68d972a33","Type":"ContainerDied","Data":"f56dfdf8e101ae94217147a53ebaddc63716c1d85322da8e6346beb52226fa3c"} Feb 03 12:32:30 crc kubenswrapper[4820]: I0203 12:32:30.406297 4820 scope.go:117] "RemoveContainer" containerID="d00c04ba739dbc78c13a05b589079c47d3cbbd59bf2cf2388cebdc2e69a078f1" Feb 03 12:32:30 crc kubenswrapper[4820]: I0203 12:32:30.406467 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-78bff7b94c-t49mw" Feb 03 12:32:30 crc kubenswrapper[4820]: I0203 12:32:30.469790 4820 generic.go:334] "Generic (PLEG): container finished" podID="14f188d1-7883-471d-9564-01f405548b98" containerID="da3e67a2f5780c6fcce9756b2c251c7f8184a0ca6f1d9afa680f463f6a8da537" exitCode=137 Feb 03 12:32:30 crc kubenswrapper[4820]: I0203 12:32:30.469928 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-bfcd7c7c4-s6j9b" event={"ID":"14f188d1-7883-471d-9564-01f405548b98","Type":"ContainerDied","Data":"da3e67a2f5780c6fcce9756b2c251c7f8184a0ca6f1d9afa680f463f6a8da537"} Feb 03 12:32:30 crc kubenswrapper[4820]: I0203 12:32:30.515809 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"48cc94de-c839-4f2b-82a4-afb000afefe4","Type":"ContainerStarted","Data":"6ab1d94771e0b90907febe52bfa187eec587f14af2703385974005308a77374f"} Feb 03 12:32:30 crc kubenswrapper[4820]: I0203 12:32:30.530775 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 03 12:32:30 crc kubenswrapper[4820]: I0203 12:32:30.531101 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-bfcd7c7c4-s6j9b" Feb 03 12:32:30 crc kubenswrapper[4820]: I0203 12:32:30.552144 4820 scope.go:117] "RemoveContainer" containerID="89bde6a3f663052b940fa07cf6f5ed3ebd395c6ab92a7fe7da910acf4464c836" Feb 03 12:32:30 crc kubenswrapper[4820]: I0203 12:32:30.560656 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14f188d1-7883-471d-9564-01f405548b98-combined-ca-bundle\") pod \"14f188d1-7883-471d-9564-01f405548b98\" (UID: \"14f188d1-7883-471d-9564-01f405548b98\") " Feb 03 12:32:30 crc kubenswrapper[4820]: I0203 12:32:30.560832 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/14f188d1-7883-471d-9564-01f405548b98-logs\") pod \"14f188d1-7883-471d-9564-01f405548b98\" (UID: \"14f188d1-7883-471d-9564-01f405548b98\") " Feb 03 12:32:30 crc kubenswrapper[4820]: I0203 12:32:30.561057 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6z2rb\" (UniqueName: \"kubernetes.io/projected/14f188d1-7883-471d-9564-01f405548b98-kube-api-access-6z2rb\") pod \"14f188d1-7883-471d-9564-01f405548b98\" (UID: \"14f188d1-7883-471d-9564-01f405548b98\") " Feb 03 12:32:30 crc kubenswrapper[4820]: I0203 12:32:30.561333 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/14f188d1-7883-471d-9564-01f405548b98-config-data-custom\") pod \"14f188d1-7883-471d-9564-01f405548b98\" (UID: \"14f188d1-7883-471d-9564-01f405548b98\") " Feb 03 12:32:30 crc kubenswrapper[4820]: I0203 12:32:30.561541 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/14f188d1-7883-471d-9564-01f405548b98-config-data\") pod \"14f188d1-7883-471d-9564-01f405548b98\" (UID: \"14f188d1-7883-471d-9564-01f405548b98\") " Feb 03 12:32:30 crc kubenswrapper[4820]: I0203 12:32:30.563749 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/14f188d1-7883-471d-9564-01f405548b98-logs" (OuterVolumeSpecName: "logs") pod "14f188d1-7883-471d-9564-01f405548b98" (UID: "14f188d1-7883-471d-9564-01f405548b98"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 12:32:30 crc kubenswrapper[4820]: I0203 12:32:30.578860 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14f188d1-7883-471d-9564-01f405548b98-kube-api-access-6z2rb" (OuterVolumeSpecName: "kube-api-access-6z2rb") pod "14f188d1-7883-471d-9564-01f405548b98" (UID: "14f188d1-7883-471d-9564-01f405548b98"). InnerVolumeSpecName "kube-api-access-6z2rb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:32:30 crc kubenswrapper[4820]: I0203 12:32:30.583028 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14f188d1-7883-471d-9564-01f405548b98-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "14f188d1-7883-471d-9564-01f405548b98" (UID: "14f188d1-7883-471d-9564-01f405548b98"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:32:30 crc kubenswrapper[4820]: I0203 12:32:30.587298 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Feb 03 12:32:30 crc kubenswrapper[4820]: I0203 12:32:30.588724 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Feb 03 12:32:30 crc kubenswrapper[4820]: I0203 12:32:30.650480 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14f188d1-7883-471d-9564-01f405548b98-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "14f188d1-7883-471d-9564-01f405548b98" (UID: "14f188d1-7883-471d-9564-01f405548b98"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:32:30 crc kubenswrapper[4820]: I0203 12:32:30.654288 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Feb 03 12:32:30 crc kubenswrapper[4820]: I0203 12:32:30.669529 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14f188d1-7883-471d-9564-01f405548b98-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 12:32:30 crc kubenswrapper[4820]: I0203 12:32:30.669580 4820 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/14f188d1-7883-471d-9564-01f405548b98-logs\") on node \"crc\" DevicePath \"\"" Feb 03 12:32:30 crc kubenswrapper[4820]: I0203 12:32:30.669592 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6z2rb\" (UniqueName: \"kubernetes.io/projected/14f188d1-7883-471d-9564-01f405548b98-kube-api-access-6z2rb\") on node \"crc\" DevicePath \"\"" Feb 03 12:32:30 crc kubenswrapper[4820]: I0203 12:32:30.669606 4820 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/14f188d1-7883-471d-9564-01f405548b98-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 03 12:32:30 crc kubenswrapper[4820]: I0203 12:32:30.750307 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-78bff7b94c-t49mw"] Feb 03 12:32:30 crc kubenswrapper[4820]: I0203 12:32:30.821700 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14f188d1-7883-471d-9564-01f405548b98-config-data" (OuterVolumeSpecName: "config-data") pod "14f188d1-7883-471d-9564-01f405548b98" (UID: "14f188d1-7883-471d-9564-01f405548b98"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:32:30 crc kubenswrapper[4820]: I0203 12:32:30.873199 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=7.8856260769999995 podStartE2EDuration="20.873161514s" podCreationTimestamp="2026-02-03 12:32:10 +0000 UTC" firstStartedPulling="2026-02-03 12:32:13.34490973 +0000 UTC m=+1650.867985594" lastFinishedPulling="2026-02-03 12:32:26.332445167 +0000 UTC m=+1663.855521031" observedRunningTime="2026-02-03 12:32:30.56266352 +0000 UTC m=+1668.085739404" watchObservedRunningTime="2026-02-03 12:32:30.873161514 +0000 UTC m=+1668.396237388" Feb 03 12:32:30 crc kubenswrapper[4820]: I0203 12:32:30.877008 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-worker-78bff7b94c-t49mw"] Feb 03 12:32:30 crc kubenswrapper[4820]: I0203 12:32:30.879837 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14f188d1-7883-471d-9564-01f405548b98-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 12:32:31 crc kubenswrapper[4820]: I0203 12:32:30.904756 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-85ff748b95-dxczn" podUID="66a7d0be-3243-4744-898c-b87b5f91c620" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.178:5353: i/o timeout" Feb 03 12:32:31 crc kubenswrapper[4820]: I0203 12:32:31.858059 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="66a7d0be-3243-4744-898c-b87b5f91c620" path="/var/lib/kubelet/pods/66a7d0be-3243-4744-898c-b87b5f91c620/volumes" Feb 03 12:32:31 crc kubenswrapper[4820]: I0203 12:32:31.876521 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e460cf1d-b4e8-4bc6-89df-3fa68d972a33" path="/var/lib/kubelet/pods/e460cf1d-b4e8-4bc6-89df-3fa68d972a33/volumes" Feb 03 12:32:31 crc kubenswrapper[4820]: I0203 12:32:31.877581 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Feb 03 12:32:31 crc kubenswrapper[4820]: E0203 12:32:31.912463 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14f188d1-7883-471d-9564-01f405548b98" containerName="barbican-keystone-listener" Feb 03 12:32:31 crc kubenswrapper[4820]: I0203 12:32:31.912527 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="14f188d1-7883-471d-9564-01f405548b98" containerName="barbican-keystone-listener" Feb 03 12:32:31 crc kubenswrapper[4820]: E0203 12:32:31.912549 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14f188d1-7883-471d-9564-01f405548b98" containerName="barbican-keystone-listener-log" Feb 03 12:32:31 crc kubenswrapper[4820]: I0203 12:32:31.912559 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="14f188d1-7883-471d-9564-01f405548b98" containerName="barbican-keystone-listener-log" Feb 03 12:32:31 crc kubenswrapper[4820]: I0203 12:32:31.914221 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="14f188d1-7883-471d-9564-01f405548b98" containerName="barbican-keystone-listener" Feb 03 12:32:31 crc kubenswrapper[4820]: I0203 12:32:31.914258 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="14f188d1-7883-471d-9564-01f405548b98" containerName="barbican-keystone-listener-log" Feb 03 12:32:31 crc kubenswrapper[4820]: I0203 12:32:31.938787 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 03 12:32:31 crc kubenswrapper[4820]: I0203 12:32:31.938994 4820 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Feb 03 12:32:31 crc kubenswrapper[4820]: E0203 12:32:31.977983 4820 log.go:32] "RunPodSandbox from runtime service failed" err=< Feb 03 12:32:31 crc kubenswrapper[4820]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openstackclient_openstack_6ca5fea5-c1a3-4788-adc6-c146025f00f3_0(ba7a66335bc9c22b2fcc00ca6eb944c40c74c63b44105fcbf3fa6b6ed8641e1c): error adding pod openstack_openstackclient to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"ba7a66335bc9c22b2fcc00ca6eb944c40c74c63b44105fcbf3fa6b6ed8641e1c" Netns:"/var/run/netns/52e390d4-67de-4a3e-8ed8-74cf4e556d47" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=openstackclient;K8S_POD_INFRA_CONTAINER_ID=ba7a66335bc9c22b2fcc00ca6eb944c40c74c63b44105fcbf3fa6b6ed8641e1c;K8S_POD_UID=6ca5fea5-c1a3-4788-adc6-c146025f00f3" Path:"" ERRORED: error configuring pod [openstack/openstackclient] networking: Multus: [openstack/openstackclient/6ca5fea5-c1a3-4788-adc6-c146025f00f3]: expected pod UID "6ca5fea5-c1a3-4788-adc6-c146025f00f3" but got "bf76d2b6-6c3a-42e7-b813-63cfcd39bd0e" from Kube API Feb 03 12:32:31 crc kubenswrapper[4820]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 03 12:32:31 crc kubenswrapper[4820]: > Feb 03 12:32:31 crc kubenswrapper[4820]: E0203 12:32:31.978118 4820 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Feb 03 12:32:31 crc kubenswrapper[4820]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openstackclient_openstack_6ca5fea5-c1a3-4788-adc6-c146025f00f3_0(ba7a66335bc9c22b2fcc00ca6eb944c40c74c63b44105fcbf3fa6b6ed8641e1c): error adding pod openstack_openstackclient to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"ba7a66335bc9c22b2fcc00ca6eb944c40c74c63b44105fcbf3fa6b6ed8641e1c" Netns:"/var/run/netns/52e390d4-67de-4a3e-8ed8-74cf4e556d47" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=openstackclient;K8S_POD_INFRA_CONTAINER_ID=ba7a66335bc9c22b2fcc00ca6eb944c40c74c63b44105fcbf3fa6b6ed8641e1c;K8S_POD_UID=6ca5fea5-c1a3-4788-adc6-c146025f00f3" Path:"" ERRORED: error configuring pod [openstack/openstackclient] networking: Multus: [openstack/openstackclient/6ca5fea5-c1a3-4788-adc6-c146025f00f3]: expected pod UID "6ca5fea5-c1a3-4788-adc6-c146025f00f3" but got "bf76d2b6-6c3a-42e7-b813-63cfcd39bd0e" from Kube API Feb 03 12:32:31 crc kubenswrapper[4820]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 03 12:32:31 crc kubenswrapper[4820]: > pod="openstack/openstackclient" Feb 03 12:32:31 crc kubenswrapper[4820]: I0203 
12:32:31.979029 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-bfcd7c7c4-s6j9b" Feb 03 12:32:31 crc kubenswrapper[4820]: I0203 12:32:31.979884 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-bfcd7c7c4-s6j9b" event={"ID":"14f188d1-7883-471d-9564-01f405548b98","Type":"ContainerDied","Data":"12097bb6ec02a76825f6cb0c55953679baebdf294d5364a3de4de479172834a3"} Feb 03 12:32:31 crc kubenswrapper[4820]: I0203 12:32:31.980126 4820 scope.go:117] "RemoveContainer" containerID="da3e67a2f5780c6fcce9756b2c251c7f8184a0ca6f1d9afa680f463f6a8da537" Feb 03 12:32:31 crc kubenswrapper[4820]: I0203 12:32:31.981225 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-5j76d" podUID="c0fcb4b0-e763-4309-8097-facfd4782cfb" containerName="registry-server" containerID="cri-o://618cfceb67ae402aadfac828e372715348e71b2bceffa2d52373046aba9a6cca" gracePeriod=2 Feb 03 12:32:32 crc kubenswrapper[4820]: I0203 12:32:32.807030 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/bf76d2b6-6c3a-42e7-b813-63cfcd39bd0e-openstack-config\") pod \"openstackclient\" (UID: \"bf76d2b6-6c3a-42e7-b813-63cfcd39bd0e\") " pod="openstack/openstackclient" Feb 03 12:32:32 crc kubenswrapper[4820]: I0203 12:32:32.807657 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lmp6p\" (UniqueName: \"kubernetes.io/projected/bf76d2b6-6c3a-42e7-b813-63cfcd39bd0e-kube-api-access-lmp6p\") pod \"openstackclient\" (UID: \"bf76d2b6-6c3a-42e7-b813-63cfcd39bd0e\") " pod="openstack/openstackclient" Feb 03 12:32:32 crc kubenswrapper[4820]: I0203 12:32:32.807731 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/bf76d2b6-6c3a-42e7-b813-63cfcd39bd0e-openstack-config-secret\") pod \"openstackclient\" (UID: \"bf76d2b6-6c3a-42e7-b813-63cfcd39bd0e\") " pod="openstack/openstackclient" Feb 03 12:32:32 crc kubenswrapper[4820]: I0203 12:32:32.807785 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf76d2b6-6c3a-42e7-b813-63cfcd39bd0e-combined-ca-bundle\") pod \"openstackclient\" (UID: \"bf76d2b6-6c3a-42e7-b813-63cfcd39bd0e\") " pod="openstack/openstackclient" Feb 03 12:32:32 crc kubenswrapper[4820]: I0203 12:32:32.826272 4820 scope.go:117] "RemoveContainer" containerID="d9d04245ce3f9776873e9f2fe6a08b6f0a40b6b784cfb077d1e3186d748551e2" Feb 03 12:32:32 crc kubenswrapper[4820]: I0203 12:32:32.912312 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lmp6p\" (UniqueName: \"kubernetes.io/projected/bf76d2b6-6c3a-42e7-b813-63cfcd39bd0e-kube-api-access-lmp6p\") pod \"openstackclient\" (UID: \"bf76d2b6-6c3a-42e7-b813-63cfcd39bd0e\") " pod="openstack/openstackclient" Feb 03 12:32:32 crc kubenswrapper[4820]: I0203 12:32:32.912419 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/bf76d2b6-6c3a-42e7-b813-63cfcd39bd0e-openstack-config-secret\") pod \"openstackclient\" (UID: \"bf76d2b6-6c3a-42e7-b813-63cfcd39bd0e\") " pod="openstack/openstackclient" Feb 03 12:32:32 crc 
kubenswrapper[4820]: I0203 12:32:32.912485 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf76d2b6-6c3a-42e7-b813-63cfcd39bd0e-combined-ca-bundle\") pod \"openstackclient\" (UID: \"bf76d2b6-6c3a-42e7-b813-63cfcd39bd0e\") " pod="openstack/openstackclient" Feb 03 12:32:32 crc kubenswrapper[4820]: I0203 12:32:32.912670 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/bf76d2b6-6c3a-42e7-b813-63cfcd39bd0e-openstack-config\") pod \"openstackclient\" (UID: \"bf76d2b6-6c3a-42e7-b813-63cfcd39bd0e\") " pod="openstack/openstackclient" Feb 03 12:32:32 crc kubenswrapper[4820]: I0203 12:32:32.931113 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/bf76d2b6-6c3a-42e7-b813-63cfcd39bd0e-openstack-config\") pod \"openstackclient\" (UID: \"bf76d2b6-6c3a-42e7-b813-63cfcd39bd0e\") " pod="openstack/openstackclient" Feb 03 12:32:32 crc kubenswrapper[4820]: I0203 12:32:32.957028 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-bfcd7c7c4-s6j9b"] Feb 03 12:32:32 crc kubenswrapper[4820]: I0203 12:32:32.965080 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lmp6p\" (UniqueName: \"kubernetes.io/projected/bf76d2b6-6c3a-42e7-b813-63cfcd39bd0e-kube-api-access-lmp6p\") pod \"openstackclient\" (UID: \"bf76d2b6-6c3a-42e7-b813-63cfcd39bd0e\") " pod="openstack/openstackclient" Feb 03 12:32:32 crc kubenswrapper[4820]: I0203 12:32:32.984879 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf76d2b6-6c3a-42e7-b813-63cfcd39bd0e-combined-ca-bundle\") pod \"openstackclient\" (UID: \"bf76d2b6-6c3a-42e7-b813-63cfcd39bd0e\") " pod="openstack/openstackclient" Feb 03 12:32:32 crc kubenswrapper[4820]: I0203 12:32:32.998069 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-keystone-listener-bfcd7c7c4-s6j9b"] Feb 03 12:32:33 crc kubenswrapper[4820]: I0203 12:32:33.013489 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 03 12:32:33 crc kubenswrapper[4820]: I0203 12:32:33.031942 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/bf76d2b6-6c3a-42e7-b813-63cfcd39bd0e-openstack-config-secret\") pod \"openstackclient\" (UID: \"bf76d2b6-6c3a-42e7-b813-63cfcd39bd0e\") " pod="openstack/openstackclient" Feb 03 12:32:33 crc kubenswrapper[4820]: I0203 12:32:33.544400 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-5fdc8588b4-jtjr8" Feb 03 12:32:33 crc kubenswrapper[4820]: I0203 12:32:33.545694 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-5fdc8588b4-jtjr8" Feb 03 12:32:33 crc kubenswrapper[4820]: I0203 12:32:33.588427 4820 generic.go:334] "Generic (PLEG): container finished" podID="c0fcb4b0-e763-4309-8097-facfd4782cfb" containerID="618cfceb67ae402aadfac828e372715348e71b2bceffa2d52373046aba9a6cca" exitCode=0 Feb 03 12:32:34 crc kubenswrapper[4820]: I0203 12:32:33.613203 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Feb 03 12:32:34 crc kubenswrapper[4820]: I0203 12:32:34.085427 4820 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="6ca5fea5-c1a3-4788-adc6-c146025f00f3" podUID="bf76d2b6-6c3a-42e7-b813-63cfcd39bd0e" Feb 03 12:32:34 crc kubenswrapper[4820]: I0203 12:32:34.438746 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="14f188d1-7883-471d-9564-01f405548b98" path="/var/lib/kubelet/pods/14f188d1-7883-471d-9564-01f405548b98/volumes" Feb 03 12:32:34 crc kubenswrapper[4820]: I0203 12:32:34.439804 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-68b4df5bdd-tdb9h" Feb 03 12:32:34 crc kubenswrapper[4820]: I0203 12:32:34.439942 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-68b4df5bdd-tdb9h" Feb 03 12:32:34 crc kubenswrapper[4820]: I0203 12:32:34.439958 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5j76d" event={"ID":"c0fcb4b0-e763-4309-8097-facfd4782cfb","Type":"ContainerDied","Data":"618cfceb67ae402aadfac828e372715348e71b2bceffa2d52373046aba9a6cca"} Feb 03 12:32:35 crc kubenswrapper[4820]: I0203 12:32:35.107944 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Feb 03 12:32:35 crc kubenswrapper[4820]: I0203 12:32:35.123297 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5j76d" event={"ID":"c0fcb4b0-e763-4309-8097-facfd4782cfb","Type":"ContainerDied","Data":"340df6559d1d7752ca27b7f5ec95bcee8aad05e9f71ed615516d3ae7c876804e"} Feb 03 12:32:35 crc kubenswrapper[4820]: I0203 12:32:35.123354 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="340df6559d1d7752ca27b7f5ec95bcee8aad05e9f71ed615516d3ae7c876804e" Feb 03 12:32:35 crc kubenswrapper[4820]: I0203 12:32:35.127867 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"32b101cf-4d79-44f8-a591-dd5c74df5af6","Type":"ContainerStarted","Data":"c50114851701b6f15c2e3ccd17bc3d952dfc5004b557063bf2a4a2bc9bf02280"} Feb 03 12:32:35 crc kubenswrapper[4820]: I0203 12:32:35.134203 4820 generic.go:334] "Generic (PLEG): container finished" podID="156cf9db-e6bb-486e-b3b5-e72d4f99e684" containerID="3e5e5ed8899e8013708b2eab378ba7fdc5527de4f0a8305f9da9e2f6237a1f91" exitCode=0 Feb 03 12:32:35 crc kubenswrapper[4820]: I0203 12:32:35.135173 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8ff956445-pzzpk" event={"ID":"156cf9db-e6bb-486e-b3b5-e72d4f99e684","Type":"ContainerDied","Data":"3e5e5ed8899e8013708b2eab378ba7fdc5527de4f0a8305f9da9e2f6237a1f91"} Feb 03 12:32:35 crc kubenswrapper[4820]: I0203 12:32:35.145841 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Feb 03 12:32:35 crc kubenswrapper[4820]: I0203 12:32:35.191942 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Feb 03 12:32:35 crc kubenswrapper[4820]: I0203 12:32:35.196671 4820 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="6ca5fea5-c1a3-4788-adc6-c146025f00f3" podUID="bf76d2b6-6c3a-42e7-b813-63cfcd39bd0e" Feb 03 12:32:35 crc kubenswrapper[4820]: I0203 12:32:35.226791 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 03 12:32:35 crc kubenswrapper[4820]: I0203 12:32:35.822333 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/6ca5fea5-c1a3-4788-adc6-c146025f00f3-openstack-config-secret\") pod \"6ca5fea5-c1a3-4788-adc6-c146025f00f3\" (UID: \"6ca5fea5-c1a3-4788-adc6-c146025f00f3\") " Feb 03 12:32:35 crc kubenswrapper[4820]: I0203 12:32:35.822523 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pkmfk\" (UniqueName: \"kubernetes.io/projected/6ca5fea5-c1a3-4788-adc6-c146025f00f3-kube-api-access-pkmfk\") pod \"6ca5fea5-c1a3-4788-adc6-c146025f00f3\" (UID: \"6ca5fea5-c1a3-4788-adc6-c146025f00f3\") " Feb 03 12:32:35 crc kubenswrapper[4820]: I0203 12:32:35.822628 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ca5fea5-c1a3-4788-adc6-c146025f00f3-combined-ca-bundle\") pod \"6ca5fea5-c1a3-4788-adc6-c146025f00f3\" (UID: \"6ca5fea5-c1a3-4788-adc6-c146025f00f3\") " Feb 03 12:32:35 crc kubenswrapper[4820]: I0203 12:32:35.822933 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/6ca5fea5-c1a3-4788-adc6-c146025f00f3-openstack-config\") pod \"6ca5fea5-c1a3-4788-adc6-c146025f00f3\" (UID: \"6ca5fea5-c1a3-4788-adc6-c146025f00f3\") " Feb 03 12:32:35 crc kubenswrapper[4820]: I0203 12:32:35.826393 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ca5fea5-c1a3-4788-adc6-c146025f00f3-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "6ca5fea5-c1a3-4788-adc6-c146025f00f3" (UID: "6ca5fea5-c1a3-4788-adc6-c146025f00f3"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:32:35 crc kubenswrapper[4820]: I0203 12:32:35.864305 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5j76d" Feb 03 12:32:35 crc kubenswrapper[4820]: I0203 12:32:35.960616 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ca5fea5-c1a3-4788-adc6-c146025f00f3-kube-api-access-pkmfk" (OuterVolumeSpecName: "kube-api-access-pkmfk") pod "6ca5fea5-c1a3-4788-adc6-c146025f00f3" (UID: "6ca5fea5-c1a3-4788-adc6-c146025f00f3"). InnerVolumeSpecName "kube-api-access-pkmfk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:32:35 crc kubenswrapper[4820]: I0203 12:32:35.981203 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ca5fea5-c1a3-4788-adc6-c146025f00f3-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "6ca5fea5-c1a3-4788-adc6-c146025f00f3" (UID: "6ca5fea5-c1a3-4788-adc6-c146025f00f3"). InnerVolumeSpecName "openstack-config-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:32:35 crc kubenswrapper[4820]: I0203 12:32:35.987160 4820 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/6ca5fea5-c1a3-4788-adc6-c146025f00f3-openstack-config\") on node \"crc\" DevicePath \"\"" Feb 03 12:32:35 crc kubenswrapper[4820]: I0203 12:32:35.987207 4820 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/6ca5fea5-c1a3-4788-adc6-c146025f00f3-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Feb 03 12:32:35 crc kubenswrapper[4820]: I0203 12:32:35.987220 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pkmfk\" (UniqueName: \"kubernetes.io/projected/6ca5fea5-c1a3-4788-adc6-c146025f00f3-kube-api-access-pkmfk\") on node \"crc\" DevicePath \"\"" Feb 03 12:32:35 crc kubenswrapper[4820]: I0203 12:32:35.991256 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ca5fea5-c1a3-4788-adc6-c146025f00f3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6ca5fea5-c1a3-4788-adc6-c146025f00f3" (UID: "6ca5fea5-c1a3-4788-adc6-c146025f00f3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:32:36 crc kubenswrapper[4820]: I0203 12:32:36.094235 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c0fcb4b0-e763-4309-8097-facfd4782cfb-catalog-content\") pod \"c0fcb4b0-e763-4309-8097-facfd4782cfb\" (UID: \"c0fcb4b0-e763-4309-8097-facfd4782cfb\") " Feb 03 12:32:36 crc kubenswrapper[4820]: I0203 12:32:36.094586 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c0fcb4b0-e763-4309-8097-facfd4782cfb-utilities\") pod \"c0fcb4b0-e763-4309-8097-facfd4782cfb\" (UID: \"c0fcb4b0-e763-4309-8097-facfd4782cfb\") " Feb 03 12:32:36 crc kubenswrapper[4820]: I0203 12:32:36.102637 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c0fcb4b0-e763-4309-8097-facfd4782cfb-utilities" (OuterVolumeSpecName: "utilities") pod "c0fcb4b0-e763-4309-8097-facfd4782cfb" (UID: "c0fcb4b0-e763-4309-8097-facfd4782cfb"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 12:32:36 crc kubenswrapper[4820]: I0203 12:32:36.104068 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-64xzl\" (UniqueName: \"kubernetes.io/projected/c0fcb4b0-e763-4309-8097-facfd4782cfb-kube-api-access-64xzl\") pod \"c0fcb4b0-e763-4309-8097-facfd4782cfb\" (UID: \"c0fcb4b0-e763-4309-8097-facfd4782cfb\") " Feb 03 12:32:36 crc kubenswrapper[4820]: I0203 12:32:36.106018 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ca5fea5-c1a3-4788-adc6-c146025f00f3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 12:32:36 crc kubenswrapper[4820]: I0203 12:32:36.106047 4820 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c0fcb4b0-e763-4309-8097-facfd4782cfb-utilities\") on node \"crc\" DevicePath \"\"" Feb 03 12:32:36 crc kubenswrapper[4820]: I0203 12:32:36.125929 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c0fcb4b0-e763-4309-8097-facfd4782cfb-kube-api-access-64xzl" (OuterVolumeSpecName: "kube-api-access-64xzl") pod "c0fcb4b0-e763-4309-8097-facfd4782cfb" (UID: "c0fcb4b0-e763-4309-8097-facfd4782cfb"). InnerVolumeSpecName "kube-api-access-64xzl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:32:36 crc kubenswrapper[4820]: I0203 12:32:36.194821 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c0fcb4b0-e763-4309-8097-facfd4782cfb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c0fcb4b0-e763-4309-8097-facfd4782cfb" (UID: "c0fcb4b0-e763-4309-8097-facfd4782cfb"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 12:32:36 crc kubenswrapper[4820]: I0203 12:32:36.211739 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-64xzl\" (UniqueName: \"kubernetes.io/projected/c0fcb4b0-e763-4309-8097-facfd4782cfb-kube-api-access-64xzl\") on node \"crc\" DevicePath \"\"" Feb 03 12:32:36 crc kubenswrapper[4820]: I0203 12:32:36.211786 4820 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c0fcb4b0-e763-4309-8097-facfd4782cfb-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 03 12:32:36 crc kubenswrapper[4820]: I0203 12:32:36.274202 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Feb 03 12:32:36 crc kubenswrapper[4820]: I0203 12:32:36.278012 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5j76d" Feb 03 12:32:36 crc kubenswrapper[4820]: I0203 12:32:36.278235 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="1de311c0-cca0-4f9f-8897-6c5239e71368" containerName="cinder-scheduler" containerID="cri-o://e7548deb997d2bb6fac8ebc5eb6652761de24b5ace765b2bcfca0c2354fea601" gracePeriod=30 Feb 03 12:32:36 crc kubenswrapper[4820]: I0203 12:32:36.283611 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="1de311c0-cca0-4f9f-8897-6c5239e71368" containerName="probe" containerID="cri-o://4eb97649a133b19e72a5d63d7c682b7aa79ce3621229f868a2c7f02ac7512201" gracePeriod=30 Feb 03 12:32:36 crc kubenswrapper[4820]: I0203 12:32:36.301762 4820 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="6ca5fea5-c1a3-4788-adc6-c146025f00f3" podUID="bf76d2b6-6c3a-42e7-b813-63cfcd39bd0e" Feb 03 12:32:36 crc kubenswrapper[4820]: I0203 12:32:36.497664 4820 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="6ca5fea5-c1a3-4788-adc6-c146025f00f3" podUID="bf76d2b6-6c3a-42e7-b813-63cfcd39bd0e" Feb 03 12:32:36 crc kubenswrapper[4820]: I0203 12:32:36.580111 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5j76d"] Feb 03 12:32:37 crc kubenswrapper[4820]: I0203 12:32:37.529968 4820 scope.go:117] "RemoveContainer" containerID="3cd22393026f08f47789b9666f306edb2325c4bd0fbfd405ee1636bfdfa661d3" Feb 03 12:32:37 crc kubenswrapper[4820]: E0203 12:32:37.530489 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qj7xr_openshift-machine-config-operator(2c02def6-29f2-448e-80ec-0c8ee290f053)\"" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" Feb 03 12:32:37 crc kubenswrapper[4820]: I0203 12:32:37.837737 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ca5fea5-c1a3-4788-adc6-c146025f00f3" path="/var/lib/kubelet/pods/6ca5fea5-c1a3-4788-adc6-c146025f00f3/volumes" Feb 03 12:32:37 crc kubenswrapper[4820]: I0203 12:32:37.838275 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-5j76d"] Feb 03 12:32:37 crc kubenswrapper[4820]: I0203 12:32:37.977084 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-656b464f74-h7xjt" Feb 03 12:32:38 crc kubenswrapper[4820]: I0203 12:32:38.110190 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-8ff956445-pzzpk" Feb 03 12:32:38 crc kubenswrapper[4820]: I0203 12:32:38.267649 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/156cf9db-e6bb-486e-b3b5-e72d4f99e684-internal-tls-certs\") pod \"156cf9db-e6bb-486e-b3b5-e72d4f99e684\" (UID: \"156cf9db-e6bb-486e-b3b5-e72d4f99e684\") " Feb 03 12:32:38 crc kubenswrapper[4820]: I0203 12:32:38.267784 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/156cf9db-e6bb-486e-b3b5-e72d4f99e684-config\") pod \"156cf9db-e6bb-486e-b3b5-e72d4f99e684\" (UID: \"156cf9db-e6bb-486e-b3b5-e72d4f99e684\") " Feb 03 12:32:38 crc kubenswrapper[4820]: I0203 12:32:38.267867 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/156cf9db-e6bb-486e-b3b5-e72d4f99e684-combined-ca-bundle\") pod \"156cf9db-e6bb-486e-b3b5-e72d4f99e684\" (UID: \"156cf9db-e6bb-486e-b3b5-e72d4f99e684\") " Feb 03 12:32:38 crc kubenswrapper[4820]: I0203 12:32:38.268072 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/156cf9db-e6bb-486e-b3b5-e72d4f99e684-ovndb-tls-certs\") pod \"156cf9db-e6bb-486e-b3b5-e72d4f99e684\" (UID: \"156cf9db-e6bb-486e-b3b5-e72d4f99e684\") " Feb 03 12:32:38 crc kubenswrapper[4820]: I0203 12:32:38.268166 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/156cf9db-e6bb-486e-b3b5-e72d4f99e684-public-tls-certs\") pod \"156cf9db-e6bb-486e-b3b5-e72d4f99e684\" (UID: \"156cf9db-e6bb-486e-b3b5-e72d4f99e684\") " Feb 03 12:32:38 crc kubenswrapper[4820]: I0203 12:32:38.268202 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rqz5x\" (UniqueName: \"kubernetes.io/projected/156cf9db-e6bb-486e-b3b5-e72d4f99e684-kube-api-access-rqz5x\") pod \"156cf9db-e6bb-486e-b3b5-e72d4f99e684\" (UID: \"156cf9db-e6bb-486e-b3b5-e72d4f99e684\") " Feb 03 12:32:38 crc kubenswrapper[4820]: I0203 12:32:38.268321 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/156cf9db-e6bb-486e-b3b5-e72d4f99e684-httpd-config\") pod \"156cf9db-e6bb-486e-b3b5-e72d4f99e684\" (UID: \"156cf9db-e6bb-486e-b3b5-e72d4f99e684\") " Feb 03 12:32:38 crc kubenswrapper[4820]: I0203 12:32:38.284572 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-656b464f74-h7xjt" Feb 03 12:32:38 crc kubenswrapper[4820]: I0203 12:32:38.297634 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/156cf9db-e6bb-486e-b3b5-e72d4f99e684-kube-api-access-rqz5x" (OuterVolumeSpecName: "kube-api-access-rqz5x") pod "156cf9db-e6bb-486e-b3b5-e72d4f99e684" (UID: "156cf9db-e6bb-486e-b3b5-e72d4f99e684"). InnerVolumeSpecName "kube-api-access-rqz5x". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:32:38 crc kubenswrapper[4820]: I0203 12:32:38.297780 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/156cf9db-e6bb-486e-b3b5-e72d4f99e684-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "156cf9db-e6bb-486e-b3b5-e72d4f99e684" (UID: "156cf9db-e6bb-486e-b3b5-e72d4f99e684"). 
InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:32:38 crc kubenswrapper[4820]: I0203 12:32:38.377843 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rqz5x\" (UniqueName: \"kubernetes.io/projected/156cf9db-e6bb-486e-b3b5-e72d4f99e684-kube-api-access-rqz5x\") on node \"crc\" DevicePath \"\"" Feb 03 12:32:38 crc kubenswrapper[4820]: I0203 12:32:38.378204 4820 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/156cf9db-e6bb-486e-b3b5-e72d4f99e684-httpd-config\") on node \"crc\" DevicePath \"\"" Feb 03 12:32:38 crc kubenswrapper[4820]: I0203 12:32:38.433557 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 03 12:32:38 crc kubenswrapper[4820]: I0203 12:32:38.589064 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/156cf9db-e6bb-486e-b3b5-e72d4f99e684-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "156cf9db-e6bb-486e-b3b5-e72d4f99e684" (UID: "156cf9db-e6bb-486e-b3b5-e72d4f99e684"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:32:38 crc kubenswrapper[4820]: I0203 12:32:38.591386 4820 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/156cf9db-e6bb-486e-b3b5-e72d4f99e684-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 03 12:32:38 crc kubenswrapper[4820]: I0203 12:32:38.606079 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/156cf9db-e6bb-486e-b3b5-e72d4f99e684-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "156cf9db-e6bb-486e-b3b5-e72d4f99e684" (UID: "156cf9db-e6bb-486e-b3b5-e72d4f99e684"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:32:38 crc kubenswrapper[4820]: I0203 12:32:38.624200 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/156cf9db-e6bb-486e-b3b5-e72d4f99e684-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "156cf9db-e6bb-486e-b3b5-e72d4f99e684" (UID: "156cf9db-e6bb-486e-b3b5-e72d4f99e684"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:32:38 crc kubenswrapper[4820]: I0203 12:32:38.635215 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/156cf9db-e6bb-486e-b3b5-e72d4f99e684-config" (OuterVolumeSpecName: "config") pod "156cf9db-e6bb-486e-b3b5-e72d4f99e684" (UID: "156cf9db-e6bb-486e-b3b5-e72d4f99e684"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:32:38 crc kubenswrapper[4820]: I0203 12:32:38.654506 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/156cf9db-e6bb-486e-b3b5-e72d4f99e684-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "156cf9db-e6bb-486e-b3b5-e72d4f99e684" (UID: "156cf9db-e6bb-486e-b3b5-e72d4f99e684"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:32:38 crc kubenswrapper[4820]: I0203 12:32:38.696847 4820 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/156cf9db-e6bb-486e-b3b5-e72d4f99e684-config\") on node \"crc\" DevicePath \"\"" Feb 03 12:32:38 crc kubenswrapper[4820]: I0203 12:32:38.696909 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/156cf9db-e6bb-486e-b3b5-e72d4f99e684-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 12:32:38 crc kubenswrapper[4820]: I0203 12:32:38.696929 4820 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/156cf9db-e6bb-486e-b3b5-e72d4f99e684-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 03 12:32:38 crc kubenswrapper[4820]: I0203 12:32:38.696941 4820 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/156cf9db-e6bb-486e-b3b5-e72d4f99e684-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 03 12:32:39 crc kubenswrapper[4820]: I0203 12:32:39.150467 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-qjmdv" podUID="ad9b0bbe-7f17-4347-bda3-5f0a843b3997" containerName="registry-server" probeResult="failure" output=< Feb 03 12:32:39 crc kubenswrapper[4820]: timeout: failed to connect service ":50051" within 1s Feb 03 12:32:39 crc kubenswrapper[4820]: > Feb 03 12:32:39 crc kubenswrapper[4820]: I0203 12:32:39.212782 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c0fcb4b0-e763-4309-8097-facfd4782cfb" path="/var/lib/kubelet/pods/c0fcb4b0-e763-4309-8097-facfd4782cfb/volumes" Feb 03 12:32:39 crc kubenswrapper[4820]: I0203 12:32:39.215628 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"32b101cf-4d79-44f8-a591-dd5c74df5af6","Type":"ContainerStarted","Data":"4201b8fab606ac85ee0c2c427694388dd71a26805ab30c157f422cf0f2c658a9"} Feb 03 12:32:39 crc kubenswrapper[4820]: I0203 12:32:39.234634 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"bf76d2b6-6c3a-42e7-b813-63cfcd39bd0e","Type":"ContainerStarted","Data":"091670b0205dfa543e4dc7014745fb64dc3083147a095d33d554ab0ecbe5166b"} Feb 03 12:32:39 crc kubenswrapper[4820]: I0203 12:32:39.246728 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8ff956445-pzzpk" event={"ID":"156cf9db-e6bb-486e-b3b5-e72d4f99e684","Type":"ContainerDied","Data":"a3ee154e29360350211342ac6f930ab89dadcf3959e8c6f158c9b81400f57034"} Feb 03 12:32:39 crc kubenswrapper[4820]: I0203 12:32:39.247134 4820 scope.go:117] "RemoveContainer" containerID="07f53837417fd62c651b388f2bebfa14c05e93cfa77e736364a7637ed8644b12" Feb 03 12:32:39 crc kubenswrapper[4820]: I0203 12:32:39.247386 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-8ff956445-pzzpk" Feb 03 12:32:39 crc kubenswrapper[4820]: I0203 12:32:39.280967 4820 generic.go:334] "Generic (PLEG): container finished" podID="1de311c0-cca0-4f9f-8897-6c5239e71368" containerID="4eb97649a133b19e72a5d63d7c682b7aa79ce3621229f868a2c7f02ac7512201" exitCode=0 Feb 03 12:32:39 crc kubenswrapper[4820]: I0203 12:32:39.282487 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"1de311c0-cca0-4f9f-8897-6c5239e71368","Type":"ContainerDied","Data":"4eb97649a133b19e72a5d63d7c682b7aa79ce3621229f868a2c7f02ac7512201"} Feb 03 12:32:39 crc kubenswrapper[4820]: I0203 12:32:39.316686 4820 scope.go:117] "RemoveContainer" containerID="3e5e5ed8899e8013708b2eab378ba7fdc5527de4f0a8305f9da9e2f6237a1f91" Feb 03 12:32:39 crc kubenswrapper[4820]: I0203 12:32:39.328563 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-8ff956445-pzzpk"] Feb 03 12:32:39 crc kubenswrapper[4820]: I0203 12:32:39.349467 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-8ff956445-pzzpk"] Feb 03 12:32:40 crc kubenswrapper[4820]: I0203 12:32:40.402244 4820 generic.go:334] "Generic (PLEG): container finished" podID="1de311c0-cca0-4f9f-8897-6c5239e71368" containerID="e7548deb997d2bb6fac8ebc5eb6652761de24b5ace765b2bcfca0c2354fea601" exitCode=0 Feb 03 12:32:40 crc kubenswrapper[4820]: I0203 12:32:40.402577 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"1de311c0-cca0-4f9f-8897-6c5239e71368","Type":"ContainerDied","Data":"e7548deb997d2bb6fac8ebc5eb6652761de24b5ace765b2bcfca0c2354fea601"} Feb 03 12:32:41 crc kubenswrapper[4820]: I0203 12:32:41.060097 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 03 12:32:41 crc kubenswrapper[4820]: I0203 12:32:41.090043 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1de311c0-cca0-4f9f-8897-6c5239e71368-etc-machine-id\") pod \"1de311c0-cca0-4f9f-8897-6c5239e71368\" (UID: \"1de311c0-cca0-4f9f-8897-6c5239e71368\") " Feb 03 12:32:41 crc kubenswrapper[4820]: I0203 12:32:41.090104 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1de311c0-cca0-4f9f-8897-6c5239e71368-config-data\") pod \"1de311c0-cca0-4f9f-8897-6c5239e71368\" (UID: \"1de311c0-cca0-4f9f-8897-6c5239e71368\") " Feb 03 12:32:41 crc kubenswrapper[4820]: I0203 12:32:41.090124 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1de311c0-cca0-4f9f-8897-6c5239e71368-config-data-custom\") pod \"1de311c0-cca0-4f9f-8897-6c5239e71368\" (UID: \"1de311c0-cca0-4f9f-8897-6c5239e71368\") " Feb 03 12:32:41 crc kubenswrapper[4820]: I0203 12:32:41.090156 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1de311c0-cca0-4f9f-8897-6c5239e71368-combined-ca-bundle\") pod \"1de311c0-cca0-4f9f-8897-6c5239e71368\" (UID: \"1de311c0-cca0-4f9f-8897-6c5239e71368\") " Feb 03 12:32:41 crc kubenswrapper[4820]: I0203 12:32:41.090174 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1de311c0-cca0-4f9f-8897-6c5239e71368-scripts\") pod \"1de311c0-cca0-4f9f-8897-6c5239e71368\" (UID: \"1de311c0-cca0-4f9f-8897-6c5239e71368\") " Feb 03 12:32:41 crc kubenswrapper[4820]: I0203 12:32:41.090200 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5cvnc\" (UniqueName: \"kubernetes.io/projected/1de311c0-cca0-4f9f-8897-6c5239e71368-kube-api-access-5cvnc\") pod \"1de311c0-cca0-4f9f-8897-6c5239e71368\" (UID: \"1de311c0-cca0-4f9f-8897-6c5239e71368\") " Feb 03 12:32:41 crc kubenswrapper[4820]: I0203 12:32:41.091614 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1de311c0-cca0-4f9f-8897-6c5239e71368-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "1de311c0-cca0-4f9f-8897-6c5239e71368" (UID: "1de311c0-cca0-4f9f-8897-6c5239e71368"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 03 12:32:41 crc kubenswrapper[4820]: I0203 12:32:41.106468 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1de311c0-cca0-4f9f-8897-6c5239e71368-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "1de311c0-cca0-4f9f-8897-6c5239e71368" (UID: "1de311c0-cca0-4f9f-8897-6c5239e71368"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:32:41 crc kubenswrapper[4820]: I0203 12:32:41.106997 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1de311c0-cca0-4f9f-8897-6c5239e71368-kube-api-access-5cvnc" (OuterVolumeSpecName: "kube-api-access-5cvnc") pod "1de311c0-cca0-4f9f-8897-6c5239e71368" (UID: "1de311c0-cca0-4f9f-8897-6c5239e71368"). InnerVolumeSpecName "kube-api-access-5cvnc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:32:41 crc kubenswrapper[4820]: I0203 12:32:41.107176 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1de311c0-cca0-4f9f-8897-6c5239e71368-scripts" (OuterVolumeSpecName: "scripts") pod "1de311c0-cca0-4f9f-8897-6c5239e71368" (UID: "1de311c0-cca0-4f9f-8897-6c5239e71368"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:32:41 crc kubenswrapper[4820]: I0203 12:32:41.187735 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="156cf9db-e6bb-486e-b3b5-e72d4f99e684" path="/var/lib/kubelet/pods/156cf9db-e6bb-486e-b3b5-e72d4f99e684/volumes" Feb 03 12:32:41 crc kubenswrapper[4820]: I0203 12:32:41.196674 4820 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1de311c0-cca0-4f9f-8897-6c5239e71368-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 03 12:32:41 crc kubenswrapper[4820]: I0203 12:32:41.196720 4820 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1de311c0-cca0-4f9f-8897-6c5239e71368-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 03 12:32:41 crc kubenswrapper[4820]: I0203 12:32:41.196730 4820 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1de311c0-cca0-4f9f-8897-6c5239e71368-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 12:32:41 crc kubenswrapper[4820]: I0203 12:32:41.196744 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5cvnc\" (UniqueName: \"kubernetes.io/projected/1de311c0-cca0-4f9f-8897-6c5239e71368-kube-api-access-5cvnc\") on node \"crc\" DevicePath \"\"" Feb 03 12:32:41 crc kubenswrapper[4820]: I0203 12:32:41.264668 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1de311c0-cca0-4f9f-8897-6c5239e71368-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1de311c0-cca0-4f9f-8897-6c5239e71368" (UID: "1de311c0-cca0-4f9f-8897-6c5239e71368"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:32:41 crc kubenswrapper[4820]: I0203 12:32:41.274366 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1de311c0-cca0-4f9f-8897-6c5239e71368-config-data" (OuterVolumeSpecName: "config-data") pod "1de311c0-cca0-4f9f-8897-6c5239e71368" (UID: "1de311c0-cca0-4f9f-8897-6c5239e71368"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:32:41 crc kubenswrapper[4820]: I0203 12:32:41.298942 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1de311c0-cca0-4f9f-8897-6c5239e71368-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 12:32:41 crc kubenswrapper[4820]: I0203 12:32:41.298990 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1de311c0-cca0-4f9f-8897-6c5239e71368-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 12:32:42 crc kubenswrapper[4820]: I0203 12:32:42.404738 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"1de311c0-cca0-4f9f-8897-6c5239e71368","Type":"ContainerDied","Data":"990183735abcc9b14666a8a68013dc64974d5d08a4cb2c8fe56c79e8fdba92b3"} Feb 03 12:32:42 crc kubenswrapper[4820]: I0203 12:32:42.405016 4820 scope.go:117] "RemoveContainer" containerID="4eb97649a133b19e72a5d63d7c682b7aa79ce3621229f868a2c7f02ac7512201" Feb 03 12:32:42 crc kubenswrapper[4820]: I0203 12:32:42.405313 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 03 12:32:42 crc kubenswrapper[4820]: I0203 12:32:42.421972 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"32b101cf-4d79-44f8-a591-dd5c74df5af6","Type":"ContainerStarted","Data":"4227ca3a3d0233c9c7c6b035d1dec4041469d0c81081f2bc60350563c703fa52"} Feb 03 12:32:42 crc kubenswrapper[4820]: I0203 12:32:42.423043 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Feb 03 12:32:42 crc kubenswrapper[4820]: I0203 12:32:42.424361 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 03 12:32:42 crc kubenswrapper[4820]: I0203 12:32:42.485702 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=15.48536783 podStartE2EDuration="15.48536783s" podCreationTimestamp="2026-02-03 12:32:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:32:42.455088882 +0000 UTC m=+1679.978164766" watchObservedRunningTime="2026-02-03 12:32:42.48536783 +0000 UTC m=+1680.008443694" Feb 03 12:32:42 crc kubenswrapper[4820]: I0203 12:32:42.496326 4820 scope.go:117] "RemoveContainer" containerID="e7548deb997d2bb6fac8ebc5eb6652761de24b5ace765b2bcfca0c2354fea601" Feb 03 12:32:42 crc kubenswrapper[4820]: I0203 12:32:42.524664 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 03 12:32:42 crc kubenswrapper[4820]: I0203 12:32:42.544645 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 03 12:32:42 crc kubenswrapper[4820]: I0203 12:32:42.593197 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Feb 03 12:32:42 crc kubenswrapper[4820]: E0203 12:32:42.595057 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="156cf9db-e6bb-486e-b3b5-e72d4f99e684" containerName="neutron-httpd" Feb 03 12:32:42 crc kubenswrapper[4820]: I0203 12:32:42.595089 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="156cf9db-e6bb-486e-b3b5-e72d4f99e684" containerName="neutron-httpd" Feb 03 12:32:42 crc kubenswrapper[4820]: E0203 12:32:42.595105 4820 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="1de311c0-cca0-4f9f-8897-6c5239e71368" containerName="cinder-scheduler" Feb 03 12:32:42 crc kubenswrapper[4820]: I0203 12:32:42.595112 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="1de311c0-cca0-4f9f-8897-6c5239e71368" containerName="cinder-scheduler" Feb 03 12:32:42 crc kubenswrapper[4820]: E0203 12:32:42.595150 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1de311c0-cca0-4f9f-8897-6c5239e71368" containerName="probe" Feb 03 12:32:42 crc kubenswrapper[4820]: I0203 12:32:42.595160 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="1de311c0-cca0-4f9f-8897-6c5239e71368" containerName="probe" Feb 03 12:32:42 crc kubenswrapper[4820]: E0203 12:32:42.595182 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0fcb4b0-e763-4309-8097-facfd4782cfb" containerName="registry-server" Feb 03 12:32:42 crc kubenswrapper[4820]: I0203 12:32:42.595188 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0fcb4b0-e763-4309-8097-facfd4782cfb" containerName="registry-server" Feb 03 12:32:42 crc kubenswrapper[4820]: E0203 12:32:42.595202 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0fcb4b0-e763-4309-8097-facfd4782cfb" containerName="extract-content" Feb 03 12:32:42 crc kubenswrapper[4820]: I0203 12:32:42.595208 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0fcb4b0-e763-4309-8097-facfd4782cfb" containerName="extract-content" Feb 03 12:32:42 crc kubenswrapper[4820]: E0203 12:32:42.595230 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="156cf9db-e6bb-486e-b3b5-e72d4f99e684" containerName="neutron-api" Feb 03 12:32:42 crc kubenswrapper[4820]: I0203 12:32:42.595236 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="156cf9db-e6bb-486e-b3b5-e72d4f99e684" containerName="neutron-api" Feb 03 12:32:42 crc kubenswrapper[4820]: E0203 12:32:42.595253 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0fcb4b0-e763-4309-8097-facfd4782cfb" containerName="extract-utilities" Feb 03 12:32:42 crc kubenswrapper[4820]: I0203 12:32:42.595259 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0fcb4b0-e763-4309-8097-facfd4782cfb" containerName="extract-utilities" Feb 03 12:32:42 crc kubenswrapper[4820]: I0203 12:32:42.595521 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="156cf9db-e6bb-486e-b3b5-e72d4f99e684" containerName="neutron-api" Feb 03 12:32:42 crc kubenswrapper[4820]: I0203 12:32:42.595541 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="1de311c0-cca0-4f9f-8897-6c5239e71368" containerName="probe" Feb 03 12:32:42 crc kubenswrapper[4820]: I0203 12:32:42.595554 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="156cf9db-e6bb-486e-b3b5-e72d4f99e684" containerName="neutron-httpd" Feb 03 12:32:42 crc kubenswrapper[4820]: I0203 12:32:42.595568 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="1de311c0-cca0-4f9f-8897-6c5239e71368" containerName="cinder-scheduler" Feb 03 12:32:42 crc kubenswrapper[4820]: I0203 12:32:42.595582 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0fcb4b0-e763-4309-8097-facfd4782cfb" containerName="registry-server" Feb 03 12:32:42 crc kubenswrapper[4820]: I0203 12:32:42.599242 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 03 12:32:42 crc kubenswrapper[4820]: I0203 12:32:42.602741 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Feb 03 12:32:42 crc kubenswrapper[4820]: I0203 12:32:42.645094 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 03 12:32:42 crc kubenswrapper[4820]: I0203 12:32:42.761560 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2de9875d-8142-41a2-80b3-74a66ef53e07-config-data\") pod \"cinder-scheduler-0\" (UID: \"2de9875d-8142-41a2-80b3-74a66ef53e07\") " pod="openstack/cinder-scheduler-0" Feb 03 12:32:42 crc kubenswrapper[4820]: I0203 12:32:42.761687 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2de9875d-8142-41a2-80b3-74a66ef53e07-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"2de9875d-8142-41a2-80b3-74a66ef53e07\") " pod="openstack/cinder-scheduler-0" Feb 03 12:32:42 crc kubenswrapper[4820]: I0203 12:32:42.761720 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2de9875d-8142-41a2-80b3-74a66ef53e07-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"2de9875d-8142-41a2-80b3-74a66ef53e07\") " pod="openstack/cinder-scheduler-0" Feb 03 12:32:42 crc kubenswrapper[4820]: I0203 12:32:42.761745 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98dxj\" (UniqueName: \"kubernetes.io/projected/2de9875d-8142-41a2-80b3-74a66ef53e07-kube-api-access-98dxj\") pod \"cinder-scheduler-0\" (UID: \"2de9875d-8142-41a2-80b3-74a66ef53e07\") " pod="openstack/cinder-scheduler-0" Feb 03 12:32:42 crc kubenswrapper[4820]: I0203 12:32:42.761884 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2de9875d-8142-41a2-80b3-74a66ef53e07-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"2de9875d-8142-41a2-80b3-74a66ef53e07\") " pod="openstack/cinder-scheduler-0" Feb 03 12:32:42 crc kubenswrapper[4820]: I0203 12:32:42.761921 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2de9875d-8142-41a2-80b3-74a66ef53e07-scripts\") pod \"cinder-scheduler-0\" (UID: \"2de9875d-8142-41a2-80b3-74a66ef53e07\") " pod="openstack/cinder-scheduler-0" Feb 03 12:32:42 crc kubenswrapper[4820]: I0203 12:32:42.863652 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2de9875d-8142-41a2-80b3-74a66ef53e07-scripts\") pod \"cinder-scheduler-0\" (UID: \"2de9875d-8142-41a2-80b3-74a66ef53e07\") " pod="openstack/cinder-scheduler-0" Feb 03 12:32:42 crc kubenswrapper[4820]: I0203 12:32:42.863708 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2de9875d-8142-41a2-80b3-74a66ef53e07-config-data\") pod \"cinder-scheduler-0\" (UID: \"2de9875d-8142-41a2-80b3-74a66ef53e07\") " pod="openstack/cinder-scheduler-0" Feb 03 12:32:42 crc kubenswrapper[4820]: I0203 12:32:42.863828 4820 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2de9875d-8142-41a2-80b3-74a66ef53e07-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"2de9875d-8142-41a2-80b3-74a66ef53e07\") " pod="openstack/cinder-scheduler-0" Feb 03 12:32:42 crc kubenswrapper[4820]: I0203 12:32:42.863867 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2de9875d-8142-41a2-80b3-74a66ef53e07-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"2de9875d-8142-41a2-80b3-74a66ef53e07\") " pod="openstack/cinder-scheduler-0" Feb 03 12:32:42 crc kubenswrapper[4820]: I0203 12:32:42.863915 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-98dxj\" (UniqueName: \"kubernetes.io/projected/2de9875d-8142-41a2-80b3-74a66ef53e07-kube-api-access-98dxj\") pod \"cinder-scheduler-0\" (UID: \"2de9875d-8142-41a2-80b3-74a66ef53e07\") " pod="openstack/cinder-scheduler-0" Feb 03 12:32:42 crc kubenswrapper[4820]: I0203 12:32:42.864048 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2de9875d-8142-41a2-80b3-74a66ef53e07-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"2de9875d-8142-41a2-80b3-74a66ef53e07\") " pod="openstack/cinder-scheduler-0" Feb 03 12:32:42 crc kubenswrapper[4820]: I0203 12:32:42.864148 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2de9875d-8142-41a2-80b3-74a66ef53e07-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"2de9875d-8142-41a2-80b3-74a66ef53e07\") " pod="openstack/cinder-scheduler-0" Feb 03 12:32:42 crc kubenswrapper[4820]: I0203 12:32:42.871386 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2de9875d-8142-41a2-80b3-74a66ef53e07-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"2de9875d-8142-41a2-80b3-74a66ef53e07\") " pod="openstack/cinder-scheduler-0" Feb 03 12:32:42 crc kubenswrapper[4820]: I0203 12:32:42.874081 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2de9875d-8142-41a2-80b3-74a66ef53e07-config-data\") pod \"cinder-scheduler-0\" (UID: \"2de9875d-8142-41a2-80b3-74a66ef53e07\") " pod="openstack/cinder-scheduler-0" Feb 03 12:32:42 crc kubenswrapper[4820]: I0203 12:32:42.875197 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2de9875d-8142-41a2-80b3-74a66ef53e07-scripts\") pod \"cinder-scheduler-0\" (UID: \"2de9875d-8142-41a2-80b3-74a66ef53e07\") " pod="openstack/cinder-scheduler-0" Feb 03 12:32:42 crc kubenswrapper[4820]: I0203 12:32:42.882503 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2de9875d-8142-41a2-80b3-74a66ef53e07-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"2de9875d-8142-41a2-80b3-74a66ef53e07\") " pod="openstack/cinder-scheduler-0" Feb 03 12:32:42 crc kubenswrapper[4820]: I0203 12:32:42.890477 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-98dxj\" (UniqueName: \"kubernetes.io/projected/2de9875d-8142-41a2-80b3-74a66ef53e07-kube-api-access-98dxj\") pod \"cinder-scheduler-0\" (UID: \"2de9875d-8142-41a2-80b3-74a66ef53e07\") " pod="openstack/cinder-scheduler-0" Feb 03 12:32:42 
crc kubenswrapper[4820]: I0203 12:32:42.946735 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 03 12:32:43 crc kubenswrapper[4820]: I0203 12:32:43.146802 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-5fdc8588b4-jtjr8" podUID="17c371f7-f032-4444-8d4b-1183a224c7b0" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.165:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.165:8443: connect: connection refused" Feb 03 12:32:43 crc kubenswrapper[4820]: I0203 12:32:43.206416 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1de311c0-cca0-4f9f-8897-6c5239e71368" path="/var/lib/kubelet/pods/1de311c0-cca0-4f9f-8897-6c5239e71368/volumes" Feb 03 12:32:43 crc kubenswrapper[4820]: I0203 12:32:43.645327 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-68b4df5bdd-tdb9h" podUID="308562dd-6078-4c1c-a4e0-c01a60a2d81d" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.166:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.166:8443: connect: connection refused" Feb 03 12:32:44 crc kubenswrapper[4820]: I0203 12:32:44.196271 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 03 12:32:44 crc kubenswrapper[4820]: W0203 12:32:44.223726 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2de9875d_8142_41a2_80b3_74a66ef53e07.slice/crio-69e3086e5907990fb64d8ef0ff8c88e463b0a5e84e3ffe16aaf68363786dc2e7 WatchSource:0}: Error finding container 69e3086e5907990fb64d8ef0ff8c88e463b0a5e84e3ffe16aaf68363786dc2e7: Status 404 returned error can't find the container with id 69e3086e5907990fb64d8ef0ff8c88e463b0a5e84e3ffe16aaf68363786dc2e7 Feb 03 12:32:44 crc kubenswrapper[4820]: I0203 12:32:44.972302 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"2de9875d-8142-41a2-80b3-74a66ef53e07","Type":"ContainerStarted","Data":"69e3086e5907990fb64d8ef0ff8c88e463b0a5e84e3ffe16aaf68363786dc2e7"} Feb 03 12:32:45 crc kubenswrapper[4820]: I0203 12:32:45.883158 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-7f9964d55c-h2clw" podUID="aef62020-c58e-4de0-b1b3-10fdd2b8dc8d" containerName="neutron-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 03 12:32:45 crc kubenswrapper[4820]: I0203 12:32:45.884401 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/neutron-7f9964d55c-h2clw" podUID="aef62020-c58e-4de0-b1b3-10fdd2b8dc8d" containerName="neutron-api" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 03 12:32:45 crc kubenswrapper[4820]: I0203 12:32:45.895731 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/neutron-7f9964d55c-h2clw" podUID="aef62020-c58e-4de0-b1b3-10fdd2b8dc8d" containerName="neutron-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 03 12:32:46 crc kubenswrapper[4820]: I0203 12:32:46.052939 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"2de9875d-8142-41a2-80b3-74a66ef53e07","Type":"ContainerStarted","Data":"a62d924c2d815748ace2cdf8cfcc246f5cdf25365dcb9be2a2d8a409f59bb6cb"} Feb 03 12:32:47 crc kubenswrapper[4820]: I0203 12:32:47.895262 4820 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openstack-operators/openstack-operator-controller-manager-855575688d-cl9c5" podUID="ffe7d059-602c-4fbc-bd5e-4c092cc6f3db" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.98:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 03 12:32:48 crc kubenswrapper[4820]: I0203 12:32:48.713560 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="hostpath-provisioner/csi-hostpathplugin-wsjsc" podUID="b460558b-ba3e-4543-bb57-debddb0711e7" containerName="hostpath-provisioner" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 03 12:32:48 crc kubenswrapper[4820]: I0203 12:32:48.783723 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"2de9875d-8142-41a2-80b3-74a66ef53e07","Type":"ContainerStarted","Data":"bd233ccec6e651eb196bc117d969a5588d16acf99f7fbdd13dae61a40904e996"} Feb 03 12:32:48 crc kubenswrapper[4820]: I0203 12:32:48.871354 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=6.871329637 podStartE2EDuration="6.871329637s" podCreationTimestamp="2026-02-03 12:32:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:32:48.831238054 +0000 UTC m=+1686.354313938" watchObservedRunningTime="2026-02-03 12:32:48.871329637 +0000 UTC m=+1686.394405531" Feb 03 12:32:49 crc kubenswrapper[4820]: I0203 12:32:49.259265 4820 scope.go:117] "RemoveContainer" containerID="3cd22393026f08f47789b9666f306edb2325c4bd0fbfd405ee1636bfdfa661d3" Feb 03 12:32:49 crc kubenswrapper[4820]: E0203 12:32:49.259853 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qj7xr_openshift-machine-config-operator(2c02def6-29f2-448e-80ec-0c8ee290f053)\"" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" Feb 03 12:32:49 crc kubenswrapper[4820]: I0203 12:32:49.773424 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-qjmdv" podUID="ad9b0bbe-7f17-4347-bda3-5f0a843b3997" containerName="registry-server" probeResult="failure" output=< Feb 03 12:32:49 crc kubenswrapper[4820]: timeout: failed to connect service ":50051" within 1s Feb 03 12:32:49 crc kubenswrapper[4820]: > Feb 03 12:32:50 crc kubenswrapper[4820]: I0203 12:32:50.193566 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-api-0" podUID="32b101cf-4d79-44f8-a591-dd5c74df5af6" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.0.191:8776/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 03 12:32:50 crc kubenswrapper[4820]: I0203 12:32:50.200866 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="32b101cf-4d79-44f8-a591-dd5c74df5af6" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.0.191:8776/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 03 12:32:53 crc kubenswrapper[4820]: I0203 12:32:53.135211 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 03 12:32:53 crc kubenswrapper[4820]: I0203 
12:32:53.158127 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-5fdc8588b4-jtjr8" podUID="17c371f7-f032-4444-8d4b-1183a224c7b0" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.165:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.165:8443: connect: connection refused" Feb 03 12:32:53 crc kubenswrapper[4820]: I0203 12:32:53.258709 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="2de9875d-8142-41a2-80b3-74a66ef53e07" containerName="cinder-scheduler" probeResult="failure" output="Get \"http://10.217.0.194:8080/\": dial tcp 10.217.0.194:8080: connect: connection refused" Feb 03 12:32:53 crc kubenswrapper[4820]: I0203 12:32:53.988678 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-68b4df5bdd-tdb9h" podUID="308562dd-6078-4c1c-a4e0-c01a60a2d81d" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.166:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.166:8443: connect: connection refused" Feb 03 12:32:55 crc kubenswrapper[4820]: I0203 12:32:55.218359 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-api-0" podUID="32b101cf-4d79-44f8-a591-dd5c74df5af6" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.0.191:8776/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 03 12:32:55 crc kubenswrapper[4820]: I0203 12:32:55.218469 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="32b101cf-4d79-44f8-a591-dd5c74df5af6" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.0.191:8776/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 03 12:32:56 crc kubenswrapper[4820]: I0203 12:32:56.792101 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-d8bvg"] Feb 03 12:32:56 crc kubenswrapper[4820]: I0203 12:32:56.796017 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-d8bvg" Feb 03 12:32:56 crc kubenswrapper[4820]: I0203 12:32:56.816626 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-d8bvg"] Feb 03 12:32:57 crc kubenswrapper[4820]: I0203 12:32:57.259239 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jgb9s\" (UniqueName: \"kubernetes.io/projected/7b618165-67ac-4c04-9ccd-5e36c48c7b75-kube-api-access-jgb9s\") pod \"certified-operators-d8bvg\" (UID: \"7b618165-67ac-4c04-9ccd-5e36c48c7b75\") " pod="openshift-marketplace/certified-operators-d8bvg" Feb 03 12:32:57 crc kubenswrapper[4820]: I0203 12:32:57.259314 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b618165-67ac-4c04-9ccd-5e36c48c7b75-catalog-content\") pod \"certified-operators-d8bvg\" (UID: \"7b618165-67ac-4c04-9ccd-5e36c48c7b75\") " pod="openshift-marketplace/certified-operators-d8bvg" Feb 03 12:32:57 crc kubenswrapper[4820]: I0203 12:32:57.259515 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b618165-67ac-4c04-9ccd-5e36c48c7b75-utilities\") pod \"certified-operators-d8bvg\" (UID: \"7b618165-67ac-4c04-9ccd-5e36c48c7b75\") " pod="openshift-marketplace/certified-operators-d8bvg" Feb 03 12:32:57 crc kubenswrapper[4820]: I0203 12:32:57.344367 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-646ccfdf87-kdlkr"] Feb 03 12:32:57 crc kubenswrapper[4820]: I0203 12:32:57.347264 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-646ccfdf87-kdlkr" Feb 03 12:32:57 crc kubenswrapper[4820]: I0203 12:32:57.365410 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Feb 03 12:32:57 crc kubenswrapper[4820]: I0203 12:32:57.365718 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Feb 03 12:32:57 crc kubenswrapper[4820]: I0203 12:32:57.365869 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Feb 03 12:32:57 crc kubenswrapper[4820]: I0203 12:32:57.372695 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-646ccfdf87-kdlkr"] Feb 03 12:32:57 crc kubenswrapper[4820]: I0203 12:32:57.374282 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e530e04a-6fa7-4cc2-be2a-46a26eec64a5-public-tls-certs\") pod \"swift-proxy-646ccfdf87-kdlkr\" (UID: \"e530e04a-6fa7-4cc2-be2a-46a26eec64a5\") " pod="openstack/swift-proxy-646ccfdf87-kdlkr" Feb 03 12:32:57 crc kubenswrapper[4820]: I0203 12:32:57.374328 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e530e04a-6fa7-4cc2-be2a-46a26eec64a5-run-httpd\") pod \"swift-proxy-646ccfdf87-kdlkr\" (UID: \"e530e04a-6fa7-4cc2-be2a-46a26eec64a5\") " pod="openstack/swift-proxy-646ccfdf87-kdlkr" Feb 03 12:32:57 crc kubenswrapper[4820]: I0203 12:32:57.374375 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/7b618165-67ac-4c04-9ccd-5e36c48c7b75-utilities\") pod \"certified-operators-d8bvg\" (UID: \"7b618165-67ac-4c04-9ccd-5e36c48c7b75\") " pod="openshift-marketplace/certified-operators-d8bvg" Feb 03 12:32:57 crc kubenswrapper[4820]: I0203 12:32:57.374428 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s6gtr\" (UniqueName: \"kubernetes.io/projected/e530e04a-6fa7-4cc2-be2a-46a26eec64a5-kube-api-access-s6gtr\") pod \"swift-proxy-646ccfdf87-kdlkr\" (UID: \"e530e04a-6fa7-4cc2-be2a-46a26eec64a5\") " pod="openstack/swift-proxy-646ccfdf87-kdlkr" Feb 03 12:32:57 crc kubenswrapper[4820]: I0203 12:32:57.374576 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e530e04a-6fa7-4cc2-be2a-46a26eec64a5-log-httpd\") pod \"swift-proxy-646ccfdf87-kdlkr\" (UID: \"e530e04a-6fa7-4cc2-be2a-46a26eec64a5\") " pod="openstack/swift-proxy-646ccfdf87-kdlkr" Feb 03 12:32:57 crc kubenswrapper[4820]: I0203 12:32:57.374630 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/e530e04a-6fa7-4cc2-be2a-46a26eec64a5-etc-swift\") pod \"swift-proxy-646ccfdf87-kdlkr\" (UID: \"e530e04a-6fa7-4cc2-be2a-46a26eec64a5\") " pod="openstack/swift-proxy-646ccfdf87-kdlkr" Feb 03 12:32:57 crc kubenswrapper[4820]: I0203 12:32:57.374785 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jgb9s\" (UniqueName: \"kubernetes.io/projected/7b618165-67ac-4c04-9ccd-5e36c48c7b75-kube-api-access-jgb9s\") pod \"certified-operators-d8bvg\" (UID: \"7b618165-67ac-4c04-9ccd-5e36c48c7b75\") " pod="openshift-marketplace/certified-operators-d8bvg" Feb 03 12:32:57 crc kubenswrapper[4820]: I0203 12:32:57.374812 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e530e04a-6fa7-4cc2-be2a-46a26eec64a5-internal-tls-certs\") pod \"swift-proxy-646ccfdf87-kdlkr\" (UID: \"e530e04a-6fa7-4cc2-be2a-46a26eec64a5\") " pod="openstack/swift-proxy-646ccfdf87-kdlkr" Feb 03 12:32:57 crc kubenswrapper[4820]: I0203 12:32:57.374832 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b618165-67ac-4c04-9ccd-5e36c48c7b75-catalog-content\") pod \"certified-operators-d8bvg\" (UID: \"7b618165-67ac-4c04-9ccd-5e36c48c7b75\") " pod="openshift-marketplace/certified-operators-d8bvg" Feb 03 12:32:57 crc kubenswrapper[4820]: I0203 12:32:57.374873 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e530e04a-6fa7-4cc2-be2a-46a26eec64a5-combined-ca-bundle\") pod \"swift-proxy-646ccfdf87-kdlkr\" (UID: \"e530e04a-6fa7-4cc2-be2a-46a26eec64a5\") " pod="openstack/swift-proxy-646ccfdf87-kdlkr" Feb 03 12:32:57 crc kubenswrapper[4820]: I0203 12:32:57.374981 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e530e04a-6fa7-4cc2-be2a-46a26eec64a5-config-data\") pod \"swift-proxy-646ccfdf87-kdlkr\" (UID: \"e530e04a-6fa7-4cc2-be2a-46a26eec64a5\") " pod="openstack/swift-proxy-646ccfdf87-kdlkr" Feb 03 12:32:57 crc kubenswrapper[4820]: I0203 12:32:57.375546 4820 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b618165-67ac-4c04-9ccd-5e36c48c7b75-utilities\") pod \"certified-operators-d8bvg\" (UID: \"7b618165-67ac-4c04-9ccd-5e36c48c7b75\") " pod="openshift-marketplace/certified-operators-d8bvg" Feb 03 12:32:57 crc kubenswrapper[4820]: I0203 12:32:57.377652 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b618165-67ac-4c04-9ccd-5e36c48c7b75-catalog-content\") pod \"certified-operators-d8bvg\" (UID: \"7b618165-67ac-4c04-9ccd-5e36c48c7b75\") " pod="openshift-marketplace/certified-operators-d8bvg" Feb 03 12:32:57 crc kubenswrapper[4820]: I0203 12:32:57.424248 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jgb9s\" (UniqueName: \"kubernetes.io/projected/7b618165-67ac-4c04-9ccd-5e36c48c7b75-kube-api-access-jgb9s\") pod \"certified-operators-d8bvg\" (UID: \"7b618165-67ac-4c04-9ccd-5e36c48c7b75\") " pod="openshift-marketplace/certified-operators-d8bvg" Feb 03 12:32:57 crc kubenswrapper[4820]: I0203 12:32:57.431796 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-d8bvg" Feb 03 12:32:57 crc kubenswrapper[4820]: I0203 12:32:57.487514 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e530e04a-6fa7-4cc2-be2a-46a26eec64a5-log-httpd\") pod \"swift-proxy-646ccfdf87-kdlkr\" (UID: \"e530e04a-6fa7-4cc2-be2a-46a26eec64a5\") " pod="openstack/swift-proxy-646ccfdf87-kdlkr" Feb 03 12:32:57 crc kubenswrapper[4820]: I0203 12:32:57.487922 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/e530e04a-6fa7-4cc2-be2a-46a26eec64a5-etc-swift\") pod \"swift-proxy-646ccfdf87-kdlkr\" (UID: \"e530e04a-6fa7-4cc2-be2a-46a26eec64a5\") " pod="openstack/swift-proxy-646ccfdf87-kdlkr" Feb 03 12:32:57 crc kubenswrapper[4820]: I0203 12:32:57.488083 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e530e04a-6fa7-4cc2-be2a-46a26eec64a5-internal-tls-certs\") pod \"swift-proxy-646ccfdf87-kdlkr\" (UID: \"e530e04a-6fa7-4cc2-be2a-46a26eec64a5\") " pod="openstack/swift-proxy-646ccfdf87-kdlkr" Feb 03 12:32:57 crc kubenswrapper[4820]: I0203 12:32:57.488131 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e530e04a-6fa7-4cc2-be2a-46a26eec64a5-combined-ca-bundle\") pod \"swift-proxy-646ccfdf87-kdlkr\" (UID: \"e530e04a-6fa7-4cc2-be2a-46a26eec64a5\") " pod="openstack/swift-proxy-646ccfdf87-kdlkr" Feb 03 12:32:57 crc kubenswrapper[4820]: I0203 12:32:57.488192 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e530e04a-6fa7-4cc2-be2a-46a26eec64a5-config-data\") pod \"swift-proxy-646ccfdf87-kdlkr\" (UID: \"e530e04a-6fa7-4cc2-be2a-46a26eec64a5\") " pod="openstack/swift-proxy-646ccfdf87-kdlkr" Feb 03 12:32:57 crc kubenswrapper[4820]: I0203 12:32:57.488222 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e530e04a-6fa7-4cc2-be2a-46a26eec64a5-log-httpd\") pod \"swift-proxy-646ccfdf87-kdlkr\" (UID: \"e530e04a-6fa7-4cc2-be2a-46a26eec64a5\") " 
pod="openstack/swift-proxy-646ccfdf87-kdlkr" Feb 03 12:32:57 crc kubenswrapper[4820]: I0203 12:32:57.488323 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e530e04a-6fa7-4cc2-be2a-46a26eec64a5-public-tls-certs\") pod \"swift-proxy-646ccfdf87-kdlkr\" (UID: \"e530e04a-6fa7-4cc2-be2a-46a26eec64a5\") " pod="openstack/swift-proxy-646ccfdf87-kdlkr" Feb 03 12:32:57 crc kubenswrapper[4820]: I0203 12:32:57.488374 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e530e04a-6fa7-4cc2-be2a-46a26eec64a5-run-httpd\") pod \"swift-proxy-646ccfdf87-kdlkr\" (UID: \"e530e04a-6fa7-4cc2-be2a-46a26eec64a5\") " pod="openstack/swift-proxy-646ccfdf87-kdlkr" Feb 03 12:32:57 crc kubenswrapper[4820]: I0203 12:32:57.488524 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s6gtr\" (UniqueName: \"kubernetes.io/projected/e530e04a-6fa7-4cc2-be2a-46a26eec64a5-kube-api-access-s6gtr\") pod \"swift-proxy-646ccfdf87-kdlkr\" (UID: \"e530e04a-6fa7-4cc2-be2a-46a26eec64a5\") " pod="openstack/swift-proxy-646ccfdf87-kdlkr" Feb 03 12:32:57 crc kubenswrapper[4820]: I0203 12:32:57.489622 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e530e04a-6fa7-4cc2-be2a-46a26eec64a5-run-httpd\") pod \"swift-proxy-646ccfdf87-kdlkr\" (UID: \"e530e04a-6fa7-4cc2-be2a-46a26eec64a5\") " pod="openstack/swift-proxy-646ccfdf87-kdlkr" Feb 03 12:32:57 crc kubenswrapper[4820]: I0203 12:32:57.493666 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e530e04a-6fa7-4cc2-be2a-46a26eec64a5-internal-tls-certs\") pod \"swift-proxy-646ccfdf87-kdlkr\" (UID: \"e530e04a-6fa7-4cc2-be2a-46a26eec64a5\") " pod="openstack/swift-proxy-646ccfdf87-kdlkr" Feb 03 12:32:57 crc kubenswrapper[4820]: I0203 12:32:57.504614 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e530e04a-6fa7-4cc2-be2a-46a26eec64a5-config-data\") pod \"swift-proxy-646ccfdf87-kdlkr\" (UID: \"e530e04a-6fa7-4cc2-be2a-46a26eec64a5\") " pod="openstack/swift-proxy-646ccfdf87-kdlkr" Feb 03 12:32:57 crc kubenswrapper[4820]: I0203 12:32:57.506501 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/e530e04a-6fa7-4cc2-be2a-46a26eec64a5-etc-swift\") pod \"swift-proxy-646ccfdf87-kdlkr\" (UID: \"e530e04a-6fa7-4cc2-be2a-46a26eec64a5\") " pod="openstack/swift-proxy-646ccfdf87-kdlkr" Feb 03 12:32:57 crc kubenswrapper[4820]: I0203 12:32:57.514194 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e530e04a-6fa7-4cc2-be2a-46a26eec64a5-combined-ca-bundle\") pod \"swift-proxy-646ccfdf87-kdlkr\" (UID: \"e530e04a-6fa7-4cc2-be2a-46a26eec64a5\") " pod="openstack/swift-proxy-646ccfdf87-kdlkr" Feb 03 12:32:57 crc kubenswrapper[4820]: I0203 12:32:57.517835 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e530e04a-6fa7-4cc2-be2a-46a26eec64a5-public-tls-certs\") pod \"swift-proxy-646ccfdf87-kdlkr\" (UID: \"e530e04a-6fa7-4cc2-be2a-46a26eec64a5\") " pod="openstack/swift-proxy-646ccfdf87-kdlkr" Feb 03 12:32:57 crc kubenswrapper[4820]: I0203 12:32:57.529036 4820 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s6gtr\" (UniqueName: \"kubernetes.io/projected/e530e04a-6fa7-4cc2-be2a-46a26eec64a5-kube-api-access-s6gtr\") pod \"swift-proxy-646ccfdf87-kdlkr\" (UID: \"e530e04a-6fa7-4cc2-be2a-46a26eec64a5\") " pod="openstack/swift-proxy-646ccfdf87-kdlkr" Feb 03 12:32:57 crc kubenswrapper[4820]: I0203 12:32:57.709061 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-646ccfdf87-kdlkr" Feb 03 12:32:57 crc kubenswrapper[4820]: I0203 12:32:57.948818 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="2de9875d-8142-41a2-80b3-74a66ef53e07" containerName="cinder-scheduler" probeResult="failure" output="Get \"http://10.217.0.194:8080/\": dial tcp 10.217.0.194:8080: connect: connection refused" Feb 03 12:32:58 crc kubenswrapper[4820]: I0203 12:32:58.423436 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-qjmdv" podUID="ad9b0bbe-7f17-4347-bda3-5f0a843b3997" containerName="registry-server" probeResult="failure" output=< Feb 03 12:32:58 crc kubenswrapper[4820]: timeout: failed to connect service ":50051" within 1s Feb 03 12:32:58 crc kubenswrapper[4820]: > Feb 03 12:32:59 crc kubenswrapper[4820]: I0203 12:32:59.537052 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-d8bvg"] Feb 03 12:32:59 crc kubenswrapper[4820]: I0203 12:32:59.804926 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Feb 03 12:32:59 crc kubenswrapper[4820]: I0203 12:32:59.818629 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d8bvg" event={"ID":"7b618165-67ac-4c04-9ccd-5e36c48c7b75","Type":"ContainerStarted","Data":"c81647d6735fb0f6c2e632c35d5d2dd2540b742b2a9e70dffa32108578f7093c"} Feb 03 12:32:59 crc kubenswrapper[4820]: I0203 12:32:59.909595 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-646ccfdf87-kdlkr"] Feb 03 12:33:00 crc kubenswrapper[4820]: I0203 12:33:00.911475 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-646ccfdf87-kdlkr" event={"ID":"e530e04a-6fa7-4cc2-be2a-46a26eec64a5","Type":"ContainerStarted","Data":"98591d28b0eb71e2e9640e33baba53242d71386a032924950c6ccd2e45d0929b"} Feb 03 12:33:00 crc kubenswrapper[4820]: I0203 12:33:00.914503 4820 generic.go:334] "Generic (PLEG): container finished" podID="7b618165-67ac-4c04-9ccd-5e36c48c7b75" containerID="b192be8b6941658dab09f8ef7e7430547416dbd8d6c44b705738f8bfe4a09bf1" exitCode=0 Feb 03 12:33:00 crc kubenswrapper[4820]: I0203 12:33:00.914538 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d8bvg" event={"ID":"7b618165-67ac-4c04-9ccd-5e36c48c7b75","Type":"ContainerDied","Data":"b192be8b6941658dab09f8ef7e7430547416dbd8d6c44b705738f8bfe4a09bf1"} Feb 03 12:33:02 crc kubenswrapper[4820]: I0203 12:33:02.087756 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 03 12:33:02 crc kubenswrapper[4820]: I0203 12:33:02.088421 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="48cc94de-c839-4f2b-82a4-afb000afefe4" containerName="ceilometer-central-agent" containerID="cri-o://2652091cd818301487f5bc148fa2be23d295b0b2d44c8c452ce75479cea9bc01" gracePeriod=30 Feb 03 12:33:02 crc kubenswrapper[4820]: I0203 
12:33:02.088959 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="48cc94de-c839-4f2b-82a4-afb000afefe4" containerName="proxy-httpd" containerID="cri-o://6ab1d94771e0b90907febe52bfa187eec587f14af2703385974005308a77374f" gracePeriod=30 Feb 03 12:33:02 crc kubenswrapper[4820]: I0203 12:33:02.089034 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="48cc94de-c839-4f2b-82a4-afb000afefe4" containerName="sg-core" containerID="cri-o://029a5e8d401927e1b2753d9697409b68811d0dbc9da28b220102c8b50000c9a6" gracePeriod=30 Feb 03 12:33:02 crc kubenswrapper[4820]: I0203 12:33:02.089071 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="48cc94de-c839-4f2b-82a4-afb000afefe4" containerName="ceilometer-notification-agent" containerID="cri-o://32a93833397a8ecfb3ecbac5d36f50e8f56e5141fe2e629779d43d46e9671f78" gracePeriod=30 Feb 03 12:33:02 crc kubenswrapper[4820]: I0203 12:33:02.228220 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-646ccfdf87-kdlkr" event={"ID":"e530e04a-6fa7-4cc2-be2a-46a26eec64a5","Type":"ContainerStarted","Data":"feb22a761097012cf9b59f4dc352db31a6ce50a6c37d961e88fcbee95256e83b"} Feb 03 12:33:03 crc kubenswrapper[4820]: I0203 12:33:03.130513 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-5fdc8588b4-jtjr8" podUID="17c371f7-f032-4444-8d4b-1183a224c7b0" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.165:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.165:8443: connect: connection refused" Feb 03 12:33:03 crc kubenswrapper[4820]: I0203 12:33:03.131153 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-5fdc8588b4-jtjr8" Feb 03 12:33:03 crc kubenswrapper[4820]: I0203 12:33:03.132703 4820 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="horizon" containerStatusID={"Type":"cri-o","ID":"258627a9dac8c607d158f7b60718c41b0a56b0d1a371bcf6c8e5e827f34acb59"} pod="openstack/horizon-5fdc8588b4-jtjr8" containerMessage="Container horizon failed startup probe, will be restarted" Feb 03 12:33:03 crc kubenswrapper[4820]: I0203 12:33:03.132794 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-5fdc8588b4-jtjr8" podUID="17c371f7-f032-4444-8d4b-1183a224c7b0" containerName="horizon" containerID="cri-o://258627a9dac8c607d158f7b60718c41b0a56b0d1a371bcf6c8e5e827f34acb59" gracePeriod=30 Feb 03 12:33:03 crc kubenswrapper[4820]: I0203 12:33:03.175977 4820 scope.go:117] "RemoveContainer" containerID="3cd22393026f08f47789b9666f306edb2325c4bd0fbfd405ee1636bfdfa661d3" Feb 03 12:33:03 crc kubenswrapper[4820]: E0203 12:33:03.176254 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qj7xr_openshift-machine-config-operator(2c02def6-29f2-448e-80ec-0c8ee290f053)\"" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" Feb 03 12:33:03 crc kubenswrapper[4820]: I0203 12:33:03.686851 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-68b4df5bdd-tdb9h" podUID="308562dd-6078-4c1c-a4e0-c01a60a2d81d" containerName="horizon" probeResult="failure" output="Get 
\"https://10.217.0.166:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.166:8443: connect: connection refused" Feb 03 12:33:03 crc kubenswrapper[4820]: I0203 12:33:03.687003 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-68b4df5bdd-tdb9h" Feb 03 12:33:03 crc kubenswrapper[4820]: I0203 12:33:03.745698 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d8bvg" event={"ID":"7b618165-67ac-4c04-9ccd-5e36c48c7b75","Type":"ContainerStarted","Data":"76a5d0070436fa815366c35c8fed6f14bebe1d478f6f1f7a9ead7dca3640ce85"} Feb 03 12:33:03 crc kubenswrapper[4820]: I0203 12:33:03.753498 4820 generic.go:334] "Generic (PLEG): container finished" podID="48cc94de-c839-4f2b-82a4-afb000afefe4" containerID="6ab1d94771e0b90907febe52bfa187eec587f14af2703385974005308a77374f" exitCode=0 Feb 03 12:33:03 crc kubenswrapper[4820]: I0203 12:33:03.753544 4820 generic.go:334] "Generic (PLEG): container finished" podID="48cc94de-c839-4f2b-82a4-afb000afefe4" containerID="029a5e8d401927e1b2753d9697409b68811d0dbc9da28b220102c8b50000c9a6" exitCode=2 Feb 03 12:33:03 crc kubenswrapper[4820]: I0203 12:33:03.753615 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"48cc94de-c839-4f2b-82a4-afb000afefe4","Type":"ContainerDied","Data":"6ab1d94771e0b90907febe52bfa187eec587f14af2703385974005308a77374f"} Feb 03 12:33:03 crc kubenswrapper[4820]: I0203 12:33:03.753654 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"48cc94de-c839-4f2b-82a4-afb000afefe4","Type":"ContainerDied","Data":"029a5e8d401927e1b2753d9697409b68811d0dbc9da28b220102c8b50000c9a6"} Feb 03 12:33:03 crc kubenswrapper[4820]: I0203 12:33:03.767975 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-646ccfdf87-kdlkr" event={"ID":"e530e04a-6fa7-4cc2-be2a-46a26eec64a5","Type":"ContainerStarted","Data":"f40e872ea3a90fe7df9d4a1d73d93f70410ba7872c3064c2e5ad1e5acd94b4ac"} Feb 03 12:33:03 crc kubenswrapper[4820]: I0203 12:33:03.768347 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-646ccfdf87-kdlkr" Feb 03 12:33:03 crc kubenswrapper[4820]: I0203 12:33:03.768397 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-646ccfdf87-kdlkr" Feb 03 12:33:03 crc kubenswrapper[4820]: I0203 12:33:03.786160 4820 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="horizon" containerStatusID={"Type":"cri-o","ID":"6ecd1021da966b26d0ebdc213f4c8379ce99f2bdd3ff3973594574161725d11d"} pod="openstack/horizon-68b4df5bdd-tdb9h" containerMessage="Container horizon failed startup probe, will be restarted" Feb 03 12:33:03 crc kubenswrapper[4820]: I0203 12:33:03.786249 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-68b4df5bdd-tdb9h" podUID="308562dd-6078-4c1c-a4e0-c01a60a2d81d" containerName="horizon" containerID="cri-o://6ecd1021da966b26d0ebdc213f4c8379ce99f2bdd3ff3973594574161725d11d" gracePeriod=30 Feb 03 12:33:03 crc kubenswrapper[4820]: I0203 12:33:03.905452 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-646ccfdf87-kdlkr" podStartSLOduration=6.905432945 podStartE2EDuration="6.905432945s" podCreationTimestamp="2026-02-03 12:32:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-02-03 12:33:03.868542766 +0000 UTC m=+1701.391618620" watchObservedRunningTime="2026-02-03 12:33:03.905432945 +0000 UTC m=+1701.428508809" Feb 03 12:33:04 crc kubenswrapper[4820]: I0203 12:33:04.095344 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Feb 03 12:33:05 crc kubenswrapper[4820]: I0203 12:33:05.212049 4820 generic.go:334] "Generic (PLEG): container finished" podID="48cc94de-c839-4f2b-82a4-afb000afefe4" containerID="32a93833397a8ecfb3ecbac5d36f50e8f56e5141fe2e629779d43d46e9671f78" exitCode=0 Feb 03 12:33:05 crc kubenswrapper[4820]: I0203 12:33:05.212386 4820 generic.go:334] "Generic (PLEG): container finished" podID="48cc94de-c839-4f2b-82a4-afb000afefe4" containerID="2652091cd818301487f5bc148fa2be23d295b0b2d44c8c452ce75479cea9bc01" exitCode=0 Feb 03 12:33:05 crc kubenswrapper[4820]: I0203 12:33:05.213454 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"48cc94de-c839-4f2b-82a4-afb000afefe4","Type":"ContainerDied","Data":"32a93833397a8ecfb3ecbac5d36f50e8f56e5141fe2e629779d43d46e9671f78"} Feb 03 12:33:05 crc kubenswrapper[4820]: I0203 12:33:05.213487 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"48cc94de-c839-4f2b-82a4-afb000afefe4","Type":"ContainerDied","Data":"2652091cd818301487f5bc148fa2be23d295b0b2d44c8c452ce75479cea9bc01"} Feb 03 12:33:06 crc kubenswrapper[4820]: I0203 12:33:06.001666 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 03 12:33:06 crc kubenswrapper[4820]: I0203 12:33:06.118988 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48cc94de-c839-4f2b-82a4-afb000afefe4-combined-ca-bundle\") pod \"48cc94de-c839-4f2b-82a4-afb000afefe4\" (UID: \"48cc94de-c839-4f2b-82a4-afb000afefe4\") " Feb 03 12:33:06 crc kubenswrapper[4820]: I0203 12:33:06.119094 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/48cc94de-c839-4f2b-82a4-afb000afefe4-sg-core-conf-yaml\") pod \"48cc94de-c839-4f2b-82a4-afb000afefe4\" (UID: \"48cc94de-c839-4f2b-82a4-afb000afefe4\") " Feb 03 12:33:06 crc kubenswrapper[4820]: I0203 12:33:06.119209 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/48cc94de-c839-4f2b-82a4-afb000afefe4-log-httpd\") pod \"48cc94de-c839-4f2b-82a4-afb000afefe4\" (UID: \"48cc94de-c839-4f2b-82a4-afb000afefe4\") " Feb 03 12:33:06 crc kubenswrapper[4820]: I0203 12:33:06.119276 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jqf5k\" (UniqueName: \"kubernetes.io/projected/48cc94de-c839-4f2b-82a4-afb000afefe4-kube-api-access-jqf5k\") pod \"48cc94de-c839-4f2b-82a4-afb000afefe4\" (UID: \"48cc94de-c839-4f2b-82a4-afb000afefe4\") " Feb 03 12:33:06 crc kubenswrapper[4820]: I0203 12:33:06.119346 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/48cc94de-c839-4f2b-82a4-afb000afefe4-config-data\") pod \"48cc94de-c839-4f2b-82a4-afb000afefe4\" (UID: \"48cc94de-c839-4f2b-82a4-afb000afefe4\") " Feb 03 12:33:06 crc kubenswrapper[4820]: I0203 12:33:06.119381 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/48cc94de-c839-4f2b-82a4-afb000afefe4-scripts\") pod \"48cc94de-c839-4f2b-82a4-afb000afefe4\" (UID: \"48cc94de-c839-4f2b-82a4-afb000afefe4\") " Feb 03 12:33:06 crc kubenswrapper[4820]: I0203 12:33:06.119582 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/48cc94de-c839-4f2b-82a4-afb000afefe4-run-httpd\") pod \"48cc94de-c839-4f2b-82a4-afb000afefe4\" (UID: \"48cc94de-c839-4f2b-82a4-afb000afefe4\") " Feb 03 12:33:06 crc kubenswrapper[4820]: I0203 12:33:06.120618 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/48cc94de-c839-4f2b-82a4-afb000afefe4-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "48cc94de-c839-4f2b-82a4-afb000afefe4" (UID: "48cc94de-c839-4f2b-82a4-afb000afefe4"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 12:33:06 crc kubenswrapper[4820]: I0203 12:33:06.121676 4820 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/48cc94de-c839-4f2b-82a4-afb000afefe4-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 03 12:33:06 crc kubenswrapper[4820]: I0203 12:33:06.127026 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/48cc94de-c839-4f2b-82a4-afb000afefe4-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "48cc94de-c839-4f2b-82a4-afb000afefe4" (UID: "48cc94de-c839-4f2b-82a4-afb000afefe4"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 12:33:06 crc kubenswrapper[4820]: I0203 12:33:06.133438 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48cc94de-c839-4f2b-82a4-afb000afefe4-kube-api-access-jqf5k" (OuterVolumeSpecName: "kube-api-access-jqf5k") pod "48cc94de-c839-4f2b-82a4-afb000afefe4" (UID: "48cc94de-c839-4f2b-82a4-afb000afefe4"). InnerVolumeSpecName "kube-api-access-jqf5k". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:33:06 crc kubenswrapper[4820]: I0203 12:33:06.139416 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48cc94de-c839-4f2b-82a4-afb000afefe4-scripts" (OuterVolumeSpecName: "scripts") pod "48cc94de-c839-4f2b-82a4-afb000afefe4" (UID: "48cc94de-c839-4f2b-82a4-afb000afefe4"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:33:06 crc kubenswrapper[4820]: I0203 12:33:06.460505 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jqf5k\" (UniqueName: \"kubernetes.io/projected/48cc94de-c839-4f2b-82a4-afb000afefe4-kube-api-access-jqf5k\") on node \"crc\" DevicePath \"\"" Feb 03 12:33:06 crc kubenswrapper[4820]: I0203 12:33:06.460550 4820 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/48cc94de-c839-4f2b-82a4-afb000afefe4-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 12:33:06 crc kubenswrapper[4820]: I0203 12:33:06.460565 4820 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/48cc94de-c839-4f2b-82a4-afb000afefe4-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 03 12:33:06 crc kubenswrapper[4820]: I0203 12:33:06.530232 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48cc94de-c839-4f2b-82a4-afb000afefe4-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "48cc94de-c839-4f2b-82a4-afb000afefe4" (UID: "48cc94de-c839-4f2b-82a4-afb000afefe4"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:33:06 crc kubenswrapper[4820]: I0203 12:33:06.541771 4820 generic.go:334] "Generic (PLEG): container finished" podID="7b618165-67ac-4c04-9ccd-5e36c48c7b75" containerID="76a5d0070436fa815366c35c8fed6f14bebe1d478f6f1f7a9ead7dca3640ce85" exitCode=0 Feb 03 12:33:06 crc kubenswrapper[4820]: I0203 12:33:06.541952 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d8bvg" event={"ID":"7b618165-67ac-4c04-9ccd-5e36c48c7b75","Type":"ContainerDied","Data":"76a5d0070436fa815366c35c8fed6f14bebe1d478f6f1f7a9ead7dca3640ce85"} Feb 03 12:33:06 crc kubenswrapper[4820]: I0203 12:33:06.581729 4820 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/48cc94de-c839-4f2b-82a4-afb000afefe4-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 03 12:33:06 crc kubenswrapper[4820]: I0203 12:33:06.583258 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"48cc94de-c839-4f2b-82a4-afb000afefe4","Type":"ContainerDied","Data":"2e9a4fca0ec4a698c4fa2b9d597021a13fa3c2102fda0ad8a3295c173a84d7bb"} Feb 03 12:33:06 crc kubenswrapper[4820]: I0203 12:33:06.583336 4820 scope.go:117] "RemoveContainer" containerID="6ab1d94771e0b90907febe52bfa187eec587f14af2703385974005308a77374f" Feb 03 12:33:06 crc kubenswrapper[4820]: I0203 12:33:06.583640 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 03 12:33:06 crc kubenswrapper[4820]: I0203 12:33:06.685781 4820 scope.go:117] "RemoveContainer" containerID="029a5e8d401927e1b2753d9697409b68811d0dbc9da28b220102c8b50000c9a6" Feb 03 12:33:06 crc kubenswrapper[4820]: I0203 12:33:06.718053 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48cc94de-c839-4f2b-82a4-afb000afefe4-config-data" (OuterVolumeSpecName: "config-data") pod "48cc94de-c839-4f2b-82a4-afb000afefe4" (UID: "48cc94de-c839-4f2b-82a4-afb000afefe4"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:33:06 crc kubenswrapper[4820]: I0203 12:33:06.726211 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48cc94de-c839-4f2b-82a4-afb000afefe4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "48cc94de-c839-4f2b-82a4-afb000afefe4" (UID: "48cc94de-c839-4f2b-82a4-afb000afefe4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:33:06 crc kubenswrapper[4820]: I0203 12:33:06.730733 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/swift-proxy-646ccfdf87-kdlkr" podUID="e530e04a-6fa7-4cc2-be2a-46a26eec64a5" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 03 12:33:06 crc kubenswrapper[4820]: I0203 12:33:06.756179 4820 scope.go:117] "RemoveContainer" containerID="32a93833397a8ecfb3ecbac5d36f50e8f56e5141fe2e629779d43d46e9671f78" Feb 03 12:33:06 crc kubenswrapper[4820]: I0203 12:33:06.788410 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/48cc94de-c839-4f2b-82a4-afb000afefe4-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 12:33:06 crc kubenswrapper[4820]: I0203 12:33:06.788470 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48cc94de-c839-4f2b-82a4-afb000afefe4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 12:33:06 crc kubenswrapper[4820]: I0203 12:33:06.842734 4820 scope.go:117] "RemoveContainer" containerID="2652091cd818301487f5bc148fa2be23d295b0b2d44c8c452ce75479cea9bc01" Feb 03 12:33:07 crc kubenswrapper[4820]: I0203 12:33:07.356320 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 03 12:33:07 crc kubenswrapper[4820]: I0203 12:33:07.403093 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 03 12:33:07 crc kubenswrapper[4820]: I0203 12:33:07.458988 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 03 12:33:07 crc kubenswrapper[4820]: E0203 12:33:07.459762 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48cc94de-c839-4f2b-82a4-afb000afefe4" containerName="proxy-httpd" Feb 03 12:33:07 crc kubenswrapper[4820]: I0203 12:33:07.459795 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="48cc94de-c839-4f2b-82a4-afb000afefe4" containerName="proxy-httpd" Feb 03 12:33:07 crc kubenswrapper[4820]: E0203 12:33:07.459818 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48cc94de-c839-4f2b-82a4-afb000afefe4" containerName="ceilometer-central-agent" Feb 03 12:33:07 crc kubenswrapper[4820]: I0203 12:33:07.459826 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="48cc94de-c839-4f2b-82a4-afb000afefe4" containerName="ceilometer-central-agent" Feb 03 12:33:07 crc kubenswrapper[4820]: E0203 12:33:07.459846 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48cc94de-c839-4f2b-82a4-afb000afefe4" containerName="sg-core" Feb 03 12:33:07 crc kubenswrapper[4820]: I0203 12:33:07.459857 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="48cc94de-c839-4f2b-82a4-afb000afefe4" containerName="sg-core" Feb 03 12:33:07 crc kubenswrapper[4820]: E0203 12:33:07.459870 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48cc94de-c839-4f2b-82a4-afb000afefe4" containerName="ceilometer-notification-agent" Feb 03 
12:33:07 crc kubenswrapper[4820]: I0203 12:33:07.459878 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="48cc94de-c839-4f2b-82a4-afb000afefe4" containerName="ceilometer-notification-agent" Feb 03 12:33:07 crc kubenswrapper[4820]: I0203 12:33:07.460316 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="48cc94de-c839-4f2b-82a4-afb000afefe4" containerName="ceilometer-notification-agent" Feb 03 12:33:07 crc kubenswrapper[4820]: I0203 12:33:07.460342 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="48cc94de-c839-4f2b-82a4-afb000afefe4" containerName="ceilometer-central-agent" Feb 03 12:33:07 crc kubenswrapper[4820]: I0203 12:33:07.460366 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="48cc94de-c839-4f2b-82a4-afb000afefe4" containerName="proxy-httpd" Feb 03 12:33:07 crc kubenswrapper[4820]: I0203 12:33:07.460385 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="48cc94de-c839-4f2b-82a4-afb000afefe4" containerName="sg-core" Feb 03 12:33:07 crc kubenswrapper[4820]: I0203 12:33:07.466991 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 03 12:33:07 crc kubenswrapper[4820]: I0203 12:33:07.473846 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 03 12:33:07 crc kubenswrapper[4820]: I0203 12:33:07.479744 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 03 12:33:07 crc kubenswrapper[4820]: I0203 12:33:07.494328 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 03 12:33:07 crc kubenswrapper[4820]: I0203 12:33:07.583548 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2aa878ef-588c-4a9e-84d2-04c3a2903bc9-config-data\") pod \"ceilometer-0\" (UID: \"2aa878ef-588c-4a9e-84d2-04c3a2903bc9\") " pod="openstack/ceilometer-0" Feb 03 12:33:07 crc kubenswrapper[4820]: I0203 12:33:07.583672 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2aa878ef-588c-4a9e-84d2-04c3a2903bc9-scripts\") pod \"ceilometer-0\" (UID: \"2aa878ef-588c-4a9e-84d2-04c3a2903bc9\") " pod="openstack/ceilometer-0" Feb 03 12:33:07 crc kubenswrapper[4820]: I0203 12:33:07.583714 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lktb2\" (UniqueName: \"kubernetes.io/projected/2aa878ef-588c-4a9e-84d2-04c3a2903bc9-kube-api-access-lktb2\") pod \"ceilometer-0\" (UID: \"2aa878ef-588c-4a9e-84d2-04c3a2903bc9\") " pod="openstack/ceilometer-0" Feb 03 12:33:07 crc kubenswrapper[4820]: I0203 12:33:07.583780 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2aa878ef-588c-4a9e-84d2-04c3a2903bc9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2aa878ef-588c-4a9e-84d2-04c3a2903bc9\") " pod="openstack/ceilometer-0" Feb 03 12:33:07 crc kubenswrapper[4820]: I0203 12:33:07.583872 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2aa878ef-588c-4a9e-84d2-04c3a2903bc9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2aa878ef-588c-4a9e-84d2-04c3a2903bc9\") " pod="openstack/ceilometer-0" Feb 03 
12:33:07 crc kubenswrapper[4820]: I0203 12:33:07.583976 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2aa878ef-588c-4a9e-84d2-04c3a2903bc9-log-httpd\") pod \"ceilometer-0\" (UID: \"2aa878ef-588c-4a9e-84d2-04c3a2903bc9\") " pod="openstack/ceilometer-0" Feb 03 12:33:07 crc kubenswrapper[4820]: I0203 12:33:07.584013 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2aa878ef-588c-4a9e-84d2-04c3a2903bc9-run-httpd\") pod \"ceilometer-0\" (UID: \"2aa878ef-588c-4a9e-84d2-04c3a2903bc9\") " pod="openstack/ceilometer-0" Feb 03 12:33:07 crc kubenswrapper[4820]: I0203 12:33:07.687139 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2aa878ef-588c-4a9e-84d2-04c3a2903bc9-scripts\") pod \"ceilometer-0\" (UID: \"2aa878ef-588c-4a9e-84d2-04c3a2903bc9\") " pod="openstack/ceilometer-0" Feb 03 12:33:07 crc kubenswrapper[4820]: I0203 12:33:07.687205 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lktb2\" (UniqueName: \"kubernetes.io/projected/2aa878ef-588c-4a9e-84d2-04c3a2903bc9-kube-api-access-lktb2\") pod \"ceilometer-0\" (UID: \"2aa878ef-588c-4a9e-84d2-04c3a2903bc9\") " pod="openstack/ceilometer-0" Feb 03 12:33:07 crc kubenswrapper[4820]: I0203 12:33:07.687253 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2aa878ef-588c-4a9e-84d2-04c3a2903bc9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2aa878ef-588c-4a9e-84d2-04c3a2903bc9\") " pod="openstack/ceilometer-0" Feb 03 12:33:07 crc kubenswrapper[4820]: I0203 12:33:07.687349 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2aa878ef-588c-4a9e-84d2-04c3a2903bc9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2aa878ef-588c-4a9e-84d2-04c3a2903bc9\") " pod="openstack/ceilometer-0" Feb 03 12:33:07 crc kubenswrapper[4820]: I0203 12:33:07.687380 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2aa878ef-588c-4a9e-84d2-04c3a2903bc9-log-httpd\") pod \"ceilometer-0\" (UID: \"2aa878ef-588c-4a9e-84d2-04c3a2903bc9\") " pod="openstack/ceilometer-0" Feb 03 12:33:07 crc kubenswrapper[4820]: I0203 12:33:07.687403 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2aa878ef-588c-4a9e-84d2-04c3a2903bc9-run-httpd\") pod \"ceilometer-0\" (UID: \"2aa878ef-588c-4a9e-84d2-04c3a2903bc9\") " pod="openstack/ceilometer-0" Feb 03 12:33:07 crc kubenswrapper[4820]: I0203 12:33:07.687474 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2aa878ef-588c-4a9e-84d2-04c3a2903bc9-config-data\") pod \"ceilometer-0\" (UID: \"2aa878ef-588c-4a9e-84d2-04c3a2903bc9\") " pod="openstack/ceilometer-0" Feb 03 12:33:07 crc kubenswrapper[4820]: I0203 12:33:07.688755 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2aa878ef-588c-4a9e-84d2-04c3a2903bc9-log-httpd\") pod \"ceilometer-0\" (UID: \"2aa878ef-588c-4a9e-84d2-04c3a2903bc9\") " pod="openstack/ceilometer-0" Feb 03 12:33:07 
crc kubenswrapper[4820]: I0203 12:33:07.688815 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2aa878ef-588c-4a9e-84d2-04c3a2903bc9-run-httpd\") pod \"ceilometer-0\" (UID: \"2aa878ef-588c-4a9e-84d2-04c3a2903bc9\") " pod="openstack/ceilometer-0" Feb 03 12:33:07 crc kubenswrapper[4820]: I0203 12:33:07.695711 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2aa878ef-588c-4a9e-84d2-04c3a2903bc9-scripts\") pod \"ceilometer-0\" (UID: \"2aa878ef-588c-4a9e-84d2-04c3a2903bc9\") " pod="openstack/ceilometer-0" Feb 03 12:33:07 crc kubenswrapper[4820]: I0203 12:33:07.696710 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2aa878ef-588c-4a9e-84d2-04c3a2903bc9-config-data\") pod \"ceilometer-0\" (UID: \"2aa878ef-588c-4a9e-84d2-04c3a2903bc9\") " pod="openstack/ceilometer-0" Feb 03 12:33:07 crc kubenswrapper[4820]: I0203 12:33:07.697481 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2aa878ef-588c-4a9e-84d2-04c3a2903bc9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2aa878ef-588c-4a9e-84d2-04c3a2903bc9\") " pod="openstack/ceilometer-0" Feb 03 12:33:07 crc kubenswrapper[4820]: I0203 12:33:07.697952 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2aa878ef-588c-4a9e-84d2-04c3a2903bc9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2aa878ef-588c-4a9e-84d2-04c3a2903bc9\") " pod="openstack/ceilometer-0" Feb 03 12:33:07 crc kubenswrapper[4820]: I0203 12:33:07.712980 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lktb2\" (UniqueName: \"kubernetes.io/projected/2aa878ef-588c-4a9e-84d2-04c3a2903bc9-kube-api-access-lktb2\") pod \"ceilometer-0\" (UID: \"2aa878ef-588c-4a9e-84d2-04c3a2903bc9\") " pod="openstack/ceilometer-0" Feb 03 12:33:07 crc kubenswrapper[4820]: I0203 12:33:07.741967 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-646ccfdf87-kdlkr" podUID="e530e04a-6fa7-4cc2-be2a-46a26eec64a5" containerName="proxy-server" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 03 12:33:07 crc kubenswrapper[4820]: I0203 12:33:07.742507 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-646ccfdf87-kdlkr" podUID="e530e04a-6fa7-4cc2-be2a-46a26eec64a5" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 03 12:33:07 crc kubenswrapper[4820]: I0203 12:33:07.796949 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0"
Feb 03 12:33:09 crc kubenswrapper[4820]: I0203 12:33:09.170168 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-qjmdv" podUID="ad9b0bbe-7f17-4347-bda3-5f0a843b3997" containerName="registry-server" probeResult="failure" output=<
Feb 03 12:33:09 crc kubenswrapper[4820]: timeout: failed to connect service ":50051" within 1s
Feb 03 12:33:09 crc kubenswrapper[4820]: >
Feb 03 12:33:10 crc kubenswrapper[4820]: I0203 12:33:10.243354 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="48cc94de-c839-4f2b-82a4-afb000afefe4" path="/var/lib/kubelet/pods/48cc94de-c839-4f2b-82a4-afb000afefe4/volumes"
Feb 03 12:33:10 crc kubenswrapper[4820]: E0203 12:33:10.246040 4820 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.081s"
Feb 03 12:33:10 crc kubenswrapper[4820]: I0203 12:33:10.246110 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 03 12:33:10 crc kubenswrapper[4820]: I0203 12:33:10.456373 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d8bvg" event={"ID":"7b618165-67ac-4c04-9ccd-5e36c48c7b75","Type":"ContainerStarted","Data":"7da12677698e9a0b53785292061dc4075549e9c6c10f7643a560f36be07e6991"}
Feb 03 12:33:10 crc kubenswrapper[4820]: I0203 12:33:10.501939 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-d8bvg" podStartSLOduration=7.892104034 podStartE2EDuration="14.501912044s" podCreationTimestamp="2026-02-03 12:32:56 +0000 UTC" firstStartedPulling="2026-02-03 12:33:00.91903905 +0000 UTC m=+1698.442114914" lastFinishedPulling="2026-02-03 12:33:07.52884706 +0000 UTC m=+1705.051922924" observedRunningTime="2026-02-03 12:33:10.499483018 +0000 UTC m=+1708.022558882" watchObservedRunningTime="2026-02-03 12:33:10.501912044 +0000 UTC m=+1708.024987928"
Feb 03 12:33:11 crc kubenswrapper[4820]: I0203 12:33:11.887936 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2aa878ef-588c-4a9e-84d2-04c3a2903bc9","Type":"ContainerStarted","Data":"f1bfc37e058e23664dcda550466eb3dfa425da60863642abd78942b4591048d0"}
Feb 03 12:33:12 crc kubenswrapper[4820]: I0203 12:33:12.930879 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-646ccfdf87-kdlkr"
Feb 03 12:33:12 crc kubenswrapper[4820]: I0203 12:33:12.936490 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-646ccfdf87-kdlkr"
Feb 03 12:33:12 crc kubenswrapper[4820]: I0203 12:33:12.955102 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2aa878ef-588c-4a9e-84d2-04c3a2903bc9","Type":"ContainerStarted","Data":"567b9703b666d83d181babc0143b3b05a3af2ca86936f3ede0d8154fe52c2d06"}
Feb 03 12:33:15 crc kubenswrapper[4820]: I0203 12:33:15.142658 4820 scope.go:117] "RemoveContainer" containerID="3cd22393026f08f47789b9666f306edb2325c4bd0fbfd405ee1636bfdfa661d3"
Feb 03 12:33:15 crc kubenswrapper[4820]: E0203 12:33:15.143551 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qj7xr_openshift-machine-config-operator(2c02def6-29f2-448e-80ec-0c8ee290f053)\"" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053"
Feb 03 12:33:15 crc kubenswrapper[4820]: I0203 12:33:15.235629 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-thjnl"]
Feb 03 12:33:15 crc kubenswrapper[4820]: I0203 12:33:15.238706 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-thjnl"
Feb 03 12:33:15 crc kubenswrapper[4820]: I0203 12:33:15.259987 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-thjnl"]
Feb 03 12:33:15 crc kubenswrapper[4820]: I0203 12:33:15.260122 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/476be9fa-ea08-41c6-b804-37c313076dce-operator-scripts\") pod \"nova-api-db-create-thjnl\" (UID: \"476be9fa-ea08-41c6-b804-37c313076dce\") " pod="openstack/nova-api-db-create-thjnl"
Feb 03 12:33:15 crc kubenswrapper[4820]: I0203 12:33:15.260418 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nfq4m\" (UniqueName: \"kubernetes.io/projected/476be9fa-ea08-41c6-b804-37c313076dce-kube-api-access-nfq4m\") pod \"nova-api-db-create-thjnl\" (UID: \"476be9fa-ea08-41c6-b804-37c313076dce\") " pod="openstack/nova-api-db-create-thjnl"
Feb 03 12:33:15 crc kubenswrapper[4820]: I0203 12:33:15.351555 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-trc87"]
Feb 03 12:33:15 crc kubenswrapper[4820]: I0203 12:33:15.353963 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-trc87"
Feb 03 12:33:15 crc kubenswrapper[4820]: I0203 12:33:15.362661 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/353ec1c9-2e22-4116-b0d7-7d215237a58f-operator-scripts\") pod \"nova-cell0-db-create-trc87\" (UID: \"353ec1c9-2e22-4116-b0d7-7d215237a58f\") " pod="openstack/nova-cell0-db-create-trc87"
Feb 03 12:33:15 crc kubenswrapper[4820]: I0203 12:33:15.362793 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/476be9fa-ea08-41c6-b804-37c313076dce-operator-scripts\") pod \"nova-api-db-create-thjnl\" (UID: \"476be9fa-ea08-41c6-b804-37c313076dce\") " pod="openstack/nova-api-db-create-thjnl"
Feb 03 12:33:15 crc kubenswrapper[4820]: I0203 12:33:15.362868 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dl9jl\" (UniqueName: \"kubernetes.io/projected/353ec1c9-2e22-4116-b0d7-7d215237a58f-kube-api-access-dl9jl\") pod \"nova-cell0-db-create-trc87\" (UID: \"353ec1c9-2e22-4116-b0d7-7d215237a58f\") " pod="openstack/nova-cell0-db-create-trc87"
Feb 03 12:33:15 crc kubenswrapper[4820]: I0203 12:33:15.364442 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nfq4m\" (UniqueName: \"kubernetes.io/projected/476be9fa-ea08-41c6-b804-37c313076dce-kube-api-access-nfq4m\") pod \"nova-api-db-create-thjnl\" (UID: \"476be9fa-ea08-41c6-b804-37c313076dce\") " pod="openstack/nova-api-db-create-thjnl"
Feb 03 12:33:15 crc kubenswrapper[4820]: I0203 12:33:15.365670 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/476be9fa-ea08-41c6-b804-37c313076dce-operator-scripts\") pod \"nova-api-db-create-thjnl\" (UID: \"476be9fa-ea08-41c6-b804-37c313076dce\") " pod="openstack/nova-api-db-create-thjnl"
Feb 03 12:33:15 crc kubenswrapper[4820]: I0203 12:33:15.394910 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nfq4m\" (UniqueName: \"kubernetes.io/projected/476be9fa-ea08-41c6-b804-37c313076dce-kube-api-access-nfq4m\") pod \"nova-api-db-create-thjnl\" (UID: \"476be9fa-ea08-41c6-b804-37c313076dce\") " pod="openstack/nova-api-db-create-thjnl"
Feb 03 12:33:15 crc kubenswrapper[4820]: I0203 12:33:15.397225 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-trc87"]
Feb 03 12:33:15 crc kubenswrapper[4820]: I0203 12:33:15.452787 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-7f1d-account-create-update-wjmlt"]
Feb 03 12:33:15 crc kubenswrapper[4820]: I0203 12:33:15.761606 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-thjnl"
Feb 03 12:33:15 crc kubenswrapper[4820]: I0203 12:33:15.762540 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-7f1d-account-create-update-wjmlt"
Feb 03 12:33:15 crc kubenswrapper[4820]: I0203 12:33:15.792509 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dl9jl\" (UniqueName: \"kubernetes.io/projected/353ec1c9-2e22-4116-b0d7-7d215237a58f-kube-api-access-dl9jl\") pod \"nova-cell0-db-create-trc87\" (UID: \"353ec1c9-2e22-4116-b0d7-7d215237a58f\") " pod="openstack/nova-cell0-db-create-trc87"
Feb 03 12:33:15 crc kubenswrapper[4820]: I0203 12:33:15.793550 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/38ac594b-e515-44b4-856f-b57f5f6d5049-operator-scripts\") pod \"nova-api-7f1d-account-create-update-wjmlt\" (UID: \"38ac594b-e515-44b4-856f-b57f5f6d5049\") " pod="openstack/nova-api-7f1d-account-create-update-wjmlt"
Feb 03 12:33:15 crc kubenswrapper[4820]: I0203 12:33:15.793665 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/353ec1c9-2e22-4116-b0d7-7d215237a58f-operator-scripts\") pod \"nova-cell0-db-create-trc87\" (UID: \"353ec1c9-2e22-4116-b0d7-7d215237a58f\") " pod="openstack/nova-cell0-db-create-trc87"
Feb 03 12:33:15 crc kubenswrapper[4820]: I0203 12:33:15.794168 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kwphz\" (UniqueName: \"kubernetes.io/projected/38ac594b-e515-44b4-856f-b57f5f6d5049-kube-api-access-kwphz\") pod \"nova-api-7f1d-account-create-update-wjmlt\" (UID: \"38ac594b-e515-44b4-856f-b57f5f6d5049\") " pod="openstack/nova-api-7f1d-account-create-update-wjmlt"
Feb 03 12:33:15 crc kubenswrapper[4820]: I0203 12:33:15.799767 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret"
Feb 03 12:33:15 crc kubenswrapper[4820]: I0203 12:33:15.809088 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/353ec1c9-2e22-4116-b0d7-7d215237a58f-operator-scripts\") pod \"nova-cell0-db-create-trc87\" (UID: \"353ec1c9-2e22-4116-b0d7-7d215237a58f\") " pod="openstack/nova-cell0-db-create-trc87"
Feb 03 12:33:15 crc kubenswrapper[4820]: I0203 12:33:15.883670 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-7f9964d55c-h2clw"
Feb 03 12:33:15 crc kubenswrapper[4820]: I0203 12:33:15.920992 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-7f1d-account-create-update-wjmlt"]
Feb 03 12:33:15 crc kubenswrapper[4820]: I0203 12:33:15.923960 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kwphz\" (UniqueName: \"kubernetes.io/projected/38ac594b-e515-44b4-856f-b57f5f6d5049-kube-api-access-kwphz\") pod \"nova-api-7f1d-account-create-update-wjmlt\" (UID: \"38ac594b-e515-44b4-856f-b57f5f6d5049\") " pod="openstack/nova-api-7f1d-account-create-update-wjmlt"
Feb 03 12:33:15 crc kubenswrapper[4820]: I0203 12:33:15.926805 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/38ac594b-e515-44b4-856f-b57f5f6d5049-operator-scripts\") pod \"nova-api-7f1d-account-create-update-wjmlt\" (UID: \"38ac594b-e515-44b4-856f-b57f5f6d5049\") " pod="openstack/nova-api-7f1d-account-create-update-wjmlt"
Feb 03 12:33:15 crc kubenswrapper[4820]: I0203 12:33:15.926937 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/38ac594b-e515-44b4-856f-b57f5f6d5049-operator-scripts\") pod \"nova-api-7f1d-account-create-update-wjmlt\" (UID: \"38ac594b-e515-44b4-856f-b57f5f6d5049\") " pod="openstack/nova-api-7f1d-account-create-update-wjmlt"
Feb 03 12:33:15 crc kubenswrapper[4820]: I0203 12:33:15.928071 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dl9jl\" (UniqueName: \"kubernetes.io/projected/353ec1c9-2e22-4116-b0d7-7d215237a58f-kube-api-access-dl9jl\") pod \"nova-cell0-db-create-trc87\" (UID: \"353ec1c9-2e22-4116-b0d7-7d215237a58f\") " pod="openstack/nova-cell0-db-create-trc87"
Feb 03 12:33:16 crc kubenswrapper[4820]: I0203 12:33:16.013405 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kwphz\" (UniqueName: \"kubernetes.io/projected/38ac594b-e515-44b4-856f-b57f5f6d5049-kube-api-access-kwphz\") pod \"nova-api-7f1d-account-create-update-wjmlt\" (UID: \"38ac594b-e515-44b4-856f-b57f5f6d5049\") " pod="openstack/nova-api-7f1d-account-create-update-wjmlt"
Feb 03 12:33:16 crc kubenswrapper[4820]: I0203 12:33:16.088087 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-trc87"
Feb 03 12:33:16 crc kubenswrapper[4820]: I0203 12:33:16.103670 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-knmtw"]
Feb 03 12:33:16 crc kubenswrapper[4820]: I0203 12:33:16.105574 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-knmtw"
Feb 03 12:33:16 crc kubenswrapper[4820]: I0203 12:33:16.134401 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-7f1d-account-create-update-wjmlt"
Feb 03 12:33:16 crc kubenswrapper[4820]: I0203 12:33:16.159953 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-knmtw"]
Feb 03 12:33:16 crc kubenswrapper[4820]: I0203 12:33:16.211545 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-86c8ddbf74-xsj66"]
Feb 03 12:33:16 crc kubenswrapper[4820]: I0203 12:33:16.212141 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-86c8ddbf74-xsj66" podUID="a09e4336-af9d-4231-b744-1373af8ddfba" containerName="neutron-api" containerID="cri-o://5730a5f27ede87ff81d58383f5a5c3644ae00f7510ba2b6e5765006eddf383e2" gracePeriod=30
Feb 03 12:33:16 crc kubenswrapper[4820]: I0203 12:33:16.212843 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-86c8ddbf74-xsj66" podUID="a09e4336-af9d-4231-b744-1373af8ddfba" containerName="neutron-httpd" containerID="cri-o://8584dcb6e35d2654738d30b489990524fc48d4394b4f6fa96273c3d2fe56ca13" gracePeriod=30
Feb 03 12:33:16 crc kubenswrapper[4820]: I0203 12:33:16.237868 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tc4z4\" (UniqueName: \"kubernetes.io/projected/e43cf7d4-e153-434c-a76e-96e2cc27316e-kube-api-access-tc4z4\") pod \"nova-cell1-db-create-knmtw\" (UID: \"e43cf7d4-e153-434c-a76e-96e2cc27316e\") " pod="openstack/nova-cell1-db-create-knmtw"
Feb 03 12:33:16 crc kubenswrapper[4820]: I0203 12:33:16.237994 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e43cf7d4-e153-434c-a76e-96e2cc27316e-operator-scripts\") pod \"nova-cell1-db-create-knmtw\" (UID: \"e43cf7d4-e153-434c-a76e-96e2cc27316e\") " pod="openstack/nova-cell1-db-create-knmtw"
Feb 03 12:33:16 crc kubenswrapper[4820]: I0203 12:33:16.310993 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-6c6a-account-create-update-h45fh"]
Feb 03 12:33:16 crc kubenswrapper[4820]: I0203 12:33:16.320289 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-6c6a-account-create-update-h45fh"
Feb 03 12:33:16 crc kubenswrapper[4820]: I0203 12:33:16.329620 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret"
Feb 03 12:33:16 crc kubenswrapper[4820]: I0203 12:33:16.348234 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-6c6a-account-create-update-h45fh"]
Feb 03 12:33:16 crc kubenswrapper[4820]: I0203 12:33:16.364005 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/34f8614d-0d83-4dc9-80cb-12e0d2672b13-operator-scripts\") pod \"nova-cell0-6c6a-account-create-update-h45fh\" (UID: \"34f8614d-0d83-4dc9-80cb-12e0d2672b13\") " pod="openstack/nova-cell0-6c6a-account-create-update-h45fh"
Feb 03 12:33:16 crc kubenswrapper[4820]: I0203 12:33:16.364087 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tc4z4\" (UniqueName: \"kubernetes.io/projected/e43cf7d4-e153-434c-a76e-96e2cc27316e-kube-api-access-tc4z4\") pod \"nova-cell1-db-create-knmtw\" (UID: \"e43cf7d4-e153-434c-a76e-96e2cc27316e\") " pod="openstack/nova-cell1-db-create-knmtw"
Feb 03 12:33:16 crc kubenswrapper[4820]: I0203 12:33:16.364130 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qdgw2\" (UniqueName: \"kubernetes.io/projected/34f8614d-0d83-4dc9-80cb-12e0d2672b13-kube-api-access-qdgw2\") pod \"nova-cell0-6c6a-account-create-update-h45fh\" (UID: \"34f8614d-0d83-4dc9-80cb-12e0d2672b13\") " pod="openstack/nova-cell0-6c6a-account-create-update-h45fh"
Feb 03 12:33:16 crc kubenswrapper[4820]: I0203 12:33:16.364183 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e43cf7d4-e153-434c-a76e-96e2cc27316e-operator-scripts\") pod \"nova-cell1-db-create-knmtw\" (UID: \"e43cf7d4-e153-434c-a76e-96e2cc27316e\") " pod="openstack/nova-cell1-db-create-knmtw"
Feb 03 12:33:16 crc kubenswrapper[4820]: I0203 12:33:16.365241 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e43cf7d4-e153-434c-a76e-96e2cc27316e-operator-scripts\") pod \"nova-cell1-db-create-knmtw\" (UID: \"e43cf7d4-e153-434c-a76e-96e2cc27316e\") " pod="openstack/nova-cell1-db-create-knmtw"
Feb 03 12:33:16 crc kubenswrapper[4820]: I0203 12:33:16.427796 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tc4z4\" (UniqueName: \"kubernetes.io/projected/e43cf7d4-e153-434c-a76e-96e2cc27316e-kube-api-access-tc4z4\") pod \"nova-cell1-db-create-knmtw\" (UID: \"e43cf7d4-e153-434c-a76e-96e2cc27316e\") " pod="openstack/nova-cell1-db-create-knmtw"
Feb 03 12:33:16 crc kubenswrapper[4820]: I0203 12:33:16.470400 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/34f8614d-0d83-4dc9-80cb-12e0d2672b13-operator-scripts\") pod \"nova-cell0-6c6a-account-create-update-h45fh\" (UID: \"34f8614d-0d83-4dc9-80cb-12e0d2672b13\") " pod="openstack/nova-cell0-6c6a-account-create-update-h45fh"
Feb 03 12:33:16 crc kubenswrapper[4820]: I0203 12:33:16.470786 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qdgw2\" (UniqueName: \"kubernetes.io/projected/34f8614d-0d83-4dc9-80cb-12e0d2672b13-kube-api-access-qdgw2\") pod \"nova-cell0-6c6a-account-create-update-h45fh\" (UID: \"34f8614d-0d83-4dc9-80cb-12e0d2672b13\") " pod="openstack/nova-cell0-6c6a-account-create-update-h45fh"
Feb 03 12:33:16 crc kubenswrapper[4820]: I0203 12:33:16.473496 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/34f8614d-0d83-4dc9-80cb-12e0d2672b13-operator-scripts\") pod \"nova-cell0-6c6a-account-create-update-h45fh\" (UID: \"34f8614d-0d83-4dc9-80cb-12e0d2672b13\") " pod="openstack/nova-cell0-6c6a-account-create-update-h45fh"
Feb 03 12:33:16 crc kubenswrapper[4820]: I0203 12:33:16.499023 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-knmtw"
Feb 03 12:33:16 crc kubenswrapper[4820]: I0203 12:33:16.541246 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-ad72-account-create-update-6ngd9"]
Feb 03 12:33:16 crc kubenswrapper[4820]: I0203 12:33:16.543401 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-ad72-account-create-update-6ngd9"
Feb 03 12:33:16 crc kubenswrapper[4820]: I0203 12:33:16.543642 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qdgw2\" (UniqueName: \"kubernetes.io/projected/34f8614d-0d83-4dc9-80cb-12e0d2672b13-kube-api-access-qdgw2\") pod \"nova-cell0-6c6a-account-create-update-h45fh\" (UID: \"34f8614d-0d83-4dc9-80cb-12e0d2672b13\") " pod="openstack/nova-cell0-6c6a-account-create-update-h45fh"
Feb 03 12:33:16 crc kubenswrapper[4820]: I0203 12:33:16.547202 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-ad72-account-create-update-6ngd9"]
Feb 03 12:33:16 crc kubenswrapper[4820]: I0203 12:33:16.561173 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret"
Feb 03 12:33:16 crc kubenswrapper[4820]: I0203 12:33:16.575469 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7gc9j\" (UniqueName: \"kubernetes.io/projected/b8d524a9-aabb-4d3a-a443-e4de8a5ababc-kube-api-access-7gc9j\") pod \"nova-cell1-ad72-account-create-update-6ngd9\" (UID: \"b8d524a9-aabb-4d3a-a443-e4de8a5ababc\") " pod="openstack/nova-cell1-ad72-account-create-update-6ngd9"
Feb 03 12:33:16 crc kubenswrapper[4820]: I0203 12:33:16.575615 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b8d524a9-aabb-4d3a-a443-e4de8a5ababc-operator-scripts\") pod \"nova-cell1-ad72-account-create-update-6ngd9\" (UID: \"b8d524a9-aabb-4d3a-a443-e4de8a5ababc\") " pod="openstack/nova-cell1-ad72-account-create-update-6ngd9"
Feb 03 12:33:16 crc kubenswrapper[4820]: I0203 12:33:16.677618 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b8d524a9-aabb-4d3a-a443-e4de8a5ababc-operator-scripts\") pod \"nova-cell1-ad72-account-create-update-6ngd9\" (UID: \"b8d524a9-aabb-4d3a-a443-e4de8a5ababc\") " pod="openstack/nova-cell1-ad72-account-create-update-6ngd9"
Feb 03 12:33:16 crc kubenswrapper[4820]: I0203 12:33:16.677810 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7gc9j\" (UniqueName: \"kubernetes.io/projected/b8d524a9-aabb-4d3a-a443-e4de8a5ababc-kube-api-access-7gc9j\") pod \"nova-cell1-ad72-account-create-update-6ngd9\" (UID: \"b8d524a9-aabb-4d3a-a443-e4de8a5ababc\") " pod="openstack/nova-cell1-ad72-account-create-update-6ngd9"
Feb 03 12:33:16 crc kubenswrapper[4820]: I0203 12:33:16.679215 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b8d524a9-aabb-4d3a-a443-e4de8a5ababc-operator-scripts\") pod \"nova-cell1-ad72-account-create-update-6ngd9\" (UID: \"b8d524a9-aabb-4d3a-a443-e4de8a5ababc\") " pod="openstack/nova-cell1-ad72-account-create-update-6ngd9"
Feb 03 12:33:16 crc kubenswrapper[4820]: I0203 12:33:16.713585 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-6c6a-account-create-update-h45fh"
Feb 03 12:33:16 crc kubenswrapper[4820]: I0203 12:33:16.715189 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7gc9j\" (UniqueName: \"kubernetes.io/projected/b8d524a9-aabb-4d3a-a443-e4de8a5ababc-kube-api-access-7gc9j\") pod \"nova-cell1-ad72-account-create-update-6ngd9\" (UID: \"b8d524a9-aabb-4d3a-a443-e4de8a5ababc\") " pod="openstack/nova-cell1-ad72-account-create-update-6ngd9"
Feb 03 12:33:19 crc kubenswrapper[4820]: I0203 12:33:19.208677 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-ad72-account-create-update-6ngd9"
Feb 03 12:33:19 crc kubenswrapper[4820]: I0203 12:33:19.339691 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-manager-855575688d-cl9c5" podUID="ffe7d059-602c-4fbc-bd5e-4c092cc6f3db" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.98:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 03 12:33:19 crc kubenswrapper[4820]: I0203 12:33:19.354606 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-operator-controller-manager-855575688d-cl9c5" podUID="ffe7d059-602c-4fbc-bd5e-4c092cc6f3db" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.98:8081/healthz\": dial tcp 10.217.0.98:8081: i/o timeout (Client.Timeout exceeded while awaiting headers)"
Feb 03 12:33:19 crc kubenswrapper[4820]: I0203 12:33:19.423096 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-d8bvg"
Feb 03 12:33:19 crc kubenswrapper[4820]: I0203 12:33:19.430824 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-d8bvg"
Feb 03 12:33:19 crc kubenswrapper[4820]: I0203 12:33:19.436474 4820 generic.go:334] "Generic (PLEG): container finished" podID="a09e4336-af9d-4231-b744-1373af8ddfba" containerID="8584dcb6e35d2654738d30b489990524fc48d4394b4f6fa96273c3d2fe56ca13" exitCode=0
Feb 03 12:33:19 crc kubenswrapper[4820]: I0203 12:33:19.433877 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-thjnl"]
Feb 03 12:33:19 crc kubenswrapper[4820]: I0203 12:33:19.466121 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-86c8ddbf74-xsj66" event={"ID":"a09e4336-af9d-4231-b744-1373af8ddfba","Type":"ContainerDied","Data":"8584dcb6e35d2654738d30b489990524fc48d4394b4f6fa96273c3d2fe56ca13"}
Feb 03 12:33:19 crc kubenswrapper[4820]: I0203 12:33:19.511243 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2aa878ef-588c-4a9e-84d2-04c3a2903bc9","Type":"ContainerStarted","Data":"3acf3498d008d26d2951bf985a71f485ba7f82273768a0e0d3e9010ebee225ff"}
Feb 03 12:33:20 crc kubenswrapper[4820]: I0203 12:33:20.152046 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-trc87"]
Feb 03 12:33:20 crc kubenswrapper[4820]: I0203 12:33:20.174796 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-7f1d-account-create-update-wjmlt"]
Feb 03 12:33:20 crc kubenswrapper[4820]: I0203 12:33:20.548283 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-d8bvg" podUID="7b618165-67ac-4c04-9ccd-5e36c48c7b75" containerName="registry-server" probeResult="failure" output=<
Feb 03 12:33:20 crc kubenswrapper[4820]: timeout: failed to connect service ":50051" within 1s
Feb 03 12:33:20 crc kubenswrapper[4820]: >
Feb 03 12:33:20 crc kubenswrapper[4820]: I0203 12:33:20.611549 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-ad72-account-create-update-6ngd9"]
Feb 03 12:33:20 crc kubenswrapper[4820]: I0203 12:33:20.632829 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-6c6a-account-create-update-h45fh"]
Feb 03 12:33:20 crc kubenswrapper[4820]: I0203 12:33:20.639946 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-7f1d-account-create-update-wjmlt" event={"ID":"38ac594b-e515-44b4-856f-b57f5f6d5049","Type":"ContainerStarted","Data":"16a44e8e6cddf60d718e4b5adb6d325e4a11abd08ca71120d7b2aa110624fd2d"}
Feb 03 12:33:20 crc kubenswrapper[4820]: I0203 12:33:20.642651 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-thjnl" event={"ID":"476be9fa-ea08-41c6-b804-37c313076dce","Type":"ContainerStarted","Data":"4db4c47b5e1a98eeb2a1357ae0b033af27b825e3a5e5b58cc00e04ba7c418fc8"}
Feb 03 12:33:20 crc kubenswrapper[4820]: I0203 12:33:20.649588 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-trc87" event={"ID":"353ec1c9-2e22-4116-b0d7-7d215237a58f","Type":"ContainerStarted","Data":"f90e57600b9e0a333c9984a19275979df3a79f9be3fcba122671c41fa907e223"}
Feb 03 12:33:20 crc kubenswrapper[4820]: I0203 12:33:20.652101 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-knmtw"]
Feb 03 12:33:20 crc kubenswrapper[4820]: I0203 12:33:20.714168 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-qjmdv" podUID="ad9b0bbe-7f17-4347-bda3-5f0a843b3997" containerName="registry-server" probeResult="failure" output=<
Feb 03 12:33:20 crc kubenswrapper[4820]: timeout: failed to connect service ":50051" within 1s
Feb 03 12:33:20 crc kubenswrapper[4820]: >
Feb 03 12:33:21 crc kubenswrapper[4820]: I0203 12:33:21.732963 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-7f1d-account-create-update-wjmlt" event={"ID":"38ac594b-e515-44b4-856f-b57f5f6d5049","Type":"ContainerStarted","Data":"66aa0e67796be02e8d215b4c5293be1484e4dc2d58ee0866551f95ee84156d46"}
Feb 03 12:33:21 crc kubenswrapper[4820]: I0203 12:33:21.753120 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-6c6a-account-create-update-h45fh" event={"ID":"34f8614d-0d83-4dc9-80cb-12e0d2672b13","Type":"ContainerStarted","Data":"9bbb365c08cc1541d6c05979cc051a2a2a10262cfa641eeec5df032fb46f4bc9"}
Feb 03 12:33:21 crc kubenswrapper[4820]: I0203 12:33:21.759165 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-ad72-account-create-update-6ngd9" event={"ID":"b8d524a9-aabb-4d3a-a443-e4de8a5ababc","Type":"ContainerStarted","Data":"abb3e67edf9dd2511cbfbcf4eec3863719ded8f6ccd0fdcf9b2594f26179cef9"}
Feb 03 12:33:21 crc kubenswrapper[4820]: I0203 12:33:21.761258 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-knmtw" event={"ID":"e43cf7d4-e153-434c-a76e-96e2cc27316e","Type":"ContainerStarted","Data":"0bf7fd09cbcc8ba964b08d375e1b92bdc054cafe48dbbfd8d79c87c32b0d2c0b"}
Feb 03 12:33:21 crc kubenswrapper[4820]: I0203 12:33:21.778027 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-7f1d-account-create-update-wjmlt" podStartSLOduration=6.777999644 podStartE2EDuration="6.777999644s" podCreationTimestamp="2026-02-03 12:33:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:33:21.772594148 +0000 UTC m=+1719.295670022" watchObservedRunningTime="2026-02-03 12:33:21.777999644 +0000 UTC m=+1719.301075518"
Feb 03 12:33:21 crc kubenswrapper[4820]: I0203 12:33:21.781296 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2aa878ef-588c-4a9e-84d2-04c3a2903bc9","Type":"ContainerStarted","Data":"b91410c86ab3eee895dff0e3b0ae0fef3ca9577c31567933c6df50f984ee81a6"}
Feb 03 12:33:21 crc kubenswrapper[4820]: I0203 12:33:21.810379 4820 generic.go:334] "Generic (PLEG): container finished" podID="476be9fa-ea08-41c6-b804-37c313076dce" containerID="982e73684ac883c8859e9043aa66f3a24513cc4cac4c16f08d6b209bc5be7713" exitCode=0
Feb 03 12:33:21 crc kubenswrapper[4820]: I0203 12:33:21.810478 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-thjnl" event={"ID":"476be9fa-ea08-41c6-b804-37c313076dce","Type":"ContainerDied","Data":"982e73684ac883c8859e9043aa66f3a24513cc4cac4c16f08d6b209bc5be7713"}
Feb 03 12:33:21 crc kubenswrapper[4820]: I0203 12:33:21.822263 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-db-create-knmtw" podStartSLOduration=6.822236813 podStartE2EDuration="6.822236813s" podCreationTimestamp="2026-02-03 12:33:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:33:21.815416868 +0000 UTC m=+1719.338492732" watchObservedRunningTime="2026-02-03 12:33:21.822236813 +0000 UTC m=+1719.345312687"
Feb 03 12:33:21 crc kubenswrapper[4820]: I0203 12:33:21.862436 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-trc87" event={"ID":"353ec1c9-2e22-4116-b0d7-7d215237a58f","Type":"ContainerStarted","Data":"48992d5877b5656608d8f408a9e84644b6e3478f3e022688db1fd710cf9340e3"}
Feb 03 12:33:21 crc kubenswrapper[4820]: I0203 12:33:21.923472 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-db-create-trc87" podStartSLOduration=6.923449365 podStartE2EDuration="6.923449365s" podCreationTimestamp="2026-02-03 12:33:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:33:21.90478576 +0000 UTC m=+1719.427861614" watchObservedRunningTime="2026-02-03 12:33:21.923449365 +0000 UTC m=+1719.446525229"
Feb 03 12:33:22 crc kubenswrapper[4820]: I0203 12:33:22.883850 4820 generic.go:334] "Generic (PLEG): container finished" podID="38ac594b-e515-44b4-856f-b57f5f6d5049" containerID="66aa0e67796be02e8d215b4c5293be1484e4dc2d58ee0866551f95ee84156d46" exitCode=0
Feb 03 12:33:22 crc kubenswrapper[4820]: I0203 12:33:22.883986 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-7f1d-account-create-update-wjmlt" event={"ID":"38ac594b-e515-44b4-856f-b57f5f6d5049","Type":"ContainerDied","Data":"66aa0e67796be02e8d215b4c5293be1484e4dc2d58ee0866551f95ee84156d46"}
Feb 03 12:33:22 crc kubenswrapper[4820]: I0203 12:33:22.891583 4820 generic.go:334] "Generic (PLEG): container finished" podID="34f8614d-0d83-4dc9-80cb-12e0d2672b13" containerID="dc0725989584c1def9cf3a5c11d2f816571c066143ababc3fe22badb7117401e" exitCode=0
Feb 03 12:33:22 crc kubenswrapper[4820]: I0203 12:33:22.891718 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-6c6a-account-create-update-h45fh" event={"ID":"34f8614d-0d83-4dc9-80cb-12e0d2672b13","Type":"ContainerDied","Data":"dc0725989584c1def9cf3a5c11d2f816571c066143ababc3fe22badb7117401e"}
Feb 03 12:33:22 crc kubenswrapper[4820]: I0203 12:33:22.897299 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-ad72-account-create-update-6ngd9" event={"ID":"b8d524a9-aabb-4d3a-a443-e4de8a5ababc","Type":"ContainerStarted","Data":"6be7235e46990b33d87d4a223f1f2835db0e767cc5efd18fdf5e16c86908905c"}
Feb 03 12:33:22 crc kubenswrapper[4820]: I0203 12:33:22.908556 4820 generic.go:334] "Generic (PLEG): container finished" podID="e43cf7d4-e153-434c-a76e-96e2cc27316e" containerID="ac4dedfd5518233f049989a6c43d31fa77608ad03f38d183d70a2195be319f0c" exitCode=0
Feb 03 12:33:22 crc kubenswrapper[4820]: I0203 12:33:22.908670 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-knmtw" event={"ID":"e43cf7d4-e153-434c-a76e-96e2cc27316e","Type":"ContainerDied","Data":"ac4dedfd5518233f049989a6c43d31fa77608ad03f38d183d70a2195be319f0c"}
Feb 03 12:33:22 crc kubenswrapper[4820]: I0203 12:33:22.914259 4820 generic.go:334] "Generic (PLEG): container finished" podID="353ec1c9-2e22-4116-b0d7-7d215237a58f" containerID="48992d5877b5656608d8f408a9e84644b6e3478f3e022688db1fd710cf9340e3" exitCode=0
Feb 03 12:33:22 crc kubenswrapper[4820]: I0203 12:33:22.914415 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-trc87" event={"ID":"353ec1c9-2e22-4116-b0d7-7d215237a58f","Type":"ContainerDied","Data":"48992d5877b5656608d8f408a9e84644b6e3478f3e022688db1fd710cf9340e3"}
Feb 03 12:33:23 crc kubenswrapper[4820]: I0203 12:33:23.055954 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-ad72-account-create-update-6ngd9" podStartSLOduration=7.055926339 podStartE2EDuration="7.055926339s" podCreationTimestamp="2026-02-03 12:33:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:33:22.969265381 +0000 UTC m=+1720.492341265" watchObservedRunningTime="2026-02-03 12:33:23.055926339 +0000 UTC m=+1720.579002203"
Feb 03 12:33:23 crc kubenswrapper[4820]: I0203 12:33:23.709555 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-thjnl"
Feb 03 12:33:23 crc kubenswrapper[4820]: I0203 12:33:23.866130 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nfq4m\" (UniqueName: \"kubernetes.io/projected/476be9fa-ea08-41c6-b804-37c313076dce-kube-api-access-nfq4m\") pod \"476be9fa-ea08-41c6-b804-37c313076dce\" (UID: \"476be9fa-ea08-41c6-b804-37c313076dce\") "
Feb 03 12:33:23 crc kubenswrapper[4820]: I0203 12:33:23.866263 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/476be9fa-ea08-41c6-b804-37c313076dce-operator-scripts\") pod \"476be9fa-ea08-41c6-b804-37c313076dce\" (UID: \"476be9fa-ea08-41c6-b804-37c313076dce\") "
Feb 03 12:33:23 crc kubenswrapper[4820]: I0203 12:33:23.869958 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/476be9fa-ea08-41c6-b804-37c313076dce-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "476be9fa-ea08-41c6-b804-37c313076dce" (UID: "476be9fa-ea08-41c6-b804-37c313076dce"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 03 12:33:23 crc kubenswrapper[4820]: I0203 12:33:23.886349 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/476be9fa-ea08-41c6-b804-37c313076dce-kube-api-access-nfq4m" (OuterVolumeSpecName: "kube-api-access-nfq4m") pod "476be9fa-ea08-41c6-b804-37c313076dce" (UID: "476be9fa-ea08-41c6-b804-37c313076dce"). InnerVolumeSpecName "kube-api-access-nfq4m". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 12:33:23 crc kubenswrapper[4820]: I0203 12:33:23.941701 4820 generic.go:334] "Generic (PLEG): container finished" podID="b8d524a9-aabb-4d3a-a443-e4de8a5ababc" containerID="6be7235e46990b33d87d4a223f1f2835db0e767cc5efd18fdf5e16c86908905c" exitCode=0
Feb 03 12:33:23 crc kubenswrapper[4820]: I0203 12:33:23.941845 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-ad72-account-create-update-6ngd9" event={"ID":"b8d524a9-aabb-4d3a-a443-e4de8a5ababc","Type":"ContainerDied","Data":"6be7235e46990b33d87d4a223f1f2835db0e767cc5efd18fdf5e16c86908905c"}
Feb 03 12:33:23 crc kubenswrapper[4820]: I0203 12:33:23.947059 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2aa878ef-588c-4a9e-84d2-04c3a2903bc9","Type":"ContainerStarted","Data":"90b9aa58bafbba2cba2474839adfca38905d9857392c71b96330daafcb54ecfd"}
Feb 03 12:33:23 crc kubenswrapper[4820]: I0203 12:33:23.948358 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Feb 03 12:33:23 crc kubenswrapper[4820]: I0203 12:33:23.963982 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-thjnl"
Feb 03 12:33:23 crc kubenswrapper[4820]: I0203 12:33:23.965320 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-thjnl" event={"ID":"476be9fa-ea08-41c6-b804-37c313076dce","Type":"ContainerDied","Data":"4db4c47b5e1a98eeb2a1357ae0b033af27b825e3a5e5b58cc00e04ba7c418fc8"}
Feb 03 12:33:23 crc kubenswrapper[4820]: I0203 12:33:23.965527 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4db4c47b5e1a98eeb2a1357ae0b033af27b825e3a5e5b58cc00e04ba7c418fc8"
Feb 03 12:33:23 crc kubenswrapper[4820]: I0203 12:33:23.977113 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nfq4m\" (UniqueName: \"kubernetes.io/projected/476be9fa-ea08-41c6-b804-37c313076dce-kube-api-access-nfq4m\") on node \"crc\" DevicePath \"\""
Feb 03 12:33:23 crc kubenswrapper[4820]: I0203 12:33:23.977154 4820 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/476be9fa-ea08-41c6-b804-37c313076dce-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 03 12:33:24 crc kubenswrapper[4820]: I0203 12:33:24.037765 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=4.312273661 podStartE2EDuration="17.037732851s" podCreationTimestamp="2026-02-03 12:33:07 +0000 UTC" firstStartedPulling="2026-02-03 12:33:10.498945594 +0000 UTC m=+1708.022021458" lastFinishedPulling="2026-02-03 12:33:23.224404784 +0000 UTC m=+1720.747480648" observedRunningTime="2026-02-03 12:33:24.012343193 +0000 UTC m=+1721.535419067" watchObservedRunningTime="2026-02-03 12:33:24.037732851 +0000 UTC m=+1721.560808725"
Feb 03 12:33:24 crc kubenswrapper[4820]: E0203 12:33:24.894027 4820 kubelet_node_status.go:756] "Failed to set some node status fields" err="failed to validate nodeIP: route ip+net: no such network interface" node="crc"
Feb 03 12:33:24 crc kubenswrapper[4820]: I0203 12:33:24.910282 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-trc87"
Feb 03 12:33:24 crc kubenswrapper[4820]: I0203 12:33:24.990327 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/353ec1c9-2e22-4116-b0d7-7d215237a58f-operator-scripts\") pod \"353ec1c9-2e22-4116-b0d7-7d215237a58f\" (UID: \"353ec1c9-2e22-4116-b0d7-7d215237a58f\") "
Feb 03 12:33:24 crc kubenswrapper[4820]: I0203 12:33:24.990904 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dl9jl\" (UniqueName: \"kubernetes.io/projected/353ec1c9-2e22-4116-b0d7-7d215237a58f-kube-api-access-dl9jl\") pod \"353ec1c9-2e22-4116-b0d7-7d215237a58f\" (UID: \"353ec1c9-2e22-4116-b0d7-7d215237a58f\") "
Feb 03 12:33:24 crc kubenswrapper[4820]: I0203 12:33:24.997223 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/353ec1c9-2e22-4116-b0d7-7d215237a58f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "353ec1c9-2e22-4116-b0d7-7d215237a58f" (UID: "353ec1c9-2e22-4116-b0d7-7d215237a58f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 03 12:33:25 crc kubenswrapper[4820]: I0203 12:33:25.023555 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/353ec1c9-2e22-4116-b0d7-7d215237a58f-kube-api-access-dl9jl" (OuterVolumeSpecName: "kube-api-access-dl9jl") pod "353ec1c9-2e22-4116-b0d7-7d215237a58f" (UID: "353ec1c9-2e22-4116-b0d7-7d215237a58f"). InnerVolumeSpecName "kube-api-access-dl9jl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 12:33:25 crc kubenswrapper[4820]: I0203 12:33:25.068339 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-trc87"
Feb 03 12:33:25 crc kubenswrapper[4820]: I0203 12:33:25.068744 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-trc87" event={"ID":"353ec1c9-2e22-4116-b0d7-7d215237a58f","Type":"ContainerDied","Data":"f90e57600b9e0a333c9984a19275979df3a79f9be3fcba122671c41fa907e223"}
Feb 03 12:33:25 crc kubenswrapper[4820]: I0203 12:33:25.068786 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f90e57600b9e0a333c9984a19275979df3a79f9be3fcba122671c41fa907e223"
Feb 03 12:33:25 crc kubenswrapper[4820]: I0203 12:33:25.096690 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dl9jl\" (UniqueName: \"kubernetes.io/projected/353ec1c9-2e22-4116-b0d7-7d215237a58f-kube-api-access-dl9jl\") on node \"crc\" DevicePath \"\""
Feb 03 12:33:25 crc kubenswrapper[4820]: I0203 12:33:25.096743 4820 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/353ec1c9-2e22-4116-b0d7-7d215237a58f-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 03 12:33:25 crc kubenswrapper[4820]: I0203 12:33:25.587275 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-7f1d-account-create-update-wjmlt"
Feb 03 12:33:25 crc kubenswrapper[4820]: I0203 12:33:25.601363 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-knmtw"
Feb 03 12:33:25 crc kubenswrapper[4820]: I0203 12:33:25.608591 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-6c6a-account-create-update-h45fh"
Feb 03 12:33:25 crc kubenswrapper[4820]: I0203 12:33:25.781689 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tc4z4\" (UniqueName: \"kubernetes.io/projected/e43cf7d4-e153-434c-a76e-96e2cc27316e-kube-api-access-tc4z4\") pod \"e43cf7d4-e153-434c-a76e-96e2cc27316e\" (UID: \"e43cf7d4-e153-434c-a76e-96e2cc27316e\") "
Feb 03 12:33:25 crc kubenswrapper[4820]: I0203 12:33:25.782340 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e43cf7d4-e153-434c-a76e-96e2cc27316e-operator-scripts\") pod \"e43cf7d4-e153-434c-a76e-96e2cc27316e\" (UID: \"e43cf7d4-e153-434c-a76e-96e2cc27316e\") "
Feb 03 12:33:25 crc kubenswrapper[4820]: I0203 12:33:25.782397 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kwphz\" (UniqueName: \"kubernetes.io/projected/38ac594b-e515-44b4-856f-b57f5f6d5049-kube-api-access-kwphz\") pod \"38ac594b-e515-44b4-856f-b57f5f6d5049\" (UID: \"38ac594b-e515-44b4-856f-b57f5f6d5049\") "
Feb 03 12:33:25 crc kubenswrapper[4820]: I0203 12:33:25.782443 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/38ac594b-e515-44b4-856f-b57f5f6d5049-operator-scripts\") pod \"38ac594b-e515-44b4-856f-b57f5f6d5049\" (UID: \"38ac594b-e515-44b4-856f-b57f5f6d5049\") "
Feb 03 12:33:25 crc kubenswrapper[4820]: I0203 12:33:25.782534 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/34f8614d-0d83-4dc9-80cb-12e0d2672b13-operator-scripts\") pod \"34f8614d-0d83-4dc9-80cb-12e0d2672b13\" (UID: \"34f8614d-0d83-4dc9-80cb-12e0d2672b13\") "
Feb 03 12:33:25 crc kubenswrapper[4820]: I0203 12:33:25.782595 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qdgw2\" (UniqueName: \"kubernetes.io/projected/34f8614d-0d83-4dc9-80cb-12e0d2672b13-kube-api-access-qdgw2\") pod \"34f8614d-0d83-4dc9-80cb-12e0d2672b13\" (UID: \"34f8614d-0d83-4dc9-80cb-12e0d2672b13\") "
Feb 03 12:33:25 crc kubenswrapper[4820]: I0203 12:33:25.789273 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/38ac594b-e515-44b4-856f-b57f5f6d5049-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "38ac594b-e515-44b4-856f-b57f5f6d5049" (UID: "38ac594b-e515-44b4-856f-b57f5f6d5049"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 03 12:33:25 crc kubenswrapper[4820]: I0203 12:33:25.791351 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e43cf7d4-e153-434c-a76e-96e2cc27316e-kube-api-access-tc4z4" (OuterVolumeSpecName: "kube-api-access-tc4z4") pod "e43cf7d4-e153-434c-a76e-96e2cc27316e" (UID: "e43cf7d4-e153-434c-a76e-96e2cc27316e"). InnerVolumeSpecName "kube-api-access-tc4z4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 12:33:25 crc kubenswrapper[4820]: I0203 12:33:25.791972 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e43cf7d4-e153-434c-a76e-96e2cc27316e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e43cf7d4-e153-434c-a76e-96e2cc27316e" (UID: "e43cf7d4-e153-434c-a76e-96e2cc27316e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 03 12:33:25 crc kubenswrapper[4820]: I0203 12:33:25.793880 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34f8614d-0d83-4dc9-80cb-12e0d2672b13-kube-api-access-qdgw2" (OuterVolumeSpecName: "kube-api-access-qdgw2") pod "34f8614d-0d83-4dc9-80cb-12e0d2672b13" (UID: "34f8614d-0d83-4dc9-80cb-12e0d2672b13"). InnerVolumeSpecName "kube-api-access-qdgw2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 12:33:25 crc kubenswrapper[4820]: I0203 12:33:25.794120 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/34f8614d-0d83-4dc9-80cb-12e0d2672b13-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "34f8614d-0d83-4dc9-80cb-12e0d2672b13" (UID: "34f8614d-0d83-4dc9-80cb-12e0d2672b13"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 03 12:33:25 crc kubenswrapper[4820]: I0203 12:33:25.809748 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38ac594b-e515-44b4-856f-b57f5f6d5049-kube-api-access-kwphz" (OuterVolumeSpecName: "kube-api-access-kwphz") pod "38ac594b-e515-44b4-856f-b57f5f6d5049" (UID: "38ac594b-e515-44b4-856f-b57f5f6d5049"). InnerVolumeSpecName "kube-api-access-kwphz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 12:33:25 crc kubenswrapper[4820]: I0203 12:33:25.896019 4820 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e43cf7d4-e153-434c-a76e-96e2cc27316e-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 03 12:33:25 crc kubenswrapper[4820]: I0203 12:33:25.896089 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kwphz\" (UniqueName: \"kubernetes.io/projected/38ac594b-e515-44b4-856f-b57f5f6d5049-kube-api-access-kwphz\") on node \"crc\" DevicePath \"\""
Feb 03 12:33:25 crc kubenswrapper[4820]: I0203 12:33:25.896107 4820 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/38ac594b-e515-44b4-856f-b57f5f6d5049-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 03 12:33:25 crc kubenswrapper[4820]: I0203 12:33:25.896123 4820 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/34f8614d-0d83-4dc9-80cb-12e0d2672b13-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 03 12:33:25 crc kubenswrapper[4820]: I0203 12:33:25.896138 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qdgw2\" (UniqueName: \"kubernetes.io/projected/34f8614d-0d83-4dc9-80cb-12e0d2672b13-kube-api-access-qdgw2\") on node \"crc\" DevicePath \"\""
Feb 03 12:33:25 crc kubenswrapper[4820]: I0203 12:33:25.896151 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tc4z4\" (UniqueName: \"kubernetes.io/projected/e43cf7d4-e153-434c-a76e-96e2cc27316e-kube-api-access-tc4z4\") on node \"crc\" DevicePath \"\""
Feb 03 12:33:26 crc kubenswrapper[4820]: I0203 12:33:26.309036 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-knmtw" event={"ID":"e43cf7d4-e153-434c-a76e-96e2cc27316e","Type":"ContainerDied","Data":"0bf7fd09cbcc8ba964b08d375e1b92bdc054cafe48dbbfd8d79c87c32b0d2c0b"}
Feb 03 12:33:26 crc kubenswrapper[4820]: I0203 12:33:26.309101 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0bf7fd09cbcc8ba964b08d375e1b92bdc054cafe48dbbfd8d79c87c32b0d2c0b"
Feb 03 12:33:26 crc kubenswrapper[4820]: I0203 12:33:26.309214 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-knmtw"
Feb 03 12:33:26 crc kubenswrapper[4820]: I0203 12:33:26.327401 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-7f1d-account-create-update-wjmlt" event={"ID":"38ac594b-e515-44b4-856f-b57f5f6d5049","Type":"ContainerDied","Data":"16a44e8e6cddf60d718e4b5adb6d325e4a11abd08ca71120d7b2aa110624fd2d"}
Feb 03 12:33:26 crc kubenswrapper[4820]: I0203 12:33:26.327463 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="16a44e8e6cddf60d718e4b5adb6d325e4a11abd08ca71120d7b2aa110624fd2d"
Feb 03 12:33:26 crc kubenswrapper[4820]: I0203 12:33:26.327559 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-7f1d-account-create-update-wjmlt"
Feb 03 12:33:26 crc kubenswrapper[4820]: I0203 12:33:26.332518 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-6c6a-account-create-update-h45fh" event={"ID":"34f8614d-0d83-4dc9-80cb-12e0d2672b13","Type":"ContainerDied","Data":"9bbb365c08cc1541d6c05979cc051a2a2a10262cfa641eeec5df032fb46f4bc9"}
Feb 03 12:33:26 crc kubenswrapper[4820]: I0203 12:33:26.332591 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9bbb365c08cc1541d6c05979cc051a2a2a10262cfa641eeec5df032fb46f4bc9"
Feb 03 12:33:26 crc kubenswrapper[4820]: I0203 12:33:26.332710 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-6c6a-account-create-update-h45fh"
Feb 03 12:33:26 crc kubenswrapper[4820]: I0203 12:33:26.458362 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-ad72-account-create-update-6ngd9"
Feb 03 12:33:26 crc kubenswrapper[4820]: I0203 12:33:26.513035 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7gc9j\" (UniqueName: \"kubernetes.io/projected/b8d524a9-aabb-4d3a-a443-e4de8a5ababc-kube-api-access-7gc9j\") pod \"b8d524a9-aabb-4d3a-a443-e4de8a5ababc\" (UID: \"b8d524a9-aabb-4d3a-a443-e4de8a5ababc\") "
Feb 03 12:33:26 crc kubenswrapper[4820]: I0203 12:33:26.513161 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b8d524a9-aabb-4d3a-a443-e4de8a5ababc-operator-scripts\") pod \"b8d524a9-aabb-4d3a-a443-e4de8a5ababc\" (UID: \"b8d524a9-aabb-4d3a-a443-e4de8a5ababc\") "
Feb 03 12:33:26 crc kubenswrapper[4820]: I0203 12:33:26.514077 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b8d524a9-aabb-4d3a-a443-e4de8a5ababc-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b8d524a9-aabb-4d3a-a443-e4de8a5ababc" (UID: "b8d524a9-aabb-4d3a-a443-e4de8a5ababc"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 03 12:33:26 crc kubenswrapper[4820]: I0203 12:33:26.520222 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b8d524a9-aabb-4d3a-a443-e4de8a5ababc-kube-api-access-7gc9j" (OuterVolumeSpecName: "kube-api-access-7gc9j") pod "b8d524a9-aabb-4d3a-a443-e4de8a5ababc" (UID: "b8d524a9-aabb-4d3a-a443-e4de8a5ababc"). InnerVolumeSpecName "kube-api-access-7gc9j". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 12:33:26 crc kubenswrapper[4820]: I0203 12:33:26.616962 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7gc9j\" (UniqueName: \"kubernetes.io/projected/b8d524a9-aabb-4d3a-a443-e4de8a5ababc-kube-api-access-7gc9j\") on node \"crc\" DevicePath \"\""
Feb 03 12:33:26 crc kubenswrapper[4820]: I0203 12:33:26.617304 4820 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b8d524a9-aabb-4d3a-a443-e4de8a5ababc-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 03 12:33:27 crc kubenswrapper[4820]: I0203 12:33:27.381336 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-ad72-account-create-update-6ngd9" event={"ID":"b8d524a9-aabb-4d3a-a443-e4de8a5ababc","Type":"ContainerDied","Data":"abb3e67edf9dd2511cbfbcf4eec3863719ded8f6ccd0fdcf9b2594f26179cef9"}
Feb 03 12:33:27 crc kubenswrapper[4820]: I0203 12:33:27.381436 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="abb3e67edf9dd2511cbfbcf4eec3863719ded8f6ccd0fdcf9b2594f26179cef9"
Feb 03 12:33:27 crc kubenswrapper[4820]: I0203 12:33:27.381583 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-ad72-account-create-update-6ngd9"
Feb 03 12:33:28 crc kubenswrapper[4820]: I0203 12:33:28.571549 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-qjmdv" podUID="ad9b0bbe-7f17-4347-bda3-5f0a843b3997" containerName="registry-server" probeResult="failure" output=<
Feb 03 12:33:28 crc kubenswrapper[4820]: timeout: failed to connect service ":50051" within 1s
Feb 03 12:33:28 crc kubenswrapper[4820]: >
Feb 03 12:33:28 crc kubenswrapper[4820]: I0203 12:33:28.572068 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-qjmdv"
Feb 03 12:33:28 crc kubenswrapper[4820]: I0203 12:33:28.573673 4820 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="registry-server" containerStatusID={"Type":"cri-o","ID":"b3683503afb588121c1584cbdd117a260ae2352c9eaf4c990911d9c4f1fc17cf"} pod="openshift-marketplace/redhat-operators-qjmdv" containerMessage="Container registry-server failed startup probe, will be restarted"
Feb 03 12:33:28 crc kubenswrapper[4820]: I0203 12:33:28.573719 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-qjmdv" podUID="ad9b0bbe-7f17-4347-bda3-5f0a843b3997" containerName="registry-server" containerID="cri-o://b3683503afb588121c1584cbdd117a260ae2352c9eaf4c990911d9c4f1fc17cf" gracePeriod=30
Feb 03 12:33:28 crc kubenswrapper[4820]: I0203 12:33:28.960592 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-d8bvg" podUID="7b618165-67ac-4c04-9ccd-5e36c48c7b75" containerName="registry-server" probeResult="failure" output=<
Feb 03 12:33:28 crc kubenswrapper[4820]: timeout: failed to connect service ":50051" within 1s
Feb 03 12:33:28 crc kubenswrapper[4820]: >
Feb 03 12:33:29 crc kubenswrapper[4820]: I0203 12:33:29.545959 4820 scope.go:117] "RemoveContainer" containerID="3cd22393026f08f47789b9666f306edb2325c4bd0fbfd405ee1636bfdfa661d3"
Feb 03 12:33:29 crc kubenswrapper[4820]: E0203 12:33:29.546252 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qj7xr_openshift-machine-config-operator(2c02def6-29f2-448e-80ec-0c8ee290f053)\"" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053"
Feb 03 12:33:31 crc kubenswrapper[4820]: I0203 12:33:31.964544 4820 generic.go:334] "Generic (PLEG): container finished" podID="a09e4336-af9d-4231-b744-1373af8ddfba" containerID="5730a5f27ede87ff81d58383f5a5c3644ae00f7510ba2b6e5765006eddf383e2" exitCode=0
Feb 03 12:33:31 crc kubenswrapper[4820]: I0203 12:33:31.965381 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-86c8ddbf74-xsj66" event={"ID":"a09e4336-af9d-4231-b744-1373af8ddfba","Type":"ContainerDied","Data":"5730a5f27ede87ff81d58383f5a5c3644ae00f7510ba2b6e5765006eddf383e2"}
Feb 03 12:33:33 crc kubenswrapper[4820]: I0203 12:33:33.117250 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-46nxk"]
Feb 03 12:33:33 crc kubenswrapper[4820]: E0203 12:33:33.118278 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="353ec1c9-2e22-4116-b0d7-7d215237a58f" containerName="mariadb-database-create"
Feb 03 12:33:33 crc kubenswrapper[4820]: I0203 12:33:33.118296 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="353ec1c9-2e22-4116-b0d7-7d215237a58f" containerName="mariadb-database-create"
Feb 03 12:33:33 crc kubenswrapper[4820]: E0203 12:33:33.118316 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38ac594b-e515-44b4-856f-b57f5f6d5049" containerName="mariadb-account-create-update"
Feb 03 12:33:33 crc kubenswrapper[4820]: I0203 12:33:33.118324 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="38ac594b-e515-44b4-856f-b57f5f6d5049" containerName="mariadb-account-create-update"
Feb 03 12:33:33 crc kubenswrapper[4820]: E0203 12:33:33.118343 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34f8614d-0d83-4dc9-80cb-12e0d2672b13" containerName="mariadb-account-create-update"
Feb 03 12:33:33 crc kubenswrapper[4820]: I0203 12:33:33.118350 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="34f8614d-0d83-4dc9-80cb-12e0d2672b13" containerName="mariadb-account-create-update"
Feb 03 12:33:33 crc kubenswrapper[4820]: E0203 12:33:33.118367 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e43cf7d4-e153-434c-a76e-96e2cc27316e" containerName="mariadb-database-create"
Feb 03 12:33:33 crc kubenswrapper[4820]: I0203 12:33:33.118373 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="e43cf7d4-e153-434c-a76e-96e2cc27316e" containerName="mariadb-database-create"
Feb 03 12:33:33 crc kubenswrapper[4820]: E0203 12:33:33.118392 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="476be9fa-ea08-41c6-b804-37c313076dce" containerName="mariadb-database-create"
Feb 03 12:33:33 crc kubenswrapper[4820]: I0203 12:33:33.118398 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="476be9fa-ea08-41c6-b804-37c313076dce" containerName="mariadb-database-create"
Feb 03 12:33:33 crc kubenswrapper[4820]: E0203 12:33:33.118412 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8d524a9-aabb-4d3a-a443-e4de8a5ababc" containerName="mariadb-account-create-update"
Feb 03 12:33:33 crc kubenswrapper[4820]: I0203 12:33:33.118418 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8d524a9-aabb-4d3a-a443-e4de8a5ababc" containerName="mariadb-account-create-update"
Feb 03 12:33:33 crc kubenswrapper[4820]: I0203 12:33:33.118925 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="34f8614d-0d83-4dc9-80cb-12e0d2672b13" containerName="mariadb-account-create-update"
Feb 03 12:33:33 crc kubenswrapper[4820]: I0203 12:33:33.119044 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="b8d524a9-aabb-4d3a-a443-e4de8a5ababc" containerName="mariadb-account-create-update"
Feb 03 12:33:33 crc kubenswrapper[4820]: I0203 12:33:33.119059 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="38ac594b-e515-44b4-856f-b57f5f6d5049" containerName="mariadb-account-create-update"
Feb 03 12:33:33 crc kubenswrapper[4820]: I0203 12:33:33.119069 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="e43cf7d4-e153-434c-a76e-96e2cc27316e" containerName="mariadb-database-create"
Feb 03 12:33:33 crc kubenswrapper[4820]: I0203 12:33:33.119101 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="476be9fa-ea08-41c6-b804-37c313076dce" containerName="mariadb-database-create"
Feb 03 12:33:33 crc kubenswrapper[4820]: I0203 12:33:33.119117 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="353ec1c9-2e22-4116-b0d7-7d215237a58f" containerName="mariadb-database-create"
Feb 03 12:33:33 crc kubenswrapper[4820]: I0203 12:33:33.120542 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-46nxk"
Feb 03 12:33:33 crc kubenswrapper[4820]: I0203 12:33:33.126566 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts"
Feb 03 12:33:33 crc kubenswrapper[4820]: I0203 12:33:33.126840 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data"
Feb 03 12:33:33 crc kubenswrapper[4820]: I0203 12:33:33.127036 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-j9sk2"
Feb 03 12:33:33 crc kubenswrapper[4820]: I0203 12:33:33.137056 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-46nxk"]
Feb 03 12:33:33 crc kubenswrapper[4820]: I0203 12:33:33.181936 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qvcqc\" (UniqueName: \"kubernetes.io/projected/d4619454-1fe5-4b0c-8fae-1ffc6b92cfbc-kube-api-access-qvcqc\") pod \"nova-cell0-conductor-db-sync-46nxk\" (UID: \"d4619454-1fe5-4b0c-8fae-1ffc6b92cfbc\") " pod="openstack/nova-cell0-conductor-db-sync-46nxk"
Feb 03 12:33:33 crc kubenswrapper[4820]: I0203 12:33:33.182102 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d4619454-1fe5-4b0c-8fae-1ffc6b92cfbc-scripts\") pod \"nova-cell0-conductor-db-sync-46nxk\" (UID: \"d4619454-1fe5-4b0c-8fae-1ffc6b92cfbc\") " pod="openstack/nova-cell0-conductor-db-sync-46nxk"
Feb 03 12:33:33 crc kubenswrapper[4820]: I0203 12:33:33.182164 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4619454-1fe5-4b0c-8fae-1ffc6b92cfbc-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-46nxk\" (UID: \"d4619454-1fe5-4b0c-8fae-1ffc6b92cfbc\") " pod="openstack/nova-cell0-conductor-db-sync-46nxk"
Feb 03 12:33:33 crc kubenswrapper[4820]: I0203 12:33:33.182195 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4619454-1fe5-4b0c-8fae-1ffc6b92cfbc-config-data\") pod \"nova-cell0-conductor-db-sync-46nxk\" (UID: \"d4619454-1fe5-4b0c-8fae-1ffc6b92cfbc\") " pod="openstack/nova-cell0-conductor-db-sync-46nxk"
Feb 03 12:33:33 crc kubenswrapper[4820]: I0203 12:33:33.284355 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qvcqc\" (UniqueName: \"kubernetes.io/projected/d4619454-1fe5-4b0c-8fae-1ffc6b92cfbc-kube-api-access-qvcqc\") pod \"nova-cell0-conductor-db-sync-46nxk\" (UID: \"d4619454-1fe5-4b0c-8fae-1ffc6b92cfbc\") " pod="openstack/nova-cell0-conductor-db-sync-46nxk"
Feb 03 12:33:33 crc kubenswrapper[4820]: I0203 12:33:33.284607 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d4619454-1fe5-4b0c-8fae-1ffc6b92cfbc-scripts\") pod \"nova-cell0-conductor-db-sync-46nxk\" (UID: \"d4619454-1fe5-4b0c-8fae-1ffc6b92cfbc\") " pod="openstack/nova-cell0-conductor-db-sync-46nxk"
Feb 03 12:33:33 crc kubenswrapper[4820]: I0203 12:33:33.284707 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4619454-1fe5-4b0c-8fae-1ffc6b92cfbc-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-46nxk\" (UID: \"d4619454-1fe5-4b0c-8fae-1ffc6b92cfbc\") " pod="openstack/nova-cell0-conductor-db-sync-46nxk"
Feb 03 12:33:33 crc kubenswrapper[4820]: I0203 12:33:33.284749 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4619454-1fe5-4b0c-8fae-1ffc6b92cfbc-config-data\") pod \"nova-cell0-conductor-db-sync-46nxk\" (UID: \"d4619454-1fe5-4b0c-8fae-1ffc6b92cfbc\") " pod="openstack/nova-cell0-conductor-db-sync-46nxk"
Feb 03 12:33:33 crc kubenswrapper[4820]: I0203 12:33:33.300824 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4619454-1fe5-4b0c-8fae-1ffc6b92cfbc-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-46nxk\" (UID: \"d4619454-1fe5-4b0c-8fae-1ffc6b92cfbc\") " pod="openstack/nova-cell0-conductor-db-sync-46nxk"
Feb 03 12:33:33 crc kubenswrapper[4820]: I0203 12:33:33.304766 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4619454-1fe5-4b0c-8fae-1ffc6b92cfbc-config-data\") pod \"nova-cell0-conductor-db-sync-46nxk\" (UID: \"d4619454-1fe5-4b0c-8fae-1ffc6b92cfbc\") " pod="openstack/nova-cell0-conductor-db-sync-46nxk"
Feb 03 12:33:33 crc kubenswrapper[4820]: I0203 12:33:33.318032 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d4619454-1fe5-4b0c-8fae-1ffc6b92cfbc-scripts\") pod \"nova-cell0-conductor-db-sync-46nxk\" (UID: \"d4619454-1fe5-4b0c-8fae-1ffc6b92cfbc\") " pod="openstack/nova-cell0-conductor-db-sync-46nxk"
Feb 03 12:33:33 crc kubenswrapper[4820]: I0203 12:33:33.330362 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qvcqc\" (UniqueName: \"kubernetes.io/projected/d4619454-1fe5-4b0c-8fae-1ffc6b92cfbc-kube-api-access-qvcqc\") pod \"nova-cell0-conductor-db-sync-46nxk\" (UID: \"d4619454-1fe5-4b0c-8fae-1ffc6b92cfbc\") " pod="openstack/nova-cell0-conductor-db-sync-46nxk"
Feb 03 12:33:33 crc kubenswrapper[4820]: I0203 12:33:33.488618 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-46nxk"
Feb 03 12:33:34 crc kubenswrapper[4820]: I0203 12:33:34.114594 4820 generic.go:334] "Generic (PLEG): container finished" podID="17c371f7-f032-4444-8d4b-1183a224c7b0" containerID="258627a9dac8c607d158f7b60718c41b0a56b0d1a371bcf6c8e5e827f34acb59" exitCode=137
Feb 03 12:33:34 crc kubenswrapper[4820]: I0203 12:33:34.114952 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5fdc8588b4-jtjr8" event={"ID":"17c371f7-f032-4444-8d4b-1183a224c7b0","Type":"ContainerDied","Data":"258627a9dac8c607d158f7b60718c41b0a56b0d1a371bcf6c8e5e827f34acb59"}
Feb 03 12:33:34 crc kubenswrapper[4820]: I0203 12:33:34.115080 4820 scope.go:117] "RemoveContainer" containerID="c22975ba3ab084f0050a4631f4a0020a8a20e782596e29e66b6e36290ae66cee"
Feb 03 12:33:34 crc kubenswrapper[4820]: I0203 12:33:34.145827 4820 generic.go:334] "Generic (PLEG): container finished" podID="308562dd-6078-4c1c-a4e0-c01a60a2d81d" containerID="6ecd1021da966b26d0ebdc213f4c8379ce99f2bdd3ff3973594574161725d11d" exitCode=137
Feb 03 12:33:34 crc kubenswrapper[4820]: I0203 12:33:34.145929 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-68b4df5bdd-tdb9h" event={"ID":"308562dd-6078-4c1c-a4e0-c01a60a2d81d","Type":"ContainerDied","Data":"6ecd1021da966b26d0ebdc213f4c8379ce99f2bdd3ff3973594574161725d11d"}
Feb 03 12:33:35 crc kubenswrapper[4820]: I0203 12:33:35.683321 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Feb 03 12:33:35 crc kubenswrapper[4820]: I0203 12:33:35.683958 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2aa878ef-588c-4a9e-84d2-04c3a2903bc9" containerName="ceilometer-central-agent" containerID="cri-o://567b9703b666d83d181babc0143b3b05a3af2ca86936f3ede0d8154fe52c2d06" gracePeriod=30
Feb 03 12:33:35 crc kubenswrapper[4820]: I0203 12:33:35.684015 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2aa878ef-588c-4a9e-84d2-04c3a2903bc9" containerName="sg-core" containerID="cri-o://b91410c86ab3eee895dff0e3b0ae0fef3ca9577c31567933c6df50f984ee81a6" gracePeriod=30
Feb 03 12:33:35 crc kubenswrapper[4820]: I0203 12:33:35.684032 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2aa878ef-588c-4a9e-84d2-04c3a2903bc9" containerName="ceilometer-notification-agent" containerID="cri-o://3acf3498d008d26d2951bf985a71f485ba7f82273768a0e0d3e9010ebee225ff" gracePeriod=30
Feb 03 12:33:35 crc kubenswrapper[4820]: I0203 12:33:35.684050 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2aa878ef-588c-4a9e-84d2-04c3a2903bc9" containerName="proxy-httpd" containerID="cri-o://90b9aa58bafbba2cba2474839adfca38905d9857392c71b96330daafcb54ecfd" gracePeriod=30
Feb 03 12:33:35 crc kubenswrapper[4820]: I0203 12:33:35.703095 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="2aa878ef-588c-4a9e-84d2-04c3a2903bc9" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.197:3000/\": read tcp 10.217.0.2:59072->10.217.0.197:3000: read: connection reset by peer"
Feb 03 12:33:36 crc kubenswrapper[4820]: I0203 12:33:36.192257 4820 generic.go:334] "Generic (PLEG): container finished" podID="2aa878ef-588c-4a9e-84d2-04c3a2903bc9"
containerID="90b9aa58bafbba2cba2474839adfca38905d9857392c71b96330daafcb54ecfd" exitCode=0 Feb 03 12:33:36 crc kubenswrapper[4820]: I0203 12:33:36.192352 4820 generic.go:334] "Generic (PLEG): container finished" podID="2aa878ef-588c-4a9e-84d2-04c3a2903bc9" containerID="b91410c86ab3eee895dff0e3b0ae0fef3ca9577c31567933c6df50f984ee81a6" exitCode=2 Feb 03 12:33:36 crc kubenswrapper[4820]: I0203 12:33:36.192370 4820 generic.go:334] "Generic (PLEG): container finished" podID="2aa878ef-588c-4a9e-84d2-04c3a2903bc9" containerID="3acf3498d008d26d2951bf985a71f485ba7f82273768a0e0d3e9010ebee225ff" exitCode=0 Feb 03 12:33:36 crc kubenswrapper[4820]: I0203 12:33:36.192400 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2aa878ef-588c-4a9e-84d2-04c3a2903bc9","Type":"ContainerDied","Data":"90b9aa58bafbba2cba2474839adfca38905d9857392c71b96330daafcb54ecfd"} Feb 03 12:33:36 crc kubenswrapper[4820]: I0203 12:33:36.192447 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2aa878ef-588c-4a9e-84d2-04c3a2903bc9","Type":"ContainerDied","Data":"b91410c86ab3eee895dff0e3b0ae0fef3ca9577c31567933c6df50f984ee81a6"} Feb 03 12:33:36 crc kubenswrapper[4820]: I0203 12:33:36.192465 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2aa878ef-588c-4a9e-84d2-04c3a2903bc9","Type":"ContainerDied","Data":"3acf3498d008d26d2951bf985a71f485ba7f82273768a0e0d3e9010ebee225ff"} Feb 03 12:33:37 crc kubenswrapper[4820]: I0203 12:33:37.221930 4820 generic.go:334] "Generic (PLEG): container finished" podID="2aa878ef-588c-4a9e-84d2-04c3a2903bc9" containerID="567b9703b666d83d181babc0143b3b05a3af2ca86936f3ede0d8154fe52c2d06" exitCode=0 Feb 03 12:33:37 crc kubenswrapper[4820]: I0203 12:33:37.221981 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2aa878ef-588c-4a9e-84d2-04c3a2903bc9","Type":"ContainerDied","Data":"567b9703b666d83d181babc0143b3b05a3af2ca86936f3ede0d8154fe52c2d06"} Feb 03 12:33:37 crc kubenswrapper[4820]: I0203 12:33:37.797747 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="2aa878ef-588c-4a9e-84d2-04c3a2903bc9" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.197:3000/\": dial tcp 10.217.0.197:3000: connect: connection refused" Feb 03 12:33:38 crc kubenswrapper[4820]: I0203 12:33:38.495501 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-d8bvg" podUID="7b618165-67ac-4c04-9ccd-5e36c48c7b75" containerName="registry-server" probeResult="failure" output=< Feb 03 12:33:38 crc kubenswrapper[4820]: timeout: failed to connect service ":50051" within 1s Feb 03 12:33:38 crc kubenswrapper[4820]: > Feb 03 12:33:39 crc kubenswrapper[4820]: E0203 12:33:39.951229 4820 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-openstackclient:current-podified" Feb 03 12:33:39 crc kubenswrapper[4820]: E0203 12:33:39.951965 4820 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:openstackclient,Image:quay.io/podified-antelope-centos9/openstack-openstackclient:current-podified,Command:[/bin/sleep],Args:[infinity],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nf9hc9h648h5bch558h99h684h57bh648hb6h577h676h689hb6hcdh8fh6h54hbdh8fh9bh676h645h55h647hc4h567h646h85h55hb5h687q,ValueFrom:nil,},EnvVar{Name:OS_CLOUD,Value:default,ValueFrom:nil,},EnvVar{Name:PROMETHEUS_CA_CERT,Value:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,ValueFrom:nil,},EnvVar{Name:PROMETHEUS_HOST,Value:metric-storage-prometheus.openstack.svc,ValueFrom:nil,},EnvVar{Name:PROMETHEUS_PORT,Value:9090,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:openstack-config,ReadOnly:false,MountPath:/home/cloud-admin/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/home/cloud-admin/.config/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/home/cloud-admin/cloudrc,SubPath:cloudrc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lmp6p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42401,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42401,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstackclient_openstack(bf76d2b6-6c3a-42e7-b813-63cfcd39bd0e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 03 12:33:39 crc kubenswrapper[4820]: E0203 12:33:39.953083 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstackclient\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstackclient" podUID="bf76d2b6-6c3a-42e7-b813-63cfcd39bd0e" Feb 03 12:33:40 crc kubenswrapper[4820]: I0203 12:33:40.763874 4820 scope.go:117] "RemoveContainer" containerID="a17b9aafc2fe0ed01eea3ac2324d99cc7383038c9d824a7384a0dcac0217d20f" Feb 03 12:33:40 crc kubenswrapper[4820]: E0203 12:33:40.763913 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstackclient\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-openstackclient:current-podified\\\"\"" pod="openstack/openstackclient" podUID="bf76d2b6-6c3a-42e7-b813-63cfcd39bd0e" Feb 03 12:33:40 crc kubenswrapper[4820]: I0203 12:33:40.809125 4820 util.go:48] "No ready sandbox for pod can 
be found. Need to start a new one" pod="openstack/neutron-86c8ddbf74-xsj66" Feb 03 12:33:40 crc kubenswrapper[4820]: I0203 12:33:40.834601 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkf2r\" (UniqueName: \"kubernetes.io/projected/a09e4336-af9d-4231-b744-1373af8ddfba-kube-api-access-zkf2r\") pod \"a09e4336-af9d-4231-b744-1373af8ddfba\" (UID: \"a09e4336-af9d-4231-b744-1373af8ddfba\") " Feb 03 12:33:40 crc kubenswrapper[4820]: I0203 12:33:40.834759 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/a09e4336-af9d-4231-b744-1373af8ddfba-config\") pod \"a09e4336-af9d-4231-b744-1373af8ddfba\" (UID: \"a09e4336-af9d-4231-b744-1373af8ddfba\") " Feb 03 12:33:40 crc kubenswrapper[4820]: I0203 12:33:40.836111 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/a09e4336-af9d-4231-b744-1373af8ddfba-httpd-config\") pod \"a09e4336-af9d-4231-b744-1373af8ddfba\" (UID: \"a09e4336-af9d-4231-b744-1373af8ddfba\") " Feb 03 12:33:40 crc kubenswrapper[4820]: I0203 12:33:40.836193 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a09e4336-af9d-4231-b744-1373af8ddfba-ovndb-tls-certs\") pod \"a09e4336-af9d-4231-b744-1373af8ddfba\" (UID: \"a09e4336-af9d-4231-b744-1373af8ddfba\") " Feb 03 12:33:40 crc kubenswrapper[4820]: I0203 12:33:40.836405 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a09e4336-af9d-4231-b744-1373af8ddfba-combined-ca-bundle\") pod \"a09e4336-af9d-4231-b744-1373af8ddfba\" (UID: \"a09e4336-af9d-4231-b744-1373af8ddfba\") " Feb 03 12:33:40 crc kubenswrapper[4820]: I0203 12:33:40.850726 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a09e4336-af9d-4231-b744-1373af8ddfba-kube-api-access-zkf2r" (OuterVolumeSpecName: "kube-api-access-zkf2r") pod "a09e4336-af9d-4231-b744-1373af8ddfba" (UID: "a09e4336-af9d-4231-b744-1373af8ddfba"). InnerVolumeSpecName "kube-api-access-zkf2r". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:33:40 crc kubenswrapper[4820]: I0203 12:33:40.892055 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a09e4336-af9d-4231-b744-1373af8ddfba-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "a09e4336-af9d-4231-b744-1373af8ddfba" (UID: "a09e4336-af9d-4231-b744-1373af8ddfba"). InnerVolumeSpecName "httpd-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:33:40 crc kubenswrapper[4820]: I0203 12:33:40.953878 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkf2r\" (UniqueName: \"kubernetes.io/projected/a09e4336-af9d-4231-b744-1373af8ddfba-kube-api-access-zkf2r\") on node \"crc\" DevicePath \"\"" Feb 03 12:33:40 crc kubenswrapper[4820]: I0203 12:33:40.953938 4820 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/a09e4336-af9d-4231-b744-1373af8ddfba-httpd-config\") on node \"crc\" DevicePath \"\"" Feb 03 12:33:40 crc kubenswrapper[4820]: I0203 12:33:40.963049 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a09e4336-af9d-4231-b744-1373af8ddfba-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a09e4336-af9d-4231-b744-1373af8ddfba" (UID: "a09e4336-af9d-4231-b744-1373af8ddfba"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:33:40 crc kubenswrapper[4820]: I0203 12:33:40.985906 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a09e4336-af9d-4231-b744-1373af8ddfba-config" (OuterVolumeSpecName: "config") pod "a09e4336-af9d-4231-b744-1373af8ddfba" (UID: "a09e4336-af9d-4231-b744-1373af8ddfba"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:33:41 crc kubenswrapper[4820]: I0203 12:33:41.054983 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a09e4336-af9d-4231-b744-1373af8ddfba-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "a09e4336-af9d-4231-b744-1373af8ddfba" (UID: "a09e4336-af9d-4231-b744-1373af8ddfba"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:33:41 crc kubenswrapper[4820]: I0203 12:33:41.057294 4820 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a09e4336-af9d-4231-b744-1373af8ddfba-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 03 12:33:41 crc kubenswrapper[4820]: I0203 12:33:41.057322 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a09e4336-af9d-4231-b744-1373af8ddfba-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 12:33:41 crc kubenswrapper[4820]: I0203 12:33:41.057334 4820 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/a09e4336-af9d-4231-b744-1373af8ddfba-config\") on node \"crc\" DevicePath \"\"" Feb 03 12:33:41 crc kubenswrapper[4820]: I0203 12:33:41.249701 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-46nxk"] Feb 03 12:33:41 crc kubenswrapper[4820]: I0203 12:33:41.277096 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 03 12:33:41 crc kubenswrapper[4820]: I0203 12:33:41.320601 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2aa878ef-588c-4a9e-84d2-04c3a2903bc9","Type":"ContainerDied","Data":"f1bfc37e058e23664dcda550466eb3dfa425da60863642abd78942b4591048d0"} Feb 03 12:33:41 crc kubenswrapper[4820]: I0203 12:33:41.320700 4820 scope.go:117] "RemoveContainer" containerID="90b9aa58bafbba2cba2474839adfca38905d9857392c71b96330daafcb54ecfd" Feb 03 12:33:41 crc kubenswrapper[4820]: I0203 12:33:41.321012 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 03 12:33:41 crc kubenswrapper[4820]: I0203 12:33:41.324987 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-46nxk" event={"ID":"d4619454-1fe5-4b0c-8fae-1ffc6b92cfbc","Type":"ContainerStarted","Data":"180dd92d13c209289ae9079eb0b41396bc65ec0c12a7f6303fdc45ead4756b67"} Feb 03 12:33:41 crc kubenswrapper[4820]: I0203 12:33:41.351049 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-86c8ddbf74-xsj66" event={"ID":"a09e4336-af9d-4231-b744-1373af8ddfba","Type":"ContainerDied","Data":"69fa5e1d7ca61f93515e493577c9b40364231ce7157c94ffb4e22f8f09cc0248"} Feb 03 12:33:41 crc kubenswrapper[4820]: I0203 12:33:41.351482 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-86c8ddbf74-xsj66" Feb 03 12:33:41 crc kubenswrapper[4820]: I0203 12:33:41.358241 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5fdc8588b4-jtjr8" event={"ID":"17c371f7-f032-4444-8d4b-1183a224c7b0","Type":"ContainerStarted","Data":"0f246c95365efface890ae91ed527bfb457eda5dc7b27fa8cda294f51a37e248"} Feb 03 12:33:41 crc kubenswrapper[4820]: I0203 12:33:41.364823 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lktb2\" (UniqueName: \"kubernetes.io/projected/2aa878ef-588c-4a9e-84d2-04c3a2903bc9-kube-api-access-lktb2\") pod \"2aa878ef-588c-4a9e-84d2-04c3a2903bc9\" (UID: \"2aa878ef-588c-4a9e-84d2-04c3a2903bc9\") " Feb 03 12:33:41 crc kubenswrapper[4820]: I0203 12:33:41.366190 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2aa878ef-588c-4a9e-84d2-04c3a2903bc9-combined-ca-bundle\") pod \"2aa878ef-588c-4a9e-84d2-04c3a2903bc9\" (UID: \"2aa878ef-588c-4a9e-84d2-04c3a2903bc9\") " Feb 03 12:33:41 crc kubenswrapper[4820]: I0203 12:33:41.366407 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2aa878ef-588c-4a9e-84d2-04c3a2903bc9-sg-core-conf-yaml\") pod \"2aa878ef-588c-4a9e-84d2-04c3a2903bc9\" (UID: \"2aa878ef-588c-4a9e-84d2-04c3a2903bc9\") " Feb 03 12:33:41 crc kubenswrapper[4820]: I0203 12:33:41.366736 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2aa878ef-588c-4a9e-84d2-04c3a2903bc9-config-data\") pod \"2aa878ef-588c-4a9e-84d2-04c3a2903bc9\" (UID: \"2aa878ef-588c-4a9e-84d2-04c3a2903bc9\") " Feb 03 12:33:41 crc kubenswrapper[4820]: I0203 12:33:41.366989 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2aa878ef-588c-4a9e-84d2-04c3a2903bc9-run-httpd\") pod \"2aa878ef-588c-4a9e-84d2-04c3a2903bc9\" 
(UID: \"2aa878ef-588c-4a9e-84d2-04c3a2903bc9\") " Feb 03 12:33:41 crc kubenswrapper[4820]: I0203 12:33:41.367350 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2aa878ef-588c-4a9e-84d2-04c3a2903bc9-log-httpd\") pod \"2aa878ef-588c-4a9e-84d2-04c3a2903bc9\" (UID: \"2aa878ef-588c-4a9e-84d2-04c3a2903bc9\") " Feb 03 12:33:41 crc kubenswrapper[4820]: I0203 12:33:41.367557 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2aa878ef-588c-4a9e-84d2-04c3a2903bc9-scripts\") pod \"2aa878ef-588c-4a9e-84d2-04c3a2903bc9\" (UID: \"2aa878ef-588c-4a9e-84d2-04c3a2903bc9\") " Feb 03 12:33:41 crc kubenswrapper[4820]: I0203 12:33:41.370602 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2aa878ef-588c-4a9e-84d2-04c3a2903bc9-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "2aa878ef-588c-4a9e-84d2-04c3a2903bc9" (UID: "2aa878ef-588c-4a9e-84d2-04c3a2903bc9"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 12:33:41 crc kubenswrapper[4820]: I0203 12:33:41.372563 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2aa878ef-588c-4a9e-84d2-04c3a2903bc9-kube-api-access-lktb2" (OuterVolumeSpecName: "kube-api-access-lktb2") pod "2aa878ef-588c-4a9e-84d2-04c3a2903bc9" (UID: "2aa878ef-588c-4a9e-84d2-04c3a2903bc9"). InnerVolumeSpecName "kube-api-access-lktb2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:33:41 crc kubenswrapper[4820]: I0203 12:33:41.372995 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-68b4df5bdd-tdb9h" event={"ID":"308562dd-6078-4c1c-a4e0-c01a60a2d81d","Type":"ContainerStarted","Data":"9c7a577e87b3e83c7e349bf9ccd38e1f5613ee686a7353fa8aac276143a6016b"} Feb 03 12:33:41 crc kubenswrapper[4820]: I0203 12:33:41.374119 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2aa878ef-588c-4a9e-84d2-04c3a2903bc9-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "2aa878ef-588c-4a9e-84d2-04c3a2903bc9" (UID: "2aa878ef-588c-4a9e-84d2-04c3a2903bc9"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 12:33:41 crc kubenswrapper[4820]: I0203 12:33:41.376086 4820 scope.go:117] "RemoveContainer" containerID="b91410c86ab3eee895dff0e3b0ae0fef3ca9577c31567933c6df50f984ee81a6" Feb 03 12:33:41 crc kubenswrapper[4820]: I0203 12:33:41.379879 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2aa878ef-588c-4a9e-84d2-04c3a2903bc9-scripts" (OuterVolumeSpecName: "scripts") pod "2aa878ef-588c-4a9e-84d2-04c3a2903bc9" (UID: "2aa878ef-588c-4a9e-84d2-04c3a2903bc9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:33:41 crc kubenswrapper[4820]: I0203 12:33:41.434381 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2aa878ef-588c-4a9e-84d2-04c3a2903bc9-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "2aa878ef-588c-4a9e-84d2-04c3a2903bc9" (UID: "2aa878ef-588c-4a9e-84d2-04c3a2903bc9"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:33:41 crc kubenswrapper[4820]: I0203 12:33:41.434468 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-86c8ddbf74-xsj66"] Feb 03 12:33:41 crc kubenswrapper[4820]: I0203 12:33:41.461004 4820 scope.go:117] "RemoveContainer" containerID="3acf3498d008d26d2951bf985a71f485ba7f82273768a0e0d3e9010ebee225ff" Feb 03 12:33:41 crc kubenswrapper[4820]: I0203 12:33:41.461214 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-86c8ddbf74-xsj66"] Feb 03 12:33:41 crc kubenswrapper[4820]: I0203 12:33:41.472073 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lktb2\" (UniqueName: \"kubernetes.io/projected/2aa878ef-588c-4a9e-84d2-04c3a2903bc9-kube-api-access-lktb2\") on node \"crc\" DevicePath \"\"" Feb 03 12:33:41 crc kubenswrapper[4820]: I0203 12:33:41.472107 4820 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2aa878ef-588c-4a9e-84d2-04c3a2903bc9-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 03 12:33:41 crc kubenswrapper[4820]: I0203 12:33:41.472118 4820 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2aa878ef-588c-4a9e-84d2-04c3a2903bc9-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 03 12:33:41 crc kubenswrapper[4820]: I0203 12:33:41.472129 4820 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2aa878ef-588c-4a9e-84d2-04c3a2903bc9-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 03 12:33:41 crc kubenswrapper[4820]: I0203 12:33:41.472138 4820 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2aa878ef-588c-4a9e-84d2-04c3a2903bc9-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 12:33:41 crc kubenswrapper[4820]: I0203 12:33:41.503578 4820 scope.go:117] "RemoveContainer" containerID="567b9703b666d83d181babc0143b3b05a3af2ca86936f3ede0d8154fe52c2d06" Feb 03 12:33:41 crc kubenswrapper[4820]: I0203 12:33:41.519149 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2aa878ef-588c-4a9e-84d2-04c3a2903bc9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2aa878ef-588c-4a9e-84d2-04c3a2903bc9" (UID: "2aa878ef-588c-4a9e-84d2-04c3a2903bc9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:33:41 crc kubenswrapper[4820]: I0203 12:33:41.562179 4820 scope.go:117] "RemoveContainer" containerID="8584dcb6e35d2654738d30b489990524fc48d4394b4f6fa96273c3d2fe56ca13" Feb 03 12:33:41 crc kubenswrapper[4820]: I0203 12:33:41.576234 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2aa878ef-588c-4a9e-84d2-04c3a2903bc9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 12:33:41 crc kubenswrapper[4820]: I0203 12:33:41.585913 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2aa878ef-588c-4a9e-84d2-04c3a2903bc9-config-data" (OuterVolumeSpecName: "config-data") pod "2aa878ef-588c-4a9e-84d2-04c3a2903bc9" (UID: "2aa878ef-588c-4a9e-84d2-04c3a2903bc9"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:33:41 crc kubenswrapper[4820]: I0203 12:33:41.630332 4820 scope.go:117] "RemoveContainer" containerID="5730a5f27ede87ff81d58383f5a5c3644ae00f7510ba2b6e5765006eddf383e2" Feb 03 12:33:41 crc kubenswrapper[4820]: I0203 12:33:41.712684 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2aa878ef-588c-4a9e-84d2-04c3a2903bc9-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 12:33:41 crc kubenswrapper[4820]: I0203 12:33:41.743739 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 03 12:33:41 crc kubenswrapper[4820]: I0203 12:33:41.763173 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 03 12:33:41 crc kubenswrapper[4820]: I0203 12:33:41.777293 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 03 12:33:41 crc kubenswrapper[4820]: E0203 12:33:41.777987 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a09e4336-af9d-4231-b744-1373af8ddfba" containerName="neutron-api" Feb 03 12:33:41 crc kubenswrapper[4820]: I0203 12:33:41.778015 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="a09e4336-af9d-4231-b744-1373af8ddfba" containerName="neutron-api" Feb 03 12:33:41 crc kubenswrapper[4820]: E0203 12:33:41.778036 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2aa878ef-588c-4a9e-84d2-04c3a2903bc9" containerName="sg-core" Feb 03 12:33:41 crc kubenswrapper[4820]: I0203 12:33:41.778044 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="2aa878ef-588c-4a9e-84d2-04c3a2903bc9" containerName="sg-core" Feb 03 12:33:41 crc kubenswrapper[4820]: E0203 12:33:41.778058 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a09e4336-af9d-4231-b744-1373af8ddfba" containerName="neutron-httpd" Feb 03 12:33:41 crc kubenswrapper[4820]: I0203 12:33:41.778067 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="a09e4336-af9d-4231-b744-1373af8ddfba" containerName="neutron-httpd" Feb 03 12:33:41 crc kubenswrapper[4820]: E0203 12:33:41.778082 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2aa878ef-588c-4a9e-84d2-04c3a2903bc9" containerName="ceilometer-central-agent" Feb 03 12:33:41 crc kubenswrapper[4820]: I0203 12:33:41.778090 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="2aa878ef-588c-4a9e-84d2-04c3a2903bc9" containerName="ceilometer-central-agent" Feb 03 12:33:41 crc kubenswrapper[4820]: E0203 12:33:41.778114 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2aa878ef-588c-4a9e-84d2-04c3a2903bc9" containerName="proxy-httpd" Feb 03 12:33:41 crc kubenswrapper[4820]: I0203 12:33:41.778121 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="2aa878ef-588c-4a9e-84d2-04c3a2903bc9" containerName="proxy-httpd" Feb 03 12:33:41 crc kubenswrapper[4820]: E0203 12:33:41.778153 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2aa878ef-588c-4a9e-84d2-04c3a2903bc9" containerName="ceilometer-notification-agent" Feb 03 12:33:41 crc kubenswrapper[4820]: I0203 12:33:41.778159 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="2aa878ef-588c-4a9e-84d2-04c3a2903bc9" containerName="ceilometer-notification-agent" Feb 03 12:33:41 crc kubenswrapper[4820]: I0203 12:33:41.778384 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="2aa878ef-588c-4a9e-84d2-04c3a2903bc9" containerName="sg-core" Feb 03 12:33:41 crc 
kubenswrapper[4820]: I0203 12:33:41.778403 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="2aa878ef-588c-4a9e-84d2-04c3a2903bc9" containerName="ceilometer-central-agent" Feb 03 12:33:41 crc kubenswrapper[4820]: I0203 12:33:41.778414 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="a09e4336-af9d-4231-b744-1373af8ddfba" containerName="neutron-api" Feb 03 12:33:41 crc kubenswrapper[4820]: I0203 12:33:41.778431 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="2aa878ef-588c-4a9e-84d2-04c3a2903bc9" containerName="ceilometer-notification-agent" Feb 03 12:33:41 crc kubenswrapper[4820]: I0203 12:33:41.778466 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="2aa878ef-588c-4a9e-84d2-04c3a2903bc9" containerName="proxy-httpd" Feb 03 12:33:41 crc kubenswrapper[4820]: I0203 12:33:41.778485 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="a09e4336-af9d-4231-b744-1373af8ddfba" containerName="neutron-httpd" Feb 03 12:33:41 crc kubenswrapper[4820]: I0203 12:33:41.780879 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 03 12:33:41 crc kubenswrapper[4820]: I0203 12:33:41.786441 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 03 12:33:41 crc kubenswrapper[4820]: I0203 12:33:41.787232 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 03 12:33:41 crc kubenswrapper[4820]: I0203 12:33:41.797456 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 03 12:33:41 crc kubenswrapper[4820]: I0203 12:33:41.917398 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/dc2385eb-3720-486c-a1e6-de8d39b81012-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"dc2385eb-3720-486c-a1e6-de8d39b81012\") " pod="openstack/ceilometer-0" Feb 03 12:33:41 crc kubenswrapper[4820]: I0203 12:33:41.917556 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dc2385eb-3720-486c-a1e6-de8d39b81012-run-httpd\") pod \"ceilometer-0\" (UID: \"dc2385eb-3720-486c-a1e6-de8d39b81012\") " pod="openstack/ceilometer-0" Feb 03 12:33:41 crc kubenswrapper[4820]: I0203 12:33:41.917769 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc2385eb-3720-486c-a1e6-de8d39b81012-config-data\") pod \"ceilometer-0\" (UID: \"dc2385eb-3720-486c-a1e6-de8d39b81012\") " pod="openstack/ceilometer-0" Feb 03 12:33:41 crc kubenswrapper[4820]: I0203 12:33:41.917943 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dc2385eb-3720-486c-a1e6-de8d39b81012-scripts\") pod \"ceilometer-0\" (UID: \"dc2385eb-3720-486c-a1e6-de8d39b81012\") " pod="openstack/ceilometer-0" Feb 03 12:33:41 crc kubenswrapper[4820]: I0203 12:33:41.918012 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc2385eb-3720-486c-a1e6-de8d39b81012-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"dc2385eb-3720-486c-a1e6-de8d39b81012\") " pod="openstack/ceilometer-0" Feb 03 12:33:41 crc kubenswrapper[4820]: I0203 
12:33:41.918298 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bpnsv\" (UniqueName: \"kubernetes.io/projected/dc2385eb-3720-486c-a1e6-de8d39b81012-kube-api-access-bpnsv\") pod \"ceilometer-0\" (UID: \"dc2385eb-3720-486c-a1e6-de8d39b81012\") " pod="openstack/ceilometer-0" Feb 03 12:33:41 crc kubenswrapper[4820]: I0203 12:33:41.918473 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dc2385eb-3720-486c-a1e6-de8d39b81012-log-httpd\") pod \"ceilometer-0\" (UID: \"dc2385eb-3720-486c-a1e6-de8d39b81012\") " pod="openstack/ceilometer-0" Feb 03 12:33:42 crc kubenswrapper[4820]: I0203 12:33:42.024869 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bpnsv\" (UniqueName: \"kubernetes.io/projected/dc2385eb-3720-486c-a1e6-de8d39b81012-kube-api-access-bpnsv\") pod \"ceilometer-0\" (UID: \"dc2385eb-3720-486c-a1e6-de8d39b81012\") " pod="openstack/ceilometer-0" Feb 03 12:33:42 crc kubenswrapper[4820]: I0203 12:33:42.025033 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dc2385eb-3720-486c-a1e6-de8d39b81012-log-httpd\") pod \"ceilometer-0\" (UID: \"dc2385eb-3720-486c-a1e6-de8d39b81012\") " pod="openstack/ceilometer-0" Feb 03 12:33:42 crc kubenswrapper[4820]: I0203 12:33:42.025096 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/dc2385eb-3720-486c-a1e6-de8d39b81012-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"dc2385eb-3720-486c-a1e6-de8d39b81012\") " pod="openstack/ceilometer-0" Feb 03 12:33:42 crc kubenswrapper[4820]: I0203 12:33:42.025117 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dc2385eb-3720-486c-a1e6-de8d39b81012-run-httpd\") pod \"ceilometer-0\" (UID: \"dc2385eb-3720-486c-a1e6-de8d39b81012\") " pod="openstack/ceilometer-0" Feb 03 12:33:42 crc kubenswrapper[4820]: I0203 12:33:42.025236 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc2385eb-3720-486c-a1e6-de8d39b81012-config-data\") pod \"ceilometer-0\" (UID: \"dc2385eb-3720-486c-a1e6-de8d39b81012\") " pod="openstack/ceilometer-0" Feb 03 12:33:42 crc kubenswrapper[4820]: I0203 12:33:42.025353 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dc2385eb-3720-486c-a1e6-de8d39b81012-scripts\") pod \"ceilometer-0\" (UID: \"dc2385eb-3720-486c-a1e6-de8d39b81012\") " pod="openstack/ceilometer-0" Feb 03 12:33:42 crc kubenswrapper[4820]: I0203 12:33:42.025375 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc2385eb-3720-486c-a1e6-de8d39b81012-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"dc2385eb-3720-486c-a1e6-de8d39b81012\") " pod="openstack/ceilometer-0" Feb 03 12:33:42 crc kubenswrapper[4820]: I0203 12:33:42.033110 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc2385eb-3720-486c-a1e6-de8d39b81012-config-data\") pod \"ceilometer-0\" (UID: \"dc2385eb-3720-486c-a1e6-de8d39b81012\") " pod="openstack/ceilometer-0" Feb 03 12:33:42 crc 
kubenswrapper[4820]: I0203 12:33:42.033474 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dc2385eb-3720-486c-a1e6-de8d39b81012-run-httpd\") pod \"ceilometer-0\" (UID: \"dc2385eb-3720-486c-a1e6-de8d39b81012\") " pod="openstack/ceilometer-0" Feb 03 12:33:42 crc kubenswrapper[4820]: I0203 12:33:42.035656 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dc2385eb-3720-486c-a1e6-de8d39b81012-log-httpd\") pod \"ceilometer-0\" (UID: \"dc2385eb-3720-486c-a1e6-de8d39b81012\") " pod="openstack/ceilometer-0" Feb 03 12:33:42 crc kubenswrapper[4820]: I0203 12:33:42.037234 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc2385eb-3720-486c-a1e6-de8d39b81012-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"dc2385eb-3720-486c-a1e6-de8d39b81012\") " pod="openstack/ceilometer-0" Feb 03 12:33:42 crc kubenswrapper[4820]: I0203 12:33:42.040528 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/dc2385eb-3720-486c-a1e6-de8d39b81012-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"dc2385eb-3720-486c-a1e6-de8d39b81012\") " pod="openstack/ceilometer-0" Feb 03 12:33:42 crc kubenswrapper[4820]: I0203 12:33:42.044075 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dc2385eb-3720-486c-a1e6-de8d39b81012-scripts\") pod \"ceilometer-0\" (UID: \"dc2385eb-3720-486c-a1e6-de8d39b81012\") " pod="openstack/ceilometer-0" Feb 03 12:33:42 crc kubenswrapper[4820]: I0203 12:33:42.050212 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bpnsv\" (UniqueName: \"kubernetes.io/projected/dc2385eb-3720-486c-a1e6-de8d39b81012-kube-api-access-bpnsv\") pod \"ceilometer-0\" (UID: \"dc2385eb-3720-486c-a1e6-de8d39b81012\") " pod="openstack/ceilometer-0" Feb 03 12:33:42 crc kubenswrapper[4820]: I0203 12:33:42.102362 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 03 12:33:42 crc kubenswrapper[4820]: I0203 12:33:42.676303 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 03 12:33:43 crc kubenswrapper[4820]: I0203 12:33:43.128093 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-5fdc8588b4-jtjr8" Feb 03 12:33:43 crc kubenswrapper[4820]: I0203 12:33:43.128447 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-5fdc8588b4-jtjr8" Feb 03 12:33:43 crc kubenswrapper[4820]: I0203 12:33:43.166572 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2aa878ef-588c-4a9e-84d2-04c3a2903bc9" path="/var/lib/kubelet/pods/2aa878ef-588c-4a9e-84d2-04c3a2903bc9/volumes" Feb 03 12:33:43 crc kubenswrapper[4820]: I0203 12:33:43.167593 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a09e4336-af9d-4231-b744-1373af8ddfba" path="/var/lib/kubelet/pods/a09e4336-af9d-4231-b744-1373af8ddfba/volumes" Feb 03 12:33:43 crc kubenswrapper[4820]: I0203 12:33:43.425036 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dc2385eb-3720-486c-a1e6-de8d39b81012","Type":"ContainerStarted","Data":"543ad3c5118b840710e5847e2c42093bab4352d4b4ddb7aa346cbea824a5ee9f"} Feb 03 12:33:43 crc kubenswrapper[4820]: I0203 12:33:43.621341 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-68b4df5bdd-tdb9h" Feb 03 12:33:43 crc kubenswrapper[4820]: I0203 12:33:43.622160 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-68b4df5bdd-tdb9h" Feb 03 12:33:44 crc kubenswrapper[4820]: I0203 12:33:44.143313 4820 scope.go:117] "RemoveContainer" containerID="3cd22393026f08f47789b9666f306edb2325c4bd0fbfd405ee1636bfdfa661d3" Feb 03 12:33:44 crc kubenswrapper[4820]: E0203 12:33:44.143836 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qj7xr_openshift-machine-config-operator(2c02def6-29f2-448e-80ec-0c8ee290f053)\"" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" Feb 03 12:33:44 crc kubenswrapper[4820]: I0203 12:33:44.452357 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dc2385eb-3720-486c-a1e6-de8d39b81012","Type":"ContainerStarted","Data":"ebce3229a465f44af1a54bb582fc434d596fd8f0c376b0bba7fccd7dea94fc98"} Feb 03 12:33:45 crc kubenswrapper[4820]: I0203 12:33:45.464013 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dc2385eb-3720-486c-a1e6-de8d39b81012","Type":"ContainerStarted","Data":"f949cbac1cb513adc0dfabe66243b81171539e8a232158bf45721e44acbbb55a"} Feb 03 12:33:48 crc kubenswrapper[4820]: I0203 12:33:48.495763 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-d8bvg" podUID="7b618165-67ac-4c04-9ccd-5e36c48c7b75" containerName="registry-server" probeResult="failure" output=< Feb 03 12:33:48 crc kubenswrapper[4820]: timeout: failed to connect service ":50051" within 1s Feb 03 12:33:48 crc kubenswrapper[4820]: > Feb 03 12:33:53 crc kubenswrapper[4820]: I0203 12:33:53.129684 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-5fdc8588b4-jtjr8" 
podUID="17c371f7-f032-4444-8d4b-1183a224c7b0" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.165:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.165:8443: connect: connection refused" Feb 03 12:33:53 crc kubenswrapper[4820]: I0203 12:33:53.625410 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-68b4df5bdd-tdb9h" podUID="308562dd-6078-4c1c-a4e0-c01a60a2d81d" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.166:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.166:8443: connect: connection refused" Feb 03 12:33:54 crc kubenswrapper[4820]: I0203 12:33:54.125745 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 03 12:33:54 crc kubenswrapper[4820]: I0203 12:33:54.836429 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-decision-engine-0"] Feb 03 12:33:54 crc kubenswrapper[4820]: I0203 12:33:54.837068 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-decision-engine-0" podUID="cd46da3e-bb82-4990-8d29-03f53c601f36" containerName="watcher-decision-engine" containerID="cri-o://76b337ba9889fcd2d04101da32a8144cfa4c5201045390fad76182170e8b3ad0" gracePeriod=30 Feb 03 12:33:55 crc kubenswrapper[4820]: E0203 12:33:55.702026 4820 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-nova-conductor:current-podified" Feb 03 12:33:55 crc kubenswrapper[4820]: E0203 12:33:55.702492 4820 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:nova-cell0-conductor-db-sync,Image:quay.io/podified-antelope-centos9/openstack-nova-conductor:current-podified,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CELL_NAME,Value:cell0,ValueFrom:nil,},EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:false,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/kolla/config_files/config.json,SubPath:nova-conductor-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qvcqc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42436,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-cell0-conductor-db-sync-46nxk_openstack(d4619454-1fe5-4b0c-8fae-1ffc6b92cfbc): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 03 12:33:55 crc kubenswrapper[4820]: E0203 12:33:55.703661 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-cell0-conductor-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/nova-cell0-conductor-db-sync-46nxk" podUID="d4619454-1fe5-4b0c-8fae-1ffc6b92cfbc" Feb 03 12:33:56 crc kubenswrapper[4820]: I0203 12:33:56.143514 4820 scope.go:117] "RemoveContainer" containerID="3cd22393026f08f47789b9666f306edb2325c4bd0fbfd405ee1636bfdfa661d3" Feb 03 12:33:56 crc kubenswrapper[4820]: E0203 12:33:56.159186 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qj7xr_openshift-machine-config-operator(2c02def6-29f2-448e-80ec-0c8ee290f053)\"" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" Feb 03 12:33:56 crc kubenswrapper[4820]: I0203 12:33:56.994019 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dc2385eb-3720-486c-a1e6-de8d39b81012","Type":"ContainerStarted","Data":"25fe8caa2ebf0b88444055d54c8b1bbf17afd4176480ee015377919752186d34"} Feb 03 12:33:57 crc kubenswrapper[4820]: E0203 12:33:57.031599 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"nova-cell0-conductor-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-nova-conductor:current-podified\\\"\"" pod="openstack/nova-cell0-conductor-db-sync-46nxk" podUID="d4619454-1fe5-4b0c-8fae-1ffc6b92cfbc" Feb 03 12:33:57 crc kubenswrapper[4820]: I0203 12:33:57.707321 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-d8bvg" Feb 03 12:33:57 crc kubenswrapper[4820]: I0203 12:33:57.800147 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-d8bvg" Feb 03 12:33:58 crc kubenswrapper[4820]: I0203 12:33:58.011125 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"bf76d2b6-6c3a-42e7-b813-63cfcd39bd0e","Type":"ContainerStarted","Data":"002b51021b87739d6c17947b125340aa33dbefcb5e43f238473f0485521b90e9"} Feb 03 12:33:58 crc kubenswrapper[4820]: I0203 12:33:58.041286 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=9.478126334 podStartE2EDuration="1m28.041263012s" podCreationTimestamp="2026-02-03 12:32:30 +0000 UTC" firstStartedPulling="2026-02-03 12:32:38.47298953 +0000 UTC m=+1675.996065394" lastFinishedPulling="2026-02-03 12:33:57.036126208 +0000 UTC m=+1754.559202072" observedRunningTime="2026-02-03 12:33:58.036313357 +0000 UTC m=+1755.559389221" watchObservedRunningTime="2026-02-03 12:33:58.041263012 +0000 UTC m=+1755.564338876" Feb 03 12:33:58 crc kubenswrapper[4820]: I0203 12:33:58.621130 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-d8bvg"] Feb 03 12:33:59 crc kubenswrapper[4820]: I0203 12:33:59.175965 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-qjmdv_ad9b0bbe-7f17-4347-bda3-5f0a843b3997/registry-server/0.log" Feb 03 12:33:59 crc kubenswrapper[4820]: I0203 12:33:59.176633 4820 generic.go:334] "Generic (PLEG): container finished" podID="ad9b0bbe-7f17-4347-bda3-5f0a843b3997" containerID="b3683503afb588121c1584cbdd117a260ae2352c9eaf4c990911d9c4f1fc17cf" exitCode=137 Feb 03 12:33:59 crc kubenswrapper[4820]: I0203 12:33:59.176839 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-d8bvg" podUID="7b618165-67ac-4c04-9ccd-5e36c48c7b75" containerName="registry-server" containerID="cri-o://7da12677698e9a0b53785292061dc4075549e9c6c10f7643a560f36be07e6991" gracePeriod=2 Feb 03 12:33:59 crc kubenswrapper[4820]: I0203 12:33:59.181823 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qjmdv" event={"ID":"ad9b0bbe-7f17-4347-bda3-5f0a843b3997","Type":"ContainerDied","Data":"b3683503afb588121c1584cbdd117a260ae2352c9eaf4c990911d9c4f1fc17cf"} Feb 03 12:34:00 crc kubenswrapper[4820]: I0203 12:34:00.201227 4820 generic.go:334] "Generic (PLEG): container finished" podID="7b618165-67ac-4c04-9ccd-5e36c48c7b75" containerID="7da12677698e9a0b53785292061dc4075549e9c6c10f7643a560f36be07e6991" exitCode=0 Feb 03 12:34:00 crc kubenswrapper[4820]: I0203 12:34:00.201820 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d8bvg" event={"ID":"7b618165-67ac-4c04-9ccd-5e36c48c7b75","Type":"ContainerDied","Data":"7da12677698e9a0b53785292061dc4075549e9c6c10f7643a560f36be07e6991"} Feb 03 12:34:00 crc 
kubenswrapper[4820]: I0203 12:34:00.201856 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d8bvg" event={"ID":"7b618165-67ac-4c04-9ccd-5e36c48c7b75","Type":"ContainerDied","Data":"c81647d6735fb0f6c2e632c35d5d2dd2540b742b2a9e70dffa32108578f7093c"} Feb 03 12:34:00 crc kubenswrapper[4820]: I0203 12:34:00.201871 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c81647d6735fb0f6c2e632c35d5d2dd2540b742b2a9e70dffa32108578f7093c" Feb 03 12:34:00 crc kubenswrapper[4820]: I0203 12:34:00.322451 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-d8bvg" Feb 03 12:34:00 crc kubenswrapper[4820]: I0203 12:34:00.581837 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b618165-67ac-4c04-9ccd-5e36c48c7b75-utilities\") pod \"7b618165-67ac-4c04-9ccd-5e36c48c7b75\" (UID: \"7b618165-67ac-4c04-9ccd-5e36c48c7b75\") " Feb 03 12:34:00 crc kubenswrapper[4820]: I0203 12:34:00.582553 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b618165-67ac-4c04-9ccd-5e36c48c7b75-catalog-content\") pod \"7b618165-67ac-4c04-9ccd-5e36c48c7b75\" (UID: \"7b618165-67ac-4c04-9ccd-5e36c48c7b75\") " Feb 03 12:34:00 crc kubenswrapper[4820]: I0203 12:34:00.583308 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jgb9s\" (UniqueName: \"kubernetes.io/projected/7b618165-67ac-4c04-9ccd-5e36c48c7b75-kube-api-access-jgb9s\") pod \"7b618165-67ac-4c04-9ccd-5e36c48c7b75\" (UID: \"7b618165-67ac-4c04-9ccd-5e36c48c7b75\") " Feb 03 12:34:00 crc kubenswrapper[4820]: I0203 12:34:00.585641 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7b618165-67ac-4c04-9ccd-5e36c48c7b75-utilities" (OuterVolumeSpecName: "utilities") pod "7b618165-67ac-4c04-9ccd-5e36c48c7b75" (UID: "7b618165-67ac-4c04-9ccd-5e36c48c7b75"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 12:34:00 crc kubenswrapper[4820]: I0203 12:34:00.593376 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b618165-67ac-4c04-9ccd-5e36c48c7b75-kube-api-access-jgb9s" (OuterVolumeSpecName: "kube-api-access-jgb9s") pod "7b618165-67ac-4c04-9ccd-5e36c48c7b75" (UID: "7b618165-67ac-4c04-9ccd-5e36c48c7b75"). InnerVolumeSpecName "kube-api-access-jgb9s". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:34:00 crc kubenswrapper[4820]: I0203 12:34:00.613803 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jgb9s\" (UniqueName: \"kubernetes.io/projected/7b618165-67ac-4c04-9ccd-5e36c48c7b75-kube-api-access-jgb9s\") on node \"crc\" DevicePath \"\"" Feb 03 12:34:00 crc kubenswrapper[4820]: I0203 12:34:00.614256 4820 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b618165-67ac-4c04-9ccd-5e36c48c7b75-utilities\") on node \"crc\" DevicePath \"\"" Feb 03 12:34:00 crc kubenswrapper[4820]: I0203 12:34:00.687049 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7b618165-67ac-4c04-9ccd-5e36c48c7b75-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7b618165-67ac-4c04-9ccd-5e36c48c7b75" (UID: "7b618165-67ac-4c04-9ccd-5e36c48c7b75"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 12:34:00 crc kubenswrapper[4820]: I0203 12:34:00.716339 4820 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b618165-67ac-4c04-9ccd-5e36c48c7b75-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 03 12:34:01 crc kubenswrapper[4820]: I0203 12:34:01.677139 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-qjmdv_ad9b0bbe-7f17-4347-bda3-5f0a843b3997/registry-server/0.log" Feb 03 12:34:01 crc kubenswrapper[4820]: I0203 12:34:01.688274 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qjmdv" event={"ID":"ad9b0bbe-7f17-4347-bda3-5f0a843b3997","Type":"ContainerStarted","Data":"a82bc0d74f422bdc141e4507d00e1ff8bc61b5c9f070b7ffbd65827950686089"} Feb 03 12:34:01 crc kubenswrapper[4820]: I0203 12:34:01.706557 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-d8bvg" Feb 03 12:34:01 crc kubenswrapper[4820]: I0203 12:34:01.707846 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="dc2385eb-3720-486c-a1e6-de8d39b81012" containerName="ceilometer-central-agent" containerID="cri-o://ebce3229a465f44af1a54bb582fc434d596fd8f0c376b0bba7fccd7dea94fc98" gracePeriod=30 Feb 03 12:34:01 crc kubenswrapper[4820]: I0203 12:34:01.708341 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dc2385eb-3720-486c-a1e6-de8d39b81012","Type":"ContainerStarted","Data":"a2a2b1cfcfc6537c32cd1a307233a174652c5c647ea10b719894c0502b78b49d"} Feb 03 12:34:01 crc kubenswrapper[4820]: I0203 12:34:01.708412 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 03 12:34:01 crc kubenswrapper[4820]: I0203 12:34:01.708471 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="dc2385eb-3720-486c-a1e6-de8d39b81012" containerName="proxy-httpd" containerID="cri-o://a2a2b1cfcfc6537c32cd1a307233a174652c5c647ea10b719894c0502b78b49d" gracePeriod=30 Feb 03 12:34:01 crc kubenswrapper[4820]: I0203 12:34:01.708549 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="dc2385eb-3720-486c-a1e6-de8d39b81012" containerName="sg-core" containerID="cri-o://25fe8caa2ebf0b88444055d54c8b1bbf17afd4176480ee015377919752186d34" gracePeriod=30 Feb 03 12:34:01 crc kubenswrapper[4820]: I0203 12:34:01.708605 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="dc2385eb-3720-486c-a1e6-de8d39b81012" containerName="ceilometer-notification-agent" containerID="cri-o://f949cbac1cb513adc0dfabe66243b81171539e8a232158bf45721e44acbbb55a" gracePeriod=30 Feb 03 12:34:01 crc kubenswrapper[4820]: I0203 12:34:01.773629 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-d8bvg"] Feb 03 12:34:01 crc kubenswrapper[4820]: I0203 12:34:01.804125 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-d8bvg"] Feb 03 12:34:02 crc kubenswrapper[4820]: I0203 12:34:02.061337 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.601218328 podStartE2EDuration="21.061313213s" podCreationTimestamp="2026-02-03 12:33:41 +0000 UTC" firstStartedPulling="2026-02-03 12:33:42.708438534 +0000 UTC m=+1740.231514398" lastFinishedPulling="2026-02-03 12:34:00.168533419 +0000 UTC m=+1757.691609283" observedRunningTime="2026-02-03 12:34:01.797420263 +0000 UTC m=+1759.320496127" watchObservedRunningTime="2026-02-03 12:34:02.061313213 +0000 UTC m=+1759.584389077" Feb 03 12:34:03 crc kubenswrapper[4820]: I0203 12:34:03.095530 4820 generic.go:334] "Generic (PLEG): container finished" podID="dc2385eb-3720-486c-a1e6-de8d39b81012" containerID="25fe8caa2ebf0b88444055d54c8b1bbf17afd4176480ee015377919752186d34" exitCode=2 Feb 03 12:34:03 crc kubenswrapper[4820]: I0203 12:34:03.095825 4820 generic.go:334] "Generic (PLEG): container finished" podID="dc2385eb-3720-486c-a1e6-de8d39b81012" containerID="ebce3229a465f44af1a54bb582fc434d596fd8f0c376b0bba7fccd7dea94fc98" exitCode=0 Feb 03 12:34:03 crc kubenswrapper[4820]: I0203 12:34:03.097049 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"dc2385eb-3720-486c-a1e6-de8d39b81012","Type":"ContainerDied","Data":"25fe8caa2ebf0b88444055d54c8b1bbf17afd4176480ee015377919752186d34"} Feb 03 12:34:03 crc kubenswrapper[4820]: I0203 12:34:03.097084 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dc2385eb-3720-486c-a1e6-de8d39b81012","Type":"ContainerDied","Data":"ebce3229a465f44af1a54bb582fc434d596fd8f0c376b0bba7fccd7dea94fc98"} Feb 03 12:34:03 crc kubenswrapper[4820]: I0203 12:34:03.128248 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-5fdc8588b4-jtjr8" podUID="17c371f7-f032-4444-8d4b-1183a224c7b0" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.165:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.165:8443: connect: connection refused" Feb 03 12:34:03 crc kubenswrapper[4820]: I0203 12:34:03.167372 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7b618165-67ac-4c04-9ccd-5e36c48c7b75" path="/var/lib/kubelet/pods/7b618165-67ac-4c04-9ccd-5e36c48c7b75/volumes" Feb 03 12:34:03 crc kubenswrapper[4820]: I0203 12:34:03.673514 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-68b4df5bdd-tdb9h" podUID="308562dd-6078-4c1c-a4e0-c01a60a2d81d" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.166:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.166:8443: connect: connection refused" Feb 03 12:34:03 crc kubenswrapper[4820]: I0203 12:34:03.861682 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-decision-engine-0" Feb 03 12:34:03 crc kubenswrapper[4820]: I0203 12:34:03.872003 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-csbcd\" (UniqueName: \"kubernetes.io/projected/cd46da3e-bb82-4990-8d29-03f53c601f36-kube-api-access-csbcd\") pod \"cd46da3e-bb82-4990-8d29-03f53c601f36\" (UID: \"cd46da3e-bb82-4990-8d29-03f53c601f36\") " Feb 03 12:34:03 crc kubenswrapper[4820]: I0203 12:34:03.872086 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/cd46da3e-bb82-4990-8d29-03f53c601f36-custom-prometheus-ca\") pod \"cd46da3e-bb82-4990-8d29-03f53c601f36\" (UID: \"cd46da3e-bb82-4990-8d29-03f53c601f36\") " Feb 03 12:34:03 crc kubenswrapper[4820]: I0203 12:34:03.872113 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd46da3e-bb82-4990-8d29-03f53c601f36-config-data\") pod \"cd46da3e-bb82-4990-8d29-03f53c601f36\" (UID: \"cd46da3e-bb82-4990-8d29-03f53c601f36\") " Feb 03 12:34:03 crc kubenswrapper[4820]: I0203 12:34:03.872159 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cd46da3e-bb82-4990-8d29-03f53c601f36-logs\") pod \"cd46da3e-bb82-4990-8d29-03f53c601f36\" (UID: \"cd46da3e-bb82-4990-8d29-03f53c601f36\") " Feb 03 12:34:03 crc kubenswrapper[4820]: I0203 12:34:03.872236 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd46da3e-bb82-4990-8d29-03f53c601f36-combined-ca-bundle\") pod \"cd46da3e-bb82-4990-8d29-03f53c601f36\" (UID: \"cd46da3e-bb82-4990-8d29-03f53c601f36\") " Feb 03 12:34:03 crc kubenswrapper[4820]: I0203 12:34:03.872943 4820 operation_generator.go:803] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/empty-dir/cd46da3e-bb82-4990-8d29-03f53c601f36-logs" (OuterVolumeSpecName: "logs") pod "cd46da3e-bb82-4990-8d29-03f53c601f36" (UID: "cd46da3e-bb82-4990-8d29-03f53c601f36"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 12:34:03 crc kubenswrapper[4820]: I0203 12:34:03.881025 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd46da3e-bb82-4990-8d29-03f53c601f36-kube-api-access-csbcd" (OuterVolumeSpecName: "kube-api-access-csbcd") pod "cd46da3e-bb82-4990-8d29-03f53c601f36" (UID: "cd46da3e-bb82-4990-8d29-03f53c601f36"). InnerVolumeSpecName "kube-api-access-csbcd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:34:03 crc kubenswrapper[4820]: I0203 12:34:03.918930 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd46da3e-bb82-4990-8d29-03f53c601f36-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "cd46da3e-bb82-4990-8d29-03f53c601f36" (UID: "cd46da3e-bb82-4990-8d29-03f53c601f36"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:34:03 crc kubenswrapper[4820]: I0203 12:34:03.933860 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd46da3e-bb82-4990-8d29-03f53c601f36-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cd46da3e-bb82-4990-8d29-03f53c601f36" (UID: "cd46da3e-bb82-4990-8d29-03f53c601f36"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:34:03 crc kubenswrapper[4820]: I0203 12:34:03.976437 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-csbcd\" (UniqueName: \"kubernetes.io/projected/cd46da3e-bb82-4990-8d29-03f53c601f36-kube-api-access-csbcd\") on node \"crc\" DevicePath \"\"" Feb 03 12:34:03 crc kubenswrapper[4820]: I0203 12:34:03.976488 4820 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/cd46da3e-bb82-4990-8d29-03f53c601f36-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Feb 03 12:34:03 crc kubenswrapper[4820]: I0203 12:34:03.976502 4820 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cd46da3e-bb82-4990-8d29-03f53c601f36-logs\") on node \"crc\" DevicePath \"\"" Feb 03 12:34:03 crc kubenswrapper[4820]: I0203 12:34:03.976515 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd46da3e-bb82-4990-8d29-03f53c601f36-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 12:34:03 crc kubenswrapper[4820]: I0203 12:34:03.985970 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd46da3e-bb82-4990-8d29-03f53c601f36-config-data" (OuterVolumeSpecName: "config-data") pod "cd46da3e-bb82-4990-8d29-03f53c601f36" (UID: "cd46da3e-bb82-4990-8d29-03f53c601f36"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:34:04 crc kubenswrapper[4820]: I0203 12:34:04.078987 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd46da3e-bb82-4990-8d29-03f53c601f36-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 12:34:04 crc kubenswrapper[4820]: I0203 12:34:04.111559 4820 generic.go:334] "Generic (PLEG): container finished" podID="dc2385eb-3720-486c-a1e6-de8d39b81012" containerID="f949cbac1cb513adc0dfabe66243b81171539e8a232158bf45721e44acbbb55a" exitCode=0 Feb 03 12:34:04 crc kubenswrapper[4820]: I0203 12:34:04.111652 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dc2385eb-3720-486c-a1e6-de8d39b81012","Type":"ContainerDied","Data":"f949cbac1cb513adc0dfabe66243b81171539e8a232158bf45721e44acbbb55a"} Feb 03 12:34:04 crc kubenswrapper[4820]: I0203 12:34:04.113982 4820 generic.go:334] "Generic (PLEG): container finished" podID="cd46da3e-bb82-4990-8d29-03f53c601f36" containerID="76b337ba9889fcd2d04101da32a8144cfa4c5201045390fad76182170e8b3ad0" exitCode=0 Feb 03 12:34:04 crc kubenswrapper[4820]: I0203 12:34:04.114043 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-decision-engine-0" Feb 03 12:34:04 crc kubenswrapper[4820]: I0203 12:34:04.114065 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"cd46da3e-bb82-4990-8d29-03f53c601f36","Type":"ContainerDied","Data":"76b337ba9889fcd2d04101da32a8144cfa4c5201045390fad76182170e8b3ad0"} Feb 03 12:34:04 crc kubenswrapper[4820]: I0203 12:34:04.114112 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"cd46da3e-bb82-4990-8d29-03f53c601f36","Type":"ContainerDied","Data":"34f192d25c3849d5570abba516d21ab74bfe6f71f38d2b880eed43669213e3e1"} Feb 03 12:34:04 crc kubenswrapper[4820]: I0203 12:34:04.114134 4820 scope.go:117] "RemoveContainer" containerID="76b337ba9889fcd2d04101da32a8144cfa4c5201045390fad76182170e8b3ad0" Feb 03 12:34:04 crc kubenswrapper[4820]: I0203 12:34:04.174563 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-decision-engine-0"] Feb 03 12:34:04 crc kubenswrapper[4820]: I0203 12:34:04.174742 4820 scope.go:117] "RemoveContainer" containerID="76b337ba9889fcd2d04101da32a8144cfa4c5201045390fad76182170e8b3ad0" Feb 03 12:34:04 crc kubenswrapper[4820]: E0203 12:34:04.178686 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"76b337ba9889fcd2d04101da32a8144cfa4c5201045390fad76182170e8b3ad0\": container with ID starting with 76b337ba9889fcd2d04101da32a8144cfa4c5201045390fad76182170e8b3ad0 not found: ID does not exist" containerID="76b337ba9889fcd2d04101da32a8144cfa4c5201045390fad76182170e8b3ad0" Feb 03 12:34:04 crc kubenswrapper[4820]: I0203 12:34:04.178761 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"76b337ba9889fcd2d04101da32a8144cfa4c5201045390fad76182170e8b3ad0"} err="failed to get container status \"76b337ba9889fcd2d04101da32a8144cfa4c5201045390fad76182170e8b3ad0\": rpc error: code = NotFound desc = could not find container \"76b337ba9889fcd2d04101da32a8144cfa4c5201045390fad76182170e8b3ad0\": container with ID starting with 76b337ba9889fcd2d04101da32a8144cfa4c5201045390fad76182170e8b3ad0 not found: ID does not exist" Feb 03 12:34:04 crc kubenswrapper[4820]: I0203 
12:34:04.199981 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-decision-engine-0"] Feb 03 12:34:04 crc kubenswrapper[4820]: I0203 12:34:04.211825 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-decision-engine-0"] Feb 03 12:34:04 crc kubenswrapper[4820]: E0203 12:34:04.212671 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b618165-67ac-4c04-9ccd-5e36c48c7b75" containerName="extract-content" Feb 03 12:34:04 crc kubenswrapper[4820]: I0203 12:34:04.212704 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b618165-67ac-4c04-9ccd-5e36c48c7b75" containerName="extract-content" Feb 03 12:34:04 crc kubenswrapper[4820]: E0203 12:34:04.212768 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b618165-67ac-4c04-9ccd-5e36c48c7b75" containerName="registry-server" Feb 03 12:34:04 crc kubenswrapper[4820]: I0203 12:34:04.212779 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b618165-67ac-4c04-9ccd-5e36c48c7b75" containerName="registry-server" Feb 03 12:34:04 crc kubenswrapper[4820]: E0203 12:34:04.212786 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd46da3e-bb82-4990-8d29-03f53c601f36" containerName="watcher-decision-engine" Feb 03 12:34:04 crc kubenswrapper[4820]: I0203 12:34:04.212794 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd46da3e-bb82-4990-8d29-03f53c601f36" containerName="watcher-decision-engine" Feb 03 12:34:04 crc kubenswrapper[4820]: E0203 12:34:04.212807 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b618165-67ac-4c04-9ccd-5e36c48c7b75" containerName="extract-utilities" Feb 03 12:34:04 crc kubenswrapper[4820]: I0203 12:34:04.212816 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b618165-67ac-4c04-9ccd-5e36c48c7b75" containerName="extract-utilities" Feb 03 12:34:04 crc kubenswrapper[4820]: I0203 12:34:04.213069 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd46da3e-bb82-4990-8d29-03f53c601f36" containerName="watcher-decision-engine" Feb 03 12:34:04 crc kubenswrapper[4820]: I0203 12:34:04.213097 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b618165-67ac-4c04-9ccd-5e36c48c7b75" containerName="registry-server" Feb 03 12:34:04 crc kubenswrapper[4820]: I0203 12:34:04.214358 4820 util.go:30] "No sandbox for pod can be found. 
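
The cpu_manager / memory_manager RemoveStaleState records above fire because openstack/watcher-decision-engine-0 was deleted and re-added: the pod name is reused but the UID changes, so per-container resource-manager state keyed by (podUID, containerName) for the old UID, and for the already-removed certified-operators pod, is dropped before the new pod is admitted. A sketch of that bookkeeping with simplified types (not the kubelet's actual state interface):

    package main

    import "fmt"

    type key struct{ podUID, container string }

    // removeStaleState drops assignments whose pod UID is no longer
    // active, mirroring the RemoveStaleState records above.
    func removeStaleState(assignments map[key]string, active map[string]bool) {
        for k := range assignments {
            if !active[k.podUID] {
                fmt.Printf("RemoveStaleState: removing container %q of pod %q\n", k.container, k.podUID)
                delete(assignments, k)
            }
        }
    }

    func main() {
        assignments := map[key]string{
            {"cd46da3e-bb82-4990-8d29-03f53c601f36", "watcher-decision-engine"}: "cpuset (illustrative)",
            {"8a78aa6d-1dad-4a68-8ec3-f455e7d21fbe", "watcher-decision-engine"}: "cpuset (illustrative)",
        }
        // Only the recreated pod's new UID is still active.
        removeStaleState(assignments, map[string]bool{"8a78aa6d-1dad-4a68-8ec3-f455e7d21fbe": true})
    }
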
Need to start a new one" pod="openstack/watcher-decision-engine-0" Feb 03 12:34:04 crc kubenswrapper[4820]: I0203 12:34:04.219855 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-decision-engine-config-data" Feb 03 12:34:04 crc kubenswrapper[4820]: I0203 12:34:04.223526 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"] Feb 03 12:34:04 crc kubenswrapper[4820]: I0203 12:34:04.720081 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/8a78aa6d-1dad-4a68-8ec3-f455e7d21fbe-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"8a78aa6d-1dad-4a68-8ec3-f455e7d21fbe\") " pod="openstack/watcher-decision-engine-0" Feb 03 12:34:04 crc kubenswrapper[4820]: I0203 12:34:04.720169 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a78aa6d-1dad-4a68-8ec3-f455e7d21fbe-config-data\") pod \"watcher-decision-engine-0\" (UID: \"8a78aa6d-1dad-4a68-8ec3-f455e7d21fbe\") " pod="openstack/watcher-decision-engine-0" Feb 03 12:34:04 crc kubenswrapper[4820]: I0203 12:34:04.720245 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2wxqh\" (UniqueName: \"kubernetes.io/projected/8a78aa6d-1dad-4a68-8ec3-f455e7d21fbe-kube-api-access-2wxqh\") pod \"watcher-decision-engine-0\" (UID: \"8a78aa6d-1dad-4a68-8ec3-f455e7d21fbe\") " pod="openstack/watcher-decision-engine-0" Feb 03 12:34:04 crc kubenswrapper[4820]: I0203 12:34:04.720322 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8a78aa6d-1dad-4a68-8ec3-f455e7d21fbe-logs\") pod \"watcher-decision-engine-0\" (UID: \"8a78aa6d-1dad-4a68-8ec3-f455e7d21fbe\") " pod="openstack/watcher-decision-engine-0" Feb 03 12:34:04 crc kubenswrapper[4820]: I0203 12:34:04.720356 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a78aa6d-1dad-4a68-8ec3-f455e7d21fbe-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"8a78aa6d-1dad-4a68-8ec3-f455e7d21fbe\") " pod="openstack/watcher-decision-engine-0" Feb 03 12:34:04 crc kubenswrapper[4820]: I0203 12:34:04.823492 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2wxqh\" (UniqueName: \"kubernetes.io/projected/8a78aa6d-1dad-4a68-8ec3-f455e7d21fbe-kube-api-access-2wxqh\") pod \"watcher-decision-engine-0\" (UID: \"8a78aa6d-1dad-4a68-8ec3-f455e7d21fbe\") " pod="openstack/watcher-decision-engine-0" Feb 03 12:34:04 crc kubenswrapper[4820]: I0203 12:34:04.823618 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8a78aa6d-1dad-4a68-8ec3-f455e7d21fbe-logs\") pod \"watcher-decision-engine-0\" (UID: \"8a78aa6d-1dad-4a68-8ec3-f455e7d21fbe\") " pod="openstack/watcher-decision-engine-0" Feb 03 12:34:04 crc kubenswrapper[4820]: I0203 12:34:04.823726 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a78aa6d-1dad-4a68-8ec3-f455e7d21fbe-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"8a78aa6d-1dad-4a68-8ec3-f455e7d21fbe\") " 
pod="openstack/watcher-decision-engine-0" Feb 03 12:34:04 crc kubenswrapper[4820]: I0203 12:34:04.825445 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8a78aa6d-1dad-4a68-8ec3-f455e7d21fbe-logs\") pod \"watcher-decision-engine-0\" (UID: \"8a78aa6d-1dad-4a68-8ec3-f455e7d21fbe\") " pod="openstack/watcher-decision-engine-0" Feb 03 12:34:04 crc kubenswrapper[4820]: I0203 12:34:04.826216 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/8a78aa6d-1dad-4a68-8ec3-f455e7d21fbe-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"8a78aa6d-1dad-4a68-8ec3-f455e7d21fbe\") " pod="openstack/watcher-decision-engine-0" Feb 03 12:34:04 crc kubenswrapper[4820]: I0203 12:34:04.826870 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a78aa6d-1dad-4a68-8ec3-f455e7d21fbe-config-data\") pod \"watcher-decision-engine-0\" (UID: \"8a78aa6d-1dad-4a68-8ec3-f455e7d21fbe\") " pod="openstack/watcher-decision-engine-0" Feb 03 12:34:04 crc kubenswrapper[4820]: I0203 12:34:04.831771 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/8a78aa6d-1dad-4a68-8ec3-f455e7d21fbe-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"8a78aa6d-1dad-4a68-8ec3-f455e7d21fbe\") " pod="openstack/watcher-decision-engine-0" Feb 03 12:34:04 crc kubenswrapper[4820]: I0203 12:34:04.836586 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a78aa6d-1dad-4a68-8ec3-f455e7d21fbe-config-data\") pod \"watcher-decision-engine-0\" (UID: \"8a78aa6d-1dad-4a68-8ec3-f455e7d21fbe\") " pod="openstack/watcher-decision-engine-0" Feb 03 12:34:04 crc kubenswrapper[4820]: I0203 12:34:04.854235 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a78aa6d-1dad-4a68-8ec3-f455e7d21fbe-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"8a78aa6d-1dad-4a68-8ec3-f455e7d21fbe\") " pod="openstack/watcher-decision-engine-0" Feb 03 12:34:04 crc kubenswrapper[4820]: I0203 12:34:04.861711 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2wxqh\" (UniqueName: \"kubernetes.io/projected/8a78aa6d-1dad-4a68-8ec3-f455e7d21fbe-kube-api-access-2wxqh\") pod \"watcher-decision-engine-0\" (UID: \"8a78aa6d-1dad-4a68-8ec3-f455e7d21fbe\") " pod="openstack/watcher-decision-engine-0" Feb 03 12:34:04 crc kubenswrapper[4820]: I0203 12:34:04.947015 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-decision-engine-0" Feb 03 12:34:05 crc kubenswrapper[4820]: I0203 12:34:05.445110 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd46da3e-bb82-4990-8d29-03f53c601f36" path="/var/lib/kubelet/pods/cd46da3e-bb82-4990-8d29-03f53c601f36/volumes" Feb 03 12:34:05 crc kubenswrapper[4820]: I0203 12:34:05.799685 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"] Feb 03 12:34:06 crc kubenswrapper[4820]: I0203 12:34:06.348113 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 03 12:34:06 crc kubenswrapper[4820]: I0203 12:34:06.348922 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="edffb607-1bfe-4aa0-a39a-2f65dbd5077b" containerName="glance-log" containerID="cri-o://0c62c8d40e2374efec58a1d6200883c3886262bc971d0dddd6c3071627c32404" gracePeriod=30 Feb 03 12:34:06 crc kubenswrapper[4820]: I0203 12:34:06.349716 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="edffb607-1bfe-4aa0-a39a-2f65dbd5077b" containerName="glance-httpd" containerID="cri-o://f57517145c69ff58a37429d1d6da6b1320da910fa8eec9a97f24e58f0c83b1bd" gracePeriod=30 Feb 03 12:34:06 crc kubenswrapper[4820]: I0203 12:34:06.464630 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"8a78aa6d-1dad-4a68-8ec3-f455e7d21fbe","Type":"ContainerStarted","Data":"ef1a4d2205b9f2168690d8b0e6ae999588fb5b01405bbc55a42474c3c3d2a0c6"} Feb 03 12:34:06 crc kubenswrapper[4820]: I0203 12:34:06.465058 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"8a78aa6d-1dad-4a68-8ec3-f455e7d21fbe","Type":"ContainerStarted","Data":"edd69e945395c2189c9135f5d4c22b9deaf145e2b527c5be142ee3a55c6ceef1"} Feb 03 12:34:06 crc kubenswrapper[4820]: I0203 12:34:06.497356 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-decision-engine-0" podStartSLOduration=2.497337214 podStartE2EDuration="2.497337214s" podCreationTimestamp="2026-02-03 12:34:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:34:06.491342692 +0000 UTC m=+1764.014418546" watchObservedRunningTime="2026-02-03 12:34:06.497337214 +0000 UTC m=+1764.020413078" Feb 03 12:34:07 crc kubenswrapper[4820]: I0203 12:34:07.683361 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-qjmdv" Feb 03 12:34:07 crc kubenswrapper[4820]: I0203 12:34:07.685210 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-qjmdv" Feb 03 12:34:07 crc kubenswrapper[4820]: I0203 12:34:07.821427 4820 generic.go:334] "Generic (PLEG): container finished" podID="edffb607-1bfe-4aa0-a39a-2f65dbd5077b" containerID="0c62c8d40e2374efec58a1d6200883c3886262bc971d0dddd6c3071627c32404" exitCode=143 Feb 03 12:34:07 crc kubenswrapper[4820]: I0203 12:34:07.822035 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"edffb607-1bfe-4aa0-a39a-2f65dbd5077b","Type":"ContainerDied","Data":"0c62c8d40e2374efec58a1d6200883c3886262bc971d0dddd6c3071627c32404"} Feb 03 12:34:08 crc kubenswrapper[4820]: I0203 
12:34:08.727469 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 03 12:34:08 crc kubenswrapper[4820]: I0203 12:34:08.727780 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="457bfab7-1523-4ef8-b7f1-a6d0d54351e4" containerName="glance-log" containerID="cri-o://99cc1a5e327a494a909e2431d2d79b83ea5e05bb06dacc098d8eee0beae4562d" gracePeriod=30 Feb 03 12:34:08 crc kubenswrapper[4820]: I0203 12:34:08.727863 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="457bfab7-1523-4ef8-b7f1-a6d0d54351e4" containerName="glance-httpd" containerID="cri-o://13abe95444ae1d3f04630b66ccf57bc5376461c6a05fb41beb32df66659ee407" gracePeriod=30 Feb 03 12:34:08 crc kubenswrapper[4820]: I0203 12:34:08.871676 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-qjmdv" podUID="ad9b0bbe-7f17-4347-bda3-5f0a843b3997" containerName="registry-server" probeResult="failure" output=< Feb 03 12:34:08 crc kubenswrapper[4820]: timeout: failed to connect service ":50051" within 1s Feb 03 12:34:08 crc kubenswrapper[4820]: > Feb 03 12:34:10 crc kubenswrapper[4820]: I0203 12:34:10.172752 4820 scope.go:117] "RemoveContainer" containerID="3cd22393026f08f47789b9666f306edb2325c4bd0fbfd405ee1636bfdfa661d3" Feb 03 12:34:10 crc kubenswrapper[4820]: E0203 12:34:10.179264 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qj7xr_openshift-machine-config-operator(2c02def6-29f2-448e-80ec-0c8ee290f053)\"" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" Feb 03 12:34:10 crc kubenswrapper[4820]: I0203 12:34:10.235100 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-46nxk" event={"ID":"d4619454-1fe5-4b0c-8fae-1ffc6b92cfbc","Type":"ContainerStarted","Data":"bdf170be16612ff8006e51412c9af2c34cf09e6db469635780f6dc5a2ea76f20"} Feb 03 12:34:10 crc kubenswrapper[4820]: I0203 12:34:10.239450 4820 generic.go:334] "Generic (PLEG): container finished" podID="457bfab7-1523-4ef8-b7f1-a6d0d54351e4" containerID="99cc1a5e327a494a909e2431d2d79b83ea5e05bb06dacc098d8eee0beae4562d" exitCode=143 Feb 03 12:34:10 crc kubenswrapper[4820]: I0203 12:34:10.239489 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"457bfab7-1523-4ef8-b7f1-a6d0d54351e4","Type":"ContainerDied","Data":"99cc1a5e327a494a909e2431d2d79b83ea5e05bb06dacc098d8eee0beae4562d"} Feb 03 12:34:10 crc kubenswrapper[4820]: I0203 12:34:10.273254 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-46nxk" podStartSLOduration=10.766926094 podStartE2EDuration="38.273219725s" podCreationTimestamp="2026-02-03 12:33:32 +0000 UTC" firstStartedPulling="2026-02-03 12:33:41.248049676 +0000 UTC m=+1738.771125540" lastFinishedPulling="2026-02-03 12:34:08.754343307 +0000 UTC m=+1766.277419171" observedRunningTime="2026-02-03 12:34:10.267922482 +0000 UTC m=+1767.790998386" watchObservedRunningTime="2026-02-03 12:34:10.273219725 +0000 UTC m=+1767.796295589" Feb 03 12:34:11 crc kubenswrapper[4820]: I0203 12:34:11.359382 4820 generic.go:334] "Generic (PLEG): 
container finished" podID="edffb607-1bfe-4aa0-a39a-2f65dbd5077b" containerID="f57517145c69ff58a37429d1d6da6b1320da910fa8eec9a97f24e58f0c83b1bd" exitCode=0 Feb 03 12:34:11 crc kubenswrapper[4820]: I0203 12:34:11.360866 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"edffb607-1bfe-4aa0-a39a-2f65dbd5077b","Type":"ContainerDied","Data":"f57517145c69ff58a37429d1d6da6b1320da910fa8eec9a97f24e58f0c83b1bd"} Feb 03 12:34:11 crc kubenswrapper[4820]: I0203 12:34:11.609476 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 03 12:34:11 crc kubenswrapper[4820]: I0203 12:34:11.692440 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q4bf4\" (UniqueName: \"kubernetes.io/projected/edffb607-1bfe-4aa0-a39a-2f65dbd5077b-kube-api-access-q4bf4\") pod \"edffb607-1bfe-4aa0-a39a-2f65dbd5077b\" (UID: \"edffb607-1bfe-4aa0-a39a-2f65dbd5077b\") " Feb 03 12:34:11 crc kubenswrapper[4820]: I0203 12:34:11.692530 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/edffb607-1bfe-4aa0-a39a-2f65dbd5077b-httpd-run\") pod \"edffb607-1bfe-4aa0-a39a-2f65dbd5077b\" (UID: \"edffb607-1bfe-4aa0-a39a-2f65dbd5077b\") " Feb 03 12:34:11 crc kubenswrapper[4820]: I0203 12:34:11.692565 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/edffb607-1bfe-4aa0-a39a-2f65dbd5077b-logs\") pod \"edffb607-1bfe-4aa0-a39a-2f65dbd5077b\" (UID: \"edffb607-1bfe-4aa0-a39a-2f65dbd5077b\") " Feb 03 12:34:11 crc kubenswrapper[4820]: I0203 12:34:11.692600 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/edffb607-1bfe-4aa0-a39a-2f65dbd5077b-config-data\") pod \"edffb607-1bfe-4aa0-a39a-2f65dbd5077b\" (UID: \"edffb607-1bfe-4aa0-a39a-2f65dbd5077b\") " Feb 03 12:34:11 crc kubenswrapper[4820]: I0203 12:34:11.692640 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/edffb607-1bfe-4aa0-a39a-2f65dbd5077b-scripts\") pod \"edffb607-1bfe-4aa0-a39a-2f65dbd5077b\" (UID: \"edffb607-1bfe-4aa0-a39a-2f65dbd5077b\") " Feb 03 12:34:11 crc kubenswrapper[4820]: I0203 12:34:11.692711 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"edffb607-1bfe-4aa0-a39a-2f65dbd5077b\" (UID: \"edffb607-1bfe-4aa0-a39a-2f65dbd5077b\") " Feb 03 12:34:11 crc kubenswrapper[4820]: I0203 12:34:11.692790 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/edffb607-1bfe-4aa0-a39a-2f65dbd5077b-public-tls-certs\") pod \"edffb607-1bfe-4aa0-a39a-2f65dbd5077b\" (UID: \"edffb607-1bfe-4aa0-a39a-2f65dbd5077b\") " Feb 03 12:34:11 crc kubenswrapper[4820]: I0203 12:34:11.692819 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edffb607-1bfe-4aa0-a39a-2f65dbd5077b-combined-ca-bundle\") pod \"edffb607-1bfe-4aa0-a39a-2f65dbd5077b\" (UID: \"edffb607-1bfe-4aa0-a39a-2f65dbd5077b\") " Feb 03 12:34:11 crc kubenswrapper[4820]: I0203 12:34:11.693676 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/empty-dir/edffb607-1bfe-4aa0-a39a-2f65dbd5077b-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "edffb607-1bfe-4aa0-a39a-2f65dbd5077b" (UID: "edffb607-1bfe-4aa0-a39a-2f65dbd5077b"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 12:34:11 crc kubenswrapper[4820]: I0203 12:34:11.694772 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/edffb607-1bfe-4aa0-a39a-2f65dbd5077b-logs" (OuterVolumeSpecName: "logs") pod "edffb607-1bfe-4aa0-a39a-2f65dbd5077b" (UID: "edffb607-1bfe-4aa0-a39a-2f65dbd5077b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 12:34:11 crc kubenswrapper[4820]: I0203 12:34:11.706584 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/edffb607-1bfe-4aa0-a39a-2f65dbd5077b-kube-api-access-q4bf4" (OuterVolumeSpecName: "kube-api-access-q4bf4") pod "edffb607-1bfe-4aa0-a39a-2f65dbd5077b" (UID: "edffb607-1bfe-4aa0-a39a-2f65dbd5077b"). InnerVolumeSpecName "kube-api-access-q4bf4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:34:11 crc kubenswrapper[4820]: I0203 12:34:11.710315 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/edffb607-1bfe-4aa0-a39a-2f65dbd5077b-scripts" (OuterVolumeSpecName: "scripts") pod "edffb607-1bfe-4aa0-a39a-2f65dbd5077b" (UID: "edffb607-1bfe-4aa0-a39a-2f65dbd5077b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:34:11 crc kubenswrapper[4820]: I0203 12:34:11.710740 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "glance") pod "edffb607-1bfe-4aa0-a39a-2f65dbd5077b" (UID: "edffb607-1bfe-4aa0-a39a-2f65dbd5077b"). InnerVolumeSpecName "local-storage08-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 03 12:34:11 crc kubenswrapper[4820]: I0203 12:34:11.798622 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q4bf4\" (UniqueName: \"kubernetes.io/projected/edffb607-1bfe-4aa0-a39a-2f65dbd5077b-kube-api-access-q4bf4\") on node \"crc\" DevicePath \"\"" Feb 03 12:34:11 crc kubenswrapper[4820]: I0203 12:34:11.798679 4820 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/edffb607-1bfe-4aa0-a39a-2f65dbd5077b-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 03 12:34:11 crc kubenswrapper[4820]: I0203 12:34:11.798696 4820 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/edffb607-1bfe-4aa0-a39a-2f65dbd5077b-logs\") on node \"crc\" DevicePath \"\"" Feb 03 12:34:11 crc kubenswrapper[4820]: I0203 12:34:11.798707 4820 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/edffb607-1bfe-4aa0-a39a-2f65dbd5077b-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 12:34:11 crc kubenswrapper[4820]: I0203 12:34:11.798744 4820 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Feb 03 12:34:11 crc kubenswrapper[4820]: I0203 12:34:11.806615 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/edffb607-1bfe-4aa0-a39a-2f65dbd5077b-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "edffb607-1bfe-4aa0-a39a-2f65dbd5077b" (UID: "edffb607-1bfe-4aa0-a39a-2f65dbd5077b"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:34:11 crc kubenswrapper[4820]: I0203 12:34:11.847157 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/edffb607-1bfe-4aa0-a39a-2f65dbd5077b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "edffb607-1bfe-4aa0-a39a-2f65dbd5077b" (UID: "edffb607-1bfe-4aa0-a39a-2f65dbd5077b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:34:11 crc kubenswrapper[4820]: I0203 12:34:11.852586 4820 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Feb 03 12:34:11 crc kubenswrapper[4820]: I0203 12:34:11.879722 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/edffb607-1bfe-4aa0-a39a-2f65dbd5077b-config-data" (OuterVolumeSpecName: "config-data") pod "edffb607-1bfe-4aa0-a39a-2f65dbd5077b" (UID: "edffb607-1bfe-4aa0-a39a-2f65dbd5077b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:34:11 crc kubenswrapper[4820]: I0203 12:34:11.901753 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/edffb607-1bfe-4aa0-a39a-2f65dbd5077b-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 12:34:11 crc kubenswrapper[4820]: I0203 12:34:11.901786 4820 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Feb 03 12:34:11 crc kubenswrapper[4820]: I0203 12:34:11.901796 4820 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/edffb607-1bfe-4aa0-a39a-2f65dbd5077b-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 03 12:34:11 crc kubenswrapper[4820]: I0203 12:34:11.901807 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edffb607-1bfe-4aa0-a39a-2f65dbd5077b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 12:34:12 crc kubenswrapper[4820]: I0203 12:34:12.114777 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="dc2385eb-3720-486c-a1e6-de8d39b81012" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 03 12:34:12 crc kubenswrapper[4820]: I0203 12:34:12.374389 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"edffb607-1bfe-4aa0-a39a-2f65dbd5077b","Type":"ContainerDied","Data":"a761c4033520fcbb4f178b0c08839803c438a9ab1c9025509095c44e3052b6f2"} Feb 03 12:34:12 crc kubenswrapper[4820]: I0203 12:34:12.374458 4820 scope.go:117] "RemoveContainer" containerID="f57517145c69ff58a37429d1d6da6b1320da910fa8eec9a97f24e58f0c83b1bd" Feb 03 12:34:12 crc kubenswrapper[4820]: I0203 12:34:12.374653 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 03 12:34:12 crc kubenswrapper[4820]: I0203 12:34:12.436181 4820 scope.go:117] "RemoveContainer" containerID="0c62c8d40e2374efec58a1d6200883c3886262bc971d0dddd6c3071627c32404" Feb 03 12:34:12 crc kubenswrapper[4820]: I0203 12:34:12.459046 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 03 12:34:12 crc kubenswrapper[4820]: I0203 12:34:12.491847 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 03 12:34:12 crc kubenswrapper[4820]: I0203 12:34:12.514067 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 03 12:34:12 crc kubenswrapper[4820]: E0203 12:34:12.514825 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="edffb607-1bfe-4aa0-a39a-2f65dbd5077b" containerName="glance-log" Feb 03 12:34:12 crc kubenswrapper[4820]: I0203 12:34:12.514850 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="edffb607-1bfe-4aa0-a39a-2f65dbd5077b" containerName="glance-log" Feb 03 12:34:12 crc kubenswrapper[4820]: E0203 12:34:12.514906 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="edffb607-1bfe-4aa0-a39a-2f65dbd5077b" containerName="glance-httpd" Feb 03 12:34:12 crc kubenswrapper[4820]: I0203 12:34:12.514917 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="edffb607-1bfe-4aa0-a39a-2f65dbd5077b" containerName="glance-httpd" Feb 03 12:34:12 crc kubenswrapper[4820]: I0203 12:34:12.515184 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="edffb607-1bfe-4aa0-a39a-2f65dbd5077b" containerName="glance-log" Feb 03 12:34:12 crc kubenswrapper[4820]: I0203 12:34:12.515204 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="edffb607-1bfe-4aa0-a39a-2f65dbd5077b" containerName="glance-httpd" Feb 03 12:34:12 crc kubenswrapper[4820]: I0203 12:34:12.517324 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 03 12:34:12 crc kubenswrapper[4820]: I0203 12:34:12.522110 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 03 12:34:12 crc kubenswrapper[4820]: I0203 12:34:12.522255 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 03 12:34:12 crc kubenswrapper[4820]: I0203 12:34:12.525530 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 03 12:34:12 crc kubenswrapper[4820]: I0203 12:34:12.970403 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/51339dae-75ae-4857-853e-d4d0a0a1aa65-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"51339dae-75ae-4857-853e-d4d0a0a1aa65\") " pod="openstack/glance-default-external-api-0" Feb 03 12:34:12 crc kubenswrapper[4820]: I0203 12:34:12.970461 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"51339dae-75ae-4857-853e-d4d0a0a1aa65\") " pod="openstack/glance-default-external-api-0" Feb 03 12:34:12 crc kubenswrapper[4820]: I0203 12:34:12.970863 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/51339dae-75ae-4857-853e-d4d0a0a1aa65-scripts\") pod \"glance-default-external-api-0\" (UID: \"51339dae-75ae-4857-853e-d4d0a0a1aa65\") " pod="openstack/glance-default-external-api-0" Feb 03 12:34:12 crc kubenswrapper[4820]: I0203 12:34:12.971096 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/51339dae-75ae-4857-853e-d4d0a0a1aa65-logs\") pod \"glance-default-external-api-0\" (UID: \"51339dae-75ae-4857-853e-d4d0a0a1aa65\") " pod="openstack/glance-default-external-api-0" Feb 03 12:34:12 crc kubenswrapper[4820]: I0203 12:34:12.971482 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/51339dae-75ae-4857-853e-d4d0a0a1aa65-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"51339dae-75ae-4857-853e-d4d0a0a1aa65\") " pod="openstack/glance-default-external-api-0" Feb 03 12:34:12 crc kubenswrapper[4820]: I0203 12:34:12.971582 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bqwbp\" (UniqueName: \"kubernetes.io/projected/51339dae-75ae-4857-853e-d4d0a0a1aa65-kube-api-access-bqwbp\") pod \"glance-default-external-api-0\" (UID: \"51339dae-75ae-4857-853e-d4d0a0a1aa65\") " pod="openstack/glance-default-external-api-0" Feb 03 12:34:12 crc kubenswrapper[4820]: I0203 12:34:12.971853 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51339dae-75ae-4857-853e-d4d0a0a1aa65-config-data\") pod \"glance-default-external-api-0\" (UID: \"51339dae-75ae-4857-853e-d4d0a0a1aa65\") " pod="openstack/glance-default-external-api-0" Feb 03 12:34:12 crc kubenswrapper[4820]: I0203 12:34:12.971937 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51339dae-75ae-4857-853e-d4d0a0a1aa65-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"51339dae-75ae-4857-853e-d4d0a0a1aa65\") " pod="openstack/glance-default-external-api-0" Feb 03 12:34:13 crc kubenswrapper[4820]: I0203 12:34:13.073857 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bqwbp\" (UniqueName: \"kubernetes.io/projected/51339dae-75ae-4857-853e-d4d0a0a1aa65-kube-api-access-bqwbp\") pod \"glance-default-external-api-0\" (UID: \"51339dae-75ae-4857-853e-d4d0a0a1aa65\") " pod="openstack/glance-default-external-api-0" Feb 03 12:34:13 crc kubenswrapper[4820]: I0203 12:34:13.074334 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51339dae-75ae-4857-853e-d4d0a0a1aa65-config-data\") pod \"glance-default-external-api-0\" (UID: \"51339dae-75ae-4857-853e-d4d0a0a1aa65\") " pod="openstack/glance-default-external-api-0" Feb 03 12:34:13 crc kubenswrapper[4820]: I0203 12:34:13.074382 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51339dae-75ae-4857-853e-d4d0a0a1aa65-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"51339dae-75ae-4857-853e-d4d0a0a1aa65\") " pod="openstack/glance-default-external-api-0" Feb 03 12:34:13 crc kubenswrapper[4820]: I0203 12:34:13.074436 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/51339dae-75ae-4857-853e-d4d0a0a1aa65-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"51339dae-75ae-4857-853e-d4d0a0a1aa65\") " pod="openstack/glance-default-external-api-0" Feb 03 12:34:13 crc kubenswrapper[4820]: I0203 12:34:13.074480 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"51339dae-75ae-4857-853e-d4d0a0a1aa65\") " pod="openstack/glance-default-external-api-0" Feb 03 12:34:13 crc kubenswrapper[4820]: I0203 12:34:13.074536 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/51339dae-75ae-4857-853e-d4d0a0a1aa65-scripts\") pod \"glance-default-external-api-0\" (UID: \"51339dae-75ae-4857-853e-d4d0a0a1aa65\") " pod="openstack/glance-default-external-api-0" Feb 03 12:34:13 crc kubenswrapper[4820]: I0203 12:34:13.074591 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/51339dae-75ae-4857-853e-d4d0a0a1aa65-logs\") pod \"glance-default-external-api-0\" (UID: \"51339dae-75ae-4857-853e-d4d0a0a1aa65\") " pod="openstack/glance-default-external-api-0" Feb 03 12:34:13 crc kubenswrapper[4820]: I0203 12:34:13.074640 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/51339dae-75ae-4857-853e-d4d0a0a1aa65-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"51339dae-75ae-4857-853e-d4d0a0a1aa65\") " pod="openstack/glance-default-external-api-0" Feb 03 12:34:13 crc kubenswrapper[4820]: I0203 12:34:13.076264 4820 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"51339dae-75ae-4857-853e-d4d0a0a1aa65\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/glance-default-external-api-0" Feb 03 12:34:13 crc kubenswrapper[4820]: I0203 12:34:13.080106 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/51339dae-75ae-4857-853e-d4d0a0a1aa65-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"51339dae-75ae-4857-853e-d4d0a0a1aa65\") " pod="openstack/glance-default-external-api-0" Feb 03 12:34:13 crc kubenswrapper[4820]: I0203 12:34:13.081606 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/51339dae-75ae-4857-853e-d4d0a0a1aa65-logs\") pod \"glance-default-external-api-0\" (UID: \"51339dae-75ae-4857-853e-d4d0a0a1aa65\") " pod="openstack/glance-default-external-api-0" Feb 03 12:34:13 crc kubenswrapper[4820]: I0203 12:34:13.089128 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51339dae-75ae-4857-853e-d4d0a0a1aa65-config-data\") pod \"glance-default-external-api-0\" (UID: \"51339dae-75ae-4857-853e-d4d0a0a1aa65\") " pod="openstack/glance-default-external-api-0" Feb 03 12:34:13 crc kubenswrapper[4820]: I0203 12:34:13.090789 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/51339dae-75ae-4857-853e-d4d0a0a1aa65-scripts\") pod \"glance-default-external-api-0\" (UID: \"51339dae-75ae-4857-853e-d4d0a0a1aa65\") " pod="openstack/glance-default-external-api-0" Feb 03 12:34:13 crc kubenswrapper[4820]: I0203 12:34:13.091824 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51339dae-75ae-4857-853e-d4d0a0a1aa65-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"51339dae-75ae-4857-853e-d4d0a0a1aa65\") " pod="openstack/glance-default-external-api-0" Feb 03 12:34:13 crc kubenswrapper[4820]: I0203 12:34:13.099706 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/51339dae-75ae-4857-853e-d4d0a0a1aa65-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"51339dae-75ae-4857-853e-d4d0a0a1aa65\") " pod="openstack/glance-default-external-api-0" Feb 03 12:34:13 crc kubenswrapper[4820]: I0203 12:34:13.127696 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-5fdc8588b4-jtjr8" podUID="17c371f7-f032-4444-8d4b-1183a224c7b0" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.165:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.165:8443: connect: connection refused" Feb 03 12:34:13 crc kubenswrapper[4820]: I0203 12:34:13.128101 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-5fdc8588b4-jtjr8" Feb 03 12:34:13 crc kubenswrapper[4820]: I0203 12:34:13.129371 4820 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="horizon" containerStatusID={"Type":"cri-o","ID":"0f246c95365efface890ae91ed527bfb457eda5dc7b27fa8cda294f51a37e248"} pod="openstack/horizon-5fdc8588b4-jtjr8" containerMessage="Container horizon failed startup probe, will be restarted" Feb 03 12:34:13 crc kubenswrapper[4820]: I0203 12:34:13.129619 4820 kuberuntime_container.go:808] "Killing container with a grace 
period" pod="openstack/horizon-5fdc8588b4-jtjr8" podUID="17c371f7-f032-4444-8d4b-1183a224c7b0" containerName="horizon" containerID="cri-o://0f246c95365efface890ae91ed527bfb457eda5dc7b27fa8cda294f51a37e248" gracePeriod=30 Feb 03 12:34:13 crc kubenswrapper[4820]: I0203 12:34:13.159446 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bqwbp\" (UniqueName: \"kubernetes.io/projected/51339dae-75ae-4857-853e-d4d0a0a1aa65-kube-api-access-bqwbp\") pod \"glance-default-external-api-0\" (UID: \"51339dae-75ae-4857-853e-d4d0a0a1aa65\") " pod="openstack/glance-default-external-api-0" Feb 03 12:34:13 crc kubenswrapper[4820]: I0203 12:34:13.180430 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"51339dae-75ae-4857-853e-d4d0a0a1aa65\") " pod="openstack/glance-default-external-api-0" Feb 03 12:34:13 crc kubenswrapper[4820]: I0203 12:34:13.190804 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="edffb607-1bfe-4aa0-a39a-2f65dbd5077b" path="/var/lib/kubelet/pods/edffb607-1bfe-4aa0-a39a-2f65dbd5077b/volumes" Feb 03 12:34:13 crc kubenswrapper[4820]: I0203 12:34:13.210355 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 03 12:34:13 crc kubenswrapper[4820]: I0203 12:34:13.404922 4820 generic.go:334] "Generic (PLEG): container finished" podID="457bfab7-1523-4ef8-b7f1-a6d0d54351e4" containerID="13abe95444ae1d3f04630b66ccf57bc5376461c6a05fb41beb32df66659ee407" exitCode=0 Feb 03 12:34:13 crc kubenswrapper[4820]: I0203 12:34:13.405284 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"457bfab7-1523-4ef8-b7f1-a6d0d54351e4","Type":"ContainerDied","Data":"13abe95444ae1d3f04630b66ccf57bc5376461c6a05fb41beb32df66659ee407"} Feb 03 12:34:14 crc kubenswrapper[4820]: I0203 12:34:14.018053 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-68b4df5bdd-tdb9h" podUID="308562dd-6078-4c1c-a4e0-c01a60a2d81d" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.166:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.166:8443: connect: connection refused" Feb 03 12:34:14 crc kubenswrapper[4820]: I0203 12:34:14.018186 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-68b4df5bdd-tdb9h" Feb 03 12:34:14 crc kubenswrapper[4820]: I0203 12:34:14.019602 4820 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="horizon" containerStatusID={"Type":"cri-o","ID":"9c7a577e87b3e83c7e349bf9ccd38e1f5613ee686a7353fa8aac276143a6016b"} pod="openstack/horizon-68b4df5bdd-tdb9h" containerMessage="Container horizon failed startup probe, will be restarted" Feb 03 12:34:14 crc kubenswrapper[4820]: I0203 12:34:14.019665 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-68b4df5bdd-tdb9h" podUID="308562dd-6078-4c1c-a4e0-c01a60a2d81d" containerName="horizon" containerID="cri-o://9c7a577e87b3e83c7e349bf9ccd38e1f5613ee686a7353fa8aac276143a6016b" gracePeriod=30 Feb 03 12:34:14 crc kubenswrapper[4820]: I0203 12:34:14.106828 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 03 12:34:14 crc kubenswrapper[4820]: I0203 12:34:14.226259 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"457bfab7-1523-4ef8-b7f1-a6d0d54351e4\" (UID: \"457bfab7-1523-4ef8-b7f1-a6d0d54351e4\") " Feb 03 12:34:14 crc kubenswrapper[4820]: I0203 12:34:14.226473 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/457bfab7-1523-4ef8-b7f1-a6d0d54351e4-combined-ca-bundle\") pod \"457bfab7-1523-4ef8-b7f1-a6d0d54351e4\" (UID: \"457bfab7-1523-4ef8-b7f1-a6d0d54351e4\") " Feb 03 12:34:14 crc kubenswrapper[4820]: I0203 12:34:14.226514 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/457bfab7-1523-4ef8-b7f1-a6d0d54351e4-scripts\") pod \"457bfab7-1523-4ef8-b7f1-a6d0d54351e4\" (UID: \"457bfab7-1523-4ef8-b7f1-a6d0d54351e4\") " Feb 03 12:34:14 crc kubenswrapper[4820]: I0203 12:34:14.226691 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/457bfab7-1523-4ef8-b7f1-a6d0d54351e4-logs\") pod \"457bfab7-1523-4ef8-b7f1-a6d0d54351e4\" (UID: \"457bfab7-1523-4ef8-b7f1-a6d0d54351e4\") " Feb 03 12:34:14 crc kubenswrapper[4820]: I0203 12:34:14.226723 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/457bfab7-1523-4ef8-b7f1-a6d0d54351e4-config-data\") pod \"457bfab7-1523-4ef8-b7f1-a6d0d54351e4\" (UID: \"457bfab7-1523-4ef8-b7f1-a6d0d54351e4\") " Feb 03 12:34:14 crc kubenswrapper[4820]: I0203 12:34:14.226747 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/457bfab7-1523-4ef8-b7f1-a6d0d54351e4-httpd-run\") pod \"457bfab7-1523-4ef8-b7f1-a6d0d54351e4\" (UID: \"457bfab7-1523-4ef8-b7f1-a6d0d54351e4\") " Feb 03 12:34:14 crc kubenswrapper[4820]: I0203 12:34:14.226869 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/457bfab7-1523-4ef8-b7f1-a6d0d54351e4-internal-tls-certs\") pod \"457bfab7-1523-4ef8-b7f1-a6d0d54351e4\" (UID: \"457bfab7-1523-4ef8-b7f1-a6d0d54351e4\") " Feb 03 12:34:14 crc kubenswrapper[4820]: I0203 12:34:14.234397 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q2nf9\" (UniqueName: \"kubernetes.io/projected/457bfab7-1523-4ef8-b7f1-a6d0d54351e4-kube-api-access-q2nf9\") pod \"457bfab7-1523-4ef8-b7f1-a6d0d54351e4\" (UID: \"457bfab7-1523-4ef8-b7f1-a6d0d54351e4\") " Feb 03 12:34:14 crc kubenswrapper[4820]: I0203 12:34:14.242642 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/457bfab7-1523-4ef8-b7f1-a6d0d54351e4-logs" (OuterVolumeSpecName: "logs") pod "457bfab7-1523-4ef8-b7f1-a6d0d54351e4" (UID: "457bfab7-1523-4ef8-b7f1-a6d0d54351e4"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 12:34:14 crc kubenswrapper[4820]: I0203 12:34:14.245489 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "glance") pod "457bfab7-1523-4ef8-b7f1-a6d0d54351e4" (UID: "457bfab7-1523-4ef8-b7f1-a6d0d54351e4"). InnerVolumeSpecName "local-storage03-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 03 12:34:14 crc kubenswrapper[4820]: I0203 12:34:14.246371 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/457bfab7-1523-4ef8-b7f1-a6d0d54351e4-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "457bfab7-1523-4ef8-b7f1-a6d0d54351e4" (UID: "457bfab7-1523-4ef8-b7f1-a6d0d54351e4"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 12:34:14 crc kubenswrapper[4820]: I0203 12:34:14.265491 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/457bfab7-1523-4ef8-b7f1-a6d0d54351e4-scripts" (OuterVolumeSpecName: "scripts") pod "457bfab7-1523-4ef8-b7f1-a6d0d54351e4" (UID: "457bfab7-1523-4ef8-b7f1-a6d0d54351e4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:34:14 crc kubenswrapper[4820]: I0203 12:34:14.280817 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/457bfab7-1523-4ef8-b7f1-a6d0d54351e4-kube-api-access-q2nf9" (OuterVolumeSpecName: "kube-api-access-q2nf9") pod "457bfab7-1523-4ef8-b7f1-a6d0d54351e4" (UID: "457bfab7-1523-4ef8-b7f1-a6d0d54351e4"). InnerVolumeSpecName "kube-api-access-q2nf9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:34:14 crc kubenswrapper[4820]: I0203 12:34:14.290164 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/457bfab7-1523-4ef8-b7f1-a6d0d54351e4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "457bfab7-1523-4ef8-b7f1-a6d0d54351e4" (UID: "457bfab7-1523-4ef8-b7f1-a6d0d54351e4"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:34:14 crc kubenswrapper[4820]: I0203 12:34:14.695280 4820 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/457bfab7-1523-4ef8-b7f1-a6d0d54351e4-logs\") on node \"crc\" DevicePath \"\"" Feb 03 12:34:14 crc kubenswrapper[4820]: I0203 12:34:14.695336 4820 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/457bfab7-1523-4ef8-b7f1-a6d0d54351e4-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 03 12:34:14 crc kubenswrapper[4820]: I0203 12:34:14.695347 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q2nf9\" (UniqueName: \"kubernetes.io/projected/457bfab7-1523-4ef8-b7f1-a6d0d54351e4-kube-api-access-q2nf9\") on node \"crc\" DevicePath \"\"" Feb 03 12:34:14 crc kubenswrapper[4820]: I0203 12:34:14.695402 4820 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" " Feb 03 12:34:14 crc kubenswrapper[4820]: I0203 12:34:14.695411 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/457bfab7-1523-4ef8-b7f1-a6d0d54351e4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 12:34:14 crc kubenswrapper[4820]: I0203 12:34:14.695422 4820 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/457bfab7-1523-4ef8-b7f1-a6d0d54351e4-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 12:34:14 crc kubenswrapper[4820]: I0203 12:34:14.788025 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"457bfab7-1523-4ef8-b7f1-a6d0d54351e4","Type":"ContainerDied","Data":"d45bdb4770b445ffc97f0ec96c8b9a5d9365d92d7fbbf9cea253be97efaec3ea"} Feb 03 12:34:14 crc kubenswrapper[4820]: I0203 12:34:14.788593 4820 scope.go:117] "RemoveContainer" containerID="13abe95444ae1d3f04630b66ccf57bc5376461c6a05fb41beb32df66659ee407" Feb 03 12:34:14 crc kubenswrapper[4820]: I0203 12:34:14.788468 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 03 12:34:14 crc kubenswrapper[4820]: I0203 12:34:14.800650 4820 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc" Feb 03 12:34:14 crc kubenswrapper[4820]: I0203 12:34:14.801264 4820 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\"" Feb 03 12:34:14 crc kubenswrapper[4820]: I0203 12:34:14.809240 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/457bfab7-1523-4ef8-b7f1-a6d0d54351e4-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "457bfab7-1523-4ef8-b7f1-a6d0d54351e4" (UID: "457bfab7-1523-4ef8-b7f1-a6d0d54351e4"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:34:14 crc kubenswrapper[4820]: I0203 12:34:14.908425 4820 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/457bfab7-1523-4ef8-b7f1-a6d0d54351e4-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 03 12:34:14 crc kubenswrapper[4820]: I0203 12:34:14.912279 4820 scope.go:117] "RemoveContainer" containerID="99cc1a5e327a494a909e2431d2d79b83ea5e05bb06dacc098d8eee0beae4562d" Feb 03 12:34:14 crc kubenswrapper[4820]: I0203 12:34:14.950707 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Feb 03 12:34:15 crc kubenswrapper[4820]: I0203 12:34:15.010066 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-decision-engine-0" Feb 03 12:34:15 crc kubenswrapper[4820]: I0203 12:34:15.106229 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/457bfab7-1523-4ef8-b7f1-a6d0d54351e4-config-data" (OuterVolumeSpecName: "config-data") pod "457bfab7-1523-4ef8-b7f1-a6d0d54351e4" (UID: "457bfab7-1523-4ef8-b7f1-a6d0d54351e4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:34:15 crc kubenswrapper[4820]: I0203 12:34:15.113850 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/457bfab7-1523-4ef8-b7f1-a6d0d54351e4-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 12:34:15 crc kubenswrapper[4820]: I0203 12:34:15.124757 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 03 12:34:15 crc kubenswrapper[4820]: W0203 12:34:15.198760 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod51339dae_75ae_4857_853e_d4d0a0a1aa65.slice/crio-eb0ea90b2a7f5c18c5b554f7efd2ae3faf7b5dac6f8479f516dde692835bb52b WatchSource:0}: Error finding container eb0ea90b2a7f5c18c5b554f7efd2ae3faf7b5dac6f8479f516dde692835bb52b: Status 404 returned error can't find the container with id eb0ea90b2a7f5c18c5b554f7efd2ae3faf7b5dac6f8479f516dde692835bb52b Feb 03 12:34:15 crc kubenswrapper[4820]: I0203 12:34:15.839926 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 03 12:34:15 crc kubenswrapper[4820]: I0203 12:34:15.850476 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"51339dae-75ae-4857-853e-d4d0a0a1aa65","Type":"ContainerStarted","Data":"eb0ea90b2a7f5c18c5b554f7efd2ae3faf7b5dac6f8479f516dde692835bb52b"} Feb 03 12:34:15 crc kubenswrapper[4820]: I0203 12:34:15.856634 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 03 12:34:15 crc kubenswrapper[4820]: I0203 12:34:15.863614 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-decision-engine-0" Feb 03 12:34:15 crc kubenswrapper[4820]: I0203 12:34:15.873906 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 03 12:34:15 crc kubenswrapper[4820]: E0203 12:34:15.874652 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="457bfab7-1523-4ef8-b7f1-a6d0d54351e4" containerName="glance-httpd" Feb 03 12:34:15 crc kubenswrapper[4820]: I0203 12:34:15.874673 4820 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="457bfab7-1523-4ef8-b7f1-a6d0d54351e4" containerName="glance-httpd" Feb 03 12:34:15 crc kubenswrapper[4820]: E0203 12:34:15.874705 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="457bfab7-1523-4ef8-b7f1-a6d0d54351e4" containerName="glance-log" Feb 03 12:34:15 crc kubenswrapper[4820]: I0203 12:34:15.874715 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="457bfab7-1523-4ef8-b7f1-a6d0d54351e4" containerName="glance-log" Feb 03 12:34:15 crc kubenswrapper[4820]: I0203 12:34:15.874930 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="457bfab7-1523-4ef8-b7f1-a6d0d54351e4" containerName="glance-log" Feb 03 12:34:15 crc kubenswrapper[4820]: I0203 12:34:15.874958 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="457bfab7-1523-4ef8-b7f1-a6d0d54351e4" containerName="glance-httpd" Feb 03 12:34:15 crc kubenswrapper[4820]: I0203 12:34:15.876253 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 03 12:34:15 crc kubenswrapper[4820]: I0203 12:34:15.885883 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 03 12:34:15 crc kubenswrapper[4820]: I0203 12:34:15.890318 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 03 12:34:15 crc kubenswrapper[4820]: I0203 12:34:15.890584 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 03 12:34:15 crc kubenswrapper[4820]: I0203 12:34:15.986500 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7mfgd\" (UniqueName: \"kubernetes.io/projected/227e62a0-37fd-4e52-ae44-df01b13d4b32-kube-api-access-7mfgd\") pod \"glance-default-internal-api-0\" (UID: \"227e62a0-37fd-4e52-ae44-df01b13d4b32\") " pod="openstack/glance-default-internal-api-0" Feb 03 12:34:15 crc kubenswrapper[4820]: I0203 12:34:15.995818 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/227e62a0-37fd-4e52-ae44-df01b13d4b32-config-data\") pod \"glance-default-internal-api-0\" (UID: \"227e62a0-37fd-4e52-ae44-df01b13d4b32\") " pod="openstack/glance-default-internal-api-0" Feb 03 12:34:15 crc kubenswrapper[4820]: I0203 12:34:15.996273 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/227e62a0-37fd-4e52-ae44-df01b13d4b32-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"227e62a0-37fd-4e52-ae44-df01b13d4b32\") " pod="openstack/glance-default-internal-api-0" Feb 03 12:34:15 crc kubenswrapper[4820]: I0203 12:34:15.996594 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/227e62a0-37fd-4e52-ae44-df01b13d4b32-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"227e62a0-37fd-4e52-ae44-df01b13d4b32\") " pod="openstack/glance-default-internal-api-0" Feb 03 12:34:15 crc kubenswrapper[4820]: I0203 12:34:15.997192 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/227e62a0-37fd-4e52-ae44-df01b13d4b32-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: 
\"227e62a0-37fd-4e52-ae44-df01b13d4b32\") " pod="openstack/glance-default-internal-api-0" Feb 03 12:34:15 crc kubenswrapper[4820]: I0203 12:34:15.997465 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"227e62a0-37fd-4e52-ae44-df01b13d4b32\") " pod="openstack/glance-default-internal-api-0" Feb 03 12:34:15 crc kubenswrapper[4820]: I0203 12:34:15.998623 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/227e62a0-37fd-4e52-ae44-df01b13d4b32-scripts\") pod \"glance-default-internal-api-0\" (UID: \"227e62a0-37fd-4e52-ae44-df01b13d4b32\") " pod="openstack/glance-default-internal-api-0" Feb 03 12:34:15 crc kubenswrapper[4820]: I0203 12:34:15.998765 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/227e62a0-37fd-4e52-ae44-df01b13d4b32-logs\") pod \"glance-default-internal-api-0\" (UID: \"227e62a0-37fd-4e52-ae44-df01b13d4b32\") " pod="openstack/glance-default-internal-api-0" Feb 03 12:34:16 crc kubenswrapper[4820]: I0203 12:34:16.030252 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-decision-engine-0" Feb 03 12:34:16 crc kubenswrapper[4820]: I0203 12:34:16.102047 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/227e62a0-37fd-4e52-ae44-df01b13d4b32-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"227e62a0-37fd-4e52-ae44-df01b13d4b32\") " pod="openstack/glance-default-internal-api-0" Feb 03 12:34:16 crc kubenswrapper[4820]: I0203 12:34:16.103478 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/227e62a0-37fd-4e52-ae44-df01b13d4b32-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"227e62a0-37fd-4e52-ae44-df01b13d4b32\") " pod="openstack/glance-default-internal-api-0" Feb 03 12:34:16 crc kubenswrapper[4820]: I0203 12:34:16.103550 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"227e62a0-37fd-4e52-ae44-df01b13d4b32\") " pod="openstack/glance-default-internal-api-0" Feb 03 12:34:16 crc kubenswrapper[4820]: I0203 12:34:16.103666 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/227e62a0-37fd-4e52-ae44-df01b13d4b32-scripts\") pod \"glance-default-internal-api-0\" (UID: \"227e62a0-37fd-4e52-ae44-df01b13d4b32\") " pod="openstack/glance-default-internal-api-0" Feb 03 12:34:16 crc kubenswrapper[4820]: I0203 12:34:16.103706 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/227e62a0-37fd-4e52-ae44-df01b13d4b32-logs\") pod \"glance-default-internal-api-0\" (UID: \"227e62a0-37fd-4e52-ae44-df01b13d4b32\") " pod="openstack/glance-default-internal-api-0" Feb 03 12:34:16 crc kubenswrapper[4820]: I0203 12:34:16.103788 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7mfgd\" (UniqueName: 
\"kubernetes.io/projected/227e62a0-37fd-4e52-ae44-df01b13d4b32-kube-api-access-7mfgd\") pod \"glance-default-internal-api-0\" (UID: \"227e62a0-37fd-4e52-ae44-df01b13d4b32\") " pod="openstack/glance-default-internal-api-0" Feb 03 12:34:16 crc kubenswrapper[4820]: I0203 12:34:16.104385 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/227e62a0-37fd-4e52-ae44-df01b13d4b32-config-data\") pod \"glance-default-internal-api-0\" (UID: \"227e62a0-37fd-4e52-ae44-df01b13d4b32\") " pod="openstack/glance-default-internal-api-0" Feb 03 12:34:16 crc kubenswrapper[4820]: I0203 12:34:16.104418 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/227e62a0-37fd-4e52-ae44-df01b13d4b32-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"227e62a0-37fd-4e52-ae44-df01b13d4b32\") " pod="openstack/glance-default-internal-api-0" Feb 03 12:34:16 crc kubenswrapper[4820]: I0203 12:34:16.105257 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/227e62a0-37fd-4e52-ae44-df01b13d4b32-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"227e62a0-37fd-4e52-ae44-df01b13d4b32\") " pod="openstack/glance-default-internal-api-0" Feb 03 12:34:16 crc kubenswrapper[4820]: I0203 12:34:16.106384 4820 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"227e62a0-37fd-4e52-ae44-df01b13d4b32\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/glance-default-internal-api-0" Feb 03 12:34:16 crc kubenswrapper[4820]: I0203 12:34:16.106727 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/227e62a0-37fd-4e52-ae44-df01b13d4b32-logs\") pod \"glance-default-internal-api-0\" (UID: \"227e62a0-37fd-4e52-ae44-df01b13d4b32\") " pod="openstack/glance-default-internal-api-0" Feb 03 12:34:16 crc kubenswrapper[4820]: I0203 12:34:16.110948 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/227e62a0-37fd-4e52-ae44-df01b13d4b32-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"227e62a0-37fd-4e52-ae44-df01b13d4b32\") " pod="openstack/glance-default-internal-api-0" Feb 03 12:34:16 crc kubenswrapper[4820]: I0203 12:34:16.118260 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/227e62a0-37fd-4e52-ae44-df01b13d4b32-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"227e62a0-37fd-4e52-ae44-df01b13d4b32\") " pod="openstack/glance-default-internal-api-0" Feb 03 12:34:16 crc kubenswrapper[4820]: I0203 12:34:16.126716 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/227e62a0-37fd-4e52-ae44-df01b13d4b32-config-data\") pod \"glance-default-internal-api-0\" (UID: \"227e62a0-37fd-4e52-ae44-df01b13d4b32\") " pod="openstack/glance-default-internal-api-0" Feb 03 12:34:16 crc kubenswrapper[4820]: I0203 12:34:16.130503 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7mfgd\" (UniqueName: \"kubernetes.io/projected/227e62a0-37fd-4e52-ae44-df01b13d4b32-kube-api-access-7mfgd\") pod 
\"glance-default-internal-api-0\" (UID: \"227e62a0-37fd-4e52-ae44-df01b13d4b32\") " pod="openstack/glance-default-internal-api-0" Feb 03 12:34:16 crc kubenswrapper[4820]: I0203 12:34:16.142169 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/227e62a0-37fd-4e52-ae44-df01b13d4b32-scripts\") pod \"glance-default-internal-api-0\" (UID: \"227e62a0-37fd-4e52-ae44-df01b13d4b32\") " pod="openstack/glance-default-internal-api-0" Feb 03 12:34:16 crc kubenswrapper[4820]: I0203 12:34:16.195270 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"227e62a0-37fd-4e52-ae44-df01b13d4b32\") " pod="openstack/glance-default-internal-api-0" Feb 03 12:34:16 crc kubenswrapper[4820]: I0203 12:34:16.243515 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 03 12:34:17 crc kubenswrapper[4820]: I0203 12:34:17.182722 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="457bfab7-1523-4ef8-b7f1-a6d0d54351e4" path="/var/lib/kubelet/pods/457bfab7-1523-4ef8-b7f1-a6d0d54351e4/volumes" Feb 03 12:34:17 crc kubenswrapper[4820]: I0203 12:34:17.883055 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 03 12:34:17 crc kubenswrapper[4820]: W0203 12:34:17.899188 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod227e62a0_37fd_4e52_ae44_df01b13d4b32.slice/crio-89f8f69956e2fba6e314b619f76e94479ad0c03e7ef78575fb651e1e55dc56ec WatchSource:0}: Error finding container 89f8f69956e2fba6e314b619f76e94479ad0c03e7ef78575fb651e1e55dc56ec: Status 404 returned error can't find the container with id 89f8f69956e2fba6e314b619f76e94479ad0c03e7ef78575fb651e1e55dc56ec Feb 03 12:34:17 crc kubenswrapper[4820]: I0203 12:34:17.967932 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"51339dae-75ae-4857-853e-d4d0a0a1aa65","Type":"ContainerStarted","Data":"0fe4f700c20040f1b93808ab423bc9853be41172048b2c7f93bbf882d873e802"} Feb 03 12:34:17 crc kubenswrapper[4820]: I0203 12:34:17.988312 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"227e62a0-37fd-4e52-ae44-df01b13d4b32","Type":"ContainerStarted","Data":"89f8f69956e2fba6e314b619f76e94479ad0c03e7ef78575fb651e1e55dc56ec"} Feb 03 12:34:18 crc kubenswrapper[4820]: I0203 12:34:18.766267 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-qjmdv" podUID="ad9b0bbe-7f17-4347-bda3-5f0a843b3997" containerName="registry-server" probeResult="failure" output=< Feb 03 12:34:18 crc kubenswrapper[4820]: timeout: failed to connect service ":50051" within 1s Feb 03 12:34:18 crc kubenswrapper[4820]: > Feb 03 12:34:19 crc kubenswrapper[4820]: I0203 12:34:19.348975 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"51339dae-75ae-4857-853e-d4d0a0a1aa65","Type":"ContainerStarted","Data":"3d437e42ad511b734c42521d92780a1a916161599d3ac08704dd43b8b81fcc6e"} Feb 03 12:34:19 crc kubenswrapper[4820]: I0203 12:34:19.358398 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"227e62a0-37fd-4e52-ae44-df01b13d4b32","Type":"ContainerStarted","Data":"2fd62b9deb199a4ae28ab4f0aa3f10c5d8e3b0ab7d117b9d86136ddb35401925"} Feb 03 12:34:19 crc kubenswrapper[4820]: I0203 12:34:19.413841 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=7.41381679 podStartE2EDuration="7.41381679s" podCreationTimestamp="2026-02-03 12:34:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:34:19.402062902 +0000 UTC m=+1776.925138776" watchObservedRunningTime="2026-02-03 12:34:19.41381679 +0000 UTC m=+1776.936892664" Feb 03 12:34:21 crc kubenswrapper[4820]: I0203 12:34:21.457718 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"227e62a0-37fd-4e52-ae44-df01b13d4b32","Type":"ContainerStarted","Data":"39cc30e925b82bb00c791a54b3233b9cba1cd64dd52afc7024c57662449f311a"} Feb 03 12:34:21 crc kubenswrapper[4820]: I0203 12:34:21.525396 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=6.525352741 podStartE2EDuration="6.525352741s" podCreationTimestamp="2026-02-03 12:34:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:34:21.495690337 +0000 UTC m=+1779.018766211" watchObservedRunningTime="2026-02-03 12:34:21.525352741 +0000 UTC m=+1779.048428625" Feb 03 12:34:23 crc kubenswrapper[4820]: I0203 12:34:23.152226 4820 scope.go:117] "RemoveContainer" containerID="3cd22393026f08f47789b9666f306edb2325c4bd0fbfd405ee1636bfdfa661d3" Feb 03 12:34:23 crc kubenswrapper[4820]: E0203 12:34:23.152870 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qj7xr_openshift-machine-config-operator(2c02def6-29f2-448e-80ec-0c8ee290f053)\"" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" Feb 03 12:34:23 crc kubenswrapper[4820]: I0203 12:34:23.211988 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 03 12:34:23 crc kubenswrapper[4820]: I0203 12:34:23.212078 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 03 12:34:23 crc kubenswrapper[4820]: I0203 12:34:23.254645 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 03 12:34:23 crc kubenswrapper[4820]: I0203 12:34:23.722238 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 03 12:34:23 crc kubenswrapper[4820]: I0203 12:34:23.901720 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 03 12:34:23 crc kubenswrapper[4820]: I0203 12:34:23.902157 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 03 12:34:26 crc kubenswrapper[4820]: I0203 12:34:26.245225 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openstack/glance-default-internal-api-0" Feb 03 12:34:26 crc kubenswrapper[4820]: I0203 12:34:26.247394 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 03 12:34:26 crc kubenswrapper[4820]: I0203 12:34:26.287795 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 03 12:34:26 crc kubenswrapper[4820]: I0203 12:34:26.294410 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 03 12:34:27 crc kubenswrapper[4820]: I0203 12:34:27.076075 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 03 12:34:27 crc kubenswrapper[4820]: I0203 12:34:27.076644 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 03 12:34:27 crc kubenswrapper[4820]: I0203 12:34:27.372836 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-qjmdv" Feb 03 12:34:27 crc kubenswrapper[4820]: I0203 12:34:27.809025 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-qjmdv" Feb 03 12:34:27 crc kubenswrapper[4820]: I0203 12:34:27.881519 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qjmdv"] Feb 03 12:34:28 crc kubenswrapper[4820]: I0203 12:34:28.298077 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 03 12:34:28 crc kubenswrapper[4820]: I0203 12:34:28.298259 4820 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 03 12:34:28 crc kubenswrapper[4820]: I0203 12:34:28.302776 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 03 12:34:29 crc kubenswrapper[4820]: I0203 12:34:29.101107 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-qjmdv" podUID="ad9b0bbe-7f17-4347-bda3-5f0a843b3997" containerName="registry-server" containerID="cri-o://a82bc0d74f422bdc141e4507d00e1ff8bc61b5c9f070b7ffbd65827950686089" gracePeriod=2 Feb 03 12:34:30 crc kubenswrapper[4820]: I0203 12:34:30.124547 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-qjmdv_ad9b0bbe-7f17-4347-bda3-5f0a843b3997/registry-server/0.log" Feb 03 12:34:30 crc kubenswrapper[4820]: I0203 12:34:30.131528 4820 generic.go:334] "Generic (PLEG): container finished" podID="ad9b0bbe-7f17-4347-bda3-5f0a843b3997" containerID="a82bc0d74f422bdc141e4507d00e1ff8bc61b5c9f070b7ffbd65827950686089" exitCode=0 Feb 03 12:34:30 crc kubenswrapper[4820]: I0203 12:34:30.131582 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qjmdv" event={"ID":"ad9b0bbe-7f17-4347-bda3-5f0a843b3997","Type":"ContainerDied","Data":"a82bc0d74f422bdc141e4507d00e1ff8bc61b5c9f070b7ffbd65827950686089"} Feb 03 12:34:30 crc kubenswrapper[4820]: I0203 12:34:30.131625 4820 scope.go:117] "RemoveContainer" containerID="b3683503afb588121c1584cbdd117a260ae2352c9eaf4c990911d9c4f1fc17cf" Feb 03 12:34:30 crc kubenswrapper[4820]: I0203 12:34:30.329312 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-qjmdv" Feb 03 12:34:30 crc kubenswrapper[4820]: I0203 12:34:30.366397 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad9b0bbe-7f17-4347-bda3-5f0a843b3997-utilities\") pod \"ad9b0bbe-7f17-4347-bda3-5f0a843b3997\" (UID: \"ad9b0bbe-7f17-4347-bda3-5f0a843b3997\") " Feb 03 12:34:30 crc kubenswrapper[4820]: I0203 12:34:30.370496 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ad9b0bbe-7f17-4347-bda3-5f0a843b3997-utilities" (OuterVolumeSpecName: "utilities") pod "ad9b0bbe-7f17-4347-bda3-5f0a843b3997" (UID: "ad9b0bbe-7f17-4347-bda3-5f0a843b3997"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 12:34:30 crc kubenswrapper[4820]: I0203 12:34:30.371129 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad9b0bbe-7f17-4347-bda3-5f0a843b3997-catalog-content\") pod \"ad9b0bbe-7f17-4347-bda3-5f0a843b3997\" (UID: \"ad9b0bbe-7f17-4347-bda3-5f0a843b3997\") " Feb 03 12:34:30 crc kubenswrapper[4820]: I0203 12:34:30.382097 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hfzw8\" (UniqueName: \"kubernetes.io/projected/ad9b0bbe-7f17-4347-bda3-5f0a843b3997-kube-api-access-hfzw8\") pod \"ad9b0bbe-7f17-4347-bda3-5f0a843b3997\" (UID: \"ad9b0bbe-7f17-4347-bda3-5f0a843b3997\") " Feb 03 12:34:30 crc kubenswrapper[4820]: I0203 12:34:30.384243 4820 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad9b0bbe-7f17-4347-bda3-5f0a843b3997-utilities\") on node \"crc\" DevicePath \"\"" Feb 03 12:34:30 crc kubenswrapper[4820]: I0203 12:34:30.406164 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad9b0bbe-7f17-4347-bda3-5f0a843b3997-kube-api-access-hfzw8" (OuterVolumeSpecName: "kube-api-access-hfzw8") pod "ad9b0bbe-7f17-4347-bda3-5f0a843b3997" (UID: "ad9b0bbe-7f17-4347-bda3-5f0a843b3997"). InnerVolumeSpecName "kube-api-access-hfzw8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:34:30 crc kubenswrapper[4820]: I0203 12:34:30.487058 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hfzw8\" (UniqueName: \"kubernetes.io/projected/ad9b0bbe-7f17-4347-bda3-5f0a843b3997-kube-api-access-hfzw8\") on node \"crc\" DevicePath \"\"" Feb 03 12:34:30 crc kubenswrapper[4820]: I0203 12:34:30.542781 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ad9b0bbe-7f17-4347-bda3-5f0a843b3997-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ad9b0bbe-7f17-4347-bda3-5f0a843b3997" (UID: "ad9b0bbe-7f17-4347-bda3-5f0a843b3997"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 12:34:30 crc kubenswrapper[4820]: I0203 12:34:30.589499 4820 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad9b0bbe-7f17-4347-bda3-5f0a843b3997-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 03 12:34:30 crc kubenswrapper[4820]: I0203 12:34:30.614926 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 03 12:34:30 crc kubenswrapper[4820]: I0203 12:34:30.615088 4820 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 03 12:34:31 crc kubenswrapper[4820]: I0203 12:34:31.037989 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 03 12:34:31 crc kubenswrapper[4820]: I0203 12:34:31.158242 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qjmdv" Feb 03 12:34:31 crc kubenswrapper[4820]: I0203 12:34:31.183410 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qjmdv" event={"ID":"ad9b0bbe-7f17-4347-bda3-5f0a843b3997","Type":"ContainerDied","Data":"6d177662b1accc95dfe78c3bd1a60e46e16351630635d404091eef6a5c6b5047"} Feb 03 12:34:31 crc kubenswrapper[4820]: I0203 12:34:31.183558 4820 scope.go:117] "RemoveContainer" containerID="a82bc0d74f422bdc141e4507d00e1ff8bc61b5c9f070b7ffbd65827950686089" Feb 03 12:34:31 crc kubenswrapper[4820]: I0203 12:34:31.257897 4820 scope.go:117] "RemoveContainer" containerID="01e6ac6cee71bdfaf782f5e5cd48f0af74d6e75badd32b53a759275905585df0" Feb 03 12:34:31 crc kubenswrapper[4820]: I0203 12:34:31.268834 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qjmdv"] Feb 03 12:34:31 crc kubenswrapper[4820]: I0203 12:34:31.308416 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-qjmdv"] Feb 03 12:34:31 crc kubenswrapper[4820]: I0203 12:34:31.323101 4820 scope.go:117] "RemoveContainer" containerID="e52c95d02c9ad3f15bfa6263310451fe9e8dbe757b14fe47f62b77c9236d90a2" Feb 03 12:34:32 crc kubenswrapper[4820]: I0203 12:34:32.177985 4820 generic.go:334] "Generic (PLEG): container finished" podID="dc2385eb-3720-486c-a1e6-de8d39b81012" containerID="a2a2b1cfcfc6537c32cd1a307233a174652c5c647ea10b719894c0502b78b49d" exitCode=137 Feb 03 12:34:32 crc kubenswrapper[4820]: I0203 12:34:32.178086 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dc2385eb-3720-486c-a1e6-de8d39b81012","Type":"ContainerDied","Data":"a2a2b1cfcfc6537c32cd1a307233a174652c5c647ea10b719894c0502b78b49d"} Feb 03 12:34:32 crc kubenswrapper[4820]: I0203 12:34:32.178461 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dc2385eb-3720-486c-a1e6-de8d39b81012","Type":"ContainerDied","Data":"543ad3c5118b840710e5847e2c42093bab4352d4b4ddb7aa346cbea824a5ee9f"} Feb 03 12:34:32 crc kubenswrapper[4820]: I0203 12:34:32.178487 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="543ad3c5118b840710e5847e2c42093bab4352d4b4ddb7aa346cbea824a5ee9f" Feb 03 12:34:32 crc kubenswrapper[4820]: I0203 12:34:32.208748 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 03 12:34:32 crc kubenswrapper[4820]: I0203 12:34:32.223816 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dc2385eb-3720-486c-a1e6-de8d39b81012-log-httpd\") pod \"dc2385eb-3720-486c-a1e6-de8d39b81012\" (UID: \"dc2385eb-3720-486c-a1e6-de8d39b81012\") " Feb 03 12:34:32 crc kubenswrapper[4820]: I0203 12:34:32.223918 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dc2385eb-3720-486c-a1e6-de8d39b81012-run-httpd\") pod \"dc2385eb-3720-486c-a1e6-de8d39b81012\" (UID: \"dc2385eb-3720-486c-a1e6-de8d39b81012\") " Feb 03 12:34:32 crc kubenswrapper[4820]: I0203 12:34:32.224192 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc2385eb-3720-486c-a1e6-de8d39b81012-combined-ca-bundle\") pod \"dc2385eb-3720-486c-a1e6-de8d39b81012\" (UID: \"dc2385eb-3720-486c-a1e6-de8d39b81012\") " Feb 03 12:34:32 crc kubenswrapper[4820]: I0203 12:34:32.224271 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dc2385eb-3720-486c-a1e6-de8d39b81012-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "dc2385eb-3720-486c-a1e6-de8d39b81012" (UID: "dc2385eb-3720-486c-a1e6-de8d39b81012"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 12:34:32 crc kubenswrapper[4820]: I0203 12:34:32.224280 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc2385eb-3720-486c-a1e6-de8d39b81012-config-data\") pod \"dc2385eb-3720-486c-a1e6-de8d39b81012\" (UID: \"dc2385eb-3720-486c-a1e6-de8d39b81012\") " Feb 03 12:34:32 crc kubenswrapper[4820]: I0203 12:34:32.224484 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dc2385eb-3720-486c-a1e6-de8d39b81012-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "dc2385eb-3720-486c-a1e6-de8d39b81012" (UID: "dc2385eb-3720-486c-a1e6-de8d39b81012"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 12:34:32 crc kubenswrapper[4820]: I0203 12:34:32.224527 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/dc2385eb-3720-486c-a1e6-de8d39b81012-sg-core-conf-yaml\") pod \"dc2385eb-3720-486c-a1e6-de8d39b81012\" (UID: \"dc2385eb-3720-486c-a1e6-de8d39b81012\") " Feb 03 12:34:32 crc kubenswrapper[4820]: I0203 12:34:32.224940 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bpnsv\" (UniqueName: \"kubernetes.io/projected/dc2385eb-3720-486c-a1e6-de8d39b81012-kube-api-access-bpnsv\") pod \"dc2385eb-3720-486c-a1e6-de8d39b81012\" (UID: \"dc2385eb-3720-486c-a1e6-de8d39b81012\") " Feb 03 12:34:32 crc kubenswrapper[4820]: I0203 12:34:32.225072 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dc2385eb-3720-486c-a1e6-de8d39b81012-scripts\") pod \"dc2385eb-3720-486c-a1e6-de8d39b81012\" (UID: \"dc2385eb-3720-486c-a1e6-de8d39b81012\") " Feb 03 12:34:32 crc kubenswrapper[4820]: I0203 12:34:32.226299 4820 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dc2385eb-3720-486c-a1e6-de8d39b81012-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 03 12:34:32 crc kubenswrapper[4820]: I0203 12:34:32.226328 4820 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dc2385eb-3720-486c-a1e6-de8d39b81012-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 03 12:34:32 crc kubenswrapper[4820]: I0203 12:34:32.231776 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc2385eb-3720-486c-a1e6-de8d39b81012-scripts" (OuterVolumeSpecName: "scripts") pod "dc2385eb-3720-486c-a1e6-de8d39b81012" (UID: "dc2385eb-3720-486c-a1e6-de8d39b81012"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:34:32 crc kubenswrapper[4820]: I0203 12:34:32.232551 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc2385eb-3720-486c-a1e6-de8d39b81012-kube-api-access-bpnsv" (OuterVolumeSpecName: "kube-api-access-bpnsv") pod "dc2385eb-3720-486c-a1e6-de8d39b81012" (UID: "dc2385eb-3720-486c-a1e6-de8d39b81012"). InnerVolumeSpecName "kube-api-access-bpnsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:34:32 crc kubenswrapper[4820]: I0203 12:34:32.275457 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc2385eb-3720-486c-a1e6-de8d39b81012-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "dc2385eb-3720-486c-a1e6-de8d39b81012" (UID: "dc2385eb-3720-486c-a1e6-de8d39b81012"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:34:32 crc kubenswrapper[4820]: I0203 12:34:32.327242 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc2385eb-3720-486c-a1e6-de8d39b81012-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dc2385eb-3720-486c-a1e6-de8d39b81012" (UID: "dc2385eb-3720-486c-a1e6-de8d39b81012"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:34:32 crc kubenswrapper[4820]: I0203 12:34:32.329396 4820 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/dc2385eb-3720-486c-a1e6-de8d39b81012-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 03 12:34:32 crc kubenswrapper[4820]: I0203 12:34:32.329435 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bpnsv\" (UniqueName: \"kubernetes.io/projected/dc2385eb-3720-486c-a1e6-de8d39b81012-kube-api-access-bpnsv\") on node \"crc\" DevicePath \"\"" Feb 03 12:34:32 crc kubenswrapper[4820]: I0203 12:34:32.329449 4820 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dc2385eb-3720-486c-a1e6-de8d39b81012-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 12:34:32 crc kubenswrapper[4820]: I0203 12:34:32.329461 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc2385eb-3720-486c-a1e6-de8d39b81012-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 12:34:32 crc kubenswrapper[4820]: I0203 12:34:32.397346 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc2385eb-3720-486c-a1e6-de8d39b81012-config-data" (OuterVolumeSpecName: "config-data") pod "dc2385eb-3720-486c-a1e6-de8d39b81012" (UID: "dc2385eb-3720-486c-a1e6-de8d39b81012"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:34:32 crc kubenswrapper[4820]: I0203 12:34:32.432426 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc2385eb-3720-486c-a1e6-de8d39b81012-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 12:34:33 crc kubenswrapper[4820]: I0203 12:34:33.219259 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 03 12:34:33 crc kubenswrapper[4820]: I0203 12:34:33.222491 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ad9b0bbe-7f17-4347-bda3-5f0a843b3997" path="/var/lib/kubelet/pods/ad9b0bbe-7f17-4347-bda3-5f0a843b3997/volumes" Feb 03 12:34:33 crc kubenswrapper[4820]: I0203 12:34:33.278309 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 03 12:34:33 crc kubenswrapper[4820]: I0203 12:34:33.299704 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 03 12:34:33 crc kubenswrapper[4820]: I0203 12:34:33.325108 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 03 12:34:33 crc kubenswrapper[4820]: E0203 12:34:33.325935 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc2385eb-3720-486c-a1e6-de8d39b81012" containerName="proxy-httpd" Feb 03 12:34:33 crc kubenswrapper[4820]: I0203 12:34:33.325973 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc2385eb-3720-486c-a1e6-de8d39b81012" containerName="proxy-httpd" Feb 03 12:34:33 crc kubenswrapper[4820]: E0203 12:34:33.326008 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc2385eb-3720-486c-a1e6-de8d39b81012" containerName="ceilometer-central-agent" Feb 03 12:34:33 crc kubenswrapper[4820]: I0203 12:34:33.326017 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc2385eb-3720-486c-a1e6-de8d39b81012" containerName="ceilometer-central-agent" Feb 03 12:34:33 crc kubenswrapper[4820]: E0203 12:34:33.326029 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc2385eb-3720-486c-a1e6-de8d39b81012" containerName="sg-core" Feb 03 12:34:33 crc kubenswrapper[4820]: I0203 12:34:33.326037 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc2385eb-3720-486c-a1e6-de8d39b81012" containerName="sg-core" Feb 03 12:34:33 crc kubenswrapper[4820]: E0203 12:34:33.326057 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad9b0bbe-7f17-4347-bda3-5f0a843b3997" containerName="registry-server" Feb 03 12:34:33 crc kubenswrapper[4820]: I0203 12:34:33.326069 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad9b0bbe-7f17-4347-bda3-5f0a843b3997" containerName="registry-server" Feb 03 12:34:33 crc kubenswrapper[4820]: E0203 12:34:33.326089 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad9b0bbe-7f17-4347-bda3-5f0a843b3997" containerName="registry-server" Feb 03 12:34:33 crc kubenswrapper[4820]: I0203 12:34:33.326099 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad9b0bbe-7f17-4347-bda3-5f0a843b3997" containerName="registry-server" Feb 03 12:34:33 crc kubenswrapper[4820]: E0203 12:34:33.326118 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad9b0bbe-7f17-4347-bda3-5f0a843b3997" containerName="extract-utilities" Feb 03 12:34:33 crc kubenswrapper[4820]: I0203 12:34:33.326126 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad9b0bbe-7f17-4347-bda3-5f0a843b3997" containerName="extract-utilities" Feb 03 12:34:33 crc kubenswrapper[4820]: E0203 12:34:33.326137 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad9b0bbe-7f17-4347-bda3-5f0a843b3997" containerName="extract-content" Feb 03 12:34:33 crc kubenswrapper[4820]: I0203 12:34:33.326148 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad9b0bbe-7f17-4347-bda3-5f0a843b3997" containerName="extract-content" Feb 03 12:34:33 crc 
kubenswrapper[4820]: E0203 12:34:33.326178 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc2385eb-3720-486c-a1e6-de8d39b81012" containerName="ceilometer-notification-agent"
Feb 03 12:34:33 crc kubenswrapper[4820]: I0203 12:34:33.326187 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc2385eb-3720-486c-a1e6-de8d39b81012" containerName="ceilometer-notification-agent"
Feb 03 12:34:33 crc kubenswrapper[4820]: I0203 12:34:33.326480 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc2385eb-3720-486c-a1e6-de8d39b81012" containerName="proxy-httpd"
Feb 03 12:34:33 crc kubenswrapper[4820]: I0203 12:34:33.326518 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc2385eb-3720-486c-a1e6-de8d39b81012" containerName="sg-core"
Feb 03 12:34:33 crc kubenswrapper[4820]: I0203 12:34:33.326535 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad9b0bbe-7f17-4347-bda3-5f0a843b3997" containerName="registry-server"
Feb 03 12:34:33 crc kubenswrapper[4820]: I0203 12:34:33.326549 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad9b0bbe-7f17-4347-bda3-5f0a843b3997" containerName="registry-server"
Feb 03 12:34:33 crc kubenswrapper[4820]: I0203 12:34:33.326563 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc2385eb-3720-486c-a1e6-de8d39b81012" containerName="ceilometer-central-agent"
Feb 03 12:34:33 crc kubenswrapper[4820]: I0203 12:34:33.326587 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc2385eb-3720-486c-a1e6-de8d39b81012" containerName="ceilometer-notification-agent"
Feb 03 12:34:33 crc kubenswrapper[4820]: I0203 12:34:33.331057 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 03 12:34:33 crc kubenswrapper[4820]: I0203 12:34:33.339510 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Feb 03 12:34:33 crc kubenswrapper[4820]: I0203 12:34:33.341197 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Feb 03 12:34:33 crc kubenswrapper[4820]: I0203 12:34:33.346466 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 03 12:34:33 crc kubenswrapper[4820]: I0203 12:34:33.390191 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/633dafb1-94a5-4d7c-9ffc-03f32e3dc9e2-scripts\") pod \"ceilometer-0\" (UID: \"633dafb1-94a5-4d7c-9ffc-03f32e3dc9e2\") " pod="openstack/ceilometer-0"
Feb 03 12:34:33 crc kubenswrapper[4820]: I0203 12:34:33.390278 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67s26\" (UniqueName: \"kubernetes.io/projected/633dafb1-94a5-4d7c-9ffc-03f32e3dc9e2-kube-api-access-67s26\") pod \"ceilometer-0\" (UID: \"633dafb1-94a5-4d7c-9ffc-03f32e3dc9e2\") " pod="openstack/ceilometer-0"
Feb 03 12:34:33 crc kubenswrapper[4820]: I0203 12:34:33.390391 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/633dafb1-94a5-4d7c-9ffc-03f32e3dc9e2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"633dafb1-94a5-4d7c-9ffc-03f32e3dc9e2\") " pod="openstack/ceilometer-0"
Feb 03 12:34:33 crc kubenswrapper[4820]: I0203 12:34:33.390662 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/633dafb1-94a5-4d7c-9ffc-03f32e3dc9e2-log-httpd\") pod \"ceilometer-0\" (UID: \"633dafb1-94a5-4d7c-9ffc-03f32e3dc9e2\") " pod="openstack/ceilometer-0"
Feb 03 12:34:33 crc kubenswrapper[4820]: I0203 12:34:33.390734 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/633dafb1-94a5-4d7c-9ffc-03f32e3dc9e2-run-httpd\") pod \"ceilometer-0\" (UID: \"633dafb1-94a5-4d7c-9ffc-03f32e3dc9e2\") " pod="openstack/ceilometer-0"
Feb 03 12:34:33 crc kubenswrapper[4820]: I0203 12:34:33.390760 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/633dafb1-94a5-4d7c-9ffc-03f32e3dc9e2-config-data\") pod \"ceilometer-0\" (UID: \"633dafb1-94a5-4d7c-9ffc-03f32e3dc9e2\") " pod="openstack/ceilometer-0"
Feb 03 12:34:33 crc kubenswrapper[4820]: I0203 12:34:33.390809 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/633dafb1-94a5-4d7c-9ffc-03f32e3dc9e2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"633dafb1-94a5-4d7c-9ffc-03f32e3dc9e2\") " pod="openstack/ceilometer-0"
Feb 03 12:34:33 crc kubenswrapper[4820]: I0203 12:34:33.624810 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/633dafb1-94a5-4d7c-9ffc-03f32e3dc9e2-run-httpd\") pod \"ceilometer-0\" (UID: \"633dafb1-94a5-4d7c-9ffc-03f32e3dc9e2\") " pod="openstack/ceilometer-0"
Feb 03 12:34:33 crc kubenswrapper[4820]: I0203 12:34:33.624908 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/633dafb1-94a5-4d7c-9ffc-03f32e3dc9e2-config-data\") pod \"ceilometer-0\" (UID: \"633dafb1-94a5-4d7c-9ffc-03f32e3dc9e2\") " pod="openstack/ceilometer-0"
Feb 03 12:34:33 crc kubenswrapper[4820]: I0203 12:34:33.624971 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/633dafb1-94a5-4d7c-9ffc-03f32e3dc9e2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"633dafb1-94a5-4d7c-9ffc-03f32e3dc9e2\") " pod="openstack/ceilometer-0"
Feb 03 12:34:33 crc kubenswrapper[4820]: I0203 12:34:33.625380 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/633dafb1-94a5-4d7c-9ffc-03f32e3dc9e2-scripts\") pod \"ceilometer-0\" (UID: \"633dafb1-94a5-4d7c-9ffc-03f32e3dc9e2\") " pod="openstack/ceilometer-0"
Feb 03 12:34:33 crc kubenswrapper[4820]: I0203 12:34:33.625500 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-67s26\" (UniqueName: \"kubernetes.io/projected/633dafb1-94a5-4d7c-9ffc-03f32e3dc9e2-kube-api-access-67s26\") pod \"ceilometer-0\" (UID: \"633dafb1-94a5-4d7c-9ffc-03f32e3dc9e2\") " pod="openstack/ceilometer-0"
Feb 03 12:34:33 crc kubenswrapper[4820]: I0203 12:34:33.625532 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/633dafb1-94a5-4d7c-9ffc-03f32e3dc9e2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"633dafb1-94a5-4d7c-9ffc-03f32e3dc9e2\") " pod="openstack/ceilometer-0"
Feb 03 12:34:33 crc kubenswrapper[4820]: I0203 12:34:33.625569 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/633dafb1-94a5-4d7c-9ffc-03f32e3dc9e2-run-httpd\") pod \"ceilometer-0\" (UID: \"633dafb1-94a5-4d7c-9ffc-03f32e3dc9e2\") " pod="openstack/ceilometer-0"
Feb 03 12:34:33 crc kubenswrapper[4820]: I0203 12:34:33.625761 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/633dafb1-94a5-4d7c-9ffc-03f32e3dc9e2-log-httpd\") pod \"ceilometer-0\" (UID: \"633dafb1-94a5-4d7c-9ffc-03f32e3dc9e2\") " pod="openstack/ceilometer-0"
Feb 03 12:34:33 crc kubenswrapper[4820]: I0203 12:34:33.626367 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/633dafb1-94a5-4d7c-9ffc-03f32e3dc9e2-log-httpd\") pod \"ceilometer-0\" (UID: \"633dafb1-94a5-4d7c-9ffc-03f32e3dc9e2\") " pod="openstack/ceilometer-0"
Feb 03 12:34:33 crc kubenswrapper[4820]: I0203 12:34:33.632448 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/633dafb1-94a5-4d7c-9ffc-03f32e3dc9e2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"633dafb1-94a5-4d7c-9ffc-03f32e3dc9e2\") " pod="openstack/ceilometer-0"
Feb 03 12:34:33 crc kubenswrapper[4820]: I0203 12:34:33.634541 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/633dafb1-94a5-4d7c-9ffc-03f32e3dc9e2-config-data\") pod \"ceilometer-0\" (UID: \"633dafb1-94a5-4d7c-9ffc-03f32e3dc9e2\") " pod="openstack/ceilometer-0"
Feb 03 12:34:33 crc kubenswrapper[4820]: I0203 12:34:33.636300 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/633dafb1-94a5-4d7c-9ffc-03f32e3dc9e2-scripts\") pod \"ceilometer-0\" (UID: \"633dafb1-94a5-4d7c-9ffc-03f32e3dc9e2\") " pod="openstack/ceilometer-0"
Feb 03 12:34:33 crc kubenswrapper[4820]: I0203 12:34:33.653295 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-67s26\" (UniqueName: \"kubernetes.io/projected/633dafb1-94a5-4d7c-9ffc-03f32e3dc9e2-kube-api-access-67s26\") pod \"ceilometer-0\" (UID: \"633dafb1-94a5-4d7c-9ffc-03f32e3dc9e2\") " pod="openstack/ceilometer-0"
Feb 03 12:34:33 crc kubenswrapper[4820]: I0203 12:34:33.656938 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/633dafb1-94a5-4d7c-9ffc-03f32e3dc9e2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"633dafb1-94a5-4d7c-9ffc-03f32e3dc9e2\") " pod="openstack/ceilometer-0"
Feb 03 12:34:33 crc kubenswrapper[4820]: I0203 12:34:33.667811 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 03 12:34:34 crc kubenswrapper[4820]: I0203 12:34:34.169813 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 03 12:34:34 crc kubenswrapper[4820]: I0203 12:34:34.234941 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"633dafb1-94a5-4d7c-9ffc-03f32e3dc9e2","Type":"ContainerStarted","Data":"a0241b6d36a6ac3ccf0539f8f6d159f735c2e53f56eafbb9c377960b26e8be3d"}
Feb 03 12:34:35 crc kubenswrapper[4820]: I0203 12:34:35.157184 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dc2385eb-3720-486c-a1e6-de8d39b81012" path="/var/lib/kubelet/pods/dc2385eb-3720-486c-a1e6-de8d39b81012/volumes"
Feb 03 12:34:35 crc kubenswrapper[4820]: I0203 12:34:35.247165 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"633dafb1-94a5-4d7c-9ffc-03f32e3dc9e2","Type":"ContainerStarted","Data":"465ff03b2ca70fbdd3d10d95bd4c4be128cc1c465b3b4a0fad006b81b4bd36be"}
Feb 03 12:34:36 crc kubenswrapper[4820]: I0203 12:34:36.262500 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"633dafb1-94a5-4d7c-9ffc-03f32e3dc9e2","Type":"ContainerStarted","Data":"f4fc507cd388efa2a49573fdcbfa7bf757e12ec2c473b2be28ae886e813ba750"}
Feb 03 12:34:36 crc kubenswrapper[4820]: I0203 12:34:36.262817 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"633dafb1-94a5-4d7c-9ffc-03f32e3dc9e2","Type":"ContainerStarted","Data":"0b1773f7ae32ea07b74ba0043ee70a6621c9a349fe4ccc6d5db6db768e0be7fb"}
Feb 03 12:34:37 crc kubenswrapper[4820]: I0203 12:34:37.145421 4820 scope.go:117] "RemoveContainer" containerID="3cd22393026f08f47789b9666f306edb2325c4bd0fbfd405ee1636bfdfa661d3"
Feb 03 12:34:37 crc kubenswrapper[4820]: E0203 12:34:37.146079 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qj7xr_openshift-machine-config-operator(2c02def6-29f2-448e-80ec-0c8ee290f053)\"" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053"
Feb 03 12:34:38 crc kubenswrapper[4820]: I0203 12:34:38.289287 4820 generic.go:334] "Generic (PLEG): container finished" podID="d4619454-1fe5-4b0c-8fae-1ffc6b92cfbc" containerID="bdf170be16612ff8006e51412c9af2c34cf09e6db469635780f6dc5a2ea76f20" exitCode=0
Feb 03 12:34:38 crc kubenswrapper[4820]: I0203 12:34:38.289435 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-46nxk" event={"ID":"d4619454-1fe5-4b0c-8fae-1ffc6b92cfbc","Type":"ContainerDied","Data":"bdf170be16612ff8006e51412c9af2c34cf09e6db469635780f6dc5a2ea76f20"}
Feb 03 12:34:39 crc kubenswrapper[4820]: I0203 12:34:39.303706 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"633dafb1-94a5-4d7c-9ffc-03f32e3dc9e2","Type":"ContainerStarted","Data":"2593d50af9746ad6d6d1a970a01c1509755c275f1991b4dec341cd3a990e342f"}
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-46nxk" Feb 03 12:34:40 crc kubenswrapper[4820]: I0203 12:34:40.042201 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.641952516 podStartE2EDuration="7.042177617s" podCreationTimestamp="2026-02-03 12:34:33 +0000 UTC" firstStartedPulling="2026-02-03 12:34:34.17615387 +0000 UTC m=+1791.699229734" lastFinishedPulling="2026-02-03 12:34:38.576378961 +0000 UTC m=+1796.099454835" observedRunningTime="2026-02-03 12:34:39.34250098 +0000 UTC m=+1796.865576844" watchObservedRunningTime="2026-02-03 12:34:40.042177617 +0000 UTC m=+1797.565253471" Feb 03 12:34:40 crc kubenswrapper[4820]: I0203 12:34:40.165929 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qvcqc\" (UniqueName: \"kubernetes.io/projected/d4619454-1fe5-4b0c-8fae-1ffc6b92cfbc-kube-api-access-qvcqc\") pod \"d4619454-1fe5-4b0c-8fae-1ffc6b92cfbc\" (UID: \"d4619454-1fe5-4b0c-8fae-1ffc6b92cfbc\") " Feb 03 12:34:40 crc kubenswrapper[4820]: I0203 12:34:40.166150 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4619454-1fe5-4b0c-8fae-1ffc6b92cfbc-config-data\") pod \"d4619454-1fe5-4b0c-8fae-1ffc6b92cfbc\" (UID: \"d4619454-1fe5-4b0c-8fae-1ffc6b92cfbc\") " Feb 03 12:34:40 crc kubenswrapper[4820]: I0203 12:34:40.167217 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4619454-1fe5-4b0c-8fae-1ffc6b92cfbc-combined-ca-bundle\") pod \"d4619454-1fe5-4b0c-8fae-1ffc6b92cfbc\" (UID: \"d4619454-1fe5-4b0c-8fae-1ffc6b92cfbc\") " Feb 03 12:34:40 crc kubenswrapper[4820]: I0203 12:34:40.167371 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d4619454-1fe5-4b0c-8fae-1ffc6b92cfbc-scripts\") pod \"d4619454-1fe5-4b0c-8fae-1ffc6b92cfbc\" (UID: \"d4619454-1fe5-4b0c-8fae-1ffc6b92cfbc\") " Feb 03 12:34:40 crc kubenswrapper[4820]: I0203 12:34:40.173548 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4619454-1fe5-4b0c-8fae-1ffc6b92cfbc-kube-api-access-qvcqc" (OuterVolumeSpecName: "kube-api-access-qvcqc") pod "d4619454-1fe5-4b0c-8fae-1ffc6b92cfbc" (UID: "d4619454-1fe5-4b0c-8fae-1ffc6b92cfbc"). InnerVolumeSpecName "kube-api-access-qvcqc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:34:40 crc kubenswrapper[4820]: I0203 12:34:40.192544 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qvcqc\" (UniqueName: \"kubernetes.io/projected/d4619454-1fe5-4b0c-8fae-1ffc6b92cfbc-kube-api-access-qvcqc\") on node \"crc\" DevicePath \"\"" Feb 03 12:34:40 crc kubenswrapper[4820]: I0203 12:34:40.194392 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4619454-1fe5-4b0c-8fae-1ffc6b92cfbc-scripts" (OuterVolumeSpecName: "scripts") pod "d4619454-1fe5-4b0c-8fae-1ffc6b92cfbc" (UID: "d4619454-1fe5-4b0c-8fae-1ffc6b92cfbc"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:34:40 crc kubenswrapper[4820]: I0203 12:34:40.261011 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4619454-1fe5-4b0c-8fae-1ffc6b92cfbc-config-data" (OuterVolumeSpecName: "config-data") pod "d4619454-1fe5-4b0c-8fae-1ffc6b92cfbc" (UID: "d4619454-1fe5-4b0c-8fae-1ffc6b92cfbc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:34:40 crc kubenswrapper[4820]: I0203 12:34:40.269419 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4619454-1fe5-4b0c-8fae-1ffc6b92cfbc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d4619454-1fe5-4b0c-8fae-1ffc6b92cfbc" (UID: "d4619454-1fe5-4b0c-8fae-1ffc6b92cfbc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:34:40 crc kubenswrapper[4820]: I0203 12:34:40.294561 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4619454-1fe5-4b0c-8fae-1ffc6b92cfbc-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 12:34:40 crc kubenswrapper[4820]: I0203 12:34:40.294605 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4619454-1fe5-4b0c-8fae-1ffc6b92cfbc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 12:34:40 crc kubenswrapper[4820]: I0203 12:34:40.294623 4820 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d4619454-1fe5-4b0c-8fae-1ffc6b92cfbc-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 12:34:40 crc kubenswrapper[4820]: I0203 12:34:40.318255 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-46nxk" event={"ID":"d4619454-1fe5-4b0c-8fae-1ffc6b92cfbc","Type":"ContainerDied","Data":"180dd92d13c209289ae9079eb0b41396bc65ec0c12a7f6303fdc45ead4756b67"} Feb 03 12:34:40 crc kubenswrapper[4820]: I0203 12:34:40.318321 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="180dd92d13c209289ae9079eb0b41396bc65ec0c12a7f6303fdc45ead4756b67" Feb 03 12:34:40 crc kubenswrapper[4820]: I0203 12:34:40.318700 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 03 12:34:40 crc kubenswrapper[4820]: I0203 12:34:40.319516 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-46nxk" Feb 03 12:34:40 crc kubenswrapper[4820]: I0203 12:34:40.509333 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 03 12:34:40 crc kubenswrapper[4820]: E0203 12:34:40.510004 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4619454-1fe5-4b0c-8fae-1ffc6b92cfbc" containerName="nova-cell0-conductor-db-sync" Feb 03 12:34:40 crc kubenswrapper[4820]: I0203 12:34:40.510029 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4619454-1fe5-4b0c-8fae-1ffc6b92cfbc" containerName="nova-cell0-conductor-db-sync" Feb 03 12:34:40 crc kubenswrapper[4820]: I0203 12:34:40.510287 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4619454-1fe5-4b0c-8fae-1ffc6b92cfbc" containerName="nova-cell0-conductor-db-sync" Feb 03 12:34:40 crc kubenswrapper[4820]: I0203 12:34:40.514395 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 03 12:34:40 crc kubenswrapper[4820]: I0203 12:34:40.517319 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Feb 03 12:34:40 crc kubenswrapper[4820]: I0203 12:34:40.517801 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-j9sk2" Feb 03 12:34:40 crc kubenswrapper[4820]: I0203 12:34:40.527606 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 03 12:34:40 crc kubenswrapper[4820]: I0203 12:34:40.600334 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b7dd94a-12cb-4fe7-9ad2-076e72274d90-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"9b7dd94a-12cb-4fe7-9ad2-076e72274d90\") " pod="openstack/nova-cell0-conductor-0" Feb 03 12:34:40 crc kubenswrapper[4820]: I0203 12:34:40.600440 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mmlcc\" (UniqueName: \"kubernetes.io/projected/9b7dd94a-12cb-4fe7-9ad2-076e72274d90-kube-api-access-mmlcc\") pod \"nova-cell0-conductor-0\" (UID: \"9b7dd94a-12cb-4fe7-9ad2-076e72274d90\") " pod="openstack/nova-cell0-conductor-0" Feb 03 12:34:40 crc kubenswrapper[4820]: I0203 12:34:40.600484 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b7dd94a-12cb-4fe7-9ad2-076e72274d90-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"9b7dd94a-12cb-4fe7-9ad2-076e72274d90\") " pod="openstack/nova-cell0-conductor-0" Feb 03 12:34:40 crc kubenswrapper[4820]: I0203 12:34:40.702240 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mmlcc\" (UniqueName: \"kubernetes.io/projected/9b7dd94a-12cb-4fe7-9ad2-076e72274d90-kube-api-access-mmlcc\") pod \"nova-cell0-conductor-0\" (UID: \"9b7dd94a-12cb-4fe7-9ad2-076e72274d90\") " pod="openstack/nova-cell0-conductor-0" Feb 03 12:34:40 crc kubenswrapper[4820]: I0203 12:34:40.702320 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b7dd94a-12cb-4fe7-9ad2-076e72274d90-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"9b7dd94a-12cb-4fe7-9ad2-076e72274d90\") " pod="openstack/nova-cell0-conductor-0" Feb 03 12:34:40 crc kubenswrapper[4820]: I0203 12:34:40.702441 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b7dd94a-12cb-4fe7-9ad2-076e72274d90-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"9b7dd94a-12cb-4fe7-9ad2-076e72274d90\") " pod="openstack/nova-cell0-conductor-0" Feb 03 12:34:40 crc kubenswrapper[4820]: I0203 12:34:40.708326 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b7dd94a-12cb-4fe7-9ad2-076e72274d90-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"9b7dd94a-12cb-4fe7-9ad2-076e72274d90\") " pod="openstack/nova-cell0-conductor-0" Feb 03 12:34:40 crc kubenswrapper[4820]: I0203 12:34:40.708592 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b7dd94a-12cb-4fe7-9ad2-076e72274d90-config-data\") pod \"nova-cell0-conductor-0\" 
(UID: \"9b7dd94a-12cb-4fe7-9ad2-076e72274d90\") " pod="openstack/nova-cell0-conductor-0" Feb 03 12:34:40 crc kubenswrapper[4820]: I0203 12:34:40.724201 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mmlcc\" (UniqueName: \"kubernetes.io/projected/9b7dd94a-12cb-4fe7-9ad2-076e72274d90-kube-api-access-mmlcc\") pod \"nova-cell0-conductor-0\" (UID: \"9b7dd94a-12cb-4fe7-9ad2-076e72274d90\") " pod="openstack/nova-cell0-conductor-0" Feb 03 12:34:40 crc kubenswrapper[4820]: I0203 12:34:40.836085 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 03 12:34:41 crc kubenswrapper[4820]: I0203 12:34:41.354192 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 03 12:34:42 crc kubenswrapper[4820]: I0203 12:34:42.344556 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"9b7dd94a-12cb-4fe7-9ad2-076e72274d90","Type":"ContainerStarted","Data":"8512128a86b7d758c7d69e58967a226b69e077291d9047dbd3a2e7b75fb3fd81"} Feb 03 12:34:42 crc kubenswrapper[4820]: I0203 12:34:42.345421 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"9b7dd94a-12cb-4fe7-9ad2-076e72274d90","Type":"ContainerStarted","Data":"322916f95a1e2fc6b1d29d8ef736dad1ff9d269093e7599e4ea4b15ca899f602"} Feb 03 12:34:42 crc kubenswrapper[4820]: I0203 12:34:42.345558 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Feb 03 12:34:42 crc kubenswrapper[4820]: I0203 12:34:42.376566 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.376535576 podStartE2EDuration="2.376535576s" podCreationTimestamp="2026-02-03 12:34:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:34:42.365089125 +0000 UTC m=+1799.888164989" watchObservedRunningTime="2026-02-03 12:34:42.376535576 +0000 UTC m=+1799.899611440" Feb 03 12:34:43 crc kubenswrapper[4820]: I0203 12:34:43.367504 4820 generic.go:334] "Generic (PLEG): container finished" podID="17c371f7-f032-4444-8d4b-1183a224c7b0" containerID="0f246c95365efface890ae91ed527bfb457eda5dc7b27fa8cda294f51a37e248" exitCode=137 Feb 03 12:34:43 crc kubenswrapper[4820]: I0203 12:34:43.372215 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5fdc8588b4-jtjr8" event={"ID":"17c371f7-f032-4444-8d4b-1183a224c7b0","Type":"ContainerDied","Data":"0f246c95365efface890ae91ed527bfb457eda5dc7b27fa8cda294f51a37e248"} Feb 03 12:34:43 crc kubenswrapper[4820]: I0203 12:34:43.372305 4820 scope.go:117] "RemoveContainer" containerID="258627a9dac8c607d158f7b60718c41b0a56b0d1a371bcf6c8e5e827f34acb59" Feb 03 12:34:44 crc kubenswrapper[4820]: I0203 12:34:44.380535 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5fdc8588b4-jtjr8" event={"ID":"17c371f7-f032-4444-8d4b-1183a224c7b0","Type":"ContainerStarted","Data":"76995f196246a725064eaf869384b250078a17273f52f37f37f976ac18b1ddc1"} Feb 03 12:34:44 crc kubenswrapper[4820]: I0203 12:34:44.384444 4820 generic.go:334] "Generic (PLEG): container finished" podID="308562dd-6078-4c1c-a4e0-c01a60a2d81d" containerID="9c7a577e87b3e83c7e349bf9ccd38e1f5613ee686a7353fa8aac276143a6016b" exitCode=137 Feb 03 12:34:44 crc kubenswrapper[4820]: I0203 12:34:44.384482 4820 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-68b4df5bdd-tdb9h" event={"ID":"308562dd-6078-4c1c-a4e0-c01a60a2d81d","Type":"ContainerDied","Data":"9c7a577e87b3e83c7e349bf9ccd38e1f5613ee686a7353fa8aac276143a6016b"} Feb 03 12:34:44 crc kubenswrapper[4820]: I0203 12:34:44.384507 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-68b4df5bdd-tdb9h" event={"ID":"308562dd-6078-4c1c-a4e0-c01a60a2d81d","Type":"ContainerStarted","Data":"590da96327763a1ecf6806acf6e0287da04147012217471081285d16fb887d10"} Feb 03 12:34:44 crc kubenswrapper[4820]: I0203 12:34:44.384525 4820 scope.go:117] "RemoveContainer" containerID="6ecd1021da966b26d0ebdc213f4c8379ce99f2bdd3ff3973594574161725d11d" Feb 03 12:34:46 crc kubenswrapper[4820]: E0203 12:34:46.314825 4820 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod17c371f7_f032_4444_8d4b_1183a224c7b0.slice/crio-conmon-0f246c95365efface890ae91ed527bfb457eda5dc7b27fa8cda294f51a37e248.scope\": RecentStats: unable to find data in memory cache]" Feb 03 12:34:49 crc kubenswrapper[4820]: I0203 12:34:49.145377 4820 scope.go:117] "RemoveContainer" containerID="3cd22393026f08f47789b9666f306edb2325c4bd0fbfd405ee1636bfdfa661d3" Feb 03 12:34:49 crc kubenswrapper[4820]: E0203 12:34:49.146488 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qj7xr_openshift-machine-config-operator(2c02def6-29f2-448e-80ec-0c8ee290f053)\"" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" Feb 03 12:34:50 crc kubenswrapper[4820]: I0203 12:34:50.012357 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 03 12:34:50 crc kubenswrapper[4820]: I0203 12:34:50.012848 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell0-conductor-0" podUID="9b7dd94a-12cb-4fe7-9ad2-076e72274d90" containerName="nova-cell0-conductor-conductor" containerID="cri-o://8512128a86b7d758c7d69e58967a226b69e077291d9047dbd3a2e7b75fb3fd81" gracePeriod=30 Feb 03 12:34:50 crc kubenswrapper[4820]: I0203 12:34:50.049104 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Feb 03 12:34:50 crc kubenswrapper[4820]: E0203 12:34:50.838606 4820 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="8512128a86b7d758c7d69e58967a226b69e077291d9047dbd3a2e7b75fb3fd81" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Feb 03 12:34:50 crc kubenswrapper[4820]: E0203 12:34:50.840909 4820 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="8512128a86b7d758c7d69e58967a226b69e077291d9047dbd3a2e7b75fb3fd81" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Feb 03 12:34:50 crc kubenswrapper[4820]: E0203 12:34:50.842546 4820 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , 
stderr: , exit code -1" containerID="8512128a86b7d758c7d69e58967a226b69e077291d9047dbd3a2e7b75fb3fd81" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Feb 03 12:34:50 crc kubenswrapper[4820]: E0203 12:34:50.842648 4820 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell0-conductor-0" podUID="9b7dd94a-12cb-4fe7-9ad2-076e72274d90" containerName="nova-cell0-conductor-conductor" Feb 03 12:34:52 crc kubenswrapper[4820]: I0203 12:34:52.736752 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-qmdlk"] Feb 03 12:34:52 crc kubenswrapper[4820]: I0203 12:34:52.751464 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-qmdlk" Feb 03 12:34:52 crc kubenswrapper[4820]: I0203 12:34:52.755044 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Feb 03 12:34:52 crc kubenswrapper[4820]: I0203 12:34:52.758076 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Feb 03 12:34:52 crc kubenswrapper[4820]: I0203 12:34:52.759930 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-qmdlk"] Feb 03 12:34:52 crc kubenswrapper[4820]: I0203 12:34:52.809834 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96e51574-4c0f-449e-99c9-f71651ddf08e-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-qmdlk\" (UID: \"96e51574-4c0f-449e-99c9-f71651ddf08e\") " pod="openstack/nova-cell0-cell-mapping-qmdlk" Feb 03 12:34:52 crc kubenswrapper[4820]: I0203 12:34:52.809996 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/96e51574-4c0f-449e-99c9-f71651ddf08e-scripts\") pod \"nova-cell0-cell-mapping-qmdlk\" (UID: \"96e51574-4c0f-449e-99c9-f71651ddf08e\") " pod="openstack/nova-cell0-cell-mapping-qmdlk" Feb 03 12:34:52 crc kubenswrapper[4820]: I0203 12:34:52.810145 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96e51574-4c0f-449e-99c9-f71651ddf08e-config-data\") pod \"nova-cell0-cell-mapping-qmdlk\" (UID: \"96e51574-4c0f-449e-99c9-f71651ddf08e\") " pod="openstack/nova-cell0-cell-mapping-qmdlk" Feb 03 12:34:52 crc kubenswrapper[4820]: I0203 12:34:52.810223 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dw5xd\" (UniqueName: \"kubernetes.io/projected/96e51574-4c0f-449e-99c9-f71651ddf08e-kube-api-access-dw5xd\") pod \"nova-cell0-cell-mapping-qmdlk\" (UID: \"96e51574-4c0f-449e-99c9-f71651ddf08e\") " pod="openstack/nova-cell0-cell-mapping-qmdlk" Feb 03 12:34:52 crc kubenswrapper[4820]: I0203 12:34:52.915669 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96e51574-4c0f-449e-99c9-f71651ddf08e-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-qmdlk\" (UID: \"96e51574-4c0f-449e-99c9-f71651ddf08e\") " pod="openstack/nova-cell0-cell-mapping-qmdlk" Feb 03 12:34:52 crc kubenswrapper[4820]: I0203 12:34:52.915730 4820 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/96e51574-4c0f-449e-99c9-f71651ddf08e-scripts\") pod \"nova-cell0-cell-mapping-qmdlk\" (UID: \"96e51574-4c0f-449e-99c9-f71651ddf08e\") " pod="openstack/nova-cell0-cell-mapping-qmdlk" Feb 03 12:34:52 crc kubenswrapper[4820]: I0203 12:34:52.915809 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96e51574-4c0f-449e-99c9-f71651ddf08e-config-data\") pod \"nova-cell0-cell-mapping-qmdlk\" (UID: \"96e51574-4c0f-449e-99c9-f71651ddf08e\") " pod="openstack/nova-cell0-cell-mapping-qmdlk" Feb 03 12:34:52 crc kubenswrapper[4820]: I0203 12:34:52.915847 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dw5xd\" (UniqueName: \"kubernetes.io/projected/96e51574-4c0f-449e-99c9-f71651ddf08e-kube-api-access-dw5xd\") pod \"nova-cell0-cell-mapping-qmdlk\" (UID: \"96e51574-4c0f-449e-99c9-f71651ddf08e\") " pod="openstack/nova-cell0-cell-mapping-qmdlk" Feb 03 12:34:52 crc kubenswrapper[4820]: I0203 12:34:52.928982 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96e51574-4c0f-449e-99c9-f71651ddf08e-config-data\") pod \"nova-cell0-cell-mapping-qmdlk\" (UID: \"96e51574-4c0f-449e-99c9-f71651ddf08e\") " pod="openstack/nova-cell0-cell-mapping-qmdlk" Feb 03 12:34:52 crc kubenswrapper[4820]: I0203 12:34:52.931832 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/96e51574-4c0f-449e-99c9-f71651ddf08e-scripts\") pod \"nova-cell0-cell-mapping-qmdlk\" (UID: \"96e51574-4c0f-449e-99c9-f71651ddf08e\") " pod="openstack/nova-cell0-cell-mapping-qmdlk" Feb 03 12:34:52 crc kubenswrapper[4820]: I0203 12:34:52.932554 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96e51574-4c0f-449e-99c9-f71651ddf08e-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-qmdlk\" (UID: \"96e51574-4c0f-449e-99c9-f71651ddf08e\") " pod="openstack/nova-cell0-cell-mapping-qmdlk" Feb 03 12:34:52 crc kubenswrapper[4820]: I0203 12:34:52.960519 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dw5xd\" (UniqueName: \"kubernetes.io/projected/96e51574-4c0f-449e-99c9-f71651ddf08e-kube-api-access-dw5xd\") pod \"nova-cell0-cell-mapping-qmdlk\" (UID: \"96e51574-4c0f-449e-99c9-f71651ddf08e\") " pod="openstack/nova-cell0-cell-mapping-qmdlk" Feb 03 12:34:53 crc kubenswrapper[4820]: I0203 12:34:53.080086 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 03 12:34:53 crc kubenswrapper[4820]: I0203 12:34:53.087974 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-qmdlk" Feb 03 12:34:53 crc kubenswrapper[4820]: I0203 12:34:53.116145 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 03 12:34:53 crc kubenswrapper[4820]: I0203 12:34:53.128142 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 03 12:34:53 crc kubenswrapper[4820]: I0203 12:34:53.128838 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-5fdc8588b4-jtjr8" Feb 03 12:34:53 crc kubenswrapper[4820]: I0203 12:34:53.128879 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-5fdc8588b4-jtjr8" Feb 03 12:34:53 crc kubenswrapper[4820]: I0203 12:34:53.134108 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-5fdc8588b4-jtjr8" podUID="17c371f7-f032-4444-8d4b-1183a224c7b0" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.165:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.165:8443: connect: connection refused" Feb 03 12:34:53 crc kubenswrapper[4820]: I0203 12:34:53.245627 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/068081be-0cae-4b93-a5a4-cefe01fe6396-logs\") pod \"nova-api-0\" (UID: \"068081be-0cae-4b93-a5a4-cefe01fe6396\") " pod="openstack/nova-api-0" Feb 03 12:34:53 crc kubenswrapper[4820]: I0203 12:34:53.246148 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/068081be-0cae-4b93-a5a4-cefe01fe6396-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"068081be-0cae-4b93-a5a4-cefe01fe6396\") " pod="openstack/nova-api-0" Feb 03 12:34:53 crc kubenswrapper[4820]: I0203 12:34:53.246234 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jvz7m\" (UniqueName: \"kubernetes.io/projected/068081be-0cae-4b93-a5a4-cefe01fe6396-kube-api-access-jvz7m\") pod \"nova-api-0\" (UID: \"068081be-0cae-4b93-a5a4-cefe01fe6396\") " pod="openstack/nova-api-0" Feb 03 12:34:53 crc kubenswrapper[4820]: I0203 12:34:53.246273 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/068081be-0cae-4b93-a5a4-cefe01fe6396-config-data\") pod \"nova-api-0\" (UID: \"068081be-0cae-4b93-a5a4-cefe01fe6396\") " pod="openstack/nova-api-0" Feb 03 12:34:53 crc kubenswrapper[4820]: I0203 12:34:53.330198 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 03 12:34:53 crc kubenswrapper[4820]: I0203 12:34:53.341961 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 03 12:34:53 crc kubenswrapper[4820]: I0203 12:34:53.344201 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 03 12:34:53 crc kubenswrapper[4820]: I0203 12:34:53.350719 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/068081be-0cae-4b93-a5a4-cefe01fe6396-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"068081be-0cae-4b93-a5a4-cefe01fe6396\") " pod="openstack/nova-api-0" Feb 03 12:34:53 crc kubenswrapper[4820]: I0203 12:34:53.355739 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jvz7m\" (UniqueName: \"kubernetes.io/projected/068081be-0cae-4b93-a5a4-cefe01fe6396-kube-api-access-jvz7m\") pod \"nova-api-0\" (UID: \"068081be-0cae-4b93-a5a4-cefe01fe6396\") " pod="openstack/nova-api-0" Feb 03 12:34:53 crc kubenswrapper[4820]: I0203 12:34:53.355925 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/068081be-0cae-4b93-a5a4-cefe01fe6396-config-data\") pod \"nova-api-0\" (UID: \"068081be-0cae-4b93-a5a4-cefe01fe6396\") " pod="openstack/nova-api-0" Feb 03 12:34:53 crc kubenswrapper[4820]: I0203 12:34:53.356317 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/068081be-0cae-4b93-a5a4-cefe01fe6396-logs\") pod \"nova-api-0\" (UID: \"068081be-0cae-4b93-a5a4-cefe01fe6396\") " pod="openstack/nova-api-0" Feb 03 12:34:53 crc kubenswrapper[4820]: I0203 12:34:53.356804 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/068081be-0cae-4b93-a5a4-cefe01fe6396-logs\") pod \"nova-api-0\" (UID: \"068081be-0cae-4b93-a5a4-cefe01fe6396\") " pod="openstack/nova-api-0" Feb 03 12:34:53 crc kubenswrapper[4820]: I0203 12:34:53.362735 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 03 12:34:53 crc kubenswrapper[4820]: I0203 12:34:53.374944 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/068081be-0cae-4b93-a5a4-cefe01fe6396-config-data\") pod \"nova-api-0\" (UID: \"068081be-0cae-4b93-a5a4-cefe01fe6396\") " pod="openstack/nova-api-0" Feb 03 12:34:53 crc kubenswrapper[4820]: I0203 12:34:53.377443 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/068081be-0cae-4b93-a5a4-cefe01fe6396-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"068081be-0cae-4b93-a5a4-cefe01fe6396\") " pod="openstack/nova-api-0" Feb 03 12:34:53 crc kubenswrapper[4820]: I0203 12:34:53.377701 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 03 12:34:53 crc kubenswrapper[4820]: I0203 12:34:53.420293 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 03 12:34:53 crc kubenswrapper[4820]: I0203 12:34:53.422411 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 03 12:34:53 crc kubenswrapper[4820]: I0203 12:34:53.445463 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Feb 03 12:34:53 crc kubenswrapper[4820]: I0203 12:34:53.458440 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc74c11f-b384-4945-92eb-82e0ea1d63f6-config-data\") pod \"nova-metadata-0\" (UID: \"cc74c11f-b384-4945-92eb-82e0ea1d63f6\") " pod="openstack/nova-metadata-0" Feb 03 12:34:53 crc kubenswrapper[4820]: I0203 12:34:53.458702 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc74c11f-b384-4945-92eb-82e0ea1d63f6-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"cc74c11f-b384-4945-92eb-82e0ea1d63f6\") " pod="openstack/nova-metadata-0" Feb 03 12:34:53 crc kubenswrapper[4820]: I0203 12:34:53.458778 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cc74c11f-b384-4945-92eb-82e0ea1d63f6-logs\") pod \"nova-metadata-0\" (UID: \"cc74c11f-b384-4945-92eb-82e0ea1d63f6\") " pod="openstack/nova-metadata-0" Feb 03 12:34:53 crc kubenswrapper[4820]: I0203 12:34:53.458804 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hr4bp\" (UniqueName: \"kubernetes.io/projected/cc74c11f-b384-4945-92eb-82e0ea1d63f6-kube-api-access-hr4bp\") pod \"nova-metadata-0\" (UID: \"cc74c11f-b384-4945-92eb-82e0ea1d63f6\") " pod="openstack/nova-metadata-0" Feb 03 12:34:53 crc kubenswrapper[4820]: I0203 12:34:53.483364 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jvz7m\" (UniqueName: \"kubernetes.io/projected/068081be-0cae-4b93-a5a4-cefe01fe6396-kube-api-access-jvz7m\") pod \"nova-api-0\" (UID: \"068081be-0cae-4b93-a5a4-cefe01fe6396\") " pod="openstack/nova-api-0" Feb 03 12:34:53 crc kubenswrapper[4820]: I0203 12:34:53.514828 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 03 12:34:53 crc kubenswrapper[4820]: I0203 12:34:53.554978 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-mnlwd"] Feb 03 12:34:53 crc kubenswrapper[4820]: I0203 12:34:53.557606 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-mnlwd" Feb 03 12:34:53 crc kubenswrapper[4820]: I0203 12:34:53.572175 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35400f32-654e-47c4-8fbc-c802522c7c76-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"35400f32-654e-47c4-8fbc-c802522c7c76\") " pod="openstack/nova-cell1-novncproxy-0" Feb 03 12:34:53 crc kubenswrapper[4820]: I0203 12:34:53.572269 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc74c11f-b384-4945-92eb-82e0ea1d63f6-config-data\") pod \"nova-metadata-0\" (UID: \"cc74c11f-b384-4945-92eb-82e0ea1d63f6\") " pod="openstack/nova-metadata-0" Feb 03 12:34:53 crc kubenswrapper[4820]: I0203 12:34:53.572366 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jw7wq\" (UniqueName: \"kubernetes.io/projected/35400f32-654e-47c4-8fbc-c802522c7c76-kube-api-access-jw7wq\") pod \"nova-cell1-novncproxy-0\" (UID: \"35400f32-654e-47c4-8fbc-c802522c7c76\") " pod="openstack/nova-cell1-novncproxy-0" Feb 03 12:34:53 crc kubenswrapper[4820]: I0203 12:34:53.572401 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35400f32-654e-47c4-8fbc-c802522c7c76-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"35400f32-654e-47c4-8fbc-c802522c7c76\") " pod="openstack/nova-cell1-novncproxy-0" Feb 03 12:34:53 crc kubenswrapper[4820]: I0203 12:34:53.572421 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc74c11f-b384-4945-92eb-82e0ea1d63f6-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"cc74c11f-b384-4945-92eb-82e0ea1d63f6\") " pod="openstack/nova-metadata-0" Feb 03 12:34:53 crc kubenswrapper[4820]: I0203 12:34:53.572470 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cc74c11f-b384-4945-92eb-82e0ea1d63f6-logs\") pod \"nova-metadata-0\" (UID: \"cc74c11f-b384-4945-92eb-82e0ea1d63f6\") " pod="openstack/nova-metadata-0" Feb 03 12:34:53 crc kubenswrapper[4820]: I0203 12:34:53.572489 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hr4bp\" (UniqueName: \"kubernetes.io/projected/cc74c11f-b384-4945-92eb-82e0ea1d63f6-kube-api-access-hr4bp\") pod \"nova-metadata-0\" (UID: \"cc74c11f-b384-4945-92eb-82e0ea1d63f6\") " pod="openstack/nova-metadata-0" Feb 03 12:34:53 crc kubenswrapper[4820]: I0203 12:34:53.577343 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cc74c11f-b384-4945-92eb-82e0ea1d63f6-logs\") pod \"nova-metadata-0\" (UID: \"cc74c11f-b384-4945-92eb-82e0ea1d63f6\") " pod="openstack/nova-metadata-0" Feb 03 12:34:53 crc kubenswrapper[4820]: I0203 12:34:53.582632 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc74c11f-b384-4945-92eb-82e0ea1d63f6-config-data\") pod \"nova-metadata-0\" (UID: \"cc74c11f-b384-4945-92eb-82e0ea1d63f6\") " pod="openstack/nova-metadata-0" Feb 03 12:34:53 crc kubenswrapper[4820]: I0203 12:34:53.585818 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc74c11f-b384-4945-92eb-82e0ea1d63f6-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"cc74c11f-b384-4945-92eb-82e0ea1d63f6\") " pod="openstack/nova-metadata-0" Feb 03 12:34:53 crc kubenswrapper[4820]: I0203 12:34:53.610947 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-mnlwd"] Feb 03 12:34:53 crc kubenswrapper[4820]: I0203 12:34:53.611036 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hr4bp\" (UniqueName: \"kubernetes.io/projected/cc74c11f-b384-4945-92eb-82e0ea1d63f6-kube-api-access-hr4bp\") pod \"nova-metadata-0\" (UID: \"cc74c11f-b384-4945-92eb-82e0ea1d63f6\") " pod="openstack/nova-metadata-0" Feb 03 12:34:53 crc kubenswrapper[4820]: I0203 12:34:53.616768 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 03 12:34:53 crc kubenswrapper[4820]: I0203 12:34:53.621274 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-68b4df5bdd-tdb9h" Feb 03 12:34:53 crc kubenswrapper[4820]: I0203 12:34:53.622209 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-68b4df5bdd-tdb9h" Feb 03 12:34:53 crc kubenswrapper[4820]: I0203 12:34:53.633529 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 03 12:34:53 crc kubenswrapper[4820]: I0203 12:34:53.635516 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 03 12:34:53 crc kubenswrapper[4820]: I0203 12:34:53.639312 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 03 12:34:53 crc kubenswrapper[4820]: I0203 12:34:53.678305 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35400f32-654e-47c4-8fbc-c802522c7c76-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"35400f32-654e-47c4-8fbc-c802522c7c76\") " pod="openstack/nova-cell1-novncproxy-0" Feb 03 12:34:53 crc kubenswrapper[4820]: I0203 12:34:53.678461 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9627e225-fd7c-4d6c-bcf1-0434bfb15d22-dns-swift-storage-0\") pod \"dnsmasq-dns-757b4f8459-mnlwd\" (UID: \"9627e225-fd7c-4d6c-bcf1-0434bfb15d22\") " pod="openstack/dnsmasq-dns-757b4f8459-mnlwd" Feb 03 12:34:53 crc kubenswrapper[4820]: I0203 12:34:53.678518 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9627e225-fd7c-4d6c-bcf1-0434bfb15d22-dns-svc\") pod \"dnsmasq-dns-757b4f8459-mnlwd\" (UID: \"9627e225-fd7c-4d6c-bcf1-0434bfb15d22\") " pod="openstack/dnsmasq-dns-757b4f8459-mnlwd" Feb 03 12:34:53 crc kubenswrapper[4820]: I0203 12:34:53.678564 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xkqzc\" (UniqueName: \"kubernetes.io/projected/9627e225-fd7c-4d6c-bcf1-0434bfb15d22-kube-api-access-xkqzc\") pod \"dnsmasq-dns-757b4f8459-mnlwd\" (UID: \"9627e225-fd7c-4d6c-bcf1-0434bfb15d22\") " pod="openstack/dnsmasq-dns-757b4f8459-mnlwd" Feb 03 12:34:53 crc kubenswrapper[4820]: I0203 12:34:53.678585 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/9627e225-fd7c-4d6c-bcf1-0434bfb15d22-config\") pod \"dnsmasq-dns-757b4f8459-mnlwd\" (UID: \"9627e225-fd7c-4d6c-bcf1-0434bfb15d22\") " pod="openstack/dnsmasq-dns-757b4f8459-mnlwd" Feb 03 12:34:53 crc kubenswrapper[4820]: I0203 12:34:53.678671 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jw7wq\" (UniqueName: \"kubernetes.io/projected/35400f32-654e-47c4-8fbc-c802522c7c76-kube-api-access-jw7wq\") pod \"nova-cell1-novncproxy-0\" (UID: \"35400f32-654e-47c4-8fbc-c802522c7c76\") " pod="openstack/nova-cell1-novncproxy-0" Feb 03 12:34:53 crc kubenswrapper[4820]: I0203 12:34:53.678724 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35400f32-654e-47c4-8fbc-c802522c7c76-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"35400f32-654e-47c4-8fbc-c802522c7c76\") " pod="openstack/nova-cell1-novncproxy-0" Feb 03 12:34:53 crc kubenswrapper[4820]: I0203 12:34:53.678814 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9627e225-fd7c-4d6c-bcf1-0434bfb15d22-ovsdbserver-sb\") pod \"dnsmasq-dns-757b4f8459-mnlwd\" (UID: \"9627e225-fd7c-4d6c-bcf1-0434bfb15d22\") " pod="openstack/dnsmasq-dns-757b4f8459-mnlwd" Feb 03 12:34:53 crc kubenswrapper[4820]: I0203 12:34:53.678840 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9627e225-fd7c-4d6c-bcf1-0434bfb15d22-ovsdbserver-nb\") pod \"dnsmasq-dns-757b4f8459-mnlwd\" (UID: \"9627e225-fd7c-4d6c-bcf1-0434bfb15d22\") " pod="openstack/dnsmasq-dns-757b4f8459-mnlwd" Feb 03 12:34:53 crc kubenswrapper[4820]: I0203 12:34:53.683163 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35400f32-654e-47c4-8fbc-c802522c7c76-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"35400f32-654e-47c4-8fbc-c802522c7c76\") " pod="openstack/nova-cell1-novncproxy-0" Feb 03 12:34:53 crc kubenswrapper[4820]: I0203 12:34:53.683991 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 03 12:34:53 crc kubenswrapper[4820]: I0203 12:34:53.685835 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35400f32-654e-47c4-8fbc-c802522c7c76-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"35400f32-654e-47c4-8fbc-c802522c7c76\") " pod="openstack/nova-cell1-novncproxy-0" Feb 03 12:34:53 crc kubenswrapper[4820]: I0203 12:34:53.714770 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jw7wq\" (UniqueName: \"kubernetes.io/projected/35400f32-654e-47c4-8fbc-c802522c7c76-kube-api-access-jw7wq\") pod \"nova-cell1-novncproxy-0\" (UID: \"35400f32-654e-47c4-8fbc-c802522c7c76\") " pod="openstack/nova-cell1-novncproxy-0" Feb 03 12:34:53 crc kubenswrapper[4820]: I0203 12:34:53.780966 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9627e225-fd7c-4d6c-bcf1-0434bfb15d22-config\") pod \"dnsmasq-dns-757b4f8459-mnlwd\" (UID: \"9627e225-fd7c-4d6c-bcf1-0434bfb15d22\") " pod="openstack/dnsmasq-dns-757b4f8459-mnlwd" Feb 03 12:34:53 crc kubenswrapper[4820]: I0203 12:34:53.789134 4820 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eeca15ca-fac4-4279-b44b-8929705a4dfb-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"eeca15ca-fac4-4279-b44b-8929705a4dfb\") " pod="openstack/nova-scheduler-0" Feb 03 12:34:53 crc kubenswrapper[4820]: I0203 12:34:53.789272 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9627e225-fd7c-4d6c-bcf1-0434bfb15d22-ovsdbserver-sb\") pod \"dnsmasq-dns-757b4f8459-mnlwd\" (UID: \"9627e225-fd7c-4d6c-bcf1-0434bfb15d22\") " pod="openstack/dnsmasq-dns-757b4f8459-mnlwd" Feb 03 12:34:53 crc kubenswrapper[4820]: I0203 12:34:53.789321 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9627e225-fd7c-4d6c-bcf1-0434bfb15d22-ovsdbserver-nb\") pod \"dnsmasq-dns-757b4f8459-mnlwd\" (UID: \"9627e225-fd7c-4d6c-bcf1-0434bfb15d22\") " pod="openstack/dnsmasq-dns-757b4f8459-mnlwd" Feb 03 12:34:53 crc kubenswrapper[4820]: I0203 12:34:53.789724 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9627e225-fd7c-4d6c-bcf1-0434bfb15d22-dns-swift-storage-0\") pod \"dnsmasq-dns-757b4f8459-mnlwd\" (UID: \"9627e225-fd7c-4d6c-bcf1-0434bfb15d22\") " pod="openstack/dnsmasq-dns-757b4f8459-mnlwd" Feb 03 12:34:53 crc kubenswrapper[4820]: I0203 12:34:53.789815 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9627e225-fd7c-4d6c-bcf1-0434bfb15d22-dns-svc\") pod \"dnsmasq-dns-757b4f8459-mnlwd\" (UID: \"9627e225-fd7c-4d6c-bcf1-0434bfb15d22\") " pod="openstack/dnsmasq-dns-757b4f8459-mnlwd" Feb 03 12:34:53 crc kubenswrapper[4820]: I0203 12:34:53.789863 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eeca15ca-fac4-4279-b44b-8929705a4dfb-config-data\") pod \"nova-scheduler-0\" (UID: \"eeca15ca-fac4-4279-b44b-8929705a4dfb\") " pod="openstack/nova-scheduler-0" Feb 03 12:34:53 crc kubenswrapper[4820]: I0203 12:34:53.789957 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tbqnm\" (UniqueName: \"kubernetes.io/projected/eeca15ca-fac4-4279-b44b-8929705a4dfb-kube-api-access-tbqnm\") pod \"nova-scheduler-0\" (UID: \"eeca15ca-fac4-4279-b44b-8929705a4dfb\") " pod="openstack/nova-scheduler-0" Feb 03 12:34:53 crc kubenswrapper[4820]: I0203 12:34:53.789995 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xkqzc\" (UniqueName: \"kubernetes.io/projected/9627e225-fd7c-4d6c-bcf1-0434bfb15d22-kube-api-access-xkqzc\") pod \"dnsmasq-dns-757b4f8459-mnlwd\" (UID: \"9627e225-fd7c-4d6c-bcf1-0434bfb15d22\") " pod="openstack/dnsmasq-dns-757b4f8459-mnlwd" Feb 03 12:34:53 crc kubenswrapper[4820]: I0203 12:34:53.788777 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9627e225-fd7c-4d6c-bcf1-0434bfb15d22-config\") pod \"dnsmasq-dns-757b4f8459-mnlwd\" (UID: \"9627e225-fd7c-4d6c-bcf1-0434bfb15d22\") " pod="openstack/dnsmasq-dns-757b4f8459-mnlwd" Feb 03 12:34:53 crc kubenswrapper[4820]: I0203 12:34:53.798293 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" 
(UniqueName: \"kubernetes.io/configmap/9627e225-fd7c-4d6c-bcf1-0434bfb15d22-ovsdbserver-nb\") pod \"dnsmasq-dns-757b4f8459-mnlwd\" (UID: \"9627e225-fd7c-4d6c-bcf1-0434bfb15d22\") " pod="openstack/dnsmasq-dns-757b4f8459-mnlwd" Feb 03 12:34:53 crc kubenswrapper[4820]: I0203 12:34:53.800621 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9627e225-fd7c-4d6c-bcf1-0434bfb15d22-ovsdbserver-sb\") pod \"dnsmasq-dns-757b4f8459-mnlwd\" (UID: \"9627e225-fd7c-4d6c-bcf1-0434bfb15d22\") " pod="openstack/dnsmasq-dns-757b4f8459-mnlwd" Feb 03 12:34:53 crc kubenswrapper[4820]: I0203 12:34:53.800628 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9627e225-fd7c-4d6c-bcf1-0434bfb15d22-dns-swift-storage-0\") pod \"dnsmasq-dns-757b4f8459-mnlwd\" (UID: \"9627e225-fd7c-4d6c-bcf1-0434bfb15d22\") " pod="openstack/dnsmasq-dns-757b4f8459-mnlwd" Feb 03 12:34:53 crc kubenswrapper[4820]: I0203 12:34:53.800875 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9627e225-fd7c-4d6c-bcf1-0434bfb15d22-dns-svc\") pod \"dnsmasq-dns-757b4f8459-mnlwd\" (UID: \"9627e225-fd7c-4d6c-bcf1-0434bfb15d22\") " pod="openstack/dnsmasq-dns-757b4f8459-mnlwd" Feb 03 12:34:53 crc kubenswrapper[4820]: I0203 12:34:53.811459 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 03 12:34:53 crc kubenswrapper[4820]: I0203 12:34:53.820734 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 03 12:34:53 crc kubenswrapper[4820]: I0203 12:34:53.826968 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xkqzc\" (UniqueName: \"kubernetes.io/projected/9627e225-fd7c-4d6c-bcf1-0434bfb15d22-kube-api-access-xkqzc\") pod \"dnsmasq-dns-757b4f8459-mnlwd\" (UID: \"9627e225-fd7c-4d6c-bcf1-0434bfb15d22\") " pod="openstack/dnsmasq-dns-757b4f8459-mnlwd" Feb 03 12:34:53 crc kubenswrapper[4820]: I0203 12:34:53.898552 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eeca15ca-fac4-4279-b44b-8929705a4dfb-config-data\") pod \"nova-scheduler-0\" (UID: \"eeca15ca-fac4-4279-b44b-8929705a4dfb\") " pod="openstack/nova-scheduler-0" Feb 03 12:34:53 crc kubenswrapper[4820]: I0203 12:34:53.898608 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tbqnm\" (UniqueName: \"kubernetes.io/projected/eeca15ca-fac4-4279-b44b-8929705a4dfb-kube-api-access-tbqnm\") pod \"nova-scheduler-0\" (UID: \"eeca15ca-fac4-4279-b44b-8929705a4dfb\") " pod="openstack/nova-scheduler-0" Feb 03 12:34:53 crc kubenswrapper[4820]: I0203 12:34:53.898694 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eeca15ca-fac4-4279-b44b-8929705a4dfb-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"eeca15ca-fac4-4279-b44b-8929705a4dfb\") " pod="openstack/nova-scheduler-0" Feb 03 12:34:53 crc kubenswrapper[4820]: I0203 12:34:53.909082 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-mnlwd" Feb 03 12:34:53 crc kubenswrapper[4820]: I0203 12:34:53.911669 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eeca15ca-fac4-4279-b44b-8929705a4dfb-config-data\") pod \"nova-scheduler-0\" (UID: \"eeca15ca-fac4-4279-b44b-8929705a4dfb\") " pod="openstack/nova-scheduler-0" Feb 03 12:34:53 crc kubenswrapper[4820]: I0203 12:34:53.920909 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eeca15ca-fac4-4279-b44b-8929705a4dfb-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"eeca15ca-fac4-4279-b44b-8929705a4dfb\") " pod="openstack/nova-scheduler-0" Feb 03 12:34:53 crc kubenswrapper[4820]: I0203 12:34:53.947930 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tbqnm\" (UniqueName: \"kubernetes.io/projected/eeca15ca-fac4-4279-b44b-8929705a4dfb-kube-api-access-tbqnm\") pod \"nova-scheduler-0\" (UID: \"eeca15ca-fac4-4279-b44b-8929705a4dfb\") " pod="openstack/nova-scheduler-0" Feb 03 12:34:53 crc kubenswrapper[4820]: I0203 12:34:53.969498 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 03 12:34:54 crc kubenswrapper[4820]: I0203 12:34:54.673922 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-qmdlk"] Feb 03 12:34:55 crc kubenswrapper[4820]: I0203 12:34:55.965213 4820 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-lbsmw container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.10:8443/healthz\": dial tcp 10.217.0.10:8443: i/o timeout" start-of-body= Feb 03 12:34:55 crc kubenswrapper[4820]: I0203 12:34:55.965777 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-lbsmw" podUID="c93c42c7-c9ff-42cc-b604-e36f7a063fcf" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.10:8443/healthz\": dial tcp 10.217.0.10:8443: i/o timeout" Feb 03 12:34:56 crc kubenswrapper[4820]: E0203 12:34:56.002272 4820 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 8512128a86b7d758c7d69e58967a226b69e077291d9047dbd3a2e7b75fb3fd81 is running failed: container process not found" containerID="8512128a86b7d758c7d69e58967a226b69e077291d9047dbd3a2e7b75fb3fd81" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Feb 03 12:34:56 crc kubenswrapper[4820]: E0203 12:34:56.004098 4820 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 8512128a86b7d758c7d69e58967a226b69e077291d9047dbd3a2e7b75fb3fd81 is running failed: container process not found" containerID="8512128a86b7d758c7d69e58967a226b69e077291d9047dbd3a2e7b75fb3fd81" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Feb 03 12:34:56 crc kubenswrapper[4820]: E0203 12:34:56.009403 4820 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 8512128a86b7d758c7d69e58967a226b69e077291d9047dbd3a2e7b75fb3fd81 is running failed: container process not found" 
containerID="8512128a86b7d758c7d69e58967a226b69e077291d9047dbd3a2e7b75fb3fd81" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Feb 03 12:34:56 crc kubenswrapper[4820]: E0203 12:34:56.009469 4820 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 8512128a86b7d758c7d69e58967a226b69e077291d9047dbd3a2e7b75fb3fd81 is running failed: container process not found" probeType="Readiness" pod="openstack/nova-cell0-conductor-0" podUID="9b7dd94a-12cb-4fe7-9ad2-076e72274d90" containerName="nova-cell0-conductor-conductor" Feb 03 12:34:56 crc kubenswrapper[4820]: I0203 12:34:56.011437 4820 patch_prober.go:28] interesting pod/route-controller-manager-9b8956944-vw228 container/route-controller-manager namespace/openshift-route-controller-manager: Liveness probe status=failure output="Get \"https://10.217.0.70:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 03 12:34:56 crc kubenswrapper[4820]: I0203 12:34:56.011501 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-route-controller-manager/route-controller-manager-9b8956944-vw228" podUID="35cf07e8-baa5-46c0-9226-22bdbcb2f569" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.70:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 03 12:34:56 crc kubenswrapper[4820]: I0203 12:34:56.241119 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 03 12:34:56 crc kubenswrapper[4820]: I0203 12:34:56.241166 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-qmdlk" event={"ID":"96e51574-4c0f-449e-99c9-f71651ddf08e","Type":"ContainerStarted","Data":"734a15fc02924f45f95d9da96b2317d1b39ac507d76ee94eb0d52e6a7d330883"} Feb 03 12:34:56 crc kubenswrapper[4820]: I0203 12:34:56.523031 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-mnlwd"] Feb 03 12:34:58 crc kubenswrapper[4820]: W0203 12:34:56.575175 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9627e225_fd7c_4d6c_bcf1_0434bfb15d22.slice/crio-e0251408713a3bee30fe6b97684f56007bcba7e0395d139db4f8edff75e29c4d WatchSource:0}: Error finding container e0251408713a3bee30fe6b97684f56007bcba7e0395d139db4f8edff75e29c4d: Status 404 returned error can't find the container with id e0251408713a3bee30fe6b97684f56007bcba7e0395d139db4f8edff75e29c4d Feb 03 12:34:58 crc kubenswrapper[4820]: I0203 12:34:56.607442 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 03 12:34:58 crc kubenswrapper[4820]: I0203 12:34:56.624024 4820 generic.go:334] "Generic (PLEG): container finished" podID="9b7dd94a-12cb-4fe7-9ad2-076e72274d90" containerID="8512128a86b7d758c7d69e58967a226b69e077291d9047dbd3a2e7b75fb3fd81" exitCode=0 Feb 03 12:34:58 crc kubenswrapper[4820]: I0203 12:34:56.624222 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"9b7dd94a-12cb-4fe7-9ad2-076e72274d90","Type":"ContainerDied","Data":"8512128a86b7d758c7d69e58967a226b69e077291d9047dbd3a2e7b75fb3fd81"} Feb 03 12:34:58 crc kubenswrapper[4820]: I0203 12:34:56.626437 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"068081be-0cae-4b93-a5a4-cefe01fe6396","Type":"ContainerStarted","Data":"3dfd7a660a987a0f508332a2eeb77a03adf50d545d723e0bd7714f20945ae2bc"} Feb 03 12:34:58 crc kubenswrapper[4820]: W0203 12:34:56.699608 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcc74c11f_b384_4945_92eb_82e0ea1d63f6.slice/crio-9ac0eb72521ce1b282d2fc56682b5530ac78c116f9de1054778f3bd88aa9255a WatchSource:0}: Error finding container 9ac0eb72521ce1b282d2fc56682b5530ac78c116f9de1054778f3bd88aa9255a: Status 404 returned error can't find the container with id 9ac0eb72521ce1b282d2fc56682b5530ac78c116f9de1054778f3bd88aa9255a Feb 03 12:34:58 crc kubenswrapper[4820]: I0203 12:34:56.796581 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 03 12:34:58 crc kubenswrapper[4820]: I0203 12:34:56.823235 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 03 12:34:58 crc kubenswrapper[4820]: E0203 12:34:57.177093 4820 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod17c371f7_f032_4444_8d4b_1183a224c7b0.slice/crio-conmon-0f246c95365efface890ae91ed527bfb457eda5dc7b27fa8cda294f51a37e248.scope\": RecentStats: unable to find data in memory cache]" Feb 03 12:34:58 crc kubenswrapper[4820]: I0203 12:34:58.389671 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"eeca15ca-fac4-4279-b44b-8929705a4dfb","Type":"ContainerStarted","Data":"3545696a53140af6dd697adf873d00f4d81ad24c64d8965cd987c7b170810d6a"} Feb 03 12:34:58 crc kubenswrapper[4820]: I0203 12:34:58.398457 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"cc74c11f-b384-4945-92eb-82e0ea1d63f6","Type":"ContainerStarted","Data":"9ac0eb72521ce1b282d2fc56682b5530ac78c116f9de1054778f3bd88aa9255a"} Feb 03 12:34:58 crc kubenswrapper[4820]: I0203 12:34:58.409533 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"35400f32-654e-47c4-8fbc-c802522c7c76","Type":"ContainerStarted","Data":"5ff263fb0f93546748c0bd21bb0c9bae0575d3e3ec52a70070ae228bc1197825"} Feb 03 12:34:58 crc kubenswrapper[4820]: I0203 12:34:58.472577 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-mnlwd" event={"ID":"9627e225-fd7c-4d6c-bcf1-0434bfb15d22","Type":"ContainerStarted","Data":"d3758e50d1f73ba436300e3091bc9d12790c408d343dc696e65a5675af65f800"} Feb 03 12:34:58 crc kubenswrapper[4820]: I0203 12:34:58.472637 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-mnlwd" event={"ID":"9627e225-fd7c-4d6c-bcf1-0434bfb15d22","Type":"ContainerStarted","Data":"e0251408713a3bee30fe6b97684f56007bcba7e0395d139db4f8edff75e29c4d"} Feb 03 12:34:58 crc kubenswrapper[4820]: I0203 12:34:58.496461 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-qmdlk" event={"ID":"96e51574-4c0f-449e-99c9-f71651ddf08e","Type":"ContainerStarted","Data":"cab93bf98dd5daffd2433ee582a8708b834dbb042d829efa03eac43dcfc4f65e"} Feb 03 12:34:58 crc kubenswrapper[4820]: I0203 12:34:58.584325 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-gcx8s"] Feb 03 12:34:58 crc kubenswrapper[4820]: I0203 12:34:58.585566 4820 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-qmdlk" podStartSLOduration=6.58555138 podStartE2EDuration="6.58555138s" podCreationTimestamp="2026-02-03 12:34:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:34:58.575762595 +0000 UTC m=+1816.098838459" watchObservedRunningTime="2026-02-03 12:34:58.58555138 +0000 UTC m=+1816.108627234" Feb 03 12:34:58 crc kubenswrapper[4820]: I0203 12:34:58.586659 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-gcx8s" Feb 03 12:34:58 crc kubenswrapper[4820]: I0203 12:34:58.596272 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Feb 03 12:34:58 crc kubenswrapper[4820]: I0203 12:34:58.596586 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Feb 03 12:34:58 crc kubenswrapper[4820]: I0203 12:34:58.620100 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rm7xc\" (UniqueName: \"kubernetes.io/projected/03344b7f-772a-4f59-9955-99a923bd9fee-kube-api-access-rm7xc\") pod \"nova-cell1-conductor-db-sync-gcx8s\" (UID: \"03344b7f-772a-4f59-9955-99a923bd9fee\") " pod="openstack/nova-cell1-conductor-db-sync-gcx8s" Feb 03 12:34:58 crc kubenswrapper[4820]: I0203 12:34:58.620394 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03344b7f-772a-4f59-9955-99a923bd9fee-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-gcx8s\" (UID: \"03344b7f-772a-4f59-9955-99a923bd9fee\") " pod="openstack/nova-cell1-conductor-db-sync-gcx8s" Feb 03 12:34:58 crc kubenswrapper[4820]: I0203 12:34:58.620435 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03344b7f-772a-4f59-9955-99a923bd9fee-config-data\") pod \"nova-cell1-conductor-db-sync-gcx8s\" (UID: \"03344b7f-772a-4f59-9955-99a923bd9fee\") " pod="openstack/nova-cell1-conductor-db-sync-gcx8s" Feb 03 12:34:58 crc kubenswrapper[4820]: I0203 12:34:58.620456 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/03344b7f-772a-4f59-9955-99a923bd9fee-scripts\") pod \"nova-cell1-conductor-db-sync-gcx8s\" (UID: \"03344b7f-772a-4f59-9955-99a923bd9fee\") " pod="openstack/nova-cell1-conductor-db-sync-gcx8s" Feb 03 12:34:58 crc kubenswrapper[4820]: I0203 12:34:58.623355 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-gcx8s"] Feb 03 12:34:58 crc kubenswrapper[4820]: I0203 12:34:58.722359 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rm7xc\" (UniqueName: \"kubernetes.io/projected/03344b7f-772a-4f59-9955-99a923bd9fee-kube-api-access-rm7xc\") pod \"nova-cell1-conductor-db-sync-gcx8s\" (UID: \"03344b7f-772a-4f59-9955-99a923bd9fee\") " pod="openstack/nova-cell1-conductor-db-sync-gcx8s" Feb 03 12:34:58 crc kubenswrapper[4820]: I0203 12:34:58.722900 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/03344b7f-772a-4f59-9955-99a923bd9fee-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-gcx8s\" (UID: \"03344b7f-772a-4f59-9955-99a923bd9fee\") " pod="openstack/nova-cell1-conductor-db-sync-gcx8s" Feb 03 12:34:58 crc kubenswrapper[4820]: I0203 12:34:58.722938 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03344b7f-772a-4f59-9955-99a923bd9fee-config-data\") pod \"nova-cell1-conductor-db-sync-gcx8s\" (UID: \"03344b7f-772a-4f59-9955-99a923bd9fee\") " pod="openstack/nova-cell1-conductor-db-sync-gcx8s" Feb 03 12:34:58 crc kubenswrapper[4820]: I0203 12:34:58.722955 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/03344b7f-772a-4f59-9955-99a923bd9fee-scripts\") pod \"nova-cell1-conductor-db-sync-gcx8s\" (UID: \"03344b7f-772a-4f59-9955-99a923bd9fee\") " pod="openstack/nova-cell1-conductor-db-sync-gcx8s" Feb 03 12:34:58 crc kubenswrapper[4820]: I0203 12:34:58.730356 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03344b7f-772a-4f59-9955-99a923bd9fee-config-data\") pod \"nova-cell1-conductor-db-sync-gcx8s\" (UID: \"03344b7f-772a-4f59-9955-99a923bd9fee\") " pod="openstack/nova-cell1-conductor-db-sync-gcx8s" Feb 03 12:34:58 crc kubenswrapper[4820]: I0203 12:34:58.733080 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/03344b7f-772a-4f59-9955-99a923bd9fee-scripts\") pod \"nova-cell1-conductor-db-sync-gcx8s\" (UID: \"03344b7f-772a-4f59-9955-99a923bd9fee\") " pod="openstack/nova-cell1-conductor-db-sync-gcx8s" Feb 03 12:34:58 crc kubenswrapper[4820]: I0203 12:34:58.735880 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03344b7f-772a-4f59-9955-99a923bd9fee-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-gcx8s\" (UID: \"03344b7f-772a-4f59-9955-99a923bd9fee\") " pod="openstack/nova-cell1-conductor-db-sync-gcx8s" Feb 03 12:34:58 crc kubenswrapper[4820]: I0203 12:34:58.779012 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rm7xc\" (UniqueName: \"kubernetes.io/projected/03344b7f-772a-4f59-9955-99a923bd9fee-kube-api-access-rm7xc\") pod \"nova-cell1-conductor-db-sync-gcx8s\" (UID: \"03344b7f-772a-4f59-9955-99a923bd9fee\") " pod="openstack/nova-cell1-conductor-db-sync-gcx8s" Feb 03 12:34:58 crc kubenswrapper[4820]: I0203 12:34:58.860126 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-gcx8s" Feb 03 12:34:58 crc kubenswrapper[4820]: I0203 12:34:58.958934 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 03 12:34:59 crc kubenswrapper[4820]: I0203 12:34:59.139706 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mmlcc\" (UniqueName: \"kubernetes.io/projected/9b7dd94a-12cb-4fe7-9ad2-076e72274d90-kube-api-access-mmlcc\") pod \"9b7dd94a-12cb-4fe7-9ad2-076e72274d90\" (UID: \"9b7dd94a-12cb-4fe7-9ad2-076e72274d90\") " Feb 03 12:34:59 crc kubenswrapper[4820]: I0203 12:34:59.140701 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b7dd94a-12cb-4fe7-9ad2-076e72274d90-combined-ca-bundle\") pod \"9b7dd94a-12cb-4fe7-9ad2-076e72274d90\" (UID: \"9b7dd94a-12cb-4fe7-9ad2-076e72274d90\") " Feb 03 12:34:59 crc kubenswrapper[4820]: I0203 12:34:59.143672 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b7dd94a-12cb-4fe7-9ad2-076e72274d90-config-data\") pod \"9b7dd94a-12cb-4fe7-9ad2-076e72274d90\" (UID: \"9b7dd94a-12cb-4fe7-9ad2-076e72274d90\") " Feb 03 12:34:59 crc kubenswrapper[4820]: I0203 12:34:59.160419 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b7dd94a-12cb-4fe7-9ad2-076e72274d90-kube-api-access-mmlcc" (OuterVolumeSpecName: "kube-api-access-mmlcc") pod "9b7dd94a-12cb-4fe7-9ad2-076e72274d90" (UID: "9b7dd94a-12cb-4fe7-9ad2-076e72274d90"). InnerVolumeSpecName "kube-api-access-mmlcc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:34:59 crc kubenswrapper[4820]: I0203 12:34:59.221148 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b7dd94a-12cb-4fe7-9ad2-076e72274d90-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9b7dd94a-12cb-4fe7-9ad2-076e72274d90" (UID: "9b7dd94a-12cb-4fe7-9ad2-076e72274d90"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:34:59 crc kubenswrapper[4820]: I0203 12:34:59.237265 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b7dd94a-12cb-4fe7-9ad2-076e72274d90-config-data" (OuterVolumeSpecName: "config-data") pod "9b7dd94a-12cb-4fe7-9ad2-076e72274d90" (UID: "9b7dd94a-12cb-4fe7-9ad2-076e72274d90"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:34:59 crc kubenswrapper[4820]: I0203 12:34:59.256190 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b7dd94a-12cb-4fe7-9ad2-076e72274d90-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 12:34:59 crc kubenswrapper[4820]: I0203 12:34:59.256234 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mmlcc\" (UniqueName: \"kubernetes.io/projected/9b7dd94a-12cb-4fe7-9ad2-076e72274d90-kube-api-access-mmlcc\") on node \"crc\" DevicePath \"\"" Feb 03 12:34:59 crc kubenswrapper[4820]: I0203 12:34:59.256253 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b7dd94a-12cb-4fe7-9ad2-076e72274d90-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 12:34:59 crc kubenswrapper[4820]: I0203 12:34:59.666855 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"9b7dd94a-12cb-4fe7-9ad2-076e72274d90","Type":"ContainerDied","Data":"322916f95a1e2fc6b1d29d8ef736dad1ff9d269093e7599e4ea4b15ca899f602"} Feb 03 12:34:59 crc kubenswrapper[4820]: I0203 12:34:59.679921 4820 scope.go:117] "RemoveContainer" containerID="8512128a86b7d758c7d69e58967a226b69e077291d9047dbd3a2e7b75fb3fd81" Feb 03 12:34:59 crc kubenswrapper[4820]: I0203 12:34:59.668000 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 03 12:34:59 crc kubenswrapper[4820]: I0203 12:34:59.735520 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-mnlwd" event={"ID":"9627e225-fd7c-4d6c-bcf1-0434bfb15d22","Type":"ContainerDied","Data":"d3758e50d1f73ba436300e3091bc9d12790c408d343dc696e65a5675af65f800"} Feb 03 12:34:59 crc kubenswrapper[4820]: I0203 12:34:59.735619 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-757b4f8459-mnlwd" Feb 03 12:34:59 crc kubenswrapper[4820]: I0203 12:34:59.728871 4820 generic.go:334] "Generic (PLEG): container finished" podID="9627e225-fd7c-4d6c-bcf1-0434bfb15d22" containerID="d3758e50d1f73ba436300e3091bc9d12790c408d343dc696e65a5675af65f800" exitCode=0 Feb 03 12:34:59 crc kubenswrapper[4820]: I0203 12:34:59.766006 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-mnlwd" event={"ID":"9627e225-fd7c-4d6c-bcf1-0434bfb15d22","Type":"ContainerStarted","Data":"185ae363e0c2ef42e23e14dcac2896841679b18153b96a6e0ea1ecd99f11d620"} Feb 03 12:34:59 crc kubenswrapper[4820]: I0203 12:34:59.899594 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-757b4f8459-mnlwd" podStartSLOduration=6.899560193 podStartE2EDuration="6.899560193s" podCreationTimestamp="2026-02-03 12:34:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:34:59.860489494 +0000 UTC m=+1817.383565358" watchObservedRunningTime="2026-02-03 12:34:59.899560193 +0000 UTC m=+1817.422636137" Feb 03 12:34:59 crc kubenswrapper[4820]: I0203 12:34:59.965372 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 03 12:34:59 crc kubenswrapper[4820]: I0203 12:34:59.985386 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 03 12:34:59 crc kubenswrapper[4820]: I0203 12:34:59.999807 4820 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-gcx8s"] Feb 03 12:35:00 crc kubenswrapper[4820]: I0203 12:35:00.012580 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 03 12:35:00 crc kubenswrapper[4820]: E0203 12:35:00.013204 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b7dd94a-12cb-4fe7-9ad2-076e72274d90" containerName="nova-cell0-conductor-conductor" Feb 03 12:35:00 crc kubenswrapper[4820]: I0203 12:35:00.013230 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b7dd94a-12cb-4fe7-9ad2-076e72274d90" containerName="nova-cell0-conductor-conductor" Feb 03 12:35:00 crc kubenswrapper[4820]: I0203 12:35:00.013546 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b7dd94a-12cb-4fe7-9ad2-076e72274d90" containerName="nova-cell0-conductor-conductor" Feb 03 12:35:00 crc kubenswrapper[4820]: I0203 12:35:00.015230 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 03 12:35:00 crc kubenswrapper[4820]: I0203 12:35:00.028767 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Feb 03 12:35:00 crc kubenswrapper[4820]: I0203 12:35:00.029052 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 03 12:35:00 crc kubenswrapper[4820]: I0203 12:35:00.075771 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cvfmz\" (UniqueName: \"kubernetes.io/projected/d1bc719a-a75c-4bf1-aaae-0e89d1ed34db-kube-api-access-cvfmz\") pod \"nova-cell0-conductor-0\" (UID: \"d1bc719a-a75c-4bf1-aaae-0e89d1ed34db\") " pod="openstack/nova-cell0-conductor-0" Feb 03 12:35:00 crc kubenswrapper[4820]: I0203 12:35:00.090476 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1bc719a-a75c-4bf1-aaae-0e89d1ed34db-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"d1bc719a-a75c-4bf1-aaae-0e89d1ed34db\") " pod="openstack/nova-cell0-conductor-0" Feb 03 12:35:00 crc kubenswrapper[4820]: I0203 12:35:00.090801 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1bc719a-a75c-4bf1-aaae-0e89d1ed34db-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"d1bc719a-a75c-4bf1-aaae-0e89d1ed34db\") " pod="openstack/nova-cell0-conductor-0" Feb 03 12:35:00 crc kubenswrapper[4820]: I0203 12:35:00.544133 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1bc719a-a75c-4bf1-aaae-0e89d1ed34db-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"d1bc719a-a75c-4bf1-aaae-0e89d1ed34db\") " pod="openstack/nova-cell0-conductor-0" Feb 03 12:35:00 crc kubenswrapper[4820]: I0203 12:35:00.544966 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1bc719a-a75c-4bf1-aaae-0e89d1ed34db-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"d1bc719a-a75c-4bf1-aaae-0e89d1ed34db\") " pod="openstack/nova-cell0-conductor-0" Feb 03 12:35:00 crc kubenswrapper[4820]: I0203 12:35:00.545129 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cvfmz\" (UniqueName: 
\"kubernetes.io/projected/d1bc719a-a75c-4bf1-aaae-0e89d1ed34db-kube-api-access-cvfmz\") pod \"nova-cell0-conductor-0\" (UID: \"d1bc719a-a75c-4bf1-aaae-0e89d1ed34db\") " pod="openstack/nova-cell0-conductor-0" Feb 03 12:35:00 crc kubenswrapper[4820]: I0203 12:35:00.602189 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1bc719a-a75c-4bf1-aaae-0e89d1ed34db-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"d1bc719a-a75c-4bf1-aaae-0e89d1ed34db\") " pod="openstack/nova-cell0-conductor-0" Feb 03 12:35:00 crc kubenswrapper[4820]: I0203 12:35:00.633454 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1bc719a-a75c-4bf1-aaae-0e89d1ed34db-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"d1bc719a-a75c-4bf1-aaae-0e89d1ed34db\") " pod="openstack/nova-cell0-conductor-0" Feb 03 12:35:00 crc kubenswrapper[4820]: I0203 12:35:00.678952 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cvfmz\" (UniqueName: \"kubernetes.io/projected/d1bc719a-a75c-4bf1-aaae-0e89d1ed34db-kube-api-access-cvfmz\") pod \"nova-cell0-conductor-0\" (UID: \"d1bc719a-a75c-4bf1-aaae-0e89d1ed34db\") " pod="openstack/nova-cell0-conductor-0" Feb 03 12:35:00 crc kubenswrapper[4820]: I0203 12:35:00.797086 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-gcx8s" event={"ID":"03344b7f-772a-4f59-9955-99a923bd9fee","Type":"ContainerStarted","Data":"14a3c274a1fdf9682c22e5d93fa9d7808dcca36e375afabec3d3de8cfa7bc356"} Feb 03 12:35:00 crc kubenswrapper[4820]: I0203 12:35:00.978838 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 03 12:35:01 crc kubenswrapper[4820]: I0203 12:35:01.491086 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9b7dd94a-12cb-4fe7-9ad2-076e72274d90" path="/var/lib/kubelet/pods/9b7dd94a-12cb-4fe7-9ad2-076e72274d90/volumes" Feb 03 12:35:01 crc kubenswrapper[4820]: I0203 12:35:01.841289 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-gcx8s" event={"ID":"03344b7f-772a-4f59-9955-99a923bd9fee","Type":"ContainerStarted","Data":"17517eaf1b7daae15a9f186aa6d51c7fa4ac86a2ede0b331062db143c586a3f3"} Feb 03 12:35:01 crc kubenswrapper[4820]: I0203 12:35:01.887796 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-gcx8s" podStartSLOduration=3.887761933 podStartE2EDuration="3.887761933s" podCreationTimestamp="2026-02-03 12:34:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:35:01.872785826 +0000 UTC m=+1819.395861720" watchObservedRunningTime="2026-02-03 12:35:01.887761933 +0000 UTC m=+1819.410837797" Feb 03 12:35:02 crc kubenswrapper[4820]: I0203 12:35:02.650629 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 03 12:35:02 crc kubenswrapper[4820]: I0203 12:35:02.918974 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 03 12:35:02 crc kubenswrapper[4820]: I0203 12:35:02.919320 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="633dafb1-94a5-4d7c-9ffc-03f32e3dc9e2" containerName="ceilometer-central-agent" 
containerID="cri-o://465ff03b2ca70fbdd3d10d95bd4c4be128cc1c465b3b4a0fad006b81b4bd36be" gracePeriod=30 Feb 03 12:35:02 crc kubenswrapper[4820]: I0203 12:35:02.920298 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="633dafb1-94a5-4d7c-9ffc-03f32e3dc9e2" containerName="proxy-httpd" containerID="cri-o://2593d50af9746ad6d6d1a970a01c1509755c275f1991b4dec341cd3a990e342f" gracePeriod=30 Feb 03 12:35:02 crc kubenswrapper[4820]: I0203 12:35:02.920362 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="633dafb1-94a5-4d7c-9ffc-03f32e3dc9e2" containerName="sg-core" containerID="cri-o://f4fc507cd388efa2a49573fdcbfa7bf757e12ec2c473b2be28ae886e813ba750" gracePeriod=30 Feb 03 12:35:02 crc kubenswrapper[4820]: I0203 12:35:02.920395 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="633dafb1-94a5-4d7c-9ffc-03f32e3dc9e2" containerName="ceilometer-notification-agent" containerID="cri-o://0b1773f7ae32ea07b74ba0043ee70a6621c9a349fe4ccc6d5db6db768e0be7fb" gracePeriod=30 Feb 03 12:35:02 crc kubenswrapper[4820]: I0203 12:35:02.970059 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 03 12:35:03 crc kubenswrapper[4820]: I0203 12:35:03.494462 4820 scope.go:117] "RemoveContainer" containerID="3cd22393026f08f47789b9666f306edb2325c4bd0fbfd405ee1636bfdfa661d3" Feb 03 12:35:03 crc kubenswrapper[4820]: E0203 12:35:03.495022 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qj7xr_openshift-machine-config-operator(2c02def6-29f2-448e-80ec-0c8ee290f053)\"" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" Feb 03 12:35:03 crc kubenswrapper[4820]: I0203 12:35:03.549491 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-5fdc8588b4-jtjr8" podUID="17c371f7-f032-4444-8d4b-1183a224c7b0" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.165:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.165:8443: connect: connection refused" Feb 03 12:35:03 crc kubenswrapper[4820]: I0203 12:35:03.631613 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-68b4df5bdd-tdb9h" podUID="308562dd-6078-4c1c-a4e0-c01a60a2d81d" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.166:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.166:8443: connect: connection refused" Feb 03 12:35:04 crc kubenswrapper[4820]: I0203 12:35:04.281354 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-757b4f8459-mnlwd" Feb 03 12:35:04 crc kubenswrapper[4820]: I0203 12:35:04.365552 4820 generic.go:334] "Generic (PLEG): container finished" podID="633dafb1-94a5-4d7c-9ffc-03f32e3dc9e2" containerID="f4fc507cd388efa2a49573fdcbfa7bf757e12ec2c473b2be28ae886e813ba750" exitCode=2 Feb 03 12:35:04 crc kubenswrapper[4820]: I0203 12:35:04.365630 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"633dafb1-94a5-4d7c-9ffc-03f32e3dc9e2","Type":"ContainerDied","Data":"f4fc507cd388efa2a49573fdcbfa7bf757e12ec2c473b2be28ae886e813ba750"} Feb 03 12:35:04 crc kubenswrapper[4820]: I0203 
12:35:04.440751 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="633dafb1-94a5-4d7c-9ffc-03f32e3dc9e2" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.209:3000/\": read tcp 10.217.0.2:34004->10.217.0.209:3000: read: connection reset by peer" Feb 03 12:35:04 crc kubenswrapper[4820]: I0203 12:35:04.993963 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-sfhnq"] Feb 03 12:35:04 crc kubenswrapper[4820]: I0203 12:35:04.994952 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5c9776ccc5-sfhnq" podUID="0e9efb73-1fc6-4e04-9b3c-89226c1d717c" containerName="dnsmasq-dns" containerID="cri-o://c3c54b645028b903c154b8fd418e95de43ab9aa46fb7314f0f3decedd34600c4" gracePeriod=10 Feb 03 12:35:05 crc kubenswrapper[4820]: I0203 12:35:05.637360 4820 generic.go:334] "Generic (PLEG): container finished" podID="633dafb1-94a5-4d7c-9ffc-03f32e3dc9e2" containerID="2593d50af9746ad6d6d1a970a01c1509755c275f1991b4dec341cd3a990e342f" exitCode=0 Feb 03 12:35:05 crc kubenswrapper[4820]: I0203 12:35:05.637508 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"633dafb1-94a5-4d7c-9ffc-03f32e3dc9e2","Type":"ContainerDied","Data":"2593d50af9746ad6d6d1a970a01c1509755c275f1991b4dec341cd3a990e342f"} Feb 03 12:35:05 crc kubenswrapper[4820]: I0203 12:35:05.665996 4820 generic.go:334] "Generic (PLEG): container finished" podID="0e9efb73-1fc6-4e04-9b3c-89226c1d717c" containerID="c3c54b645028b903c154b8fd418e95de43ab9aa46fb7314f0f3decedd34600c4" exitCode=0 Feb 03 12:35:05 crc kubenswrapper[4820]: I0203 12:35:05.666052 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-sfhnq" event={"ID":"0e9efb73-1fc6-4e04-9b3c-89226c1d717c","Type":"ContainerDied","Data":"c3c54b645028b903c154b8fd418e95de43ab9aa46fb7314f0f3decedd34600c4"} Feb 03 12:35:06 crc kubenswrapper[4820]: I0203 12:35:06.694012 4820 generic.go:334] "Generic (PLEG): container finished" podID="633dafb1-94a5-4d7c-9ffc-03f32e3dc9e2" containerID="0b1773f7ae32ea07b74ba0043ee70a6621c9a349fe4ccc6d5db6db768e0be7fb" exitCode=0 Feb 03 12:35:06 crc kubenswrapper[4820]: I0203 12:35:06.694364 4820 generic.go:334] "Generic (PLEG): container finished" podID="633dafb1-94a5-4d7c-9ffc-03f32e3dc9e2" containerID="465ff03b2ca70fbdd3d10d95bd4c4be128cc1c465b3b4a0fad006b81b4bd36be" exitCode=0 Feb 03 12:35:06 crc kubenswrapper[4820]: I0203 12:35:06.694255 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"633dafb1-94a5-4d7c-9ffc-03f32e3dc9e2","Type":"ContainerDied","Data":"0b1773f7ae32ea07b74ba0043ee70a6621c9a349fe4ccc6d5db6db768e0be7fb"} Feb 03 12:35:06 crc kubenswrapper[4820]: I0203 12:35:06.694428 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"633dafb1-94a5-4d7c-9ffc-03f32e3dc9e2","Type":"ContainerDied","Data":"465ff03b2ca70fbdd3d10d95bd4c4be128cc1c465b3b4a0fad006b81b4bd36be"} Feb 03 12:35:07 crc kubenswrapper[4820]: I0203 12:35:07.819185 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"d1bc719a-a75c-4bf1-aaae-0e89d1ed34db","Type":"ContainerStarted","Data":"782a65c917b6d6d671bf36563993b5a83b0ab65af87f032ee40797de575bdf14"} Feb 03 12:35:08 crc kubenswrapper[4820]: I0203 12:35:08.189102 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5c9776ccc5-sfhnq" 
podUID="0e9efb73-1fc6-4e04-9b3c-89226c1d717c" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.187:5353: connect: connection refused" Feb 03 12:35:08 crc kubenswrapper[4820]: E0203 12:35:08.242140 4820 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod17c371f7_f032_4444_8d4b_1183a224c7b0.slice/crio-conmon-0f246c95365efface890ae91ed527bfb457eda5dc7b27fa8cda294f51a37e248.scope\": RecentStats: unable to find data in memory cache]" Feb 03 12:35:13 crc kubenswrapper[4820]: I0203 12:35:13.049527 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-sfhnq" Feb 03 12:35:13 crc kubenswrapper[4820]: I0203 12:35:13.160763 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-5fdc8588b4-jtjr8" podUID="17c371f7-f032-4444-8d4b-1183a224c7b0" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.165:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.165:8443: connect: connection refused" Feb 03 12:35:13 crc kubenswrapper[4820]: I0203 12:35:13.211598 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0e9efb73-1fc6-4e04-9b3c-89226c1d717c-config\") pod \"0e9efb73-1fc6-4e04-9b3c-89226c1d717c\" (UID: \"0e9efb73-1fc6-4e04-9b3c-89226c1d717c\") " Feb 03 12:35:13 crc kubenswrapper[4820]: I0203 12:35:13.211707 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vp5qn\" (UniqueName: \"kubernetes.io/projected/0e9efb73-1fc6-4e04-9b3c-89226c1d717c-kube-api-access-vp5qn\") pod \"0e9efb73-1fc6-4e04-9b3c-89226c1d717c\" (UID: \"0e9efb73-1fc6-4e04-9b3c-89226c1d717c\") " Feb 03 12:35:13 crc kubenswrapper[4820]: I0203 12:35:13.211743 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0e9efb73-1fc6-4e04-9b3c-89226c1d717c-ovsdbserver-nb\") pod \"0e9efb73-1fc6-4e04-9b3c-89226c1d717c\" (UID: \"0e9efb73-1fc6-4e04-9b3c-89226c1d717c\") " Feb 03 12:35:13 crc kubenswrapper[4820]: I0203 12:35:13.211802 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0e9efb73-1fc6-4e04-9b3c-89226c1d717c-dns-swift-storage-0\") pod \"0e9efb73-1fc6-4e04-9b3c-89226c1d717c\" (UID: \"0e9efb73-1fc6-4e04-9b3c-89226c1d717c\") " Feb 03 12:35:13 crc kubenswrapper[4820]: I0203 12:35:13.211917 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0e9efb73-1fc6-4e04-9b3c-89226c1d717c-dns-svc\") pod \"0e9efb73-1fc6-4e04-9b3c-89226c1d717c\" (UID: \"0e9efb73-1fc6-4e04-9b3c-89226c1d717c\") " Feb 03 12:35:13 crc kubenswrapper[4820]: I0203 12:35:13.212057 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0e9efb73-1fc6-4e04-9b3c-89226c1d717c-ovsdbserver-sb\") pod \"0e9efb73-1fc6-4e04-9b3c-89226c1d717c\" (UID: \"0e9efb73-1fc6-4e04-9b3c-89226c1d717c\") " Feb 03 12:35:13 crc kubenswrapper[4820]: I0203 12:35:13.233442 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e9efb73-1fc6-4e04-9b3c-89226c1d717c-kube-api-access-vp5qn" (OuterVolumeSpecName: "kube-api-access-vp5qn") 
pod "0e9efb73-1fc6-4e04-9b3c-89226c1d717c" (UID: "0e9efb73-1fc6-4e04-9b3c-89226c1d717c"). InnerVolumeSpecName "kube-api-access-vp5qn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:35:13 crc kubenswrapper[4820]: I0203 12:35:13.273317 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-5fdc8588b4-jtjr8" Feb 03 12:35:13 crc kubenswrapper[4820]: I0203 12:35:13.274556 4820 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="horizon" containerStatusID={"Type":"cri-o","ID":"76995f196246a725064eaf869384b250078a17273f52f37f37f976ac18b1ddc1"} pod="openstack/horizon-5fdc8588b4-jtjr8" containerMessage="Container horizon failed startup probe, will be restarted" Feb 03 12:35:13 crc kubenswrapper[4820]: I0203 12:35:13.274668 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-5fdc8588b4-jtjr8" podUID="17c371f7-f032-4444-8d4b-1183a224c7b0" containerName="horizon" containerID="cri-o://76995f196246a725064eaf869384b250078a17273f52f37f37f976ac18b1ddc1" gracePeriod=30 Feb 03 12:35:13 crc kubenswrapper[4820]: I0203 12:35:13.349803 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vp5qn\" (UniqueName: \"kubernetes.io/projected/0e9efb73-1fc6-4e04-9b3c-89226c1d717c-kube-api-access-vp5qn\") on node \"crc\" DevicePath \"\"" Feb 03 12:35:13 crc kubenswrapper[4820]: I0203 12:35:13.352132 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0e9efb73-1fc6-4e04-9b3c-89226c1d717c-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "0e9efb73-1fc6-4e04-9b3c-89226c1d717c" (UID: "0e9efb73-1fc6-4e04-9b3c-89226c1d717c"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:35:13 crc kubenswrapper[4820]: I0203 12:35:13.374157 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-sfhnq" event={"ID":"0e9efb73-1fc6-4e04-9b3c-89226c1d717c","Type":"ContainerDied","Data":"fbef18f8d1c741bf0c91268131e3adbc2d701f2a94ffea604f5c426830196486"} Feb 03 12:35:13 crc kubenswrapper[4820]: I0203 12:35:13.374236 4820 scope.go:117] "RemoveContainer" containerID="c3c54b645028b903c154b8fd418e95de43ab9aa46fb7314f0f3decedd34600c4" Feb 03 12:35:13 crc kubenswrapper[4820]: I0203 12:35:13.374475 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-sfhnq" Feb 03 12:35:13 crc kubenswrapper[4820]: I0203 12:35:13.451610 4820 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0e9efb73-1fc6-4e04-9b3c-89226c1d717c-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 03 12:35:13 crc kubenswrapper[4820]: I0203 12:35:13.466092 4820 scope.go:117] "RemoveContainer" containerID="06416610bcd6f6e133e06456fbc64e9840bb2b5e012fe6593123ad78d0bef8ba" Feb 03 12:35:13 crc kubenswrapper[4820]: I0203 12:35:13.829461 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-68b4df5bdd-tdb9h" podUID="308562dd-6078-4c1c-a4e0-c01a60a2d81d" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.166:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.166:8443: connect: connection refused" Feb 03 12:35:14 crc kubenswrapper[4820]: I0203 12:35:14.064564 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0e9efb73-1fc6-4e04-9b3c-89226c1d717c-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "0e9efb73-1fc6-4e04-9b3c-89226c1d717c" (UID: "0e9efb73-1fc6-4e04-9b3c-89226c1d717c"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:35:14 crc kubenswrapper[4820]: I0203 12:35:14.094162 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0e9efb73-1fc6-4e04-9b3c-89226c1d717c-config" (OuterVolumeSpecName: "config") pod "0e9efb73-1fc6-4e04-9b3c-89226c1d717c" (UID: "0e9efb73-1fc6-4e04-9b3c-89226c1d717c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:35:14 crc kubenswrapper[4820]: I0203 12:35:14.117221 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0e9efb73-1fc6-4e04-9b3c-89226c1d717c-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "0e9efb73-1fc6-4e04-9b3c-89226c1d717c" (UID: "0e9efb73-1fc6-4e04-9b3c-89226c1d717c"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:35:14 crc kubenswrapper[4820]: I0203 12:35:14.117621 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0e9efb73-1fc6-4e04-9b3c-89226c1d717c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "0e9efb73-1fc6-4e04-9b3c-89226c1d717c" (UID: "0e9efb73-1fc6-4e04-9b3c-89226c1d717c"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:35:14 crc kubenswrapper[4820]: I0203 12:35:14.127673 4820 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0e9efb73-1fc6-4e04-9b3c-89226c1d717c-config\") on node \"crc\" DevicePath \"\"" Feb 03 12:35:14 crc kubenswrapper[4820]: I0203 12:35:14.127705 4820 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0e9efb73-1fc6-4e04-9b3c-89226c1d717c-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 03 12:35:14 crc kubenswrapper[4820]: I0203 12:35:14.127715 4820 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0e9efb73-1fc6-4e04-9b3c-89226c1d717c-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 03 12:35:14 crc kubenswrapper[4820]: I0203 12:35:14.127727 4820 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0e9efb73-1fc6-4e04-9b3c-89226c1d717c-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 03 12:35:14 crc kubenswrapper[4820]: I0203 12:35:14.391780 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"d1bc719a-a75c-4bf1-aaae-0e89d1ed34db","Type":"ContainerStarted","Data":"32c0415cb6e59be13397c5a635c493c09cb6a6600029d32e984fa01d91107ead"} Feb 03 12:35:14 crc kubenswrapper[4820]: I0203 12:35:14.392508 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Feb 03 12:35:14 crc kubenswrapper[4820]: I0203 12:35:14.394768 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"068081be-0cae-4b93-a5a4-cefe01fe6396","Type":"ContainerStarted","Data":"e524e9a6b7132d40dc2d2e7e05e31e1943d38d1dc9eaf46dfd210b4b6ec05c7b"} Feb 03 12:35:14 crc kubenswrapper[4820]: I0203 12:35:14.398375 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"633dafb1-94a5-4d7c-9ffc-03f32e3dc9e2","Type":"ContainerDied","Data":"a0241b6d36a6ac3ccf0539f8f6d159f735c2e53f56eafbb9c377960b26e8be3d"} Feb 03 12:35:14 crc kubenswrapper[4820]: I0203 12:35:14.398825 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a0241b6d36a6ac3ccf0539f8f6d159f735c2e53f56eafbb9c377960b26e8be3d" Feb 03 12:35:14 crc kubenswrapper[4820]: I0203 12:35:14.401710 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"35400f32-654e-47c4-8fbc-c802522c7c76","Type":"ContainerStarted","Data":"eb53cd7b331e90344f147e4521c9a2ad1f7202f91868835c56206b7e57ddfb38"} Feb 03 12:35:14 crc kubenswrapper[4820]: I0203 12:35:14.424608 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=15.424557333 podStartE2EDuration="15.424557333s" podCreationTimestamp="2026-02-03 12:34:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:35:14.414528951 +0000 UTC m=+1831.937604835" watchObservedRunningTime="2026-02-03 12:35:14.424557333 +0000 UTC m=+1831.947633187" Feb 03 12:35:14 crc kubenswrapper[4820]: I0203 12:35:14.446848 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 03 12:35:14 crc kubenswrapper[4820]: I0203 12:35:14.461026 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=5.320300893 podStartE2EDuration="21.46100049s" podCreationTimestamp="2026-02-03 12:34:53 +0000 UTC" firstStartedPulling="2026-02-03 12:34:56.921778941 +0000 UTC m=+1814.444854805" lastFinishedPulling="2026-02-03 12:35:13.062478538 +0000 UTC m=+1830.585554402" observedRunningTime="2026-02-03 12:35:14.439170398 +0000 UTC m=+1831.962246272" watchObservedRunningTime="2026-02-03 12:35:14.46100049 +0000 UTC m=+1831.984076354" Feb 03 12:35:14 crc kubenswrapper[4820]: I0203 12:35:14.780983 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-sfhnq"] Feb 03 12:35:14 crc kubenswrapper[4820]: I0203 12:35:14.800919 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-sfhnq"] Feb 03 12:35:14 crc kubenswrapper[4820]: I0203 12:35:14.847084 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/633dafb1-94a5-4d7c-9ffc-03f32e3dc9e2-config-data\") pod \"633dafb1-94a5-4d7c-9ffc-03f32e3dc9e2\" (UID: \"633dafb1-94a5-4d7c-9ffc-03f32e3dc9e2\") " Feb 03 12:35:14 crc kubenswrapper[4820]: I0203 12:35:14.847163 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/633dafb1-94a5-4d7c-9ffc-03f32e3dc9e2-log-httpd\") pod \"633dafb1-94a5-4d7c-9ffc-03f32e3dc9e2\" (UID: \"633dafb1-94a5-4d7c-9ffc-03f32e3dc9e2\") " Feb 03 12:35:14 crc kubenswrapper[4820]: I0203 12:35:14.847195 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/633dafb1-94a5-4d7c-9ffc-03f32e3dc9e2-run-httpd\") pod \"633dafb1-94a5-4d7c-9ffc-03f32e3dc9e2\" (UID: \"633dafb1-94a5-4d7c-9ffc-03f32e3dc9e2\") " Feb 03 12:35:14 crc kubenswrapper[4820]: I0203 12:35:14.847285 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/633dafb1-94a5-4d7c-9ffc-03f32e3dc9e2-sg-core-conf-yaml\") pod \"633dafb1-94a5-4d7c-9ffc-03f32e3dc9e2\" (UID: \"633dafb1-94a5-4d7c-9ffc-03f32e3dc9e2\") " Feb 03 12:35:14 crc kubenswrapper[4820]: I0203 12:35:14.847325 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-67s26\" (UniqueName: \"kubernetes.io/projected/633dafb1-94a5-4d7c-9ffc-03f32e3dc9e2-kube-api-access-67s26\") pod \"633dafb1-94a5-4d7c-9ffc-03f32e3dc9e2\" (UID: \"633dafb1-94a5-4d7c-9ffc-03f32e3dc9e2\") " Feb 03 12:35:14 crc kubenswrapper[4820]: I0203 12:35:14.847447 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/633dafb1-94a5-4d7c-9ffc-03f32e3dc9e2-combined-ca-bundle\") pod \"633dafb1-94a5-4d7c-9ffc-03f32e3dc9e2\" (UID: \"633dafb1-94a5-4d7c-9ffc-03f32e3dc9e2\") " Feb 03 12:35:14 crc kubenswrapper[4820]: I0203 12:35:14.847497 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/633dafb1-94a5-4d7c-9ffc-03f32e3dc9e2-scripts\") pod \"633dafb1-94a5-4d7c-9ffc-03f32e3dc9e2\" (UID: \"633dafb1-94a5-4d7c-9ffc-03f32e3dc9e2\") " Feb 03 12:35:14 crc kubenswrapper[4820]: I0203 12:35:14.849906 4820 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/633dafb1-94a5-4d7c-9ffc-03f32e3dc9e2-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "633dafb1-94a5-4d7c-9ffc-03f32e3dc9e2" (UID: "633dafb1-94a5-4d7c-9ffc-03f32e3dc9e2"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 12:35:14 crc kubenswrapper[4820]: I0203 12:35:14.854400 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/633dafb1-94a5-4d7c-9ffc-03f32e3dc9e2-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "633dafb1-94a5-4d7c-9ffc-03f32e3dc9e2" (UID: "633dafb1-94a5-4d7c-9ffc-03f32e3dc9e2"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 12:35:14 crc kubenswrapper[4820]: I0203 12:35:14.889157 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/633dafb1-94a5-4d7c-9ffc-03f32e3dc9e2-kube-api-access-67s26" (OuterVolumeSpecName: "kube-api-access-67s26") pod "633dafb1-94a5-4d7c-9ffc-03f32e3dc9e2" (UID: "633dafb1-94a5-4d7c-9ffc-03f32e3dc9e2"). InnerVolumeSpecName "kube-api-access-67s26". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:35:14 crc kubenswrapper[4820]: I0203 12:35:14.889291 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/633dafb1-94a5-4d7c-9ffc-03f32e3dc9e2-scripts" (OuterVolumeSpecName: "scripts") pod "633dafb1-94a5-4d7c-9ffc-03f32e3dc9e2" (UID: "633dafb1-94a5-4d7c-9ffc-03f32e3dc9e2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:35:14 crc kubenswrapper[4820]: I0203 12:35:14.957633 4820 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/633dafb1-94a5-4d7c-9ffc-03f32e3dc9e2-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 03 12:35:14 crc kubenswrapper[4820]: I0203 12:35:14.957693 4820 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/633dafb1-94a5-4d7c-9ffc-03f32e3dc9e2-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 03 12:35:14 crc kubenswrapper[4820]: I0203 12:35:14.957711 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-67s26\" (UniqueName: \"kubernetes.io/projected/633dafb1-94a5-4d7c-9ffc-03f32e3dc9e2-kube-api-access-67s26\") on node \"crc\" DevicePath \"\"" Feb 03 12:35:14 crc kubenswrapper[4820]: I0203 12:35:14.957726 4820 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/633dafb1-94a5-4d7c-9ffc-03f32e3dc9e2-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 12:35:14 crc kubenswrapper[4820]: I0203 12:35:14.970738 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/633dafb1-94a5-4d7c-9ffc-03f32e3dc9e2-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "633dafb1-94a5-4d7c-9ffc-03f32e3dc9e2" (UID: "633dafb1-94a5-4d7c-9ffc-03f32e3dc9e2"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 12:35:15 crc kubenswrapper[4820]: I0203 12:35:15.063062 4820 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/633dafb1-94a5-4d7c-9ffc-03f32e3dc9e2-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Feb 03 12:35:15 crc kubenswrapper[4820]: I0203 12:35:15.159770 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0e9efb73-1fc6-4e04-9b3c-89226c1d717c" path="/var/lib/kubelet/pods/0e9efb73-1fc6-4e04-9b3c-89226c1d717c/volumes"
Feb 03 12:35:15 crc kubenswrapper[4820]: I0203 12:35:15.199329 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/633dafb1-94a5-4d7c-9ffc-03f32e3dc9e2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "633dafb1-94a5-4d7c-9ffc-03f32e3dc9e2" (UID: "633dafb1-94a5-4d7c-9ffc-03f32e3dc9e2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 12:35:15 crc kubenswrapper[4820]: I0203 12:35:15.277569 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/633dafb1-94a5-4d7c-9ffc-03f32e3dc9e2-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 03 12:35:15 crc kubenswrapper[4820]: I0203 12:35:15.283370 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/633dafb1-94a5-4d7c-9ffc-03f32e3dc9e2-config-data" (OuterVolumeSpecName: "config-data") pod "633dafb1-94a5-4d7c-9ffc-03f32e3dc9e2" (UID: "633dafb1-94a5-4d7c-9ffc-03f32e3dc9e2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 12:35:15 crc kubenswrapper[4820]: I0203 12:35:15.379904 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/633dafb1-94a5-4d7c-9ffc-03f32e3dc9e2-config-data\") on node \"crc\" DevicePath \"\""
Feb 03 12:35:15 crc kubenswrapper[4820]: I0203 12:35:15.415356 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"068081be-0cae-4b93-a5a4-cefe01fe6396","Type":"ContainerStarted","Data":"6ff3515e211661ceb53889d00019920e29948152b51a89f695ed4b2cbfbcfbe5"}
Feb 03 12:35:15 crc kubenswrapper[4820]: I0203 12:35:15.418368 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 03 12:35:15 crc kubenswrapper[4820]: I0203 12:35:15.419878 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"cc74c11f-b384-4945-92eb-82e0ea1d63f6","Type":"ContainerStarted","Data":"caaa615093f433b4b43cc4bacf0de51b6a4bfbb75b0e37f79d88436b1f13ccae"}
Feb 03 12:35:15 crc kubenswrapper[4820]: I0203 12:35:15.721464 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=7.135425239 podStartE2EDuration="23.721428221s" podCreationTimestamp="2026-02-03 12:34:52 +0000 UTC" firstStartedPulling="2026-02-03 12:34:56.509705307 +0000 UTC m=+1814.032781181" lastFinishedPulling="2026-02-03 12:35:13.095708299 +0000 UTC m=+1830.618784163" observedRunningTime="2026-02-03 12:35:15.688674624 +0000 UTC m=+1833.211750498" watchObservedRunningTime="2026-02-03 12:35:15.721428221 +0000 UTC m=+1833.244504075"
Feb 03 12:35:15 crc kubenswrapper[4820]: I0203 12:35:15.804376 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Feb 03 12:35:15 crc kubenswrapper[4820]: I0203 12:35:15.883228 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Feb 03 12:35:15 crc kubenswrapper[4820]: I0203 12:35:15.929126 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Feb 03 12:35:15 crc kubenswrapper[4820]: E0203 12:35:15.929684 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="633dafb1-94a5-4d7c-9ffc-03f32e3dc9e2" containerName="sg-core"
Feb 03 12:35:15 crc kubenswrapper[4820]: I0203 12:35:15.929717 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="633dafb1-94a5-4d7c-9ffc-03f32e3dc9e2" containerName="sg-core"
Feb 03 12:35:15 crc kubenswrapper[4820]: E0203 12:35:15.929743 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="633dafb1-94a5-4d7c-9ffc-03f32e3dc9e2" containerName="ceilometer-central-agent"
Feb 03 12:35:15 crc kubenswrapper[4820]: I0203 12:35:15.929750 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="633dafb1-94a5-4d7c-9ffc-03f32e3dc9e2" containerName="ceilometer-central-agent"
Feb 03 12:35:15 crc kubenswrapper[4820]: E0203 12:35:15.929762 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="633dafb1-94a5-4d7c-9ffc-03f32e3dc9e2" containerName="proxy-httpd"
Feb 03 12:35:15 crc kubenswrapper[4820]: I0203 12:35:15.929770 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="633dafb1-94a5-4d7c-9ffc-03f32e3dc9e2" containerName="proxy-httpd"
Feb 03 12:35:15 crc kubenswrapper[4820]: E0203 12:35:15.929781 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="633dafb1-94a5-4d7c-9ffc-03f32e3dc9e2" containerName="ceilometer-notification-agent"
Feb 03 12:35:15 crc kubenswrapper[4820]: I0203 12:35:15.929788 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="633dafb1-94a5-4d7c-9ffc-03f32e3dc9e2" containerName="ceilometer-notification-agent"
Feb 03 12:35:15 crc kubenswrapper[4820]: E0203 12:35:15.929816 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e9efb73-1fc6-4e04-9b3c-89226c1d717c" containerName="dnsmasq-dns"
Feb 03 12:35:15 crc kubenswrapper[4820]: I0203 12:35:15.929824 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e9efb73-1fc6-4e04-9b3c-89226c1d717c" containerName="dnsmasq-dns"
Feb 03 12:35:15 crc kubenswrapper[4820]: E0203 12:35:15.929836 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e9efb73-1fc6-4e04-9b3c-89226c1d717c" containerName="init"
Feb 03 12:35:15 crc kubenswrapper[4820]: I0203 12:35:15.929842 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e9efb73-1fc6-4e04-9b3c-89226c1d717c" containerName="init"
Feb 03 12:35:15 crc kubenswrapper[4820]: I0203 12:35:15.930082 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="633dafb1-94a5-4d7c-9ffc-03f32e3dc9e2" containerName="ceilometer-central-agent"
Feb 03 12:35:15 crc kubenswrapper[4820]: I0203 12:35:15.930102 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e9efb73-1fc6-4e04-9b3c-89226c1d717c" containerName="dnsmasq-dns"
Feb 03 12:35:15 crc kubenswrapper[4820]: I0203 12:35:15.930113 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="633dafb1-94a5-4d7c-9ffc-03f32e3dc9e2" containerName="sg-core"
Feb 03 12:35:15 crc kubenswrapper[4820]: I0203 12:35:15.930122 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="633dafb1-94a5-4d7c-9ffc-03f32e3dc9e2" containerName="proxy-httpd"
Feb 03 12:35:15 crc kubenswrapper[4820]: I0203 12:35:15.930133 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="633dafb1-94a5-4d7c-9ffc-03f32e3dc9e2" containerName="ceilometer-notification-agent"
Feb 03 12:35:15 crc kubenswrapper[4820]: I0203 12:35:15.940971 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 03 12:35:15 crc kubenswrapper[4820]: I0203 12:35:15.947545 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Feb 03 12:35:15 crc kubenswrapper[4820]: I0203 12:35:15.948676 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Feb 03 12:35:15 crc kubenswrapper[4820]: I0203 12:35:15.953641 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 03 12:35:16 crc kubenswrapper[4820]: I0203 12:35:16.081415 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c1df90b5-ffb2-40d7-9c24-7f90aa8cb1a2-scripts\") pod \"ceilometer-0\" (UID: \"c1df90b5-ffb2-40d7-9c24-7f90aa8cb1a2\") " pod="openstack/ceilometer-0"
Feb 03 12:35:16 crc kubenswrapper[4820]: I0203 12:35:16.081551 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sm2sm\" (UniqueName: \"kubernetes.io/projected/c1df90b5-ffb2-40d7-9c24-7f90aa8cb1a2-kube-api-access-sm2sm\") pod \"ceilometer-0\" (UID: \"c1df90b5-ffb2-40d7-9c24-7f90aa8cb1a2\") " pod="openstack/ceilometer-0"
Feb 03 12:35:16 crc kubenswrapper[4820]: I0203 12:35:16.081619 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1df90b5-ffb2-40d7-9c24-7f90aa8cb1a2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c1df90b5-ffb2-40d7-9c24-7f90aa8cb1a2\") " pod="openstack/ceilometer-0"
Feb 03 12:35:16 crc kubenswrapper[4820]: I0203 12:35:16.081661 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c1df90b5-ffb2-40d7-9c24-7f90aa8cb1a2-log-httpd\") pod \"ceilometer-0\" (UID: \"c1df90b5-ffb2-40d7-9c24-7f90aa8cb1a2\") " pod="openstack/ceilometer-0"
Feb 03 12:35:16 crc kubenswrapper[4820]: I0203 12:35:16.081697 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1df90b5-ffb2-40d7-9c24-7f90aa8cb1a2-config-data\") pod \"ceilometer-0\" (UID: \"c1df90b5-ffb2-40d7-9c24-7f90aa8cb1a2\") " pod="openstack/ceilometer-0"
Feb 03 12:35:16 crc kubenswrapper[4820]: I0203 12:35:16.081736 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c1df90b5-ffb2-40d7-9c24-7f90aa8cb1a2-run-httpd\") pod \"ceilometer-0\" (UID: \"c1df90b5-ffb2-40d7-9c24-7f90aa8cb1a2\") " pod="openstack/ceilometer-0"
Feb 03 12:35:16 crc kubenswrapper[4820]: I0203 12:35:16.081822 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c1df90b5-ffb2-40d7-9c24-7f90aa8cb1a2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c1df90b5-ffb2-40d7-9c24-7f90aa8cb1a2\") " pod="openstack/ceilometer-0"
Feb 03 12:35:16 crc kubenswrapper[4820]: I0203 12:35:16.142451 4820 scope.go:117] "RemoveContainer" containerID="3cd22393026f08f47789b9666f306edb2325c4bd0fbfd405ee1636bfdfa661d3"
Feb 03 12:35:16 crc kubenswrapper[4820]: E0203 12:35:16.142908 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qj7xr_openshift-machine-config-operator(2c02def6-29f2-448e-80ec-0c8ee290f053)\"" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053"
Feb 03 12:35:16 crc kubenswrapper[4820]: I0203 12:35:16.186552 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c1df90b5-ffb2-40d7-9c24-7f90aa8cb1a2-run-httpd\") pod \"ceilometer-0\" (UID: \"c1df90b5-ffb2-40d7-9c24-7f90aa8cb1a2\") " pod="openstack/ceilometer-0"
Feb 03 12:35:16 crc kubenswrapper[4820]: I0203 12:35:16.186679 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c1df90b5-ffb2-40d7-9c24-7f90aa8cb1a2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c1df90b5-ffb2-40d7-9c24-7f90aa8cb1a2\") " pod="openstack/ceilometer-0"
Feb 03 12:35:16 crc kubenswrapper[4820]: I0203 12:35:16.186748 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c1df90b5-ffb2-40d7-9c24-7f90aa8cb1a2-scripts\") pod \"ceilometer-0\" (UID: \"c1df90b5-ffb2-40d7-9c24-7f90aa8cb1a2\") " pod="openstack/ceilometer-0"
Feb 03 12:35:16 crc kubenswrapper[4820]: I0203 12:35:16.186857 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sm2sm\" (UniqueName: \"kubernetes.io/projected/c1df90b5-ffb2-40d7-9c24-7f90aa8cb1a2-kube-api-access-sm2sm\") pod \"ceilometer-0\" (UID: \"c1df90b5-ffb2-40d7-9c24-7f90aa8cb1a2\") " pod="openstack/ceilometer-0"
Feb 03 12:35:16 crc kubenswrapper[4820]: I0203 12:35:16.186931 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1df90b5-ffb2-40d7-9c24-7f90aa8cb1a2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c1df90b5-ffb2-40d7-9c24-7f90aa8cb1a2\") " pod="openstack/ceilometer-0"
Feb 03 12:35:16 crc kubenswrapper[4820]: I0203 12:35:16.188285 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c1df90b5-ffb2-40d7-9c24-7f90aa8cb1a2-log-httpd\") pod \"ceilometer-0\" (UID: \"c1df90b5-ffb2-40d7-9c24-7f90aa8cb1a2\") " pod="openstack/ceilometer-0"
Feb 03 12:35:16 crc kubenswrapper[4820]: I0203 12:35:16.188348 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1df90b5-ffb2-40d7-9c24-7f90aa8cb1a2-config-data\") pod \"ceilometer-0\" (UID: \"c1df90b5-ffb2-40d7-9c24-7f90aa8cb1a2\") " pod="openstack/ceilometer-0"
Feb 03 12:35:16 crc kubenswrapper[4820]: I0203 12:35:16.204758 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c1df90b5-ffb2-40d7-9c24-7f90aa8cb1a2-run-httpd\") pod \"ceilometer-0\" (UID: \"c1df90b5-ffb2-40d7-9c24-7f90aa8cb1a2\") " pod="openstack/ceilometer-0"
Feb 03 12:35:16 crc kubenswrapper[4820]: I0203 12:35:16.205002 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c1df90b5-ffb2-40d7-9c24-7f90aa8cb1a2-log-httpd\") pod \"ceilometer-0\" (UID: \"c1df90b5-ffb2-40d7-9c24-7f90aa8cb1a2\") " pod="openstack/ceilometer-0"
Feb 03 12:35:16 crc kubenswrapper[4820]: I0203 12:35:16.205938 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1df90b5-ffb2-40d7-9c24-7f90aa8cb1a2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c1df90b5-ffb2-40d7-9c24-7f90aa8cb1a2\") " pod="openstack/ceilometer-0"
Feb 03 12:35:16 crc kubenswrapper[4820]: I0203 12:35:16.211435 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c1df90b5-ffb2-40d7-9c24-7f90aa8cb1a2-scripts\") pod \"ceilometer-0\" (UID: \"c1df90b5-ffb2-40d7-9c24-7f90aa8cb1a2\") " pod="openstack/ceilometer-0"
Feb 03 12:35:16 crc kubenswrapper[4820]: I0203 12:35:16.218008 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1df90b5-ffb2-40d7-9c24-7f90aa8cb1a2-config-data\") pod \"ceilometer-0\" (UID: \"c1df90b5-ffb2-40d7-9c24-7f90aa8cb1a2\") " pod="openstack/ceilometer-0"
Feb 03 12:35:16 crc kubenswrapper[4820]: I0203 12:35:16.218919 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c1df90b5-ffb2-40d7-9c24-7f90aa8cb1a2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c1df90b5-ffb2-40d7-9c24-7f90aa8cb1a2\") " pod="openstack/ceilometer-0"
Feb 03 12:35:16 crc kubenswrapper[4820]: I0203 12:35:16.288203 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sm2sm\" (UniqueName: \"kubernetes.io/projected/c1df90b5-ffb2-40d7-9c24-7f90aa8cb1a2-kube-api-access-sm2sm\") pod \"ceilometer-0\" (UID: \"c1df90b5-ffb2-40d7-9c24-7f90aa8cb1a2\") " pod="openstack/ceilometer-0"
Feb 03 12:35:16 crc kubenswrapper[4820]: I0203 12:35:16.668515 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 03 12:35:16 crc kubenswrapper[4820]: I0203 12:35:16.862859 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-galera-0" podUID="e8e46f8a-5de0-457f-b8eb-f76e8902e8ab" containerName="galera" probeResult="failure" output="command timed out"
Feb 03 12:35:16 crc kubenswrapper[4820]: I0203 12:35:16.892830 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"cc74c11f-b384-4945-92eb-82e0ea1d63f6","Type":"ContainerStarted","Data":"307e1758ba74f0436f771d69582cfb98330097f909d12a2f2e1d245c0fb91c0a"}
Feb 03 12:35:16 crc kubenswrapper[4820]: I0203 12:35:16.951040 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="e8e46f8a-5de0-457f-b8eb-f76e8902e8ab" containerName="galera" probeResult="failure" output="command timed out"
Feb 03 12:35:17 crc kubenswrapper[4820]: I0203 12:35:17.179474 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="633dafb1-94a5-4d7c-9ffc-03f32e3dc9e2" path="/var/lib/kubelet/pods/633dafb1-94a5-4d7c-9ffc-03f32e3dc9e2/volumes"
Feb 03 12:35:17 crc kubenswrapper[4820]: I0203 12:35:17.568698 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=8.171111755 podStartE2EDuration="24.568672491s" podCreationTimestamp="2026-02-03 12:34:53 +0000 UTC" firstStartedPulling="2026-02-03 12:34:56.70986616 +0000 UTC m=+1814.232942034" lastFinishedPulling="2026-02-03 12:35:13.107426906 +0000 UTC m=+1830.630502770" observedRunningTime="2026-02-03 12:35:16.945403254 +0000 UTC m=+1834.468479128" watchObservedRunningTime="2026-02-03 12:35:17.568672491 +0000 UTC m=+1835.091748375"
Feb 03 12:35:17 crc kubenswrapper[4820]: W0203 12:35:17.578924 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc1df90b5_ffb2_40d7_9c24_7f90aa8cb1a2.slice/crio-df5928d0729de0f38ca9fa1a78d5ad48a5e884c8178fad374367854f56d17429 WatchSource:0}: Error finding container df5928d0729de0f38ca9fa1a78d5ad48a5e884c8178fad374367854f56d17429: Status 404 returned error can't find the container with id df5928d0729de0f38ca9fa1a78d5ad48a5e884c8178fad374367854f56d17429
Feb 03 12:35:17 crc kubenswrapper[4820]: I0203 12:35:17.579785 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 03 12:35:17 crc kubenswrapper[4820]: I0203 12:35:17.595361 4820 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 03 12:35:17 crc kubenswrapper[4820]: I0203 12:35:17.905433 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c1df90b5-ffb2-40d7-9c24-7f90aa8cb1a2","Type":"ContainerStarted","Data":"df5928d0729de0f38ca9fa1a78d5ad48a5e884c8178fad374367854f56d17429"}
Feb 03 12:35:17 crc kubenswrapper[4820]: I0203 12:35:17.907408 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"eeca15ca-fac4-4279-b44b-8929705a4dfb","Type":"ContainerStarted","Data":"6b4fcf19e0d033e1be0fdc8ea435071cfd1953128301d85233af40464f6e362b"}
Feb 03 12:35:17 crc kubenswrapper[4820]: I0203 12:35:17.935658 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=8.730510031 podStartE2EDuration="24.935634563s" podCreationTimestamp="2026-02-03 12:34:53 +0000 UTC" firstStartedPulling="2026-02-03 12:34:56.860548372 +0000 UTC m=+1814.383624236" lastFinishedPulling="2026-02-03 12:35:13.065672904 +0000 UTC m=+1830.588748768" observedRunningTime="2026-02-03 12:35:17.929359774 +0000 UTC m=+1835.452435658" watchObservedRunningTime="2026-02-03 12:35:17.935634563 +0000 UTC m=+1835.458710427"
Feb 03 12:35:18 crc kubenswrapper[4820]: I0203 12:35:18.813486 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Feb 03 12:35:18 crc kubenswrapper[4820]: I0203 12:35:18.813840 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Feb 03 12:35:18 crc kubenswrapper[4820]: I0203 12:35:18.821160 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0"
Feb 03 12:35:18 crc kubenswrapper[4820]: I0203 12:35:18.971021 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c1df90b5-ffb2-40d7-9c24-7f90aa8cb1a2","Type":"ContainerStarted","Data":"7c77932df498b34e70949f89a8806eb41ff4cefb199580c1ab5f36b72e32fdcc"}
Feb 03 12:35:18 crc kubenswrapper[4820]: I0203 12:35:18.971113 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0"
Feb 03 12:35:18 crc kubenswrapper[4820]: E0203 12:35:18.984533 4820 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod17c371f7_f032_4444_8d4b_1183a224c7b0.slice/crio-conmon-0f246c95365efface890ae91ed527bfb457eda5dc7b27fa8cda294f51a37e248.scope\": RecentStats: unable to find data in memory cache]"
Feb 03 12:35:21 crc kubenswrapper[4820]: I0203 12:35:21.054534 4820 scope.go:117] "RemoveContainer" containerID="eee92b9e627a6f88e77bdbc2740db58043e286daf18a6e2594f5ee9b8e73705f"
Feb 03 12:35:21 crc kubenswrapper[4820]: I0203 12:35:21.223403 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0"
Feb 03 12:35:21 crc kubenswrapper[4820]: I0203 12:35:21.240260 4820 scope.go:117] "RemoveContainer" containerID="2f8b0bda5672c4fb02adb6f8a6223f20960d97b21d6cac3eda7f9992132626c9"
Feb 03 12:35:21 crc kubenswrapper[4820]: I0203 12:35:21.337153 4820 scope.go:117] "RemoveContainer" containerID="16969cb76071279f5c431eae7ce9d428b441545817c6ab7b04fc9306bfb48d30"
Feb 03 12:35:21 crc kubenswrapper[4820]: I0203 12:35:21.355001 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"]
Feb 03 12:35:21 crc kubenswrapper[4820]: I0203 12:35:21.355342 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="2ae1a10e-b84f-4533-940c-0688f69fae7c" containerName="kube-state-metrics" containerID="cri-o://81d31e72f91a59cefb8639c6016ebb5627711f6180857a366d25f0dddf77f758" gracePeriod=30
Feb 03 12:35:21 crc kubenswrapper[4820]: I0203 12:35:21.407048 4820 scope.go:117] "RemoveContainer" containerID="9ddb0db6be8f029bdec295dfbfd0f3ab899c22be7083edb646ad9e31ab3eb30d"
Feb 03 12:35:22 crc kubenswrapper[4820]: I0203 12:35:22.695745 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c1df90b5-ffb2-40d7-9c24-7f90aa8cb1a2","Type":"ContainerStarted","Data":"492702291ee0f8bba54f9ffadd3bf1dc83f97d05b5e361c108fd74a5937b38bc"}
Feb 03 12:35:22 crc kubenswrapper[4820]: I0203 12:35:22.701506 4820 scope.go:117] "RemoveContainer" containerID="79f1610b2a1f0aff89134774b059753aa69330068731a2fb3f06b79c922fa21d"
Feb 03 12:35:22 crc kubenswrapper[4820]: I0203 12:35:22.714560 4820 generic.go:334] "Generic (PLEG): container finished" podID="2ae1a10e-b84f-4533-940c-0688f69fae7c" containerID="81d31e72f91a59cefb8639c6016ebb5627711f6180857a366d25f0dddf77f758" exitCode=2
Feb 03 12:35:22 crc kubenswrapper[4820]: I0203 12:35:22.714634 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"2ae1a10e-b84f-4533-940c-0688f69fae7c","Type":"ContainerDied","Data":"81d31e72f91a59cefb8639c6016ebb5627711f6180857a366d25f0dddf77f758"}
Feb 03 12:35:23 crc kubenswrapper[4820]: I0203 12:35:23.177327 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Feb 03 12:35:23 crc kubenswrapper[4820]: I0203 12:35:23.295030 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xdsm6\" (UniqueName: \"kubernetes.io/projected/2ae1a10e-b84f-4533-940c-0688f69fae7c-kube-api-access-xdsm6\") pod \"2ae1a10e-b84f-4533-940c-0688f69fae7c\" (UID: \"2ae1a10e-b84f-4533-940c-0688f69fae7c\") "
Feb 03 12:35:23 crc kubenswrapper[4820]: I0203 12:35:23.306916 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ae1a10e-b84f-4533-940c-0688f69fae7c-kube-api-access-xdsm6" (OuterVolumeSpecName: "kube-api-access-xdsm6") pod "2ae1a10e-b84f-4533-940c-0688f69fae7c" (UID: "2ae1a10e-b84f-4533-940c-0688f69fae7c"). InnerVolumeSpecName "kube-api-access-xdsm6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 12:35:23 crc kubenswrapper[4820]: I0203 12:35:23.413690 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xdsm6\" (UniqueName: \"kubernetes.io/projected/2ae1a10e-b84f-4533-940c-0688f69fae7c-kube-api-access-xdsm6\") on node \"crc\" DevicePath \"\""
Feb 03 12:35:23 crc kubenswrapper[4820]: I0203 12:35:23.621087 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Feb 03 12:35:23 crc kubenswrapper[4820]: I0203 12:35:23.621384 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Feb 03 12:35:23 crc kubenswrapper[4820]: I0203 12:35:23.621395 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Feb 03 12:35:23 crc kubenswrapper[4820]: I0203 12:35:23.628398 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-68b4df5bdd-tdb9h" podUID="308562dd-6078-4c1c-a4e0-c01a60a2d81d" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.166:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.166:8443: connect: connection refused"
Feb 03 12:35:23 crc kubenswrapper[4820]: I0203 12:35:23.634184 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-68b4df5bdd-tdb9h"
Feb 03 12:35:23 crc kubenswrapper[4820]: I0203 12:35:23.636190 4820 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="horizon" containerStatusID={"Type":"cri-o","ID":"590da96327763a1ecf6806acf6e0287da04147012217471081285d16fb887d10"} pod="openstack/horizon-68b4df5bdd-tdb9h" containerMessage="Container horizon failed startup probe, will be restarted"
Feb 03 12:35:23 crc kubenswrapper[4820]: I0203 12:35:23.658150 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-68b4df5bdd-tdb9h" podUID="308562dd-6078-4c1c-a4e0-c01a60a2d81d" containerName="horizon" containerID="cri-o://590da96327763a1ecf6806acf6e0287da04147012217471081285d16fb887d10" gracePeriod=30
Feb 03 12:35:23 crc kubenswrapper[4820]: I0203 12:35:23.649491 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Feb 03 12:35:23 crc kubenswrapper[4820]: I0203 12:35:23.773289 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Feb 03 12:35:23 crc kubenswrapper[4820]: I0203 12:35:23.776733 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"2ae1a10e-b84f-4533-940c-0688f69fae7c","Type":"ContainerDied","Data":"dbf87e01d472aecc62ac1d3e5903bbb025c5c5a06ccc72c52295f86e933a2fb9"}
Feb 03 12:35:23 crc kubenswrapper[4820]: I0203 12:35:23.776826 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Feb 03 12:35:23 crc kubenswrapper[4820]: I0203 12:35:23.776856 4820 scope.go:117] "RemoveContainer" containerID="81d31e72f91a59cefb8639c6016ebb5627711f6180857a366d25f0dddf77f758"
Feb 03 12:35:23 crc kubenswrapper[4820]: I0203 12:35:23.777709 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="cc74c11f-b384-4945-92eb-82e0ea1d63f6" containerName="nova-metadata-log" containerID="cri-o://caaa615093f433b4b43cc4bacf0de51b6a4bfbb75b0e37f79d88436b1f13ccae" gracePeriod=30
Feb 03 12:35:23 crc kubenswrapper[4820]: I0203 12:35:23.777951 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="cc74c11f-b384-4945-92eb-82e0ea1d63f6" containerName="nova-metadata-metadata" containerID="cri-o://307e1758ba74f0436f771d69582cfb98330097f909d12a2f2e1d245c0fb91c0a" gracePeriod=30
Feb 03 12:35:23 crc kubenswrapper[4820]: I0203 12:35:23.811185 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Feb 03 12:35:23 crc kubenswrapper[4820]: I0203 12:35:23.816195 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="35400f32-654e-47c4-8fbc-c802522c7c76" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://eb53cd7b331e90344f147e4521c9a2ad1f7202f91868835c56206b7e57ddfb38" gracePeriod=30
Feb 03 12:35:23 crc kubenswrapper[4820]: I0203 12:35:23.890984 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"]
Feb 03 12:35:23 crc kubenswrapper[4820]: I0203 12:35:23.922859 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"]
Feb 03 12:35:23 crc kubenswrapper[4820]: I0203 12:35:23.939833 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"]
Feb 03 12:35:23 crc kubenswrapper[4820]: E0203 12:35:23.940446 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ae1a10e-b84f-4533-940c-0688f69fae7c" containerName="kube-state-metrics"
Feb 03 12:35:23 crc kubenswrapper[4820]: I0203 12:35:23.940474 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ae1a10e-b84f-4533-940c-0688f69fae7c" containerName="kube-state-metrics"
Feb 03 12:35:23 crc kubenswrapper[4820]: I0203 12:35:23.940780 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ae1a10e-b84f-4533-940c-0688f69fae7c" containerName="kube-state-metrics"
Feb 03 12:35:23 crc kubenswrapper[4820]: I0203 12:35:23.944994 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Feb 03 12:35:23 crc kubenswrapper[4820]: I0203 12:35:23.951342 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config"
Feb 03 12:35:23 crc kubenswrapper[4820]: I0203 12:35:23.951432 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc"
Feb 03 12:35:23 crc kubenswrapper[4820]: I0203 12:35:23.970499 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0"
Feb 03 12:35:24 crc kubenswrapper[4820]: I0203 12:35:24.007593 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"]
Feb 03 12:35:24 crc kubenswrapper[4820]: I0203 12:35:24.050968 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb6e937f-acf9-4ee8-8ee9-c757535b3a53-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"eb6e937f-acf9-4ee8-8ee9-c757535b3a53\") " pod="openstack/kube-state-metrics-0"
Feb 03 12:35:24 crc kubenswrapper[4820]: I0203 12:35:24.051160 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9mjd\" (UniqueName: \"kubernetes.io/projected/eb6e937f-acf9-4ee8-8ee9-c757535b3a53-kube-api-access-k9mjd\") pod \"kube-state-metrics-0\" (UID: \"eb6e937f-acf9-4ee8-8ee9-c757535b3a53\") " pod="openstack/kube-state-metrics-0"
Feb 03 12:35:24 crc kubenswrapper[4820]: I0203 12:35:24.051551 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/eb6e937f-acf9-4ee8-8ee9-c757535b3a53-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"eb6e937f-acf9-4ee8-8ee9-c757535b3a53\") " pod="openstack/kube-state-metrics-0"
Feb 03 12:35:24 crc kubenswrapper[4820]: I0203 12:35:24.051701 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb6e937f-acf9-4ee8-8ee9-c757535b3a53-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"eb6e937f-acf9-4ee8-8ee9-c757535b3a53\") " pod="openstack/kube-state-metrics-0"
Feb 03 12:35:24 crc kubenswrapper[4820]: I0203 12:35:24.068032 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0"
Feb 03 12:35:24 crc kubenswrapper[4820]: I0203 12:35:24.650309 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/eb6e937f-acf9-4ee8-8ee9-c757535b3a53-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"eb6e937f-acf9-4ee8-8ee9-c757535b3a53\") " pod="openstack/kube-state-metrics-0"
Feb 03 12:35:24 crc kubenswrapper[4820]: I0203 12:35:24.650427 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb6e937f-acf9-4ee8-8ee9-c757535b3a53-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"eb6e937f-acf9-4ee8-8ee9-c757535b3a53\") " pod="openstack/kube-state-metrics-0"
Feb 03 12:35:24 crc kubenswrapper[4820]: I0203 12:35:24.650580 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb6e937f-acf9-4ee8-8ee9-c757535b3a53-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"eb6e937f-acf9-4ee8-8ee9-c757535b3a53\") " pod="openstack/kube-state-metrics-0"
Feb 03 12:35:24 crc kubenswrapper[4820]: I0203 12:35:24.650641 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k9mjd\" (UniqueName: \"kubernetes.io/projected/eb6e937f-acf9-4ee8-8ee9-c757535b3a53-kube-api-access-k9mjd\") pod \"kube-state-metrics-0\" (UID: \"eb6e937f-acf9-4ee8-8ee9-c757535b3a53\") " pod="openstack/kube-state-metrics-0"
Feb 03 12:35:24 crc kubenswrapper[4820]: I0203 12:35:24.682858 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="068081be-0cae-4b93-a5a4-cefe01fe6396" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.212:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 03 12:35:24 crc kubenswrapper[4820]: I0203 12:35:24.683454 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb6e937f-acf9-4ee8-8ee9-c757535b3a53-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"eb6e937f-acf9-4ee8-8ee9-c757535b3a53\") " pod="openstack/kube-state-metrics-0"
Feb 03 12:35:24 crc kubenswrapper[4820]: I0203 12:35:24.700214 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb6e937f-acf9-4ee8-8ee9-c757535b3a53-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"eb6e937f-acf9-4ee8-8ee9-c757535b3a53\") " pod="openstack/kube-state-metrics-0"
Feb 03 12:35:24 crc kubenswrapper[4820]: I0203 12:35:24.715690 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k9mjd\" (UniqueName: \"kubernetes.io/projected/eb6e937f-acf9-4ee8-8ee9-c757535b3a53-kube-api-access-k9mjd\") pod \"kube-state-metrics-0\" (UID: \"eb6e937f-acf9-4ee8-8ee9-c757535b3a53\") " pod="openstack/kube-state-metrics-0"
Feb 03 12:35:24 crc kubenswrapper[4820]: I0203 12:35:24.730420 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="068081be-0cae-4b93-a5a4-cefe01fe6396" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.212:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 03 12:35:24 crc kubenswrapper[4820]: I0203 12:35:24.745975 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/eb6e937f-acf9-4ee8-8ee9-c757535b3a53-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"eb6e937f-acf9-4ee8-8ee9-c757535b3a53\") " pod="openstack/kube-state-metrics-0"
Feb 03 12:35:24 crc kubenswrapper[4820]: I0203 12:35:24.885951 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Feb 03 12:35:24 crc kubenswrapper[4820]: I0203 12:35:24.961185 4820 generic.go:334] "Generic (PLEG): container finished" podID="cc74c11f-b384-4945-92eb-82e0ea1d63f6" containerID="caaa615093f433b4b43cc4bacf0de51b6a4bfbb75b0e37f79d88436b1f13ccae" exitCode=143
Feb 03 12:35:24 crc kubenswrapper[4820]: I0203 12:35:24.961973 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"cc74c11f-b384-4945-92eb-82e0ea1d63f6","Type":"ContainerDied","Data":"caaa615093f433b4b43cc4bacf0de51b6a4bfbb75b0e37f79d88436b1f13ccae"}
Feb 03 12:35:25 crc kubenswrapper[4820]: I0203 12:35:25.377223 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2ae1a10e-b84f-4533-940c-0688f69fae7c" path="/var/lib/kubelet/pods/2ae1a10e-b84f-4533-940c-0688f69fae7c/volumes"
Feb 03 12:35:25 crc kubenswrapper[4820]: I0203 12:35:25.822132 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0"
Feb 03 12:35:26 crc kubenswrapper[4820]: I0203 12:35:26.402001 4820 generic.go:334] "Generic (PLEG): container finished" podID="96e51574-4c0f-449e-99c9-f71651ddf08e" containerID="cab93bf98dd5daffd2433ee582a8708b834dbb042d829efa03eac43dcfc4f65e" exitCode=0
Feb 03 12:35:26 crc kubenswrapper[4820]: I0203 12:35:26.402365 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-qmdlk" event={"ID":"96e51574-4c0f-449e-99c9-f71651ddf08e","Type":"ContainerDied","Data":"cab93bf98dd5daffd2433ee582a8708b834dbb042d829efa03eac43dcfc4f65e"}
Feb 03 12:35:26 crc kubenswrapper[4820]: I0203 12:35:26.443596 4820 generic.go:334] "Generic (PLEG): container finished" podID="cc74c11f-b384-4945-92eb-82e0ea1d63f6" containerID="307e1758ba74f0436f771d69582cfb98330097f909d12a2f2e1d245c0fb91c0a" exitCode=0
Feb 03 12:35:26 crc kubenswrapper[4820]: I0203 12:35:26.443664 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"cc74c11f-b384-4945-92eb-82e0ea1d63f6","Type":"ContainerDied","Data":"307e1758ba74f0436f771d69582cfb98330097f909d12a2f2e1d245c0fb91c0a"}
Feb 03 12:35:26 crc kubenswrapper[4820]: I0203 12:35:26.480911 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c1df90b5-ffb2-40d7-9c24-7f90aa8cb1a2","Type":"ContainerStarted","Data":"ba6fe612b482c19e2704048ebd92e86fa788ef5014c0754e4238178ee0ba16a2"}
Feb 03 12:35:27 crc kubenswrapper[4820]: I0203 12:35:27.060573 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 03 12:35:27 crc kubenswrapper[4820]: I0203 12:35:27.080443 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"]
Feb 03 12:35:27 crc kubenswrapper[4820]: I0203 12:35:27.159695 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc74c11f-b384-4945-92eb-82e0ea1d63f6-config-data\") pod \"cc74c11f-b384-4945-92eb-82e0ea1d63f6\" (UID: \"cc74c11f-b384-4945-92eb-82e0ea1d63f6\") "
Feb 03 12:35:27 crc kubenswrapper[4820]: I0203 12:35:27.160164 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cc74c11f-b384-4945-92eb-82e0ea1d63f6-logs\") pod \"cc74c11f-b384-4945-92eb-82e0ea1d63f6\" (UID: \"cc74c11f-b384-4945-92eb-82e0ea1d63f6\") "
Feb 03 12:35:27 crc kubenswrapper[4820]: I0203 12:35:27.160330 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc74c11f-b384-4945-92eb-82e0ea1d63f6-combined-ca-bundle\") pod \"cc74c11f-b384-4945-92eb-82e0ea1d63f6\" (UID: \"cc74c11f-b384-4945-92eb-82e0ea1d63f6\") "
Feb 03 12:35:27 crc kubenswrapper[4820]: I0203 12:35:27.160359 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hr4bp\" (UniqueName: \"kubernetes.io/projected/cc74c11f-b384-4945-92eb-82e0ea1d63f6-kube-api-access-hr4bp\") pod \"cc74c11f-b384-4945-92eb-82e0ea1d63f6\" (UID: \"cc74c11f-b384-4945-92eb-82e0ea1d63f6\") "
Feb 03 12:35:27 crc kubenswrapper[4820]: I0203 12:35:27.160817 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc74c11f-b384-4945-92eb-82e0ea1d63f6-logs" (OuterVolumeSpecName: "logs") pod "cc74c11f-b384-4945-92eb-82e0ea1d63f6" (UID: "cc74c11f-b384-4945-92eb-82e0ea1d63f6"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 03 12:35:27 crc kubenswrapper[4820]: I0203 12:35:27.161332 4820 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cc74c11f-b384-4945-92eb-82e0ea1d63f6-logs\") on node \"crc\" DevicePath \"\""
Feb 03 12:35:27 crc kubenswrapper[4820]: I0203 12:35:27.180160 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc74c11f-b384-4945-92eb-82e0ea1d63f6-kube-api-access-hr4bp" (OuterVolumeSpecName: "kube-api-access-hr4bp") pod "cc74c11f-b384-4945-92eb-82e0ea1d63f6" (UID: "cc74c11f-b384-4945-92eb-82e0ea1d63f6"). InnerVolumeSpecName "kube-api-access-hr4bp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 12:35:27 crc kubenswrapper[4820]: I0203 12:35:27.204285 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc74c11f-b384-4945-92eb-82e0ea1d63f6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cc74c11f-b384-4945-92eb-82e0ea1d63f6" (UID: "cc74c11f-b384-4945-92eb-82e0ea1d63f6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 12:35:27 crc kubenswrapper[4820]: I0203 12:35:27.265364 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc74c11f-b384-4945-92eb-82e0ea1d63f6-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 03 12:35:27 crc kubenswrapper[4820]: I0203 12:35:27.265419 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hr4bp\" (UniqueName: \"kubernetes.io/projected/cc74c11f-b384-4945-92eb-82e0ea1d63f6-kube-api-access-hr4bp\") on node \"crc\" DevicePath \"\""
Feb 03 12:35:27 crc kubenswrapper[4820]: I0203 12:35:27.316404 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc74c11f-b384-4945-92eb-82e0ea1d63f6-config-data" (OuterVolumeSpecName: "config-data") pod "cc74c11f-b384-4945-92eb-82e0ea1d63f6" (UID: "cc74c11f-b384-4945-92eb-82e0ea1d63f6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 12:35:27 crc kubenswrapper[4820]: I0203 12:35:27.368454 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc74c11f-b384-4945-92eb-82e0ea1d63f6-config-data\") on node \"crc\" DevicePath \"\""
Feb 03 12:35:27 crc kubenswrapper[4820]: I0203 12:35:27.479029 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Feb 03 12:35:27 crc kubenswrapper[4820]: I0203 12:35:27.587830 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35400f32-654e-47c4-8fbc-c802522c7c76-config-data\") pod \"35400f32-654e-47c4-8fbc-c802522c7c76\" (UID: \"35400f32-654e-47c4-8fbc-c802522c7c76\") "
Feb 03 12:35:27 crc kubenswrapper[4820]: I0203 12:35:27.587963 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35400f32-654e-47c4-8fbc-c802522c7c76-combined-ca-bundle\") pod \"35400f32-654e-47c4-8fbc-c802522c7c76\" (UID: \"35400f32-654e-47c4-8fbc-c802522c7c76\") "
Feb 03 12:35:27 crc kubenswrapper[4820]: I0203 12:35:27.588063 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jw7wq\" (UniqueName: \"kubernetes.io/projected/35400f32-654e-47c4-8fbc-c802522c7c76-kube-api-access-jw7wq\") pod \"35400f32-654e-47c4-8fbc-c802522c7c76\" (UID: \"35400f32-654e-47c4-8fbc-c802522c7c76\") "
Feb 03 12:35:27 crc kubenswrapper[4820]: I0203 12:35:27.595105 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"cc74c11f-b384-4945-92eb-82e0ea1d63f6","Type":"ContainerDied","Data":"9ac0eb72521ce1b282d2fc56682b5530ac78c116f9de1054778f3bd88aa9255a"}
Feb 03 12:35:27 crc kubenswrapper[4820]: I0203 12:35:27.595244 4820 scope.go:117] "RemoveContainer" containerID="307e1758ba74f0436f771d69582cfb98330097f909d12a2f2e1d245c0fb91c0a"
Feb 03 12:35:27 crc kubenswrapper[4820]: I0203 12:35:27.595576 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 03 12:35:27 crc kubenswrapper[4820]: I0203 12:35:27.630880 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/35400f32-654e-47c4-8fbc-c802522c7c76-kube-api-access-jw7wq" (OuterVolumeSpecName: "kube-api-access-jw7wq") pod "35400f32-654e-47c4-8fbc-c802522c7c76" (UID: "35400f32-654e-47c4-8fbc-c802522c7c76"). InnerVolumeSpecName "kube-api-access-jw7wq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 12:35:27 crc kubenswrapper[4820]: I0203 12:35:27.668405 4820 generic.go:334] "Generic (PLEG): container finished" podID="35400f32-654e-47c4-8fbc-c802522c7c76" containerID="eb53cd7b331e90344f147e4521c9a2ad1f7202f91868835c56206b7e57ddfb38" exitCode=0
Feb 03 12:35:27 crc kubenswrapper[4820]: I0203 12:35:27.668510 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"35400f32-654e-47c4-8fbc-c802522c7c76","Type":"ContainerDied","Data":"eb53cd7b331e90344f147e4521c9a2ad1f7202f91868835c56206b7e57ddfb38"}
Feb 03 12:35:27 crc kubenswrapper[4820]: I0203 12:35:27.668545 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"35400f32-654e-47c4-8fbc-c802522c7c76","Type":"ContainerDied","Data":"5ff263fb0f93546748c0bd21bb0c9bae0575d3e3ec52a70070ae228bc1197825"}
Feb 03 12:35:27 crc kubenswrapper[4820]: I0203 12:35:27.668613 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Feb 03 12:35:27 crc kubenswrapper[4820]: I0203 12:35:27.691101 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jw7wq\" (UniqueName: \"kubernetes.io/projected/35400f32-654e-47c4-8fbc-c802522c7c76-kube-api-access-jw7wq\") on node \"crc\" DevicePath \"\""
Feb 03 12:35:27 crc kubenswrapper[4820]: I0203 12:35:27.691947 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35400f32-654e-47c4-8fbc-c802522c7c76-config-data" (OuterVolumeSpecName: "config-data") pod "35400f32-654e-47c4-8fbc-c802522c7c76" (UID: "35400f32-654e-47c4-8fbc-c802522c7c76"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 12:35:27 crc kubenswrapper[4820]: I0203 12:35:27.693507 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"eb6e937f-acf9-4ee8-8ee9-c757535b3a53","Type":"ContainerStarted","Data":"18ff08024e5583ee4952648554f06a72ccc61fa4f716e6e68292eb94cfc75e3b"}
Feb 03 12:35:27 crc kubenswrapper[4820]: I0203 12:35:27.710084 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35400f32-654e-47c4-8fbc-c802522c7c76-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "35400f32-654e-47c4-8fbc-c802522c7c76" (UID: "35400f32-654e-47c4-8fbc-c802522c7c76"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 12:35:27 crc kubenswrapper[4820]: I0203 12:35:27.747298 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Feb 03 12:35:27 crc kubenswrapper[4820]: I0203 12:35:27.768511 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"]
Feb 03 12:35:27 crc kubenswrapper[4820]: I0203 12:35:27.795853 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35400f32-654e-47c4-8fbc-c802522c7c76-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 03 12:35:27 crc kubenswrapper[4820]: I0203 12:35:27.795921 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35400f32-654e-47c4-8fbc-c802522c7c76-config-data\") on node \"crc\" DevicePath \"\""
Feb 03 12:35:27 crc kubenswrapper[4820]: I0203 12:35:27.809133 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Feb 03 12:35:27 crc kubenswrapper[4820]: E0203 12:35:27.809943 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc74c11f-b384-4945-92eb-82e0ea1d63f6" containerName="nova-metadata-log"
Feb 03 12:35:27 crc kubenswrapper[4820]: I0203 12:35:27.809965 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc74c11f-b384-4945-92eb-82e0ea1d63f6" containerName="nova-metadata-log"
Feb 03 12:35:27 crc kubenswrapper[4820]: E0203 12:35:27.810000 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35400f32-654e-47c4-8fbc-c802522c7c76" containerName="nova-cell1-novncproxy-novncproxy"
Feb 03 12:35:27 crc kubenswrapper[4820]: I0203 12:35:27.810024 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="35400f32-654e-47c4-8fbc-c802522c7c76" containerName="nova-cell1-novncproxy-novncproxy"
Feb 03 12:35:27 crc kubenswrapper[4820]: E0203 12:35:27.810077 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc74c11f-b384-4945-92eb-82e0ea1d63f6" containerName="nova-metadata-metadata"
Feb 03 12:35:27 crc kubenswrapper[4820]: I0203 12:35:27.810089 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc74c11f-b384-4945-92eb-82e0ea1d63f6" containerName="nova-metadata-metadata"
Feb 03 12:35:27 crc kubenswrapper[4820]: I0203 12:35:27.810166 4820 scope.go:117] "RemoveContainer" containerID="caaa615093f433b4b43cc4bacf0de51b6a4bfbb75b0e37f79d88436b1f13ccae"
Feb 03 12:35:27 crc kubenswrapper[4820]: I0203 12:35:27.810410 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc74c11f-b384-4945-92eb-82e0ea1d63f6" containerName="nova-metadata-log"
Feb 03 12:35:27 crc kubenswrapper[4820]: I0203 12:35:27.810438 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc74c11f-b384-4945-92eb-82e0ea1d63f6" containerName="nova-metadata-metadata"
Feb 03 12:35:27 crc kubenswrapper[4820]: I0203 12:35:27.810453 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="35400f32-654e-47c4-8fbc-c802522c7c76" containerName="nova-cell1-novncproxy-novncproxy"
Feb 03 12:35:27 crc kubenswrapper[4820]: I0203 12:35:27.812086 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 03 12:35:27 crc kubenswrapper[4820]: I0203 12:35:27.823933 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Feb 03 12:35:27 crc kubenswrapper[4820]: I0203 12:35:27.828434 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc"
Feb 03 12:35:27 crc kubenswrapper[4820]: I0203 12:35:27.858513 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Feb 03 12:35:28 crc kubenswrapper[4820]: I0203 12:35:28.000588 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/be49e248-fd39-4289-8207-517fa3ec0d90-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"be49e248-fd39-4289-8207-517fa3ec0d90\") " pod="openstack/nova-metadata-0"
Feb 03 12:35:28 crc kubenswrapper[4820]: I0203 12:35:28.000764 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/be49e248-fd39-4289-8207-517fa3ec0d90-logs\") pod \"nova-metadata-0\" (UID: \"be49e248-fd39-4289-8207-517fa3ec0d90\") " pod="openstack/nova-metadata-0"
Feb 03 12:35:28 crc kubenswrapper[4820]: I0203 12:35:28.000807 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be49e248-fd39-4289-8207-517fa3ec0d90-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"be49e248-fd39-4289-8207-517fa3ec0d90\") " pod="openstack/nova-metadata-0"
Feb 03 12:35:28 crc kubenswrapper[4820]: I0203 12:35:28.000840 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be49e248-fd39-4289-8207-517fa3ec0d90-config-data\") pod \"nova-metadata-0\" (UID: \"be49e248-fd39-4289-8207-517fa3ec0d90\") " pod="openstack/nova-metadata-0"
Feb 03 12:35:28 crc kubenswrapper[4820]: I0203 12:35:28.000862 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-44g2f\" (UniqueName: \"kubernetes.io/projected/be49e248-fd39-4289-8207-517fa3ec0d90-kube-api-access-44g2f\") pod \"nova-metadata-0\" (UID: \"be49e248-fd39-4289-8207-517fa3ec0d90\") " pod="openstack/nova-metadata-0"
Feb 03 12:35:28 crc kubenswrapper[4820]: I0203 12:35:28.026110 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Feb 03 12:35:28 crc kubenswrapper[4820]: I0203 12:35:28.036738 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Feb 03 12:35:28 crc kubenswrapper[4820]: I0203 12:35:28.063604 4820 scope.go:117] "RemoveContainer" containerID="eb53cd7b331e90344f147e4521c9a2ad1f7202f91868835c56206b7e57ddfb38"
Feb 03 12:35:28 crc kubenswrapper[4820]: I0203 12:35:28.068043 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Feb 03 12:35:28 crc kubenswrapper[4820]: I0203 12:35:28.078537 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Feb 03 12:35:28 crc kubenswrapper[4820]: I0203 12:35:28.092814 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc"
Feb 03 12:35:28 crc kubenswrapper[4820]: I0203 12:35:28.093141 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt"
Feb 03 12:35:28 crc kubenswrapper[4820]: I0203 12:35:28.093309 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data"
Feb 03 12:35:28 crc kubenswrapper[4820]: I0203 12:35:28.098313 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Feb 03 12:35:28 crc kubenswrapper[4820]: I0203 12:35:28.112665 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33bbf307-c8f9-402f-9b83-50d9d9b034c2-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"33bbf307-c8f9-402f-9b83-50d9d9b034c2\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 03 12:35:28 crc kubenswrapper[4820]: I0203 12:35:28.112729 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/33bbf307-c8f9-402f-9b83-50d9d9b034c2-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"33bbf307-c8f9-402f-9b83-50d9d9b034c2\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 03 12:35:28 crc kubenswrapper[4820]: I0203 12:35:28.112837 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4jj2s\" (UniqueName: \"kubernetes.io/projected/33bbf307-c8f9-402f-9b83-50d9d9b034c2-kube-api-access-4jj2s\") pod \"nova-cell1-novncproxy-0\" (UID: \"33bbf307-c8f9-402f-9b83-50d9d9b034c2\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 03 12:35:28 crc kubenswrapper[4820]: I0203 12:35:28.112982 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33bbf307-c8f9-402f-9b83-50d9d9b034c2-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"33bbf307-c8f9-402f-9b83-50d9d9b034c2\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 03 12:35:28 crc kubenswrapper[4820]: I0203 12:35:28.113027 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/be49e248-fd39-4289-8207-517fa3ec0d90-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"be49e248-fd39-4289-8207-517fa3ec0d90\") " pod="openstack/nova-metadata-0"
Feb 03 12:35:28 crc kubenswrapper[4820]: I0203 12:35:28.113176 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/be49e248-fd39-4289-8207-517fa3ec0d90-logs\") pod \"nova-metadata-0\" (UID: \"be49e248-fd39-4289-8207-517fa3ec0d90\") " pod="openstack/nova-metadata-0"
Feb 03 12:35:28 crc kubenswrapper[4820]: I0203 12:35:28.113224 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be49e248-fd39-4289-8207-517fa3ec0d90-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"be49e248-fd39-4289-8207-517fa3ec0d90\") " pod="openstack/nova-metadata-0"
Feb 03 12:35:28 crc kubenswrapper[4820]: I0203 12:35:28.113281 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be49e248-fd39-4289-8207-517fa3ec0d90-config-data\") pod \"nova-metadata-0\" (UID: \"be49e248-fd39-4289-8207-517fa3ec0d90\") " pod="openstack/nova-metadata-0"
Feb 03 12:35:28 crc kubenswrapper[4820]: I0203 12:35:28.113305 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-44g2f\" (UniqueName: \"kubernetes.io/projected/be49e248-fd39-4289-8207-517fa3ec0d90-kube-api-access-44g2f\") pod \"nova-metadata-0\" (UID: \"be49e248-fd39-4289-8207-517fa3ec0d90\") " pod="openstack/nova-metadata-0"
Feb 03 12:35:28 crc kubenswrapper[4820]: I0203 12:35:28.113691 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/33bbf307-c8f9-402f-9b83-50d9d9b034c2-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"33bbf307-c8f9-402f-9b83-50d9d9b034c2\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 03 12:35:28 crc kubenswrapper[4820]: I0203 12:35:28.114908 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/be49e248-fd39-4289-8207-517fa3ec0d90-logs\") pod \"nova-metadata-0\" (UID: \"be49e248-fd39-4289-8207-517fa3ec0d90\") " pod="openstack/nova-metadata-0"
Feb 03 12:35:28 crc kubenswrapper[4820]: I0203 12:35:28.139306 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-44g2f\" (UniqueName: \"kubernetes.io/projected/be49e248-fd39-4289-8207-517fa3ec0d90-kube-api-access-44g2f\") pod \"nova-metadata-0\" (UID: \"be49e248-fd39-4289-8207-517fa3ec0d90\") " pod="openstack/nova-metadata-0"
Feb 03 12:35:28 crc kubenswrapper[4820]: I0203 12:35:28.138493 4820 scope.go:117] "RemoveContainer" containerID="eb53cd7b331e90344f147e4521c9a2ad1f7202f91868835c56206b7e57ddfb38"
Feb 03 12:35:28 crc kubenswrapper[4820]: E0203 12:35:28.143861 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eb53cd7b331e90344f147e4521c9a2ad1f7202f91868835c56206b7e57ddfb38\": container with ID starting with eb53cd7b331e90344f147e4521c9a2ad1f7202f91868835c56206b7e57ddfb38 not found: ID does not exist" containerID="eb53cd7b331e90344f147e4521c9a2ad1f7202f91868835c56206b7e57ddfb38"
Feb 03 12:35:28 crc kubenswrapper[4820]: I0203 12:35:28.144012 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eb53cd7b331e90344f147e4521c9a2ad1f7202f91868835c56206b7e57ddfb38"} err="failed to get container status \"eb53cd7b331e90344f147e4521c9a2ad1f7202f91868835c56206b7e57ddfb38\": rpc error: code = NotFound desc = could not find container \"eb53cd7b331e90344f147e4521c9a2ad1f7202f91868835c56206b7e57ddfb38\": container with ID starting with eb53cd7b331e90344f147e4521c9a2ad1f7202f91868835c56206b7e57ddfb38 not found: ID does not exist"
Feb 03 12:35:28 crc kubenswrapper[4820]: I0203 12:35:28.145858 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/be49e248-fd39-4289-8207-517fa3ec0d90-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"be49e248-fd39-4289-8207-517fa3ec0d90\") " pod="openstack/nova-metadata-0"
Feb 03 12:35:28 crc kubenswrapper[4820]: I0203 12:35:28.152597 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be49e248-fd39-4289-8207-517fa3ec0d90-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"be49e248-fd39-4289-8207-517fa3ec0d90\") " pod="openstack/nova-metadata-0"
Feb 03 12:35:28 crc kubenswrapper[4820]: I0203 12:35:28.152653 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be49e248-fd39-4289-8207-517fa3ec0d90-config-data\") pod \"nova-metadata-0\" (UID: \"be49e248-fd39-4289-8207-517fa3ec0d90\") " pod="openstack/nova-metadata-0"
Feb 03 12:35:28 crc kubenswrapper[4820]: I0203 12:35:28.222909 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33bbf307-c8f9-402f-9b83-50d9d9b034c2-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"33bbf307-c8f9-402f-9b83-50d9d9b034c2\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 03 12:35:28 crc kubenswrapper[4820]: I0203 12:35:28.223437 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/33bbf307-c8f9-402f-9b83-50d9d9b034c2-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"33bbf307-c8f9-402f-9b83-50d9d9b034c2\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 03 12:35:28 crc kubenswrapper[4820]: I0203 12:35:28.223477 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4jj2s\" (UniqueName: \"kubernetes.io/projected/33bbf307-c8f9-402f-9b83-50d9d9b034c2-kube-api-access-4jj2s\") pod \"nova-cell1-novncproxy-0\" (UID: \"33bbf307-c8f9-402f-9b83-50d9d9b034c2\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 03 12:35:28 crc kubenswrapper[4820]: I0203 12:35:28.223534 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33bbf307-c8f9-402f-9b83-50d9d9b034c2-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"33bbf307-c8f9-402f-9b83-50d9d9b034c2\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 03 12:35:28 crc kubenswrapper[4820]: I0203 12:35:28.223684 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/33bbf307-c8f9-402f-9b83-50d9d9b034c2-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"33bbf307-c8f9-402f-9b83-50d9d9b034c2\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 03 12:35:28 crc kubenswrapper[4820]: I0203 12:35:28.243522 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/33bbf307-c8f9-402f-9b83-50d9d9b034c2-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"33bbf307-c8f9-402f-9b83-50d9d9b034c2\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 03 12:35:28 crc kubenswrapper[4820]: I0203 12:35:28.245881 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 03 12:35:28 crc kubenswrapper[4820]: I0203 12:35:28.301032 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33bbf307-c8f9-402f-9b83-50d9d9b034c2-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"33bbf307-c8f9-402f-9b83-50d9d9b034c2\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 03 12:35:28 crc kubenswrapper[4820]: I0203 12:35:28.308845 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/33bbf307-c8f9-402f-9b83-50d9d9b034c2-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"33bbf307-c8f9-402f-9b83-50d9d9b034c2\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 03 12:35:28 crc kubenswrapper[4820]: I0203 12:35:28.309028 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4jj2s\" (UniqueName: \"kubernetes.io/projected/33bbf307-c8f9-402f-9b83-50d9d9b034c2-kube-api-access-4jj2s\") pod \"nova-cell1-novncproxy-0\" (UID: \"33bbf307-c8f9-402f-9b83-50d9d9b034c2\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 03 12:35:28 crc kubenswrapper[4820]: I0203 12:35:28.309089 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33bbf307-c8f9-402f-9b83-50d9d9b034c2-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"33bbf307-c8f9-402f-9b83-50d9d9b034c2\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 03 12:35:28 crc kubenswrapper[4820]: I0203 12:35:28.431552 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Feb 03 12:35:28 crc kubenswrapper[4820]: I0203 12:35:28.448069 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-qmdlk"
Feb 03 12:35:28 crc kubenswrapper[4820]: I0203 12:35:28.642499 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96e51574-4c0f-449e-99c9-f71651ddf08e-combined-ca-bundle\") pod \"96e51574-4c0f-449e-99c9-f71651ddf08e\" (UID: \"96e51574-4c0f-449e-99c9-f71651ddf08e\") "
Feb 03 12:35:28 crc kubenswrapper[4820]: I0203 12:35:28.642927 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dw5xd\" (UniqueName: \"kubernetes.io/projected/96e51574-4c0f-449e-99c9-f71651ddf08e-kube-api-access-dw5xd\") pod \"96e51574-4c0f-449e-99c9-f71651ddf08e\" (UID: \"96e51574-4c0f-449e-99c9-f71651ddf08e\") "
Feb 03 12:35:28 crc kubenswrapper[4820]: I0203 12:35:28.643012 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/96e51574-4c0f-449e-99c9-f71651ddf08e-scripts\") pod \"96e51574-4c0f-449e-99c9-f71651ddf08e\" (UID: \"96e51574-4c0f-449e-99c9-f71651ddf08e\") "
Feb 03 12:35:28 crc kubenswrapper[4820]: I0203 12:35:28.643182 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96e51574-4c0f-449e-99c9-f71651ddf08e-config-data\") pod \"96e51574-4c0f-449e-99c9-f71651ddf08e\" (UID: \"96e51574-4c0f-449e-99c9-f71651ddf08e\") "
Feb 03 12:35:28 crc kubenswrapper[4820]: I0203 12:35:28.651985 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96e51574-4c0f-449e-99c9-f71651ddf08e-scripts" (OuterVolumeSpecName: "scripts") pod "96e51574-4c0f-449e-99c9-f71651ddf08e" (UID: "96e51574-4c0f-449e-99c9-f71651ddf08e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 12:35:28 crc kubenswrapper[4820]: I0203 12:35:28.670424 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96e51574-4c0f-449e-99c9-f71651ddf08e-kube-api-access-dw5xd" (OuterVolumeSpecName: "kube-api-access-dw5xd") pod "96e51574-4c0f-449e-99c9-f71651ddf08e" (UID: "96e51574-4c0f-449e-99c9-f71651ddf08e"). InnerVolumeSpecName "kube-api-access-dw5xd".
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:35:28 crc kubenswrapper[4820]: I0203 12:35:28.697742 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 03 12:35:28 crc kubenswrapper[4820]: I0203 12:35:28.698003 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="068081be-0cae-4b93-a5a4-cefe01fe6396" containerName="nova-api-log" containerID="cri-o://e524e9a6b7132d40dc2d2e7e05e31e1943d38d1dc9eaf46dfd210b4b6ec05c7b" gracePeriod=30 Feb 03 12:35:28 crc kubenswrapper[4820]: I0203 12:35:28.698085 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="068081be-0cae-4b93-a5a4-cefe01fe6396" containerName="nova-api-api" containerID="cri-o://6ff3515e211661ceb53889d00019920e29948152b51a89f695ed4b2cbfbcfbe5" gracePeriod=30 Feb 03 12:35:28 crc kubenswrapper[4820]: I0203 12:35:28.750219 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 03 12:35:28 crc kubenswrapper[4820]: I0203 12:35:28.751858 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dw5xd\" (UniqueName: \"kubernetes.io/projected/96e51574-4c0f-449e-99c9-f71651ddf08e-kube-api-access-dw5xd\") on node \"crc\" DevicePath \"\"" Feb 03 12:35:28 crc kubenswrapper[4820]: I0203 12:35:28.751910 4820 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/96e51574-4c0f-449e-99c9-f71651ddf08e-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 12:35:28 crc kubenswrapper[4820]: I0203 12:35:28.758622 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96e51574-4c0f-449e-99c9-f71651ddf08e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "96e51574-4c0f-449e-99c9-f71651ddf08e" (UID: "96e51574-4c0f-449e-99c9-f71651ddf08e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:35:28 crc kubenswrapper[4820]: I0203 12:35:28.759072 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"eb6e937f-acf9-4ee8-8ee9-c757535b3a53","Type":"ContainerStarted","Data":"9db317cebcec5526d3b313766a3542d10f1c2d68f73cdf3529ef719f0c3aa0ba"} Feb 03 12:35:28 crc kubenswrapper[4820]: I0203 12:35:28.759509 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Feb 03 12:35:28 crc kubenswrapper[4820]: I0203 12:35:28.769823 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96e51574-4c0f-449e-99c9-f71651ddf08e-config-data" (OuterVolumeSpecName: "config-data") pod "96e51574-4c0f-449e-99c9-f71651ddf08e" (UID: "96e51574-4c0f-449e-99c9-f71651ddf08e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:35:28 crc kubenswrapper[4820]: I0203 12:35:28.795297 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-qmdlk" event={"ID":"96e51574-4c0f-449e-99c9-f71651ddf08e","Type":"ContainerDied","Data":"734a15fc02924f45f95d9da96b2317d1b39ac507d76ee94eb0d52e6a7d330883"} Feb 03 12:35:28 crc kubenswrapper[4820]: I0203 12:35:28.795360 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="734a15fc02924f45f95d9da96b2317d1b39ac507d76ee94eb0d52e6a7d330883" Feb 03 12:35:28 crc kubenswrapper[4820]: I0203 12:35:28.795460 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-qmdlk" Feb 03 12:35:28 crc kubenswrapper[4820]: I0203 12:35:28.796054 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=5.269759762 podStartE2EDuration="5.796028201s" podCreationTimestamp="2026-02-03 12:35:23 +0000 UTC" firstStartedPulling="2026-02-03 12:35:27.13699576 +0000 UTC m=+1844.660071624" lastFinishedPulling="2026-02-03 12:35:27.663264199 +0000 UTC m=+1845.186340063" observedRunningTime="2026-02-03 12:35:28.790274355 +0000 UTC m=+1846.313350239" watchObservedRunningTime="2026-02-03 12:35:28.796028201 +0000 UTC m=+1846.319104065" Feb 03 12:35:28 crc kubenswrapper[4820]: I0203 12:35:28.855029 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96e51574-4c0f-449e-99c9-f71651ddf08e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 12:35:28 crc kubenswrapper[4820]: I0203 12:35:28.855314 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96e51574-4c0f-449e-99c9-f71651ddf08e-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 12:35:29 crc kubenswrapper[4820]: I0203 12:35:29.045791 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 03 12:35:29 crc kubenswrapper[4820]: I0203 12:35:29.046036 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="eeca15ca-fac4-4279-b44b-8929705a4dfb" containerName="nova-scheduler-scheduler" containerID="cri-o://6b4fcf19e0d033e1be0fdc8ea435071cfd1953128301d85233af40464f6e362b" gracePeriod=30 Feb 03 12:35:29 crc kubenswrapper[4820]: I0203 12:35:29.061298 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 03 12:35:29 crc kubenswrapper[4820]: I0203 12:35:29.143948 4820 scope.go:117] "RemoveContainer" containerID="3cd22393026f08f47789b9666f306edb2325c4bd0fbfd405ee1636bfdfa661d3" Feb 03 12:35:29 crc kubenswrapper[4820]: E0203 12:35:29.144244 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qj7xr_openshift-machine-config-operator(2c02def6-29f2-448e-80ec-0c8ee290f053)\"" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" Feb 03 12:35:29 crc kubenswrapper[4820]: I0203 12:35:29.167953 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="35400f32-654e-47c4-8fbc-c802522c7c76" path="/var/lib/kubelet/pods/35400f32-654e-47c4-8fbc-c802522c7c76/volumes" Feb 03 12:35:29 crc kubenswrapper[4820]: 
I0203 12:35:29.168637 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc74c11f-b384-4945-92eb-82e0ea1d63f6" path="/var/lib/kubelet/pods/cc74c11f-b384-4945-92eb-82e0ea1d63f6/volumes" Feb 03 12:35:29 crc kubenswrapper[4820]: I0203 12:35:29.234077 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 03 12:35:29 crc kubenswrapper[4820]: E0203 12:35:29.472694 4820 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod17c371f7_f032_4444_8d4b_1183a224c7b0.slice/crio-conmon-0f246c95365efface890ae91ed527bfb457eda5dc7b27fa8cda294f51a37e248.scope\": RecentStats: unable to find data in memory cache]" Feb 03 12:35:29 crc kubenswrapper[4820]: I0203 12:35:29.825646 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c1df90b5-ffb2-40d7-9c24-7f90aa8cb1a2","Type":"ContainerStarted","Data":"214de90213c07a9c09353aaf0b1a81086e9cf3262bd484b3a57bd6c5054264c6"} Feb 03 12:35:29 crc kubenswrapper[4820]: I0203 12:35:29.825795 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 03 12:35:29 crc kubenswrapper[4820]: I0203 12:35:29.828486 4820 generic.go:334] "Generic (PLEG): container finished" podID="068081be-0cae-4b93-a5a4-cefe01fe6396" containerID="e524e9a6b7132d40dc2d2e7e05e31e1943d38d1dc9eaf46dfd210b4b6ec05c7b" exitCode=143 Feb 03 12:35:29 crc kubenswrapper[4820]: I0203 12:35:29.828553 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"068081be-0cae-4b93-a5a4-cefe01fe6396","Type":"ContainerDied","Data":"e524e9a6b7132d40dc2d2e7e05e31e1943d38d1dc9eaf46dfd210b4b6ec05c7b"} Feb 03 12:35:29 crc kubenswrapper[4820]: I0203 12:35:29.830265 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"33bbf307-c8f9-402f-9b83-50d9d9b034c2","Type":"ContainerStarted","Data":"09a2c70fdc381041858ecfc8cea91cc9f82dbd1603348cecb1cf35dfdbbc4979"} Feb 03 12:35:29 crc kubenswrapper[4820]: I0203 12:35:29.830294 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"33bbf307-c8f9-402f-9b83-50d9d9b034c2","Type":"ContainerStarted","Data":"d244b56f834995b35fd9cf661845cd5676d1c23564697eb5e589cb4370f6f628"} Feb 03 12:35:29 crc kubenswrapper[4820]: I0203 12:35:29.844532 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="be49e248-fd39-4289-8207-517fa3ec0d90" containerName="nova-metadata-log" containerID="cri-o://d9c184037d477a29fe6fd8c82acb14ac963207c270f2fa51dff1d8c2fbd30627" gracePeriod=30 Feb 03 12:35:29 crc kubenswrapper[4820]: I0203 12:35:29.844842 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"be49e248-fd39-4289-8207-517fa3ec0d90","Type":"ContainerStarted","Data":"c449ebc96060118b22140617d1169269446fc93d45e0810fa81e53cb1c180aea"} Feb 03 12:35:29 crc kubenswrapper[4820]: I0203 12:35:29.844877 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"be49e248-fd39-4289-8207-517fa3ec0d90","Type":"ContainerStarted","Data":"d9c184037d477a29fe6fd8c82acb14ac963207c270f2fa51dff1d8c2fbd30627"} Feb 03 12:35:29 crc kubenswrapper[4820]: I0203 12:35:29.844906 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"be49e248-fd39-4289-8207-517fa3ec0d90","Type":"ContainerStarted","Data":"b60902fcab7868de02f56cb381e6d9c83d26a3ac7f9a0d7a5e39d6ce7d320ee6"} Feb 03 12:35:29 crc kubenswrapper[4820]: I0203 12:35:29.844963 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="be49e248-fd39-4289-8207-517fa3ec0d90" containerName="nova-metadata-metadata" containerID="cri-o://c449ebc96060118b22140617d1169269446fc93d45e0810fa81e53cb1c180aea" gracePeriod=30 Feb 03 12:35:29 crc kubenswrapper[4820]: I0203 12:35:29.889366 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.692606094 podStartE2EDuration="14.889341114s" podCreationTimestamp="2026-02-03 12:35:15 +0000 UTC" firstStartedPulling="2026-02-03 12:35:17.594979214 +0000 UTC m=+1835.118055078" lastFinishedPulling="2026-02-03 12:35:28.791714234 +0000 UTC m=+1846.314790098" observedRunningTime="2026-02-03 12:35:29.872489267 +0000 UTC m=+1847.395565131" watchObservedRunningTime="2026-02-03 12:35:29.889341114 +0000 UTC m=+1847.412416988" Feb 03 12:35:29 crc kubenswrapper[4820]: I0203 12:35:29.922090 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.92207062 podStartE2EDuration="2.92207062s" podCreationTimestamp="2026-02-03 12:35:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:35:29.919381257 +0000 UTC m=+1847.442457131" watchObservedRunningTime="2026-02-03 12:35:29.92207062 +0000 UTC m=+1847.445146474" Feb 03 12:35:29 crc kubenswrapper[4820]: I0203 12:35:29.955006 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=1.9549847420000002 podStartE2EDuration="1.954984742s" podCreationTimestamp="2026-02-03 12:35:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:35:29.947576471 +0000 UTC m=+1847.470652335" watchObservedRunningTime="2026-02-03 12:35:29.954984742 +0000 UTC m=+1847.478060606" Feb 03 12:35:30 crc kubenswrapper[4820]: I0203 12:35:30.505477 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 03 12:35:30 crc kubenswrapper[4820]: I0203 12:35:30.621755 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tbqnm\" (UniqueName: \"kubernetes.io/projected/eeca15ca-fac4-4279-b44b-8929705a4dfb-kube-api-access-tbqnm\") pod \"eeca15ca-fac4-4279-b44b-8929705a4dfb\" (UID: \"eeca15ca-fac4-4279-b44b-8929705a4dfb\") " Feb 03 12:35:30 crc kubenswrapper[4820]: I0203 12:35:30.622039 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eeca15ca-fac4-4279-b44b-8929705a4dfb-combined-ca-bundle\") pod \"eeca15ca-fac4-4279-b44b-8929705a4dfb\" (UID: \"eeca15ca-fac4-4279-b44b-8929705a4dfb\") " Feb 03 12:35:30 crc kubenswrapper[4820]: I0203 12:35:30.622192 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eeca15ca-fac4-4279-b44b-8929705a4dfb-config-data\") pod \"eeca15ca-fac4-4279-b44b-8929705a4dfb\" (UID: \"eeca15ca-fac4-4279-b44b-8929705a4dfb\") " Feb 03 12:35:30 crc kubenswrapper[4820]: I0203 12:35:30.630179 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eeca15ca-fac4-4279-b44b-8929705a4dfb-kube-api-access-tbqnm" (OuterVolumeSpecName: "kube-api-access-tbqnm") pod "eeca15ca-fac4-4279-b44b-8929705a4dfb" (UID: "eeca15ca-fac4-4279-b44b-8929705a4dfb"). InnerVolumeSpecName "kube-api-access-tbqnm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:35:30 crc kubenswrapper[4820]: I0203 12:35:30.673168 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eeca15ca-fac4-4279-b44b-8929705a4dfb-config-data" (OuterVolumeSpecName: "config-data") pod "eeca15ca-fac4-4279-b44b-8929705a4dfb" (UID: "eeca15ca-fac4-4279-b44b-8929705a4dfb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:35:30 crc kubenswrapper[4820]: I0203 12:35:30.681252 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 03 12:35:30 crc kubenswrapper[4820]: I0203 12:35:30.726401 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tbqnm\" (UniqueName: \"kubernetes.io/projected/eeca15ca-fac4-4279-b44b-8929705a4dfb-kube-api-access-tbqnm\") on node \"crc\" DevicePath \"\"" Feb 03 12:35:30 crc kubenswrapper[4820]: I0203 12:35:30.726442 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eeca15ca-fac4-4279-b44b-8929705a4dfb-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 12:35:30 crc kubenswrapper[4820]: I0203 12:35:30.729064 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eeca15ca-fac4-4279-b44b-8929705a4dfb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "eeca15ca-fac4-4279-b44b-8929705a4dfb" (UID: "eeca15ca-fac4-4279-b44b-8929705a4dfb"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:35:30 crc kubenswrapper[4820]: I0203 12:35:30.828533 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eeca15ca-fac4-4279-b44b-8929705a4dfb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 12:35:30 crc kubenswrapper[4820]: I0203 12:35:30.856604 4820 generic.go:334] "Generic (PLEG): container finished" podID="eeca15ca-fac4-4279-b44b-8929705a4dfb" containerID="6b4fcf19e0d033e1be0fdc8ea435071cfd1953128301d85233af40464f6e362b" exitCode=0 Feb 03 12:35:30 crc kubenswrapper[4820]: I0203 12:35:30.856700 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 03 12:35:30 crc kubenswrapper[4820]: I0203 12:35:30.856729 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"eeca15ca-fac4-4279-b44b-8929705a4dfb","Type":"ContainerDied","Data":"6b4fcf19e0d033e1be0fdc8ea435071cfd1953128301d85233af40464f6e362b"} Feb 03 12:35:30 crc kubenswrapper[4820]: I0203 12:35:30.857067 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"eeca15ca-fac4-4279-b44b-8929705a4dfb","Type":"ContainerDied","Data":"3545696a53140af6dd697adf873d00f4d81ad24c64d8965cd987c7b170810d6a"} Feb 03 12:35:30 crc kubenswrapper[4820]: I0203 12:35:30.857131 4820 scope.go:117] "RemoveContainer" containerID="6b4fcf19e0d033e1be0fdc8ea435071cfd1953128301d85233af40464f6e362b" Feb 03 12:35:30 crc kubenswrapper[4820]: I0203 12:35:30.860446 4820 generic.go:334] "Generic (PLEG): container finished" podID="be49e248-fd39-4289-8207-517fa3ec0d90" containerID="d9c184037d477a29fe6fd8c82acb14ac963207c270f2fa51dff1d8c2fbd30627" exitCode=143 Feb 03 12:35:30 crc kubenswrapper[4820]: I0203 12:35:30.860531 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"be49e248-fd39-4289-8207-517fa3ec0d90","Type":"ContainerDied","Data":"d9c184037d477a29fe6fd8c82acb14ac963207c270f2fa51dff1d8c2fbd30627"} Feb 03 12:35:30 crc kubenswrapper[4820]: I0203 12:35:30.863043 4820 generic.go:334] "Generic (PLEG): container finished" podID="03344b7f-772a-4f59-9955-99a923bd9fee" containerID="17517eaf1b7daae15a9f186aa6d51c7fa4ac86a2ede0b331062db143c586a3f3" exitCode=0 Feb 03 12:35:30 crc kubenswrapper[4820]: I0203 12:35:30.864176 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-gcx8s" event={"ID":"03344b7f-772a-4f59-9955-99a923bd9fee","Type":"ContainerDied","Data":"17517eaf1b7daae15a9f186aa6d51c7fa4ac86a2ede0b331062db143c586a3f3"} Feb 03 12:35:30 crc kubenswrapper[4820]: I0203 12:35:30.902798 4820 scope.go:117] "RemoveContainer" containerID="6b4fcf19e0d033e1be0fdc8ea435071cfd1953128301d85233af40464f6e362b" Feb 03 12:35:30 crc kubenswrapper[4820]: E0203 12:35:30.906380 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6b4fcf19e0d033e1be0fdc8ea435071cfd1953128301d85233af40464f6e362b\": container with ID starting with 6b4fcf19e0d033e1be0fdc8ea435071cfd1953128301d85233af40464f6e362b not found: ID does not exist" containerID="6b4fcf19e0d033e1be0fdc8ea435071cfd1953128301d85233af40464f6e362b" Feb 03 12:35:30 crc kubenswrapper[4820]: I0203 12:35:30.906459 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6b4fcf19e0d033e1be0fdc8ea435071cfd1953128301d85233af40464f6e362b"} 
err="failed to get container status \"6b4fcf19e0d033e1be0fdc8ea435071cfd1953128301d85233af40464f6e362b\": rpc error: code = NotFound desc = could not find container \"6b4fcf19e0d033e1be0fdc8ea435071cfd1953128301d85233af40464f6e362b\": container with ID starting with 6b4fcf19e0d033e1be0fdc8ea435071cfd1953128301d85233af40464f6e362b not found: ID does not exist" Feb 03 12:35:30 crc kubenswrapper[4820]: I0203 12:35:30.962494 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 03 12:35:30 crc kubenswrapper[4820]: I0203 12:35:30.986977 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Feb 03 12:35:30 crc kubenswrapper[4820]: I0203 12:35:30.996933 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 03 12:35:30 crc kubenswrapper[4820]: E0203 12:35:30.997672 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eeca15ca-fac4-4279-b44b-8929705a4dfb" containerName="nova-scheduler-scheduler" Feb 03 12:35:30 crc kubenswrapper[4820]: I0203 12:35:30.997695 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="eeca15ca-fac4-4279-b44b-8929705a4dfb" containerName="nova-scheduler-scheduler" Feb 03 12:35:30 crc kubenswrapper[4820]: E0203 12:35:30.997712 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96e51574-4c0f-449e-99c9-f71651ddf08e" containerName="nova-manage" Feb 03 12:35:30 crc kubenswrapper[4820]: I0203 12:35:30.997721 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="96e51574-4c0f-449e-99c9-f71651ddf08e" containerName="nova-manage" Feb 03 12:35:30 crc kubenswrapper[4820]: I0203 12:35:30.998499 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="96e51574-4c0f-449e-99c9-f71651ddf08e" containerName="nova-manage" Feb 03 12:35:30 crc kubenswrapper[4820]: I0203 12:35:30.998527 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="eeca15ca-fac4-4279-b44b-8929705a4dfb" containerName="nova-scheduler-scheduler" Feb 03 12:35:30 crc kubenswrapper[4820]: I0203 12:35:30.999516 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 03 12:35:31 crc kubenswrapper[4820]: I0203 12:35:31.013232 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 03 12:35:31 crc kubenswrapper[4820]: I0203 12:35:31.023765 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 03 12:35:31 crc kubenswrapper[4820]: I0203 12:35:31.143348 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5a80e13-273e-46e5-b11c-7b864bd07a08-config-data\") pod \"nova-scheduler-0\" (UID: \"e5a80e13-273e-46e5-b11c-7b864bd07a08\") " pod="openstack/nova-scheduler-0" Feb 03 12:35:31 crc kubenswrapper[4820]: I0203 12:35:31.143411 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5a80e13-273e-46e5-b11c-7b864bd07a08-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"e5a80e13-273e-46e5-b11c-7b864bd07a08\") " pod="openstack/nova-scheduler-0" Feb 03 12:35:31 crc kubenswrapper[4820]: I0203 12:35:31.143519 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hwq7j\" (UniqueName: \"kubernetes.io/projected/e5a80e13-273e-46e5-b11c-7b864bd07a08-kube-api-access-hwq7j\") pod \"nova-scheduler-0\" (UID: \"e5a80e13-273e-46e5-b11c-7b864bd07a08\") " pod="openstack/nova-scheduler-0" Feb 03 12:35:31 crc kubenswrapper[4820]: I0203 12:35:31.177863 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eeca15ca-fac4-4279-b44b-8929705a4dfb" path="/var/lib/kubelet/pods/eeca15ca-fac4-4279-b44b-8929705a4dfb/volumes" Feb 03 12:35:31 crc kubenswrapper[4820]: I0203 12:35:31.245685 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hwq7j\" (UniqueName: \"kubernetes.io/projected/e5a80e13-273e-46e5-b11c-7b864bd07a08-kube-api-access-hwq7j\") pod \"nova-scheduler-0\" (UID: \"e5a80e13-273e-46e5-b11c-7b864bd07a08\") " pod="openstack/nova-scheduler-0" Feb 03 12:35:31 crc kubenswrapper[4820]: I0203 12:35:31.246412 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5a80e13-273e-46e5-b11c-7b864bd07a08-config-data\") pod \"nova-scheduler-0\" (UID: \"e5a80e13-273e-46e5-b11c-7b864bd07a08\") " pod="openstack/nova-scheduler-0" Feb 03 12:35:31 crc kubenswrapper[4820]: I0203 12:35:31.246471 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5a80e13-273e-46e5-b11c-7b864bd07a08-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"e5a80e13-273e-46e5-b11c-7b864bd07a08\") " pod="openstack/nova-scheduler-0" Feb 03 12:35:31 crc kubenswrapper[4820]: I0203 12:35:31.257013 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5a80e13-273e-46e5-b11c-7b864bd07a08-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"e5a80e13-273e-46e5-b11c-7b864bd07a08\") " pod="openstack/nova-scheduler-0" Feb 03 12:35:31 crc kubenswrapper[4820]: I0203 12:35:31.266229 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5a80e13-273e-46e5-b11c-7b864bd07a08-config-data\") pod \"nova-scheduler-0\" (UID: 
\"e5a80e13-273e-46e5-b11c-7b864bd07a08\") " pod="openstack/nova-scheduler-0" Feb 03 12:35:31 crc kubenswrapper[4820]: I0203 12:35:31.270991 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hwq7j\" (UniqueName: \"kubernetes.io/projected/e5a80e13-273e-46e5-b11c-7b864bd07a08-kube-api-access-hwq7j\") pod \"nova-scheduler-0\" (UID: \"e5a80e13-273e-46e5-b11c-7b864bd07a08\") " pod="openstack/nova-scheduler-0" Feb 03 12:35:31 crc kubenswrapper[4820]: I0203 12:35:31.322585 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 03 12:35:31 crc kubenswrapper[4820]: I0203 12:35:31.874315 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c1df90b5-ffb2-40d7-9c24-7f90aa8cb1a2" containerName="ceilometer-central-agent" containerID="cri-o://7c77932df498b34e70949f89a8806eb41ff4cefb199580c1ab5f36b72e32fdcc" gracePeriod=30 Feb 03 12:35:31 crc kubenswrapper[4820]: I0203 12:35:31.874769 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c1df90b5-ffb2-40d7-9c24-7f90aa8cb1a2" containerName="ceilometer-notification-agent" containerID="cri-o://492702291ee0f8bba54f9ffadd3bf1dc83f97d05b5e361c108fd74a5937b38bc" gracePeriod=30 Feb 03 12:35:31 crc kubenswrapper[4820]: I0203 12:35:31.874780 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c1df90b5-ffb2-40d7-9c24-7f90aa8cb1a2" containerName="proxy-httpd" containerID="cri-o://214de90213c07a9c09353aaf0b1a81086e9cf3262bd484b3a57bd6c5054264c6" gracePeriod=30 Feb 03 12:35:31 crc kubenswrapper[4820]: I0203 12:35:31.874836 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c1df90b5-ffb2-40d7-9c24-7f90aa8cb1a2" containerName="sg-core" containerID="cri-o://ba6fe612b482c19e2704048ebd92e86fa788ef5014c0754e4238178ee0ba16a2" gracePeriod=30 Feb 03 12:35:31 crc kubenswrapper[4820]: I0203 12:35:31.940883 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 03 12:35:31 crc kubenswrapper[4820]: W0203 12:35:31.954773 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode5a80e13_273e_46e5_b11c_7b864bd07a08.slice/crio-4c8fa0ff4f5828fa07e307b53d01cc1f00256a2dedb6ac45fa572653ce635424 WatchSource:0}: Error finding container 4c8fa0ff4f5828fa07e307b53d01cc1f00256a2dedb6ac45fa572653ce635424: Status 404 returned error can't find the container with id 4c8fa0ff4f5828fa07e307b53d01cc1f00256a2dedb6ac45fa572653ce635424 Feb 03 12:35:32 crc kubenswrapper[4820]: I0203 12:35:32.296154 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-gcx8s" Feb 03 12:35:32 crc kubenswrapper[4820]: I0203 12:35:32.382205 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/03344b7f-772a-4f59-9955-99a923bd9fee-scripts\") pod \"03344b7f-772a-4f59-9955-99a923bd9fee\" (UID: \"03344b7f-772a-4f59-9955-99a923bd9fee\") " Feb 03 12:35:32 crc kubenswrapper[4820]: I0203 12:35:32.382656 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rm7xc\" (UniqueName: \"kubernetes.io/projected/03344b7f-772a-4f59-9955-99a923bd9fee-kube-api-access-rm7xc\") pod \"03344b7f-772a-4f59-9955-99a923bd9fee\" (UID: \"03344b7f-772a-4f59-9955-99a923bd9fee\") " Feb 03 12:35:32 crc kubenswrapper[4820]: I0203 12:35:32.383257 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03344b7f-772a-4f59-9955-99a923bd9fee-config-data\") pod \"03344b7f-772a-4f59-9955-99a923bd9fee\" (UID: \"03344b7f-772a-4f59-9955-99a923bd9fee\") " Feb 03 12:35:32 crc kubenswrapper[4820]: I0203 12:35:32.383317 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03344b7f-772a-4f59-9955-99a923bd9fee-combined-ca-bundle\") pod \"03344b7f-772a-4f59-9955-99a923bd9fee\" (UID: \"03344b7f-772a-4f59-9955-99a923bd9fee\") " Feb 03 12:35:32 crc kubenswrapper[4820]: I0203 12:35:32.388108 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03344b7f-772a-4f59-9955-99a923bd9fee-scripts" (OuterVolumeSpecName: "scripts") pod "03344b7f-772a-4f59-9955-99a923bd9fee" (UID: "03344b7f-772a-4f59-9955-99a923bd9fee"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:35:32 crc kubenswrapper[4820]: I0203 12:35:32.390960 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03344b7f-772a-4f59-9955-99a923bd9fee-kube-api-access-rm7xc" (OuterVolumeSpecName: "kube-api-access-rm7xc") pod "03344b7f-772a-4f59-9955-99a923bd9fee" (UID: "03344b7f-772a-4f59-9955-99a923bd9fee"). InnerVolumeSpecName "kube-api-access-rm7xc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:35:32 crc kubenswrapper[4820]: E0203 12:35:32.436801 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/03344b7f-772a-4f59-9955-99a923bd9fee-combined-ca-bundle podName:03344b7f-772a-4f59-9955-99a923bd9fee nodeName:}" failed. No retries permitted until 2026-02-03 12:35:32.936764174 +0000 UTC m=+1850.459840038 (durationBeforeRetry 500ms). Error: error cleaning subPath mounts for volume "combined-ca-bundle" (UniqueName: "kubernetes.io/secret/03344b7f-772a-4f59-9955-99a923bd9fee-combined-ca-bundle") pod "03344b7f-772a-4f59-9955-99a923bd9fee" (UID: "03344b7f-772a-4f59-9955-99a923bd9fee") : error deleting /var/lib/kubelet/pods/03344b7f-772a-4f59-9955-99a923bd9fee/volume-subpaths: remove /var/lib/kubelet/pods/03344b7f-772a-4f59-9955-99a923bd9fee/volume-subpaths: no such file or directory Feb 03 12:35:32 crc kubenswrapper[4820]: I0203 12:35:32.443071 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03344b7f-772a-4f59-9955-99a923bd9fee-config-data" (OuterVolumeSpecName: "config-data") pod "03344b7f-772a-4f59-9955-99a923bd9fee" (UID: "03344b7f-772a-4f59-9955-99a923bd9fee"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:35:32 crc kubenswrapper[4820]: I0203 12:35:32.486667 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03344b7f-772a-4f59-9955-99a923bd9fee-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 12:35:32 crc kubenswrapper[4820]: I0203 12:35:32.486726 4820 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/03344b7f-772a-4f59-9955-99a923bd9fee-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 12:35:32 crc kubenswrapper[4820]: I0203 12:35:32.486738 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rm7xc\" (UniqueName: \"kubernetes.io/projected/03344b7f-772a-4f59-9955-99a923bd9fee-kube-api-access-rm7xc\") on node \"crc\" DevicePath \"\"" Feb 03 12:35:32 crc kubenswrapper[4820]: I0203 12:35:32.886753 4820 generic.go:334] "Generic (PLEG): container finished" podID="c1df90b5-ffb2-40d7-9c24-7f90aa8cb1a2" containerID="214de90213c07a9c09353aaf0b1a81086e9cf3262bd484b3a57bd6c5054264c6" exitCode=0 Feb 03 12:35:32 crc kubenswrapper[4820]: I0203 12:35:32.886787 4820 generic.go:334] "Generic (PLEG): container finished" podID="c1df90b5-ffb2-40d7-9c24-7f90aa8cb1a2" containerID="ba6fe612b482c19e2704048ebd92e86fa788ef5014c0754e4238178ee0ba16a2" exitCode=2 Feb 03 12:35:32 crc kubenswrapper[4820]: I0203 12:35:32.886795 4820 generic.go:334] "Generic (PLEG): container finished" podID="c1df90b5-ffb2-40d7-9c24-7f90aa8cb1a2" containerID="492702291ee0f8bba54f9ffadd3bf1dc83f97d05b5e361c108fd74a5937b38bc" exitCode=0 Feb 03 12:35:32 crc kubenswrapper[4820]: I0203 12:35:32.886834 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c1df90b5-ffb2-40d7-9c24-7f90aa8cb1a2","Type":"ContainerDied","Data":"214de90213c07a9c09353aaf0b1a81086e9cf3262bd484b3a57bd6c5054264c6"} Feb 03 12:35:32 crc kubenswrapper[4820]: I0203 12:35:32.886881 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c1df90b5-ffb2-40d7-9c24-7f90aa8cb1a2","Type":"ContainerDied","Data":"ba6fe612b482c19e2704048ebd92e86fa788ef5014c0754e4238178ee0ba16a2"} Feb 03 12:35:32 crc kubenswrapper[4820]: I0203 12:35:32.886905 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c1df90b5-ffb2-40d7-9c24-7f90aa8cb1a2","Type":"ContainerDied","Data":"492702291ee0f8bba54f9ffadd3bf1dc83f97d05b5e361c108fd74a5937b38bc"} Feb 03 12:35:32 crc kubenswrapper[4820]: I0203 12:35:32.889382 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-gcx8s" event={"ID":"03344b7f-772a-4f59-9955-99a923bd9fee","Type":"ContainerDied","Data":"14a3c274a1fdf9682c22e5d93fa9d7808dcca36e375afabec3d3de8cfa7bc356"} Feb 03 12:35:32 crc kubenswrapper[4820]: I0203 12:35:32.889423 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="14a3c274a1fdf9682c22e5d93fa9d7808dcca36e375afabec3d3de8cfa7bc356" Feb 03 12:35:32 crc kubenswrapper[4820]: I0203 12:35:32.889395 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-gcx8s" Feb 03 12:35:32 crc kubenswrapper[4820]: I0203 12:35:32.891426 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"e5a80e13-273e-46e5-b11c-7b864bd07a08","Type":"ContainerStarted","Data":"599ce0b8eed1170f88fc257f9ca3a0d46de9544b38b77d5f447374e330e5d9a8"} Feb 03 12:35:32 crc kubenswrapper[4820]: I0203 12:35:32.891450 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"e5a80e13-273e-46e5-b11c-7b864bd07a08","Type":"ContainerStarted","Data":"4c8fa0ff4f5828fa07e307b53d01cc1f00256a2dedb6ac45fa572653ce635424"} Feb 03 12:35:32 crc kubenswrapper[4820]: I0203 12:35:32.915866 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.915846125 podStartE2EDuration="2.915846125s" podCreationTimestamp="2026-02-03 12:35:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:35:32.906943074 +0000 UTC m=+1850.430018958" watchObservedRunningTime="2026-02-03 12:35:32.915846125 +0000 UTC m=+1850.438921979" Feb 03 12:35:32 crc kubenswrapper[4820]: I0203 12:35:32.997793 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03344b7f-772a-4f59-9955-99a923bd9fee-combined-ca-bundle\") pod \"03344b7f-772a-4f59-9955-99a923bd9fee\" (UID: \"03344b7f-772a-4f59-9955-99a923bd9fee\") " Feb 03 12:35:33 crc kubenswrapper[4820]: I0203 12:35:33.002776 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03344b7f-772a-4f59-9955-99a923bd9fee-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "03344b7f-772a-4f59-9955-99a923bd9fee" (UID: "03344b7f-772a-4f59-9955-99a923bd9fee"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:35:33 crc kubenswrapper[4820]: I0203 12:35:33.040077 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 03 12:35:33 crc kubenswrapper[4820]: E0203 12:35:33.040550 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03344b7f-772a-4f59-9955-99a923bd9fee" containerName="nova-cell1-conductor-db-sync" Feb 03 12:35:33 crc kubenswrapper[4820]: I0203 12:35:33.040568 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="03344b7f-772a-4f59-9955-99a923bd9fee" containerName="nova-cell1-conductor-db-sync" Feb 03 12:35:33 crc kubenswrapper[4820]: I0203 12:35:33.040764 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="03344b7f-772a-4f59-9955-99a923bd9fee" containerName="nova-cell1-conductor-db-sync" Feb 03 12:35:33 crc kubenswrapper[4820]: I0203 12:35:33.041560 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 03 12:35:33 crc kubenswrapper[4820]: I0203 12:35:33.069449 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 03 12:35:33 crc kubenswrapper[4820]: I0203 12:35:33.100092 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pqdn2\" (UniqueName: \"kubernetes.io/projected/c362e3ce-ca7f-443e-ab57-57f34e89e883-kube-api-access-pqdn2\") pod \"nova-cell1-conductor-0\" (UID: \"c362e3ce-ca7f-443e-ab57-57f34e89e883\") " pod="openstack/nova-cell1-conductor-0" Feb 03 12:35:33 crc kubenswrapper[4820]: I0203 12:35:33.100159 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c362e3ce-ca7f-443e-ab57-57f34e89e883-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"c362e3ce-ca7f-443e-ab57-57f34e89e883\") " pod="openstack/nova-cell1-conductor-0" Feb 03 12:35:33 crc kubenswrapper[4820]: I0203 12:35:33.100209 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c362e3ce-ca7f-443e-ab57-57f34e89e883-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"c362e3ce-ca7f-443e-ab57-57f34e89e883\") " pod="openstack/nova-cell1-conductor-0" Feb 03 12:35:33 crc kubenswrapper[4820]: I0203 12:35:33.100494 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03344b7f-772a-4f59-9955-99a923bd9fee-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 12:35:33 crc kubenswrapper[4820]: I0203 12:35:33.207548 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pqdn2\" (UniqueName: \"kubernetes.io/projected/c362e3ce-ca7f-443e-ab57-57f34e89e883-kube-api-access-pqdn2\") pod \"nova-cell1-conductor-0\" (UID: \"c362e3ce-ca7f-443e-ab57-57f34e89e883\") " pod="openstack/nova-cell1-conductor-0" Feb 03 12:35:33 crc kubenswrapper[4820]: I0203 12:35:33.207599 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c362e3ce-ca7f-443e-ab57-57f34e89e883-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"c362e3ce-ca7f-443e-ab57-57f34e89e883\") " pod="openstack/nova-cell1-conductor-0" Feb 03 12:35:33 crc kubenswrapper[4820]: I0203 12:35:33.207640 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c362e3ce-ca7f-443e-ab57-57f34e89e883-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"c362e3ce-ca7f-443e-ab57-57f34e89e883\") " pod="openstack/nova-cell1-conductor-0" Feb 03 12:35:33 crc kubenswrapper[4820]: I0203 12:35:33.213287 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c362e3ce-ca7f-443e-ab57-57f34e89e883-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"c362e3ce-ca7f-443e-ab57-57f34e89e883\") " pod="openstack/nova-cell1-conductor-0" Feb 03 12:35:33 crc kubenswrapper[4820]: I0203 12:35:33.225740 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pqdn2\" (UniqueName: \"kubernetes.io/projected/c362e3ce-ca7f-443e-ab57-57f34e89e883-kube-api-access-pqdn2\") pod \"nova-cell1-conductor-0\" (UID: \"c362e3ce-ca7f-443e-ab57-57f34e89e883\") " 
pod="openstack/nova-cell1-conductor-0" Feb 03 12:35:33 crc kubenswrapper[4820]: I0203 12:35:33.225776 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c362e3ce-ca7f-443e-ab57-57f34e89e883-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"c362e3ce-ca7f-443e-ab57-57f34e89e883\") " pod="openstack/nova-cell1-conductor-0" Feb 03 12:35:33 crc kubenswrapper[4820]: I0203 12:35:33.247575 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 03 12:35:33 crc kubenswrapper[4820]: I0203 12:35:33.247682 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 03 12:35:33 crc kubenswrapper[4820]: I0203 12:35:33.424665 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 03 12:35:33 crc kubenswrapper[4820]: I0203 12:35:33.432334 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Feb 03 12:35:34 crc kubenswrapper[4820]: I0203 12:35:34.062838 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 03 12:35:34 crc kubenswrapper[4820]: I0203 12:35:34.911998 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Feb 03 12:35:34 crc kubenswrapper[4820]: I0203 12:35:34.928244 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"c362e3ce-ca7f-443e-ab57-57f34e89e883","Type":"ContainerStarted","Data":"81268d47c12941ad4bbc8bbfedf6c79b13d390fa74cb75385d1d079533ed55f6"} Feb 03 12:35:34 crc kubenswrapper[4820]: I0203 12:35:34.928308 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"c362e3ce-ca7f-443e-ab57-57f34e89e883","Type":"ContainerStarted","Data":"9108bc47fdd9c3b2c627cdb4ab2a008b30ae15c628c7c6359e5d46244dd3c470"} Feb 03 12:35:34 crc kubenswrapper[4820]: I0203 12:35:34.929554 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Feb 03 12:35:34 crc kubenswrapper[4820]: I0203 12:35:34.980090 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=1.9800622140000002 podStartE2EDuration="1.980062214s" podCreationTimestamp="2026-02-03 12:35:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:35:34.968333676 +0000 UTC m=+1852.491409560" watchObservedRunningTime="2026-02-03 12:35:34.980062214 +0000 UTC m=+1852.503138088" Feb 03 12:35:36 crc kubenswrapper[4820]: I0203 12:35:36.323436 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 03 12:35:36 crc kubenswrapper[4820]: I0203 12:35:36.859776 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 03 12:35:36 crc kubenswrapper[4820]: I0203 12:35:36.957625 4820 generic.go:334] "Generic (PLEG): container finished" podID="068081be-0cae-4b93-a5a4-cefe01fe6396" containerID="6ff3515e211661ceb53889d00019920e29948152b51a89f695ed4b2cbfbcfbe5" exitCode=0 Feb 03 12:35:36 crc kubenswrapper[4820]: I0203 12:35:36.957731 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 03 12:35:36 crc kubenswrapper[4820]: I0203 12:35:36.957737 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"068081be-0cae-4b93-a5a4-cefe01fe6396","Type":"ContainerDied","Data":"6ff3515e211661ceb53889d00019920e29948152b51a89f695ed4b2cbfbcfbe5"} Feb 03 12:35:36 crc kubenswrapper[4820]: I0203 12:35:36.957784 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"068081be-0cae-4b93-a5a4-cefe01fe6396","Type":"ContainerDied","Data":"3dfd7a660a987a0f508332a2eeb77a03adf50d545d723e0bd7714f20945ae2bc"} Feb 03 12:35:36 crc kubenswrapper[4820]: I0203 12:35:36.957806 4820 scope.go:117] "RemoveContainer" containerID="6ff3515e211661ceb53889d00019920e29948152b51a89f695ed4b2cbfbcfbe5" Feb 03 12:35:36 crc kubenswrapper[4820]: I0203 12:35:36.966026 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/068081be-0cae-4b93-a5a4-cefe01fe6396-logs\") pod \"068081be-0cae-4b93-a5a4-cefe01fe6396\" (UID: \"068081be-0cae-4b93-a5a4-cefe01fe6396\") " Feb 03 12:35:36 crc kubenswrapper[4820]: I0203 12:35:36.966186 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/068081be-0cae-4b93-a5a4-cefe01fe6396-config-data\") pod \"068081be-0cae-4b93-a5a4-cefe01fe6396\" (UID: \"068081be-0cae-4b93-a5a4-cefe01fe6396\") " Feb 03 12:35:36 crc kubenswrapper[4820]: I0203 12:35:36.966311 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jvz7m\" (UniqueName: \"kubernetes.io/projected/068081be-0cae-4b93-a5a4-cefe01fe6396-kube-api-access-jvz7m\") pod \"068081be-0cae-4b93-a5a4-cefe01fe6396\" (UID: \"068081be-0cae-4b93-a5a4-cefe01fe6396\") " Feb 03 12:35:36 crc kubenswrapper[4820]: I0203 12:35:36.966404 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/068081be-0cae-4b93-a5a4-cefe01fe6396-combined-ca-bundle\") pod \"068081be-0cae-4b93-a5a4-cefe01fe6396\" (UID: \"068081be-0cae-4b93-a5a4-cefe01fe6396\") " Feb 03 12:35:36 crc kubenswrapper[4820]: I0203 12:35:36.966572 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/068081be-0cae-4b93-a5a4-cefe01fe6396-logs" (OuterVolumeSpecName: "logs") pod "068081be-0cae-4b93-a5a4-cefe01fe6396" (UID: "068081be-0cae-4b93-a5a4-cefe01fe6396"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 12:35:36 crc kubenswrapper[4820]: I0203 12:35:36.966998 4820 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/068081be-0cae-4b93-a5a4-cefe01fe6396-logs\") on node \"crc\" DevicePath \"\"" Feb 03 12:35:36 crc kubenswrapper[4820]: I0203 12:35:36.972776 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/068081be-0cae-4b93-a5a4-cefe01fe6396-kube-api-access-jvz7m" (OuterVolumeSpecName: "kube-api-access-jvz7m") pod "068081be-0cae-4b93-a5a4-cefe01fe6396" (UID: "068081be-0cae-4b93-a5a4-cefe01fe6396"). InnerVolumeSpecName "kube-api-access-jvz7m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:35:37 crc kubenswrapper[4820]: I0203 12:35:37.004506 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/068081be-0cae-4b93-a5a4-cefe01fe6396-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "068081be-0cae-4b93-a5a4-cefe01fe6396" (UID: "068081be-0cae-4b93-a5a4-cefe01fe6396"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:35:37 crc kubenswrapper[4820]: I0203 12:35:37.006378 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/068081be-0cae-4b93-a5a4-cefe01fe6396-config-data" (OuterVolumeSpecName: "config-data") pod "068081be-0cae-4b93-a5a4-cefe01fe6396" (UID: "068081be-0cae-4b93-a5a4-cefe01fe6396"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:35:37 crc kubenswrapper[4820]: I0203 12:35:37.027537 4820 scope.go:117] "RemoveContainer" containerID="e524e9a6b7132d40dc2d2e7e05e31e1943d38d1dc9eaf46dfd210b4b6ec05c7b" Feb 03 12:35:37 crc kubenswrapper[4820]: I0203 12:35:37.166047 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/068081be-0cae-4b93-a5a4-cefe01fe6396-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 12:35:37 crc kubenswrapper[4820]: I0203 12:35:37.166092 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jvz7m\" (UniqueName: \"kubernetes.io/projected/068081be-0cae-4b93-a5a4-cefe01fe6396-kube-api-access-jvz7m\") on node \"crc\" DevicePath \"\"" Feb 03 12:35:37 crc kubenswrapper[4820]: I0203 12:35:37.166115 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/068081be-0cae-4b93-a5a4-cefe01fe6396-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 12:35:37 crc kubenswrapper[4820]: I0203 12:35:37.206376 4820 scope.go:117] "RemoveContainer" containerID="6ff3515e211661ceb53889d00019920e29948152b51a89f695ed4b2cbfbcfbe5" Feb 03 12:35:37 crc kubenswrapper[4820]: E0203 12:35:37.206944 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6ff3515e211661ceb53889d00019920e29948152b51a89f695ed4b2cbfbcfbe5\": container with ID starting with 6ff3515e211661ceb53889d00019920e29948152b51a89f695ed4b2cbfbcfbe5 not found: ID does not exist" containerID="6ff3515e211661ceb53889d00019920e29948152b51a89f695ed4b2cbfbcfbe5" Feb 03 12:35:37 crc kubenswrapper[4820]: I0203 12:35:37.207030 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ff3515e211661ceb53889d00019920e29948152b51a89f695ed4b2cbfbcfbe5"} err="failed to get container status \"6ff3515e211661ceb53889d00019920e29948152b51a89f695ed4b2cbfbcfbe5\": rpc error: code = NotFound desc = could not find container \"6ff3515e211661ceb53889d00019920e29948152b51a89f695ed4b2cbfbcfbe5\": container with ID starting with 6ff3515e211661ceb53889d00019920e29948152b51a89f695ed4b2cbfbcfbe5 not found: ID does not exist" Feb 03 12:35:37 crc kubenswrapper[4820]: I0203 12:35:37.207094 4820 scope.go:117] "RemoveContainer" containerID="e524e9a6b7132d40dc2d2e7e05e31e1943d38d1dc9eaf46dfd210b4b6ec05c7b" Feb 03 12:35:37 crc kubenswrapper[4820]: E0203 12:35:37.207455 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"e524e9a6b7132d40dc2d2e7e05e31e1943d38d1dc9eaf46dfd210b4b6ec05c7b\": container with ID starting with e524e9a6b7132d40dc2d2e7e05e31e1943d38d1dc9eaf46dfd210b4b6ec05c7b not found: ID does not exist" containerID="e524e9a6b7132d40dc2d2e7e05e31e1943d38d1dc9eaf46dfd210b4b6ec05c7b" Feb 03 12:35:37 crc kubenswrapper[4820]: I0203 12:35:37.207507 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e524e9a6b7132d40dc2d2e7e05e31e1943d38d1dc9eaf46dfd210b4b6ec05c7b"} err="failed to get container status \"e524e9a6b7132d40dc2d2e7e05e31e1943d38d1dc9eaf46dfd210b4b6ec05c7b\": rpc error: code = NotFound desc = could not find container \"e524e9a6b7132d40dc2d2e7e05e31e1943d38d1dc9eaf46dfd210b4b6ec05c7b\": container with ID starting with e524e9a6b7132d40dc2d2e7e05e31e1943d38d1dc9eaf46dfd210b4b6ec05c7b not found: ID does not exist" Feb 03 12:35:37 crc kubenswrapper[4820]: I0203 12:35:37.283764 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 03 12:35:37 crc kubenswrapper[4820]: I0203 12:35:37.297030 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 03 12:35:37 crc kubenswrapper[4820]: I0203 12:35:37.311848 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 03 12:35:37 crc kubenswrapper[4820]: E0203 12:35:37.312377 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="068081be-0cae-4b93-a5a4-cefe01fe6396" containerName="nova-api-api" Feb 03 12:35:37 crc kubenswrapper[4820]: I0203 12:35:37.312397 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="068081be-0cae-4b93-a5a4-cefe01fe6396" containerName="nova-api-api" Feb 03 12:35:37 crc kubenswrapper[4820]: E0203 12:35:37.312424 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="068081be-0cae-4b93-a5a4-cefe01fe6396" containerName="nova-api-log" Feb 03 12:35:37 crc kubenswrapper[4820]: I0203 12:35:37.312431 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="068081be-0cae-4b93-a5a4-cefe01fe6396" containerName="nova-api-log" Feb 03 12:35:37 crc kubenswrapper[4820]: I0203 12:35:37.312644 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="068081be-0cae-4b93-a5a4-cefe01fe6396" containerName="nova-api-api" Feb 03 12:35:37 crc kubenswrapper[4820]: I0203 12:35:37.312669 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="068081be-0cae-4b93-a5a4-cefe01fe6396" containerName="nova-api-log" Feb 03 12:35:37 crc kubenswrapper[4820]: I0203 12:35:37.314128 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 03 12:35:37 crc kubenswrapper[4820]: I0203 12:35:37.317506 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 03 12:35:37 crc kubenswrapper[4820]: I0203 12:35:37.332252 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 03 12:35:37 crc kubenswrapper[4820]: I0203 12:35:37.369642 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71aba6f9-1efc-4b39-8a61-444e7399c8e0-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"71aba6f9-1efc-4b39-8a61-444e7399c8e0\") " pod="openstack/nova-api-0" Feb 03 12:35:37 crc kubenswrapper[4820]: I0203 12:35:37.369752 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rwknf\" (UniqueName: \"kubernetes.io/projected/71aba6f9-1efc-4b39-8a61-444e7399c8e0-kube-api-access-rwknf\") pod \"nova-api-0\" (UID: \"71aba6f9-1efc-4b39-8a61-444e7399c8e0\") " pod="openstack/nova-api-0" Feb 03 12:35:37 crc kubenswrapper[4820]: I0203 12:35:37.369910 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71aba6f9-1efc-4b39-8a61-444e7399c8e0-config-data\") pod \"nova-api-0\" (UID: \"71aba6f9-1efc-4b39-8a61-444e7399c8e0\") " pod="openstack/nova-api-0" Feb 03 12:35:37 crc kubenswrapper[4820]: I0203 12:35:37.370026 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/71aba6f9-1efc-4b39-8a61-444e7399c8e0-logs\") pod \"nova-api-0\" (UID: \"71aba6f9-1efc-4b39-8a61-444e7399c8e0\") " pod="openstack/nova-api-0" Feb 03 12:35:37 crc kubenswrapper[4820]: I0203 12:35:37.471518 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71aba6f9-1efc-4b39-8a61-444e7399c8e0-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"71aba6f9-1efc-4b39-8a61-444e7399c8e0\") " pod="openstack/nova-api-0" Feb 03 12:35:37 crc kubenswrapper[4820]: I0203 12:35:37.471593 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rwknf\" (UniqueName: \"kubernetes.io/projected/71aba6f9-1efc-4b39-8a61-444e7399c8e0-kube-api-access-rwknf\") pod \"nova-api-0\" (UID: \"71aba6f9-1efc-4b39-8a61-444e7399c8e0\") " pod="openstack/nova-api-0" Feb 03 12:35:37 crc kubenswrapper[4820]: I0203 12:35:37.471697 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71aba6f9-1efc-4b39-8a61-444e7399c8e0-config-data\") pod \"nova-api-0\" (UID: \"71aba6f9-1efc-4b39-8a61-444e7399c8e0\") " pod="openstack/nova-api-0" Feb 03 12:35:37 crc kubenswrapper[4820]: I0203 12:35:37.471933 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/71aba6f9-1efc-4b39-8a61-444e7399c8e0-logs\") pod \"nova-api-0\" (UID: \"71aba6f9-1efc-4b39-8a61-444e7399c8e0\") " pod="openstack/nova-api-0" Feb 03 12:35:37 crc kubenswrapper[4820]: I0203 12:35:37.472973 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/71aba6f9-1efc-4b39-8a61-444e7399c8e0-logs\") pod \"nova-api-0\" (UID: \"71aba6f9-1efc-4b39-8a61-444e7399c8e0\") " 
pod="openstack/nova-api-0" Feb 03 12:35:37 crc kubenswrapper[4820]: I0203 12:35:37.476220 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71aba6f9-1efc-4b39-8a61-444e7399c8e0-config-data\") pod \"nova-api-0\" (UID: \"71aba6f9-1efc-4b39-8a61-444e7399c8e0\") " pod="openstack/nova-api-0" Feb 03 12:35:37 crc kubenswrapper[4820]: I0203 12:35:37.476842 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71aba6f9-1efc-4b39-8a61-444e7399c8e0-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"71aba6f9-1efc-4b39-8a61-444e7399c8e0\") " pod="openstack/nova-api-0" Feb 03 12:35:37 crc kubenswrapper[4820]: I0203 12:35:37.494061 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rwknf\" (UniqueName: \"kubernetes.io/projected/71aba6f9-1efc-4b39-8a61-444e7399c8e0-kube-api-access-rwknf\") pod \"nova-api-0\" (UID: \"71aba6f9-1efc-4b39-8a61-444e7399c8e0\") " pod="openstack/nova-api-0" Feb 03 12:35:37 crc kubenswrapper[4820]: I0203 12:35:37.636998 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 03 12:35:38 crc kubenswrapper[4820]: I0203 12:35:38.292021 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 03 12:35:38 crc kubenswrapper[4820]: W0203 12:35:38.301769 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod71aba6f9_1efc_4b39_8a61_444e7399c8e0.slice/crio-e6eb6d3468f137d12a675d63a6985c4f9a970dee9dbf3d461997d1b613a2fc13 WatchSource:0}: Error finding container e6eb6d3468f137d12a675d63a6985c4f9a970dee9dbf3d461997d1b613a2fc13: Status 404 returned error can't find the container with id e6eb6d3468f137d12a675d63a6985c4f9a970dee9dbf3d461997d1b613a2fc13 Feb 03 12:35:38 crc kubenswrapper[4820]: I0203 12:35:38.432979 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Feb 03 12:35:38 crc kubenswrapper[4820]: I0203 12:35:38.455256 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Feb 03 12:35:39 crc kubenswrapper[4820]: I0203 12:35:39.263858 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="068081be-0cae-4b93-a5a4-cefe01fe6396" path="/var/lib/kubelet/pods/068081be-0cae-4b93-a5a4-cefe01fe6396/volumes" Feb 03 12:35:39 crc kubenswrapper[4820]: I0203 12:35:39.277581 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"71aba6f9-1efc-4b39-8a61-444e7399c8e0","Type":"ContainerStarted","Data":"55a702fd39ca07a9b769d1ef97bede903b73b7933b71722c4c167ee70d06a45a"} Feb 03 12:35:39 crc kubenswrapper[4820]: I0203 12:35:39.277637 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"71aba6f9-1efc-4b39-8a61-444e7399c8e0","Type":"ContainerStarted","Data":"f8dd809a6e8a2dac45ebdf4df81d0fe773c4c628d44de3dc34dda8887a5be870"} Feb 03 12:35:39 crc kubenswrapper[4820]: I0203 12:35:39.277663 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"71aba6f9-1efc-4b39-8a61-444e7399c8e0","Type":"ContainerStarted","Data":"e6eb6d3468f137d12a675d63a6985c4f9a970dee9dbf3d461997d1b613a2fc13"} Feb 03 12:35:39 crc kubenswrapper[4820]: I0203 12:35:39.356736 4820 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openstack/nova-api-0" podStartSLOduration=2.356714158 podStartE2EDuration="2.356714158s" podCreationTimestamp="2026-02-03 12:35:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:35:39.339242104 +0000 UTC m=+1856.862317968" watchObservedRunningTime="2026-02-03 12:35:39.356714158 +0000 UTC m=+1856.879790022" Feb 03 12:35:39 crc kubenswrapper[4820]: I0203 12:35:39.449677 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Feb 03 12:35:40 crc kubenswrapper[4820]: E0203 12:35:40.036281 4820 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod17c371f7_f032_4444_8d4b_1183a224c7b0.slice/crio-conmon-0f246c95365efface890ae91ed527bfb457eda5dc7b27fa8cda294f51a37e248.scope\": RecentStats: unable to find data in memory cache]" Feb 03 12:35:41 crc kubenswrapper[4820]: I0203 12:35:41.032020 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 03 12:35:41 crc kubenswrapper[4820]: I0203 12:35:41.121865 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c1df90b5-ffb2-40d7-9c24-7f90aa8cb1a2-log-httpd\") pod \"c1df90b5-ffb2-40d7-9c24-7f90aa8cb1a2\" (UID: \"c1df90b5-ffb2-40d7-9c24-7f90aa8cb1a2\") " Feb 03 12:35:41 crc kubenswrapper[4820]: I0203 12:35:41.121947 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1df90b5-ffb2-40d7-9c24-7f90aa8cb1a2-config-data\") pod \"c1df90b5-ffb2-40d7-9c24-7f90aa8cb1a2\" (UID: \"c1df90b5-ffb2-40d7-9c24-7f90aa8cb1a2\") " Feb 03 12:35:41 crc kubenswrapper[4820]: I0203 12:35:41.122045 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sm2sm\" (UniqueName: \"kubernetes.io/projected/c1df90b5-ffb2-40d7-9c24-7f90aa8cb1a2-kube-api-access-sm2sm\") pod \"c1df90b5-ffb2-40d7-9c24-7f90aa8cb1a2\" (UID: \"c1df90b5-ffb2-40d7-9c24-7f90aa8cb1a2\") " Feb 03 12:35:41 crc kubenswrapper[4820]: I0203 12:35:41.122079 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1df90b5-ffb2-40d7-9c24-7f90aa8cb1a2-combined-ca-bundle\") pod \"c1df90b5-ffb2-40d7-9c24-7f90aa8cb1a2\" (UID: \"c1df90b5-ffb2-40d7-9c24-7f90aa8cb1a2\") " Feb 03 12:35:41 crc kubenswrapper[4820]: I0203 12:35:41.122214 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c1df90b5-ffb2-40d7-9c24-7f90aa8cb1a2-scripts\") pod \"c1df90b5-ffb2-40d7-9c24-7f90aa8cb1a2\" (UID: \"c1df90b5-ffb2-40d7-9c24-7f90aa8cb1a2\") " Feb 03 12:35:41 crc kubenswrapper[4820]: I0203 12:35:41.123032 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c1df90b5-ffb2-40d7-9c24-7f90aa8cb1a2-sg-core-conf-yaml\") pod \"c1df90b5-ffb2-40d7-9c24-7f90aa8cb1a2\" (UID: \"c1df90b5-ffb2-40d7-9c24-7f90aa8cb1a2\") " Feb 03 12:35:41 crc kubenswrapper[4820]: I0203 12:35:41.123089 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/c1df90b5-ffb2-40d7-9c24-7f90aa8cb1a2-run-httpd\") pod \"c1df90b5-ffb2-40d7-9c24-7f90aa8cb1a2\" (UID: \"c1df90b5-ffb2-40d7-9c24-7f90aa8cb1a2\") " Feb 03 12:35:41 crc kubenswrapper[4820]: I0203 12:35:41.123629 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c1df90b5-ffb2-40d7-9c24-7f90aa8cb1a2-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "c1df90b5-ffb2-40d7-9c24-7f90aa8cb1a2" (UID: "c1df90b5-ffb2-40d7-9c24-7f90aa8cb1a2"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 12:35:41 crc kubenswrapper[4820]: I0203 12:35:41.123786 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c1df90b5-ffb2-40d7-9c24-7f90aa8cb1a2-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "c1df90b5-ffb2-40d7-9c24-7f90aa8cb1a2" (UID: "c1df90b5-ffb2-40d7-9c24-7f90aa8cb1a2"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 12:35:41 crc kubenswrapper[4820]: I0203 12:35:41.124109 4820 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c1df90b5-ffb2-40d7-9c24-7f90aa8cb1a2-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 03 12:35:41 crc kubenswrapper[4820]: I0203 12:35:41.124132 4820 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c1df90b5-ffb2-40d7-9c24-7f90aa8cb1a2-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 03 12:35:41 crc kubenswrapper[4820]: I0203 12:35:41.129109 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1df90b5-ffb2-40d7-9c24-7f90aa8cb1a2-scripts" (OuterVolumeSpecName: "scripts") pod "c1df90b5-ffb2-40d7-9c24-7f90aa8cb1a2" (UID: "c1df90b5-ffb2-40d7-9c24-7f90aa8cb1a2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:35:41 crc kubenswrapper[4820]: I0203 12:35:41.135097 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c1df90b5-ffb2-40d7-9c24-7f90aa8cb1a2-kube-api-access-sm2sm" (OuterVolumeSpecName: "kube-api-access-sm2sm") pod "c1df90b5-ffb2-40d7-9c24-7f90aa8cb1a2" (UID: "c1df90b5-ffb2-40d7-9c24-7f90aa8cb1a2"). InnerVolumeSpecName "kube-api-access-sm2sm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:35:41 crc kubenswrapper[4820]: I0203 12:35:41.163522 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1df90b5-ffb2-40d7-9c24-7f90aa8cb1a2-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "c1df90b5-ffb2-40d7-9c24-7f90aa8cb1a2" (UID: "c1df90b5-ffb2-40d7-9c24-7f90aa8cb1a2"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:35:41 crc kubenswrapper[4820]: I0203 12:35:41.225853 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sm2sm\" (UniqueName: \"kubernetes.io/projected/c1df90b5-ffb2-40d7-9c24-7f90aa8cb1a2-kube-api-access-sm2sm\") on node \"crc\" DevicePath \"\"" Feb 03 12:35:41 crc kubenswrapper[4820]: I0203 12:35:41.226206 4820 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c1df90b5-ffb2-40d7-9c24-7f90aa8cb1a2-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 12:35:41 crc kubenswrapper[4820]: I0203 12:35:41.226364 4820 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c1df90b5-ffb2-40d7-9c24-7f90aa8cb1a2-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 03 12:35:41 crc kubenswrapper[4820]: I0203 12:35:41.231495 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1df90b5-ffb2-40d7-9c24-7f90aa8cb1a2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c1df90b5-ffb2-40d7-9c24-7f90aa8cb1a2" (UID: "c1df90b5-ffb2-40d7-9c24-7f90aa8cb1a2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:35:41 crc kubenswrapper[4820]: I0203 12:35:41.237763 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1df90b5-ffb2-40d7-9c24-7f90aa8cb1a2-config-data" (OuterVolumeSpecName: "config-data") pod "c1df90b5-ffb2-40d7-9c24-7f90aa8cb1a2" (UID: "c1df90b5-ffb2-40d7-9c24-7f90aa8cb1a2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:35:41 crc kubenswrapper[4820]: I0203 12:35:41.303619 4820 generic.go:334] "Generic (PLEG): container finished" podID="c1df90b5-ffb2-40d7-9c24-7f90aa8cb1a2" containerID="7c77932df498b34e70949f89a8806eb41ff4cefb199580c1ab5f36b72e32fdcc" exitCode=0 Feb 03 12:35:41 crc kubenswrapper[4820]: I0203 12:35:41.303692 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 03 12:35:41 crc kubenswrapper[4820]: I0203 12:35:41.303707 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c1df90b5-ffb2-40d7-9c24-7f90aa8cb1a2","Type":"ContainerDied","Data":"7c77932df498b34e70949f89a8806eb41ff4cefb199580c1ab5f36b72e32fdcc"} Feb 03 12:35:41 crc kubenswrapper[4820]: I0203 12:35:41.304364 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c1df90b5-ffb2-40d7-9c24-7f90aa8cb1a2","Type":"ContainerDied","Data":"df5928d0729de0f38ca9fa1a78d5ad48a5e884c8178fad374367854f56d17429"} Feb 03 12:35:41 crc kubenswrapper[4820]: I0203 12:35:41.304537 4820 scope.go:117] "RemoveContainer" containerID="214de90213c07a9c09353aaf0b1a81086e9cf3262bd484b3a57bd6c5054264c6" Feb 03 12:35:41 crc kubenswrapper[4820]: I0203 12:35:41.323115 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 03 12:35:41 crc kubenswrapper[4820]: I0203 12:35:41.329184 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1df90b5-ffb2-40d7-9c24-7f90aa8cb1a2-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 12:35:41 crc kubenswrapper[4820]: I0203 12:35:41.329236 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1df90b5-ffb2-40d7-9c24-7f90aa8cb1a2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 12:35:41 crc kubenswrapper[4820]: I0203 12:35:41.336475 4820 scope.go:117] "RemoveContainer" containerID="ba6fe612b482c19e2704048ebd92e86fa788ef5014c0754e4238178ee0ba16a2" Feb 03 12:35:41 crc kubenswrapper[4820]: I0203 12:35:41.352065 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 03 12:35:41 crc kubenswrapper[4820]: I0203 12:35:41.369800 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 03 12:35:41 crc kubenswrapper[4820]: I0203 12:35:41.370794 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 03 12:35:41 crc kubenswrapper[4820]: I0203 12:35:41.375066 4820 scope.go:117] "RemoveContainer" containerID="492702291ee0f8bba54f9ffadd3bf1dc83f97d05b5e361c108fd74a5937b38bc" Feb 03 12:35:41 crc kubenswrapper[4820]: I0203 12:35:41.388246 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 03 12:35:41 crc kubenswrapper[4820]: E0203 12:35:41.389181 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1df90b5-ffb2-40d7-9c24-7f90aa8cb1a2" containerName="ceilometer-central-agent" Feb 03 12:35:41 crc kubenswrapper[4820]: I0203 12:35:41.389787 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1df90b5-ffb2-40d7-9c24-7f90aa8cb1a2" containerName="ceilometer-central-agent" Feb 03 12:35:41 crc kubenswrapper[4820]: E0203 12:35:41.389876 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1df90b5-ffb2-40d7-9c24-7f90aa8cb1a2" containerName="proxy-httpd" Feb 03 12:35:41 crc kubenswrapper[4820]: I0203 12:35:41.389995 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1df90b5-ffb2-40d7-9c24-7f90aa8cb1a2" containerName="proxy-httpd" Feb 03 12:35:41 crc kubenswrapper[4820]: E0203 12:35:41.390155 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1df90b5-ffb2-40d7-9c24-7f90aa8cb1a2" containerName="sg-core" Feb 03 12:35:41 crc kubenswrapper[4820]: I0203 
12:35:41.390218 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1df90b5-ffb2-40d7-9c24-7f90aa8cb1a2" containerName="sg-core" Feb 03 12:35:41 crc kubenswrapper[4820]: E0203 12:35:41.390286 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1df90b5-ffb2-40d7-9c24-7f90aa8cb1a2" containerName="ceilometer-notification-agent" Feb 03 12:35:41 crc kubenswrapper[4820]: I0203 12:35:41.390362 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1df90b5-ffb2-40d7-9c24-7f90aa8cb1a2" containerName="ceilometer-notification-agent" Feb 03 12:35:41 crc kubenswrapper[4820]: I0203 12:35:41.390656 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1df90b5-ffb2-40d7-9c24-7f90aa8cb1a2" containerName="proxy-httpd" Feb 03 12:35:41 crc kubenswrapper[4820]: I0203 12:35:41.390744 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1df90b5-ffb2-40d7-9c24-7f90aa8cb1a2" containerName="ceilometer-notification-agent" Feb 03 12:35:41 crc kubenswrapper[4820]: I0203 12:35:41.390803 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1df90b5-ffb2-40d7-9c24-7f90aa8cb1a2" containerName="ceilometer-central-agent" Feb 03 12:35:41 crc kubenswrapper[4820]: I0203 12:35:41.390859 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1df90b5-ffb2-40d7-9c24-7f90aa8cb1a2" containerName="sg-core" Feb 03 12:35:41 crc kubenswrapper[4820]: I0203 12:35:41.393060 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 03 12:35:41 crc kubenswrapper[4820]: I0203 12:35:41.405571 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 03 12:35:41 crc kubenswrapper[4820]: I0203 12:35:41.406030 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 03 12:35:41 crc kubenswrapper[4820]: I0203 12:35:41.406297 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 03 12:35:41 crc kubenswrapper[4820]: I0203 12:35:41.406651 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 03 12:35:41 crc kubenswrapper[4820]: I0203 12:35:41.449575 4820 scope.go:117] "RemoveContainer" containerID="7c77932df498b34e70949f89a8806eb41ff4cefb199580c1ab5f36b72e32fdcc" Feb 03 12:35:41 crc kubenswrapper[4820]: I0203 12:35:41.483763 4820 scope.go:117] "RemoveContainer" containerID="214de90213c07a9c09353aaf0b1a81086e9cf3262bd484b3a57bd6c5054264c6" Feb 03 12:35:41 crc kubenswrapper[4820]: E0203 12:35:41.484328 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"214de90213c07a9c09353aaf0b1a81086e9cf3262bd484b3a57bd6c5054264c6\": container with ID starting with 214de90213c07a9c09353aaf0b1a81086e9cf3262bd484b3a57bd6c5054264c6 not found: ID does not exist" containerID="214de90213c07a9c09353aaf0b1a81086e9cf3262bd484b3a57bd6c5054264c6" Feb 03 12:35:41 crc kubenswrapper[4820]: I0203 12:35:41.484387 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"214de90213c07a9c09353aaf0b1a81086e9cf3262bd484b3a57bd6c5054264c6"} err="failed to get container status \"214de90213c07a9c09353aaf0b1a81086e9cf3262bd484b3a57bd6c5054264c6\": rpc error: code = NotFound desc = could not find container \"214de90213c07a9c09353aaf0b1a81086e9cf3262bd484b3a57bd6c5054264c6\": container with ID starting with 
214de90213c07a9c09353aaf0b1a81086e9cf3262bd484b3a57bd6c5054264c6 not found: ID does not exist" Feb 03 12:35:41 crc kubenswrapper[4820]: I0203 12:35:41.484418 4820 scope.go:117] "RemoveContainer" containerID="ba6fe612b482c19e2704048ebd92e86fa788ef5014c0754e4238178ee0ba16a2" Feb 03 12:35:41 crc kubenswrapper[4820]: E0203 12:35:41.484791 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ba6fe612b482c19e2704048ebd92e86fa788ef5014c0754e4238178ee0ba16a2\": container with ID starting with ba6fe612b482c19e2704048ebd92e86fa788ef5014c0754e4238178ee0ba16a2 not found: ID does not exist" containerID="ba6fe612b482c19e2704048ebd92e86fa788ef5014c0754e4238178ee0ba16a2" Feb 03 12:35:41 crc kubenswrapper[4820]: I0203 12:35:41.484836 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba6fe612b482c19e2704048ebd92e86fa788ef5014c0754e4238178ee0ba16a2"} err="failed to get container status \"ba6fe612b482c19e2704048ebd92e86fa788ef5014c0754e4238178ee0ba16a2\": rpc error: code = NotFound desc = could not find container \"ba6fe612b482c19e2704048ebd92e86fa788ef5014c0754e4238178ee0ba16a2\": container with ID starting with ba6fe612b482c19e2704048ebd92e86fa788ef5014c0754e4238178ee0ba16a2 not found: ID does not exist" Feb 03 12:35:41 crc kubenswrapper[4820]: I0203 12:35:41.484863 4820 scope.go:117] "RemoveContainer" containerID="492702291ee0f8bba54f9ffadd3bf1dc83f97d05b5e361c108fd74a5937b38bc" Feb 03 12:35:41 crc kubenswrapper[4820]: E0203 12:35:41.485364 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"492702291ee0f8bba54f9ffadd3bf1dc83f97d05b5e361c108fd74a5937b38bc\": container with ID starting with 492702291ee0f8bba54f9ffadd3bf1dc83f97d05b5e361c108fd74a5937b38bc not found: ID does not exist" containerID="492702291ee0f8bba54f9ffadd3bf1dc83f97d05b5e361c108fd74a5937b38bc" Feb 03 12:35:41 crc kubenswrapper[4820]: I0203 12:35:41.485394 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"492702291ee0f8bba54f9ffadd3bf1dc83f97d05b5e361c108fd74a5937b38bc"} err="failed to get container status \"492702291ee0f8bba54f9ffadd3bf1dc83f97d05b5e361c108fd74a5937b38bc\": rpc error: code = NotFound desc = could not find container \"492702291ee0f8bba54f9ffadd3bf1dc83f97d05b5e361c108fd74a5937b38bc\": container with ID starting with 492702291ee0f8bba54f9ffadd3bf1dc83f97d05b5e361c108fd74a5937b38bc not found: ID does not exist" Feb 03 12:35:41 crc kubenswrapper[4820]: I0203 12:35:41.485414 4820 scope.go:117] "RemoveContainer" containerID="7c77932df498b34e70949f89a8806eb41ff4cefb199580c1ab5f36b72e32fdcc" Feb 03 12:35:41 crc kubenswrapper[4820]: E0203 12:35:41.485767 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7c77932df498b34e70949f89a8806eb41ff4cefb199580c1ab5f36b72e32fdcc\": container with ID starting with 7c77932df498b34e70949f89a8806eb41ff4cefb199580c1ab5f36b72e32fdcc not found: ID does not exist" containerID="7c77932df498b34e70949f89a8806eb41ff4cefb199580c1ab5f36b72e32fdcc" Feb 03 12:35:41 crc kubenswrapper[4820]: I0203 12:35:41.485825 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c77932df498b34e70949f89a8806eb41ff4cefb199580c1ab5f36b72e32fdcc"} err="failed to get container status \"7c77932df498b34e70949f89a8806eb41ff4cefb199580c1ab5f36b72e32fdcc\": rpc 
error: code = NotFound desc = could not find container \"7c77932df498b34e70949f89a8806eb41ff4cefb199580c1ab5f36b72e32fdcc\": container with ID starting with 7c77932df498b34e70949f89a8806eb41ff4cefb199580c1ab5f36b72e32fdcc not found: ID does not exist" Feb 03 12:35:41 crc kubenswrapper[4820]: I0203 12:35:41.535010 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a4d47fbc-d003-4831-81f0-e520d6a44602-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a4d47fbc-d003-4831-81f0-e520d6a44602\") " pod="openstack/ceilometer-0" Feb 03 12:35:41 crc kubenswrapper[4820]: I0203 12:35:41.535139 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a4d47fbc-d003-4831-81f0-e520d6a44602-run-httpd\") pod \"ceilometer-0\" (UID: \"a4d47fbc-d003-4831-81f0-e520d6a44602\") " pod="openstack/ceilometer-0" Feb 03 12:35:41 crc kubenswrapper[4820]: I0203 12:35:41.535166 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a4d47fbc-d003-4831-81f0-e520d6a44602-config-data\") pod \"ceilometer-0\" (UID: \"a4d47fbc-d003-4831-81f0-e520d6a44602\") " pod="openstack/ceilometer-0" Feb 03 12:35:41 crc kubenswrapper[4820]: I0203 12:35:41.535282 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4d47fbc-d003-4831-81f0-e520d6a44602-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a4d47fbc-d003-4831-81f0-e520d6a44602\") " pod="openstack/ceilometer-0" Feb 03 12:35:41 crc kubenswrapper[4820]: I0203 12:35:41.535313 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a4d47fbc-d003-4831-81f0-e520d6a44602-scripts\") pod \"ceilometer-0\" (UID: \"a4d47fbc-d003-4831-81f0-e520d6a44602\") " pod="openstack/ceilometer-0" Feb 03 12:35:41 crc kubenswrapper[4820]: I0203 12:35:41.535388 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-whj5n\" (UniqueName: \"kubernetes.io/projected/a4d47fbc-d003-4831-81f0-e520d6a44602-kube-api-access-whj5n\") pod \"ceilometer-0\" (UID: \"a4d47fbc-d003-4831-81f0-e520d6a44602\") " pod="openstack/ceilometer-0" Feb 03 12:35:41 crc kubenswrapper[4820]: I0203 12:35:41.535512 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/a4d47fbc-d003-4831-81f0-e520d6a44602-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"a4d47fbc-d003-4831-81f0-e520d6a44602\") " pod="openstack/ceilometer-0" Feb 03 12:35:41 crc kubenswrapper[4820]: I0203 12:35:41.535664 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a4d47fbc-d003-4831-81f0-e520d6a44602-log-httpd\") pod \"ceilometer-0\" (UID: \"a4d47fbc-d003-4831-81f0-e520d6a44602\") " pod="openstack/ceilometer-0" Feb 03 12:35:41 crc kubenswrapper[4820]: I0203 12:35:41.637504 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/a4d47fbc-d003-4831-81f0-e520d6a44602-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: 
\"a4d47fbc-d003-4831-81f0-e520d6a44602\") " pod="openstack/ceilometer-0" Feb 03 12:35:41 crc kubenswrapper[4820]: I0203 12:35:41.637636 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a4d47fbc-d003-4831-81f0-e520d6a44602-log-httpd\") pod \"ceilometer-0\" (UID: \"a4d47fbc-d003-4831-81f0-e520d6a44602\") " pod="openstack/ceilometer-0" Feb 03 12:35:41 crc kubenswrapper[4820]: I0203 12:35:41.637697 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a4d47fbc-d003-4831-81f0-e520d6a44602-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a4d47fbc-d003-4831-81f0-e520d6a44602\") " pod="openstack/ceilometer-0" Feb 03 12:35:41 crc kubenswrapper[4820]: I0203 12:35:41.637728 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a4d47fbc-d003-4831-81f0-e520d6a44602-run-httpd\") pod \"ceilometer-0\" (UID: \"a4d47fbc-d003-4831-81f0-e520d6a44602\") " pod="openstack/ceilometer-0" Feb 03 12:35:41 crc kubenswrapper[4820]: I0203 12:35:41.637748 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a4d47fbc-d003-4831-81f0-e520d6a44602-config-data\") pod \"ceilometer-0\" (UID: \"a4d47fbc-d003-4831-81f0-e520d6a44602\") " pod="openstack/ceilometer-0" Feb 03 12:35:41 crc kubenswrapper[4820]: I0203 12:35:41.637790 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4d47fbc-d003-4831-81f0-e520d6a44602-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a4d47fbc-d003-4831-81f0-e520d6a44602\") " pod="openstack/ceilometer-0" Feb 03 12:35:41 crc kubenswrapper[4820]: I0203 12:35:41.637815 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a4d47fbc-d003-4831-81f0-e520d6a44602-scripts\") pod \"ceilometer-0\" (UID: \"a4d47fbc-d003-4831-81f0-e520d6a44602\") " pod="openstack/ceilometer-0" Feb 03 12:35:41 crc kubenswrapper[4820]: I0203 12:35:41.637835 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-whj5n\" (UniqueName: \"kubernetes.io/projected/a4d47fbc-d003-4831-81f0-e520d6a44602-kube-api-access-whj5n\") pod \"ceilometer-0\" (UID: \"a4d47fbc-d003-4831-81f0-e520d6a44602\") " pod="openstack/ceilometer-0" Feb 03 12:35:41 crc kubenswrapper[4820]: I0203 12:35:41.638315 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a4d47fbc-d003-4831-81f0-e520d6a44602-log-httpd\") pod \"ceilometer-0\" (UID: \"a4d47fbc-d003-4831-81f0-e520d6a44602\") " pod="openstack/ceilometer-0" Feb 03 12:35:41 crc kubenswrapper[4820]: I0203 12:35:41.638654 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a4d47fbc-d003-4831-81f0-e520d6a44602-run-httpd\") pod \"ceilometer-0\" (UID: \"a4d47fbc-d003-4831-81f0-e520d6a44602\") " pod="openstack/ceilometer-0" Feb 03 12:35:41 crc kubenswrapper[4820]: I0203 12:35:41.642492 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/a4d47fbc-d003-4831-81f0-e520d6a44602-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"a4d47fbc-d003-4831-81f0-e520d6a44602\") " 
pod="openstack/ceilometer-0" Feb 03 12:35:41 crc kubenswrapper[4820]: I0203 12:35:41.642620 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4d47fbc-d003-4831-81f0-e520d6a44602-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a4d47fbc-d003-4831-81f0-e520d6a44602\") " pod="openstack/ceilometer-0" Feb 03 12:35:41 crc kubenswrapper[4820]: I0203 12:35:41.642993 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a4d47fbc-d003-4831-81f0-e520d6a44602-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a4d47fbc-d003-4831-81f0-e520d6a44602\") " pod="openstack/ceilometer-0" Feb 03 12:35:41 crc kubenswrapper[4820]: I0203 12:35:41.643234 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a4d47fbc-d003-4831-81f0-e520d6a44602-config-data\") pod \"ceilometer-0\" (UID: \"a4d47fbc-d003-4831-81f0-e520d6a44602\") " pod="openstack/ceilometer-0" Feb 03 12:35:41 crc kubenswrapper[4820]: I0203 12:35:41.645563 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a4d47fbc-d003-4831-81f0-e520d6a44602-scripts\") pod \"ceilometer-0\" (UID: \"a4d47fbc-d003-4831-81f0-e520d6a44602\") " pod="openstack/ceilometer-0" Feb 03 12:35:41 crc kubenswrapper[4820]: I0203 12:35:41.657620 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-whj5n\" (UniqueName: \"kubernetes.io/projected/a4d47fbc-d003-4831-81f0-e520d6a44602-kube-api-access-whj5n\") pod \"ceilometer-0\" (UID: \"a4d47fbc-d003-4831-81f0-e520d6a44602\") " pod="openstack/ceilometer-0" Feb 03 12:35:41 crc kubenswrapper[4820]: I0203 12:35:41.779594 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 03 12:35:42 crc kubenswrapper[4820]: I0203 12:35:42.457184 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 03 12:35:42 crc kubenswrapper[4820]: W0203 12:35:42.463711 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda4d47fbc_d003_4831_81f0_e520d6a44602.slice/crio-87d02b08c793b395765cbe147867162fe97a1224f84241481343e2f703ba703b WatchSource:0}: Error finding container 87d02b08c793b395765cbe147867162fe97a1224f84241481343e2f703ba703b: Status 404 returned error can't find the container with id 87d02b08c793b395765cbe147867162fe97a1224f84241481343e2f703ba703b Feb 03 12:35:42 crc kubenswrapper[4820]: I0203 12:35:42.466432 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 03 12:35:43 crc kubenswrapper[4820]: I0203 12:35:43.252262 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c1df90b5-ffb2-40d7-9c24-7f90aa8cb1a2" path="/var/lib/kubelet/pods/c1df90b5-ffb2-40d7-9c24-7f90aa8cb1a2/volumes" Feb 03 12:35:43 crc kubenswrapper[4820]: I0203 12:35:43.439251 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a4d47fbc-d003-4831-81f0-e520d6a44602","Type":"ContainerStarted","Data":"87d02b08c793b395765cbe147867162fe97a1224f84241481343e2f703ba703b"} Feb 03 12:35:43 crc kubenswrapper[4820]: I0203 12:35:43.703335 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Feb 03 12:35:44 crc kubenswrapper[4820]: I0203 12:35:44.145107 4820 scope.go:117] "RemoveContainer" containerID="3cd22393026f08f47789b9666f306edb2325c4bd0fbfd405ee1636bfdfa661d3" Feb 03 12:35:44 crc kubenswrapper[4820]: E0203 12:35:44.146426 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qj7xr_openshift-machine-config-operator(2c02def6-29f2-448e-80ec-0c8ee290f053)\"" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" Feb 03 12:35:44 crc kubenswrapper[4820]: I0203 12:35:44.418868 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-pl7pt"] Feb 03 12:35:44 crc kubenswrapper[4820]: I0203 12:35:44.420435 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-pl7pt" Feb 03 12:35:44 crc kubenswrapper[4820]: I0203 12:35:44.426444 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Feb 03 12:35:44 crc kubenswrapper[4820]: I0203 12:35:44.426740 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Feb 03 12:35:44 crc kubenswrapper[4820]: I0203 12:35:44.455290 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-pl7pt"] Feb 03 12:35:44 crc kubenswrapper[4820]: I0203 12:35:44.469752 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff7cd9cd-238d-4f55-87f8-6d4f78e93e34-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-pl7pt\" (UID: \"ff7cd9cd-238d-4f55-87f8-6d4f78e93e34\") " pod="openstack/nova-cell1-cell-mapping-pl7pt" Feb 03 12:35:44 crc kubenswrapper[4820]: I0203 12:35:44.469827 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ff7cd9cd-238d-4f55-87f8-6d4f78e93e34-scripts\") pod \"nova-cell1-cell-mapping-pl7pt\" (UID: \"ff7cd9cd-238d-4f55-87f8-6d4f78e93e34\") " pod="openstack/nova-cell1-cell-mapping-pl7pt" Feb 03 12:35:44 crc kubenswrapper[4820]: I0203 12:35:44.469973 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff7cd9cd-238d-4f55-87f8-6d4f78e93e34-config-data\") pod \"nova-cell1-cell-mapping-pl7pt\" (UID: \"ff7cd9cd-238d-4f55-87f8-6d4f78e93e34\") " pod="openstack/nova-cell1-cell-mapping-pl7pt" Feb 03 12:35:44 crc kubenswrapper[4820]: I0203 12:35:44.470463 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wt2hv\" (UniqueName: \"kubernetes.io/projected/ff7cd9cd-238d-4f55-87f8-6d4f78e93e34-kube-api-access-wt2hv\") pod \"nova-cell1-cell-mapping-pl7pt\" (UID: \"ff7cd9cd-238d-4f55-87f8-6d4f78e93e34\") " pod="openstack/nova-cell1-cell-mapping-pl7pt" Feb 03 12:35:44 crc kubenswrapper[4820]: I0203 12:35:44.488704 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a4d47fbc-d003-4831-81f0-e520d6a44602","Type":"ContainerStarted","Data":"fecc5af700498b446233045a239eaead6fa4571408cf67dd2867b5c7f3424988"} Feb 03 12:35:44 crc kubenswrapper[4820]: I0203 12:35:44.494647 4820 generic.go:334] "Generic (PLEG): container finished" podID="17c371f7-f032-4444-8d4b-1183a224c7b0" containerID="76995f196246a725064eaf869384b250078a17273f52f37f37f976ac18b1ddc1" exitCode=137 Feb 03 12:35:44 crc kubenswrapper[4820]: I0203 12:35:44.494710 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5fdc8588b4-jtjr8" event={"ID":"17c371f7-f032-4444-8d4b-1183a224c7b0","Type":"ContainerDied","Data":"76995f196246a725064eaf869384b250078a17273f52f37f37f976ac18b1ddc1"} Feb 03 12:35:44 crc kubenswrapper[4820]: I0203 12:35:44.494744 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5fdc8588b4-jtjr8" event={"ID":"17c371f7-f032-4444-8d4b-1183a224c7b0","Type":"ContainerStarted","Data":"9bb0129c1c7f5e8bb1f63d803b792b6c1cd2c7a9cf979aa536548b3eb28e5f73"} Feb 03 12:35:44 crc kubenswrapper[4820]: I0203 12:35:44.494762 4820 scope.go:117] "RemoveContainer" 
containerID="0f246c95365efface890ae91ed527bfb457eda5dc7b27fa8cda294f51a37e248" Feb 03 12:35:44 crc kubenswrapper[4820]: I0203 12:35:44.572959 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff7cd9cd-238d-4f55-87f8-6d4f78e93e34-config-data\") pod \"nova-cell1-cell-mapping-pl7pt\" (UID: \"ff7cd9cd-238d-4f55-87f8-6d4f78e93e34\") " pod="openstack/nova-cell1-cell-mapping-pl7pt" Feb 03 12:35:44 crc kubenswrapper[4820]: I0203 12:35:44.573216 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wt2hv\" (UniqueName: \"kubernetes.io/projected/ff7cd9cd-238d-4f55-87f8-6d4f78e93e34-kube-api-access-wt2hv\") pod \"nova-cell1-cell-mapping-pl7pt\" (UID: \"ff7cd9cd-238d-4f55-87f8-6d4f78e93e34\") " pod="openstack/nova-cell1-cell-mapping-pl7pt" Feb 03 12:35:44 crc kubenswrapper[4820]: I0203 12:35:44.573287 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff7cd9cd-238d-4f55-87f8-6d4f78e93e34-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-pl7pt\" (UID: \"ff7cd9cd-238d-4f55-87f8-6d4f78e93e34\") " pod="openstack/nova-cell1-cell-mapping-pl7pt" Feb 03 12:35:44 crc kubenswrapper[4820]: I0203 12:35:44.573341 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ff7cd9cd-238d-4f55-87f8-6d4f78e93e34-scripts\") pod \"nova-cell1-cell-mapping-pl7pt\" (UID: \"ff7cd9cd-238d-4f55-87f8-6d4f78e93e34\") " pod="openstack/nova-cell1-cell-mapping-pl7pt" Feb 03 12:35:44 crc kubenswrapper[4820]: I0203 12:35:44.596007 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff7cd9cd-238d-4f55-87f8-6d4f78e93e34-config-data\") pod \"nova-cell1-cell-mapping-pl7pt\" (UID: \"ff7cd9cd-238d-4f55-87f8-6d4f78e93e34\") " pod="openstack/nova-cell1-cell-mapping-pl7pt" Feb 03 12:35:44 crc kubenswrapper[4820]: I0203 12:35:44.595885 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ff7cd9cd-238d-4f55-87f8-6d4f78e93e34-scripts\") pod \"nova-cell1-cell-mapping-pl7pt\" (UID: \"ff7cd9cd-238d-4f55-87f8-6d4f78e93e34\") " pod="openstack/nova-cell1-cell-mapping-pl7pt" Feb 03 12:35:44 crc kubenswrapper[4820]: I0203 12:35:44.602757 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff7cd9cd-238d-4f55-87f8-6d4f78e93e34-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-pl7pt\" (UID: \"ff7cd9cd-238d-4f55-87f8-6d4f78e93e34\") " pod="openstack/nova-cell1-cell-mapping-pl7pt" Feb 03 12:35:44 crc kubenswrapper[4820]: I0203 12:35:44.604996 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wt2hv\" (UniqueName: \"kubernetes.io/projected/ff7cd9cd-238d-4f55-87f8-6d4f78e93e34-kube-api-access-wt2hv\") pod \"nova-cell1-cell-mapping-pl7pt\" (UID: \"ff7cd9cd-238d-4f55-87f8-6d4f78e93e34\") " pod="openstack/nova-cell1-cell-mapping-pl7pt" Feb 03 12:35:44 crc kubenswrapper[4820]: I0203 12:35:44.767508 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-pl7pt" Feb 03 12:35:45 crc kubenswrapper[4820]: I0203 12:35:45.345233 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-pl7pt"] Feb 03 12:35:45 crc kubenswrapper[4820]: I0203 12:35:45.548666 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a4d47fbc-d003-4831-81f0-e520d6a44602","Type":"ContainerStarted","Data":"5c8d64c2bc98ffa16d4e444310a28ee0b766fa8ba3f048ce42692fd2f6c78200"} Feb 03 12:35:45 crc kubenswrapper[4820]: I0203 12:35:45.554778 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-pl7pt" event={"ID":"ff7cd9cd-238d-4f55-87f8-6d4f78e93e34","Type":"ContainerStarted","Data":"a3edd35a86f99502eff357c2666d89942c4c01c3bfa8a770782603759d3a858b"} Feb 03 12:35:46 crc kubenswrapper[4820]: I0203 12:35:46.583553 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a4d47fbc-d003-4831-81f0-e520d6a44602","Type":"ContainerStarted","Data":"2765701c58e8af789fd970a737facfa2928f9f8296cf86663a9fdc4aaa69ace2"} Feb 03 12:35:46 crc kubenswrapper[4820]: I0203 12:35:46.586617 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-pl7pt" event={"ID":"ff7cd9cd-238d-4f55-87f8-6d4f78e93e34","Type":"ContainerStarted","Data":"b265cd32d381c2044cc3e1ec2d613885bb76a81ac4e65500e57126961cc3884f"} Feb 03 12:35:46 crc kubenswrapper[4820]: I0203 12:35:46.611826 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-pl7pt" podStartSLOduration=2.611798031 podStartE2EDuration="2.611798031s" podCreationTimestamp="2026-02-03 12:35:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:35:46.604134744 +0000 UTC m=+1864.127210608" watchObservedRunningTime="2026-02-03 12:35:46.611798031 +0000 UTC m=+1864.134873895" Feb 03 12:35:47 crc kubenswrapper[4820]: I0203 12:35:47.641624 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 03 12:35:47 crc kubenswrapper[4820]: I0203 12:35:47.642736 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 03 12:35:48 crc kubenswrapper[4820]: I0203 12:35:48.613829 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a4d47fbc-d003-4831-81f0-e520d6a44602","Type":"ContainerStarted","Data":"501ee7872b55b08a6c826326153228e89f0aab9e4fb4c707b7ba0ba07f77376f"} Feb 03 12:35:48 crc kubenswrapper[4820]: I0203 12:35:48.615105 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 03 12:35:48 crc kubenswrapper[4820]: I0203 12:35:48.651014 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.5119897829999998 podStartE2EDuration="7.650993213s" podCreationTimestamp="2026-02-03 12:35:41 +0000 UTC" firstStartedPulling="2026-02-03 12:35:42.469527188 +0000 UTC m=+1859.992603052" lastFinishedPulling="2026-02-03 12:35:47.608530608 +0000 UTC m=+1865.131606482" observedRunningTime="2026-02-03 12:35:48.640604231 +0000 UTC m=+1866.163680115" watchObservedRunningTime="2026-02-03 12:35:48.650993213 +0000 UTC m=+1866.174069077" Feb 03 12:35:48 crc kubenswrapper[4820]: I0203 12:35:48.727164 4820 prober.go:107] "Probe failed" 
probeType="Startup" pod="openstack/nova-api-0" podUID="71aba6f9-1efc-4b39-8a61-444e7399c8e0" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.225:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 03 12:35:48 crc kubenswrapper[4820]: I0203 12:35:48.727217 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="71aba6f9-1efc-4b39-8a61-444e7399c8e0" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.225:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 03 12:35:52 crc kubenswrapper[4820]: I0203 12:35:52.674734 4820 generic.go:334] "Generic (PLEG): container finished" podID="ff7cd9cd-238d-4f55-87f8-6d4f78e93e34" containerID="b265cd32d381c2044cc3e1ec2d613885bb76a81ac4e65500e57126961cc3884f" exitCode=0 Feb 03 12:35:52 crc kubenswrapper[4820]: I0203 12:35:52.674856 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-pl7pt" event={"ID":"ff7cd9cd-238d-4f55-87f8-6d4f78e93e34","Type":"ContainerDied","Data":"b265cd32d381c2044cc3e1ec2d613885bb76a81ac4e65500e57126961cc3884f"} Feb 03 12:35:53 crc kubenswrapper[4820]: I0203 12:35:53.127604 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-5fdc8588b4-jtjr8" Feb 03 12:35:53 crc kubenswrapper[4820]: I0203 12:35:53.127793 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-5fdc8588b4-jtjr8" Feb 03 12:35:54 crc kubenswrapper[4820]: I0203 12:35:54.095197 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-pl7pt" Feb 03 12:35:54 crc kubenswrapper[4820]: I0203 12:35:54.134753 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wt2hv\" (UniqueName: \"kubernetes.io/projected/ff7cd9cd-238d-4f55-87f8-6d4f78e93e34-kube-api-access-wt2hv\") pod \"ff7cd9cd-238d-4f55-87f8-6d4f78e93e34\" (UID: \"ff7cd9cd-238d-4f55-87f8-6d4f78e93e34\") " Feb 03 12:35:54 crc kubenswrapper[4820]: I0203 12:35:54.134836 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff7cd9cd-238d-4f55-87f8-6d4f78e93e34-config-data\") pod \"ff7cd9cd-238d-4f55-87f8-6d4f78e93e34\" (UID: \"ff7cd9cd-238d-4f55-87f8-6d4f78e93e34\") " Feb 03 12:35:54 crc kubenswrapper[4820]: I0203 12:35:54.135126 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff7cd9cd-238d-4f55-87f8-6d4f78e93e34-combined-ca-bundle\") pod \"ff7cd9cd-238d-4f55-87f8-6d4f78e93e34\" (UID: \"ff7cd9cd-238d-4f55-87f8-6d4f78e93e34\") " Feb 03 12:35:54 crc kubenswrapper[4820]: I0203 12:35:54.135204 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ff7cd9cd-238d-4f55-87f8-6d4f78e93e34-scripts\") pod \"ff7cd9cd-238d-4f55-87f8-6d4f78e93e34\" (UID: \"ff7cd9cd-238d-4f55-87f8-6d4f78e93e34\") " Feb 03 12:35:54 crc kubenswrapper[4820]: I0203 12:35:54.149221 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff7cd9cd-238d-4f55-87f8-6d4f78e93e34-scripts" (OuterVolumeSpecName: "scripts") pod "ff7cd9cd-238d-4f55-87f8-6d4f78e93e34" (UID: "ff7cd9cd-238d-4f55-87f8-6d4f78e93e34"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:35:54 crc kubenswrapper[4820]: I0203 12:35:54.204206 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff7cd9cd-238d-4f55-87f8-6d4f78e93e34-kube-api-access-wt2hv" (OuterVolumeSpecName: "kube-api-access-wt2hv") pod "ff7cd9cd-238d-4f55-87f8-6d4f78e93e34" (UID: "ff7cd9cd-238d-4f55-87f8-6d4f78e93e34"). InnerVolumeSpecName "kube-api-access-wt2hv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:35:54 crc kubenswrapper[4820]: I0203 12:35:54.226357 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff7cd9cd-238d-4f55-87f8-6d4f78e93e34-config-data" (OuterVolumeSpecName: "config-data") pod "ff7cd9cd-238d-4f55-87f8-6d4f78e93e34" (UID: "ff7cd9cd-238d-4f55-87f8-6d4f78e93e34"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:35:54 crc kubenswrapper[4820]: I0203 12:35:54.237668 4820 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ff7cd9cd-238d-4f55-87f8-6d4f78e93e34-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 12:35:54 crc kubenswrapper[4820]: I0203 12:35:54.237714 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wt2hv\" (UniqueName: \"kubernetes.io/projected/ff7cd9cd-238d-4f55-87f8-6d4f78e93e34-kube-api-access-wt2hv\") on node \"crc\" DevicePath \"\"" Feb 03 12:35:54 crc kubenswrapper[4820]: I0203 12:35:54.237730 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff7cd9cd-238d-4f55-87f8-6d4f78e93e34-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 12:35:54 crc kubenswrapper[4820]: I0203 12:35:54.252241 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff7cd9cd-238d-4f55-87f8-6d4f78e93e34-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ff7cd9cd-238d-4f55-87f8-6d4f78e93e34" (UID: "ff7cd9cd-238d-4f55-87f8-6d4f78e93e34"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:35:54 crc kubenswrapper[4820]: I0203 12:35:54.341648 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff7cd9cd-238d-4f55-87f8-6d4f78e93e34-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 12:35:54 crc kubenswrapper[4820]: I0203 12:35:54.699166 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-pl7pt" Feb 03 12:35:54 crc kubenswrapper[4820]: I0203 12:35:54.699373 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-pl7pt" event={"ID":"ff7cd9cd-238d-4f55-87f8-6d4f78e93e34","Type":"ContainerDied","Data":"a3edd35a86f99502eff357c2666d89942c4c01c3bfa8a770782603759d3a858b"} Feb 03 12:35:54 crc kubenswrapper[4820]: I0203 12:35:54.699577 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a3edd35a86f99502eff357c2666d89942c4c01c3bfa8a770782603759d3a858b" Feb 03 12:35:54 crc kubenswrapper[4820]: I0203 12:35:54.702089 4820 generic.go:334] "Generic (PLEG): container finished" podID="308562dd-6078-4c1c-a4e0-c01a60a2d81d" containerID="590da96327763a1ecf6806acf6e0287da04147012217471081285d16fb887d10" exitCode=137 Feb 03 12:35:54 crc kubenswrapper[4820]: I0203 12:35:54.702120 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-68b4df5bdd-tdb9h" event={"ID":"308562dd-6078-4c1c-a4e0-c01a60a2d81d","Type":"ContainerDied","Data":"590da96327763a1ecf6806acf6e0287da04147012217471081285d16fb887d10"} Feb 03 12:35:54 crc kubenswrapper[4820]: I0203 12:35:54.702175 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-68b4df5bdd-tdb9h" event={"ID":"308562dd-6078-4c1c-a4e0-c01a60a2d81d","Type":"ContainerStarted","Data":"06ce8920ee65992b1012abe79993fa4de09b6a60cae71b518ca9968eeaa799d4"} Feb 03 12:35:54 crc kubenswrapper[4820]: I0203 12:35:54.702207 4820 scope.go:117] "RemoveContainer" containerID="9c7a577e87b3e83c7e349bf9ccd38e1f5613ee686a7353fa8aac276143a6016b" Feb 03 12:35:54 crc kubenswrapper[4820]: I0203 12:35:54.810961 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 03 12:35:54 crc kubenswrapper[4820]: I0203 12:35:54.811227 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="e5a80e13-273e-46e5-b11c-7b864bd07a08" containerName="nova-scheduler-scheduler" containerID="cri-o://599ce0b8eed1170f88fc257f9ca3a0d46de9544b38b77d5f447374e330e5d9a8" gracePeriod=30 Feb 03 12:35:54 crc kubenswrapper[4820]: I0203 12:35:54.826984 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 03 12:35:54 crc kubenswrapper[4820]: I0203 12:35:54.827321 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="71aba6f9-1efc-4b39-8a61-444e7399c8e0" containerName="nova-api-log" containerID="cri-o://f8dd809a6e8a2dac45ebdf4df81d0fe773c4c628d44de3dc34dda8887a5be870" gracePeriod=30 Feb 03 12:35:54 crc kubenswrapper[4820]: I0203 12:35:54.827968 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="71aba6f9-1efc-4b39-8a61-444e7399c8e0" containerName="nova-api-api" containerID="cri-o://55a702fd39ca07a9b769d1ef97bede903b73b7933b71722c4c167ee70d06a45a" gracePeriod=30 Feb 03 12:35:55 crc kubenswrapper[4820]: I0203 12:35:55.143182 4820 scope.go:117] "RemoveContainer" containerID="3cd22393026f08f47789b9666f306edb2325c4bd0fbfd405ee1636bfdfa661d3" Feb 03 12:35:55 crc kubenswrapper[4820]: E0203 12:35:55.143500 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-qj7xr_openshift-machine-config-operator(2c02def6-29f2-448e-80ec-0c8ee290f053)\"" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" Feb 03 12:35:55 crc kubenswrapper[4820]: I0203 12:35:55.714882 4820 generic.go:334] "Generic (PLEG): container finished" podID="71aba6f9-1efc-4b39-8a61-444e7399c8e0" containerID="f8dd809a6e8a2dac45ebdf4df81d0fe773c4c628d44de3dc34dda8887a5be870" exitCode=143 Feb 03 12:35:55 crc kubenswrapper[4820]: I0203 12:35:55.715098 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"71aba6f9-1efc-4b39-8a61-444e7399c8e0","Type":"ContainerDied","Data":"f8dd809a6e8a2dac45ebdf4df81d0fe773c4c628d44de3dc34dda8887a5be870"} Feb 03 12:35:56 crc kubenswrapper[4820]: E0203 12:35:56.324507 4820 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 599ce0b8eed1170f88fc257f9ca3a0d46de9544b38b77d5f447374e330e5d9a8 is running failed: container process not found" containerID="599ce0b8eed1170f88fc257f9ca3a0d46de9544b38b77d5f447374e330e5d9a8" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 03 12:35:56 crc kubenswrapper[4820]: E0203 12:35:56.326698 4820 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 599ce0b8eed1170f88fc257f9ca3a0d46de9544b38b77d5f447374e330e5d9a8 is running failed: container process not found" containerID="599ce0b8eed1170f88fc257f9ca3a0d46de9544b38b77d5f447374e330e5d9a8" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 03 12:35:56 crc kubenswrapper[4820]: E0203 12:35:56.327169 4820 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 599ce0b8eed1170f88fc257f9ca3a0d46de9544b38b77d5f447374e330e5d9a8 is running failed: container process not found" containerID="599ce0b8eed1170f88fc257f9ca3a0d46de9544b38b77d5f447374e330e5d9a8" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 03 12:35:56 crc kubenswrapper[4820]: E0203 12:35:56.327264 4820 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 599ce0b8eed1170f88fc257f9ca3a0d46de9544b38b77d5f447374e330e5d9a8 is running failed: container process not found" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="e5a80e13-273e-46e5-b11c-7b864bd07a08" containerName="nova-scheduler-scheduler" Feb 03 12:35:56 crc kubenswrapper[4820]: I0203 12:35:56.630403 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 03 12:35:56 crc kubenswrapper[4820]: I0203 12:35:56.716328 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5a80e13-273e-46e5-b11c-7b864bd07a08-combined-ca-bundle\") pod \"e5a80e13-273e-46e5-b11c-7b864bd07a08\" (UID: \"e5a80e13-273e-46e5-b11c-7b864bd07a08\") " Feb 03 12:35:56 crc kubenswrapper[4820]: I0203 12:35:56.716473 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5a80e13-273e-46e5-b11c-7b864bd07a08-config-data\") pod \"e5a80e13-273e-46e5-b11c-7b864bd07a08\" (UID: \"e5a80e13-273e-46e5-b11c-7b864bd07a08\") " Feb 03 12:35:56 crc kubenswrapper[4820]: I0203 12:35:56.732626 4820 generic.go:334] "Generic (PLEG): container finished" podID="e5a80e13-273e-46e5-b11c-7b864bd07a08" containerID="599ce0b8eed1170f88fc257f9ca3a0d46de9544b38b77d5f447374e330e5d9a8" exitCode=0 Feb 03 12:35:56 crc kubenswrapper[4820]: I0203 12:35:56.732681 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"e5a80e13-273e-46e5-b11c-7b864bd07a08","Type":"ContainerDied","Data":"599ce0b8eed1170f88fc257f9ca3a0d46de9544b38b77d5f447374e330e5d9a8"} Feb 03 12:35:56 crc kubenswrapper[4820]: I0203 12:35:56.732720 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"e5a80e13-273e-46e5-b11c-7b864bd07a08","Type":"ContainerDied","Data":"4c8fa0ff4f5828fa07e307b53d01cc1f00256a2dedb6ac45fa572653ce635424"} Feb 03 12:35:56 crc kubenswrapper[4820]: I0203 12:35:56.732714 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 03 12:35:56 crc kubenswrapper[4820]: I0203 12:35:56.732738 4820 scope.go:117] "RemoveContainer" containerID="599ce0b8eed1170f88fc257f9ca3a0d46de9544b38b77d5f447374e330e5d9a8" Feb 03 12:35:56 crc kubenswrapper[4820]: I0203 12:35:56.753582 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5a80e13-273e-46e5-b11c-7b864bd07a08-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e5a80e13-273e-46e5-b11c-7b864bd07a08" (UID: "e5a80e13-273e-46e5-b11c-7b864bd07a08"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:35:56 crc kubenswrapper[4820]: I0203 12:35:56.767269 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5a80e13-273e-46e5-b11c-7b864bd07a08-config-data" (OuterVolumeSpecName: "config-data") pod "e5a80e13-273e-46e5-b11c-7b864bd07a08" (UID: "e5a80e13-273e-46e5-b11c-7b864bd07a08"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:35:56 crc kubenswrapper[4820]: I0203 12:35:56.772299 4820 scope.go:117] "RemoveContainer" containerID="599ce0b8eed1170f88fc257f9ca3a0d46de9544b38b77d5f447374e330e5d9a8" Feb 03 12:35:56 crc kubenswrapper[4820]: E0203 12:35:56.773009 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"599ce0b8eed1170f88fc257f9ca3a0d46de9544b38b77d5f447374e330e5d9a8\": container with ID starting with 599ce0b8eed1170f88fc257f9ca3a0d46de9544b38b77d5f447374e330e5d9a8 not found: ID does not exist" containerID="599ce0b8eed1170f88fc257f9ca3a0d46de9544b38b77d5f447374e330e5d9a8" Feb 03 12:35:56 crc kubenswrapper[4820]: I0203 12:35:56.773135 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"599ce0b8eed1170f88fc257f9ca3a0d46de9544b38b77d5f447374e330e5d9a8"} err="failed to get container status \"599ce0b8eed1170f88fc257f9ca3a0d46de9544b38b77d5f447374e330e5d9a8\": rpc error: code = NotFound desc = could not find container \"599ce0b8eed1170f88fc257f9ca3a0d46de9544b38b77d5f447374e330e5d9a8\": container with ID starting with 599ce0b8eed1170f88fc257f9ca3a0d46de9544b38b77d5f447374e330e5d9a8 not found: ID does not exist" Feb 03 12:35:56 crc kubenswrapper[4820]: I0203 12:35:56.818639 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hwq7j\" (UniqueName: \"kubernetes.io/projected/e5a80e13-273e-46e5-b11c-7b864bd07a08-kube-api-access-hwq7j\") pod \"e5a80e13-273e-46e5-b11c-7b864bd07a08\" (UID: \"e5a80e13-273e-46e5-b11c-7b864bd07a08\") " Feb 03 12:35:56 crc kubenswrapper[4820]: I0203 12:35:56.819344 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5a80e13-273e-46e5-b11c-7b864bd07a08-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 12:35:56 crc kubenswrapper[4820]: I0203 12:35:56.819452 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5a80e13-273e-46e5-b11c-7b864bd07a08-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 12:35:56 crc kubenswrapper[4820]: I0203 12:35:56.821836 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e5a80e13-273e-46e5-b11c-7b864bd07a08-kube-api-access-hwq7j" (OuterVolumeSpecName: "kube-api-access-hwq7j") pod "e5a80e13-273e-46e5-b11c-7b864bd07a08" (UID: "e5a80e13-273e-46e5-b11c-7b864bd07a08"). InnerVolumeSpecName "kube-api-access-hwq7j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:35:56 crc kubenswrapper[4820]: I0203 12:35:56.921632 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hwq7j\" (UniqueName: \"kubernetes.io/projected/e5a80e13-273e-46e5-b11c-7b864bd07a08-kube-api-access-hwq7j\") on node \"crc\" DevicePath \"\"" Feb 03 12:35:57 crc kubenswrapper[4820]: I0203 12:35:57.077221 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 03 12:35:57 crc kubenswrapper[4820]: I0203 12:35:57.090157 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Feb 03 12:35:57 crc kubenswrapper[4820]: I0203 12:35:57.103030 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 03 12:35:57 crc kubenswrapper[4820]: E0203 12:35:57.103710 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5a80e13-273e-46e5-b11c-7b864bd07a08" containerName="nova-scheduler-scheduler" Feb 03 12:35:57 crc kubenswrapper[4820]: I0203 12:35:57.103743 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5a80e13-273e-46e5-b11c-7b864bd07a08" containerName="nova-scheduler-scheduler" Feb 03 12:35:57 crc kubenswrapper[4820]: E0203 12:35:57.103803 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff7cd9cd-238d-4f55-87f8-6d4f78e93e34" containerName="nova-manage" Feb 03 12:35:57 crc kubenswrapper[4820]: I0203 12:35:57.103813 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff7cd9cd-238d-4f55-87f8-6d4f78e93e34" containerName="nova-manage" Feb 03 12:35:57 crc kubenswrapper[4820]: I0203 12:35:57.104089 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5a80e13-273e-46e5-b11c-7b864bd07a08" containerName="nova-scheduler-scheduler" Feb 03 12:35:57 crc kubenswrapper[4820]: I0203 12:35:57.104112 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff7cd9cd-238d-4f55-87f8-6d4f78e93e34" containerName="nova-manage" Feb 03 12:35:57 crc kubenswrapper[4820]: I0203 12:35:57.104971 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 03 12:35:57 crc kubenswrapper[4820]: I0203 12:35:57.108316 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 03 12:35:57 crc kubenswrapper[4820]: I0203 12:35:57.113290 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 03 12:35:57 crc kubenswrapper[4820]: I0203 12:35:57.125907 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dff15ab3-eace-455f-b413-0acd29aa3cb5-config-data\") pod \"nova-scheduler-0\" (UID: \"dff15ab3-eace-455f-b413-0acd29aa3cb5\") " pod="openstack/nova-scheduler-0" Feb 03 12:35:57 crc kubenswrapper[4820]: I0203 12:35:57.126059 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sbbmq\" (UniqueName: \"kubernetes.io/projected/dff15ab3-eace-455f-b413-0acd29aa3cb5-kube-api-access-sbbmq\") pod \"nova-scheduler-0\" (UID: \"dff15ab3-eace-455f-b413-0acd29aa3cb5\") " pod="openstack/nova-scheduler-0" Feb 03 12:35:57 crc kubenswrapper[4820]: I0203 12:35:57.126113 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dff15ab3-eace-455f-b413-0acd29aa3cb5-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"dff15ab3-eace-455f-b413-0acd29aa3cb5\") " pod="openstack/nova-scheduler-0" Feb 03 12:35:57 crc kubenswrapper[4820]: I0203 12:35:57.161925 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e5a80e13-273e-46e5-b11c-7b864bd07a08" path="/var/lib/kubelet/pods/e5a80e13-273e-46e5-b11c-7b864bd07a08/volumes" Feb 03 12:35:57 crc kubenswrapper[4820]: I0203 12:35:57.228187 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dff15ab3-eace-455f-b413-0acd29aa3cb5-config-data\") pod \"nova-scheduler-0\" (UID: \"dff15ab3-eace-455f-b413-0acd29aa3cb5\") " pod="openstack/nova-scheduler-0" Feb 03 12:35:57 crc kubenswrapper[4820]: I0203 12:35:57.228360 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sbbmq\" (UniqueName: \"kubernetes.io/projected/dff15ab3-eace-455f-b413-0acd29aa3cb5-kube-api-access-sbbmq\") pod \"nova-scheduler-0\" (UID: \"dff15ab3-eace-455f-b413-0acd29aa3cb5\") " pod="openstack/nova-scheduler-0" Feb 03 12:35:57 crc kubenswrapper[4820]: I0203 12:35:57.228409 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dff15ab3-eace-455f-b413-0acd29aa3cb5-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"dff15ab3-eace-455f-b413-0acd29aa3cb5\") " pod="openstack/nova-scheduler-0" Feb 03 12:35:57 crc kubenswrapper[4820]: I0203 12:35:57.236023 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dff15ab3-eace-455f-b413-0acd29aa3cb5-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"dff15ab3-eace-455f-b413-0acd29aa3cb5\") " pod="openstack/nova-scheduler-0" Feb 03 12:35:57 crc kubenswrapper[4820]: I0203 12:35:57.238728 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dff15ab3-eace-455f-b413-0acd29aa3cb5-config-data\") pod \"nova-scheduler-0\" (UID: 
\"dff15ab3-eace-455f-b413-0acd29aa3cb5\") " pod="openstack/nova-scheduler-0" Feb 03 12:35:57 crc kubenswrapper[4820]: I0203 12:35:57.247833 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sbbmq\" (UniqueName: \"kubernetes.io/projected/dff15ab3-eace-455f-b413-0acd29aa3cb5-kube-api-access-sbbmq\") pod \"nova-scheduler-0\" (UID: \"dff15ab3-eace-455f-b413-0acd29aa3cb5\") " pod="openstack/nova-scheduler-0" Feb 03 12:35:57 crc kubenswrapper[4820]: I0203 12:35:57.426224 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 03 12:35:57 crc kubenswrapper[4820]: W0203 12:35:57.924715 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddff15ab3_eace_455f_b413_0acd29aa3cb5.slice/crio-bd16c129432e9a51d14e2da5d8b0d334607986ec4d2f5e3b1a36b63425f1cf46 WatchSource:0}: Error finding container bd16c129432e9a51d14e2da5d8b0d334607986ec4d2f5e3b1a36b63425f1cf46: Status 404 returned error can't find the container with id bd16c129432e9a51d14e2da5d8b0d334607986ec4d2f5e3b1a36b63425f1cf46 Feb 03 12:35:57 crc kubenswrapper[4820]: I0203 12:35:57.935648 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 03 12:35:58 crc kubenswrapper[4820]: I0203 12:35:58.431485 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 03 12:35:58 crc kubenswrapper[4820]: I0203 12:35:58.558232 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71aba6f9-1efc-4b39-8a61-444e7399c8e0-combined-ca-bundle\") pod \"71aba6f9-1efc-4b39-8a61-444e7399c8e0\" (UID: \"71aba6f9-1efc-4b39-8a61-444e7399c8e0\") " Feb 03 12:35:58 crc kubenswrapper[4820]: I0203 12:35:58.558288 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71aba6f9-1efc-4b39-8a61-444e7399c8e0-config-data\") pod \"71aba6f9-1efc-4b39-8a61-444e7399c8e0\" (UID: \"71aba6f9-1efc-4b39-8a61-444e7399c8e0\") " Feb 03 12:35:58 crc kubenswrapper[4820]: I0203 12:35:58.558412 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/71aba6f9-1efc-4b39-8a61-444e7399c8e0-logs\") pod \"71aba6f9-1efc-4b39-8a61-444e7399c8e0\" (UID: \"71aba6f9-1efc-4b39-8a61-444e7399c8e0\") " Feb 03 12:35:58 crc kubenswrapper[4820]: I0203 12:35:58.558446 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rwknf\" (UniqueName: \"kubernetes.io/projected/71aba6f9-1efc-4b39-8a61-444e7399c8e0-kube-api-access-rwknf\") pod \"71aba6f9-1efc-4b39-8a61-444e7399c8e0\" (UID: \"71aba6f9-1efc-4b39-8a61-444e7399c8e0\") " Feb 03 12:35:58 crc kubenswrapper[4820]: I0203 12:35:58.559653 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/71aba6f9-1efc-4b39-8a61-444e7399c8e0-logs" (OuterVolumeSpecName: "logs") pod "71aba6f9-1efc-4b39-8a61-444e7399c8e0" (UID: "71aba6f9-1efc-4b39-8a61-444e7399c8e0"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 12:35:58 crc kubenswrapper[4820]: I0203 12:35:58.563161 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71aba6f9-1efc-4b39-8a61-444e7399c8e0-kube-api-access-rwknf" (OuterVolumeSpecName: "kube-api-access-rwknf") pod "71aba6f9-1efc-4b39-8a61-444e7399c8e0" (UID: "71aba6f9-1efc-4b39-8a61-444e7399c8e0"). InnerVolumeSpecName "kube-api-access-rwknf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:35:58 crc kubenswrapper[4820]: I0203 12:35:58.591098 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71aba6f9-1efc-4b39-8a61-444e7399c8e0-config-data" (OuterVolumeSpecName: "config-data") pod "71aba6f9-1efc-4b39-8a61-444e7399c8e0" (UID: "71aba6f9-1efc-4b39-8a61-444e7399c8e0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:35:58 crc kubenswrapper[4820]: I0203 12:35:58.593234 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71aba6f9-1efc-4b39-8a61-444e7399c8e0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "71aba6f9-1efc-4b39-8a61-444e7399c8e0" (UID: "71aba6f9-1efc-4b39-8a61-444e7399c8e0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:35:58 crc kubenswrapper[4820]: I0203 12:35:58.661846 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71aba6f9-1efc-4b39-8a61-444e7399c8e0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 12:35:58 crc kubenswrapper[4820]: I0203 12:35:58.662258 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71aba6f9-1efc-4b39-8a61-444e7399c8e0-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 12:35:58 crc kubenswrapper[4820]: I0203 12:35:58.662482 4820 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/71aba6f9-1efc-4b39-8a61-444e7399c8e0-logs\") on node \"crc\" DevicePath \"\"" Feb 03 12:35:58 crc kubenswrapper[4820]: I0203 12:35:58.662567 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rwknf\" (UniqueName: \"kubernetes.io/projected/71aba6f9-1efc-4b39-8a61-444e7399c8e0-kube-api-access-rwknf\") on node \"crc\" DevicePath \"\"" Feb 03 12:35:58 crc kubenswrapper[4820]: I0203 12:35:58.761625 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"dff15ab3-eace-455f-b413-0acd29aa3cb5","Type":"ContainerStarted","Data":"e359ca0f6b35f9e5a65ace574c6a9baabb13d159812f75e2d84ee571389162ae"} Feb 03 12:35:58 crc kubenswrapper[4820]: I0203 12:35:58.761675 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"dff15ab3-eace-455f-b413-0acd29aa3cb5","Type":"ContainerStarted","Data":"bd16c129432e9a51d14e2da5d8b0d334607986ec4d2f5e3b1a36b63425f1cf46"} Feb 03 12:35:58 crc kubenswrapper[4820]: I0203 12:35:58.767104 4820 generic.go:334] "Generic (PLEG): container finished" podID="71aba6f9-1efc-4b39-8a61-444e7399c8e0" containerID="55a702fd39ca07a9b769d1ef97bede903b73b7933b71722c4c167ee70d06a45a" exitCode=0 Feb 03 12:35:58 crc kubenswrapper[4820]: I0203 12:35:58.767158 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"71aba6f9-1efc-4b39-8a61-444e7399c8e0","Type":"ContainerDied","Data":"55a702fd39ca07a9b769d1ef97bede903b73b7933b71722c4c167ee70d06a45a"} Feb 03 12:35:58 crc kubenswrapper[4820]: I0203 12:35:58.767190 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"71aba6f9-1efc-4b39-8a61-444e7399c8e0","Type":"ContainerDied","Data":"e6eb6d3468f137d12a675d63a6985c4f9a970dee9dbf3d461997d1b613a2fc13"} Feb 03 12:35:58 crc kubenswrapper[4820]: I0203 12:35:58.767213 4820 scope.go:117] "RemoveContainer" containerID="55a702fd39ca07a9b769d1ef97bede903b73b7933b71722c4c167ee70d06a45a" Feb 03 12:35:58 crc kubenswrapper[4820]: I0203 12:35:58.767222 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 03 12:35:58 crc kubenswrapper[4820]: I0203 12:35:58.787221 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=1.787200839 podStartE2EDuration="1.787200839s" podCreationTimestamp="2026-02-03 12:35:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:35:58.784417044 +0000 UTC m=+1876.307492918" watchObservedRunningTime="2026-02-03 12:35:58.787200839 +0000 UTC m=+1876.310276723" Feb 03 12:35:58 crc kubenswrapper[4820]: I0203 12:35:58.794318 4820 scope.go:117] "RemoveContainer" containerID="f8dd809a6e8a2dac45ebdf4df81d0fe773c4c628d44de3dc34dda8887a5be870" Feb 03 12:35:58 crc kubenswrapper[4820]: I0203 12:35:58.829809 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 03 12:35:58 crc kubenswrapper[4820]: I0203 12:35:58.834927 4820 scope.go:117] "RemoveContainer" containerID="55a702fd39ca07a9b769d1ef97bede903b73b7933b71722c4c167ee70d06a45a" Feb 03 12:35:58 crc kubenswrapper[4820]: E0203 12:35:58.835438 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"55a702fd39ca07a9b769d1ef97bede903b73b7933b71722c4c167ee70d06a45a\": container with ID starting with 55a702fd39ca07a9b769d1ef97bede903b73b7933b71722c4c167ee70d06a45a not found: ID does not exist" containerID="55a702fd39ca07a9b769d1ef97bede903b73b7933b71722c4c167ee70d06a45a" Feb 03 12:35:58 crc kubenswrapper[4820]: I0203 12:35:58.835509 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"55a702fd39ca07a9b769d1ef97bede903b73b7933b71722c4c167ee70d06a45a"} err="failed to get container status \"55a702fd39ca07a9b769d1ef97bede903b73b7933b71722c4c167ee70d06a45a\": rpc error: code = NotFound desc = could not find container \"55a702fd39ca07a9b769d1ef97bede903b73b7933b71722c4c167ee70d06a45a\": container with ID starting with 55a702fd39ca07a9b769d1ef97bede903b73b7933b71722c4c167ee70d06a45a not found: ID does not exist" Feb 03 12:35:58 crc kubenswrapper[4820]: I0203 12:35:58.835545 4820 scope.go:117] "RemoveContainer" containerID="f8dd809a6e8a2dac45ebdf4df81d0fe773c4c628d44de3dc34dda8887a5be870" Feb 03 12:35:58 crc kubenswrapper[4820]: E0203 12:35:58.835795 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f8dd809a6e8a2dac45ebdf4df81d0fe773c4c628d44de3dc34dda8887a5be870\": container with ID starting with f8dd809a6e8a2dac45ebdf4df81d0fe773c4c628d44de3dc34dda8887a5be870 not found: ID does not exist" 
containerID="f8dd809a6e8a2dac45ebdf4df81d0fe773c4c628d44de3dc34dda8887a5be870" Feb 03 12:35:58 crc kubenswrapper[4820]: I0203 12:35:58.835822 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f8dd809a6e8a2dac45ebdf4df81d0fe773c4c628d44de3dc34dda8887a5be870"} err="failed to get container status \"f8dd809a6e8a2dac45ebdf4df81d0fe773c4c628d44de3dc34dda8887a5be870\": rpc error: code = NotFound desc = could not find container \"f8dd809a6e8a2dac45ebdf4df81d0fe773c4c628d44de3dc34dda8887a5be870\": container with ID starting with f8dd809a6e8a2dac45ebdf4df81d0fe773c4c628d44de3dc34dda8887a5be870 not found: ID does not exist" Feb 03 12:35:58 crc kubenswrapper[4820]: I0203 12:35:58.846576 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 03 12:35:58 crc kubenswrapper[4820]: I0203 12:35:58.853407 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 03 12:35:58 crc kubenswrapper[4820]: E0203 12:35:58.864770 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71aba6f9-1efc-4b39-8a61-444e7399c8e0" containerName="nova-api-api" Feb 03 12:35:58 crc kubenswrapper[4820]: I0203 12:35:58.864801 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="71aba6f9-1efc-4b39-8a61-444e7399c8e0" containerName="nova-api-api" Feb 03 12:35:58 crc kubenswrapper[4820]: E0203 12:35:58.864828 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71aba6f9-1efc-4b39-8a61-444e7399c8e0" containerName="nova-api-log" Feb 03 12:35:58 crc kubenswrapper[4820]: I0203 12:35:58.864835 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="71aba6f9-1efc-4b39-8a61-444e7399c8e0" containerName="nova-api-log" Feb 03 12:35:58 crc kubenswrapper[4820]: I0203 12:35:58.865109 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="71aba6f9-1efc-4b39-8a61-444e7399c8e0" containerName="nova-api-api" Feb 03 12:35:58 crc kubenswrapper[4820]: I0203 12:35:58.865131 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="71aba6f9-1efc-4b39-8a61-444e7399c8e0" containerName="nova-api-log" Feb 03 12:35:58 crc kubenswrapper[4820]: I0203 12:35:58.866403 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 03 12:35:58 crc kubenswrapper[4820]: I0203 12:35:58.866508 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 03 12:35:58 crc kubenswrapper[4820]: I0203 12:35:58.870329 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 03 12:35:58 crc kubenswrapper[4820]: I0203 12:35:58.969912 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42d6c88c-c4d6-4381-92a0-2d3e62bbad91-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"42d6c88c-c4d6-4381-92a0-2d3e62bbad91\") " pod="openstack/nova-api-0" Feb 03 12:35:58 crc kubenswrapper[4820]: I0203 12:35:58.969989 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/42d6c88c-c4d6-4381-92a0-2d3e62bbad91-logs\") pod \"nova-api-0\" (UID: \"42d6c88c-c4d6-4381-92a0-2d3e62bbad91\") " pod="openstack/nova-api-0" Feb 03 12:35:58 crc kubenswrapper[4820]: I0203 12:35:58.970055 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lgm5q\" (UniqueName: \"kubernetes.io/projected/42d6c88c-c4d6-4381-92a0-2d3e62bbad91-kube-api-access-lgm5q\") pod \"nova-api-0\" (UID: \"42d6c88c-c4d6-4381-92a0-2d3e62bbad91\") " pod="openstack/nova-api-0" Feb 03 12:35:58 crc kubenswrapper[4820]: I0203 12:35:58.970150 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42d6c88c-c4d6-4381-92a0-2d3e62bbad91-config-data\") pod \"nova-api-0\" (UID: \"42d6c88c-c4d6-4381-92a0-2d3e62bbad91\") " pod="openstack/nova-api-0" Feb 03 12:35:59 crc kubenswrapper[4820]: I0203 12:35:59.072192 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42d6c88c-c4d6-4381-92a0-2d3e62bbad91-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"42d6c88c-c4d6-4381-92a0-2d3e62bbad91\") " pod="openstack/nova-api-0" Feb 03 12:35:59 crc kubenswrapper[4820]: I0203 12:35:59.072258 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/42d6c88c-c4d6-4381-92a0-2d3e62bbad91-logs\") pod \"nova-api-0\" (UID: \"42d6c88c-c4d6-4381-92a0-2d3e62bbad91\") " pod="openstack/nova-api-0" Feb 03 12:35:59 crc kubenswrapper[4820]: I0203 12:35:59.072304 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lgm5q\" (UniqueName: \"kubernetes.io/projected/42d6c88c-c4d6-4381-92a0-2d3e62bbad91-kube-api-access-lgm5q\") pod \"nova-api-0\" (UID: \"42d6c88c-c4d6-4381-92a0-2d3e62bbad91\") " pod="openstack/nova-api-0" Feb 03 12:35:59 crc kubenswrapper[4820]: I0203 12:35:59.072372 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42d6c88c-c4d6-4381-92a0-2d3e62bbad91-config-data\") pod \"nova-api-0\" (UID: \"42d6c88c-c4d6-4381-92a0-2d3e62bbad91\") " pod="openstack/nova-api-0" Feb 03 12:35:59 crc kubenswrapper[4820]: I0203 12:35:59.073341 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/42d6c88c-c4d6-4381-92a0-2d3e62bbad91-logs\") pod \"nova-api-0\" (UID: \"42d6c88c-c4d6-4381-92a0-2d3e62bbad91\") " pod="openstack/nova-api-0" Feb 03 12:35:59 crc kubenswrapper[4820]: I0203 12:35:59.076722 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42d6c88c-c4d6-4381-92a0-2d3e62bbad91-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"42d6c88c-c4d6-4381-92a0-2d3e62bbad91\") " pod="openstack/nova-api-0" Feb 03 12:35:59 crc kubenswrapper[4820]: I0203 12:35:59.076760 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42d6c88c-c4d6-4381-92a0-2d3e62bbad91-config-data\") pod \"nova-api-0\" (UID: \"42d6c88c-c4d6-4381-92a0-2d3e62bbad91\") " pod="openstack/nova-api-0" Feb 03 12:35:59 crc kubenswrapper[4820]: I0203 12:35:59.089643 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lgm5q\" (UniqueName: \"kubernetes.io/projected/42d6c88c-c4d6-4381-92a0-2d3e62bbad91-kube-api-access-lgm5q\") pod \"nova-api-0\" (UID: \"42d6c88c-c4d6-4381-92a0-2d3e62bbad91\") " pod="openstack/nova-api-0" Feb 03 12:35:59 crc kubenswrapper[4820]: I0203 12:35:59.156557 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="71aba6f9-1efc-4b39-8a61-444e7399c8e0" path="/var/lib/kubelet/pods/71aba6f9-1efc-4b39-8a61-444e7399c8e0/volumes" Feb 03 12:35:59 crc kubenswrapper[4820]: I0203 12:35:59.190962 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 03 12:35:59 crc kubenswrapper[4820]: I0203 12:35:59.643164 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 03 12:35:59 crc kubenswrapper[4820]: I0203 12:35:59.798340 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"42d6c88c-c4d6-4381-92a0-2d3e62bbad91","Type":"ContainerStarted","Data":"1780fa036efddadf860bb5c3d437ad28274ac10c7616ae73c130ab9b208f0542"} Feb 03 12:36:00 crc kubenswrapper[4820]: I0203 12:36:00.816323 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"42d6c88c-c4d6-4381-92a0-2d3e62bbad91","Type":"ContainerStarted","Data":"957395278bc346b44c781bb57e59da6fa64fae2f6a60c0c35d3b0d1eb75658a3"} Feb 03 12:36:00 crc kubenswrapper[4820]: I0203 12:36:00.816947 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"42d6c88c-c4d6-4381-92a0-2d3e62bbad91","Type":"ContainerStarted","Data":"86cb6394b490532e1d9c43f493512e8e56bba1c58db4ffd036681d7e8a1a8dde"} Feb 03 12:36:00 crc kubenswrapper[4820]: I0203 12:36:00.821173 4820 generic.go:334] "Generic (PLEG): container finished" podID="be49e248-fd39-4289-8207-517fa3ec0d90" containerID="c449ebc96060118b22140617d1169269446fc93d45e0810fa81e53cb1c180aea" exitCode=137 Feb 03 12:36:00 crc kubenswrapper[4820]: I0203 12:36:00.821234 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"be49e248-fd39-4289-8207-517fa3ec0d90","Type":"ContainerDied","Data":"c449ebc96060118b22140617d1169269446fc93d45e0810fa81e53cb1c180aea"} Feb 03 12:36:00 crc kubenswrapper[4820]: I0203 12:36:00.821272 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"be49e248-fd39-4289-8207-517fa3ec0d90","Type":"ContainerDied","Data":"b60902fcab7868de02f56cb381e6d9c83d26a3ac7f9a0d7a5e39d6ce7d320ee6"} Feb 03 12:36:00 crc kubenswrapper[4820]: I0203 12:36:00.821287 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b60902fcab7868de02f56cb381e6d9c83d26a3ac7f9a0d7a5e39d6ce7d320ee6" Feb 03 12:36:00 crc kubenswrapper[4820]: I0203 12:36:00.821850 4820 util.go:48] "No ready sandbox for pod can 
be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 03 12:36:00 crc kubenswrapper[4820]: I0203 12:36:00.849037 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.849019283 podStartE2EDuration="2.849019283s" podCreationTimestamp="2026-02-03 12:35:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:36:00.848128299 +0000 UTC m=+1878.371204173" watchObservedRunningTime="2026-02-03 12:36:00.849019283 +0000 UTC m=+1878.372095147" Feb 03 12:36:00 crc kubenswrapper[4820]: I0203 12:36:00.913291 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be49e248-fd39-4289-8207-517fa3ec0d90-config-data\") pod \"be49e248-fd39-4289-8207-517fa3ec0d90\" (UID: \"be49e248-fd39-4289-8207-517fa3ec0d90\") " Feb 03 12:36:00 crc kubenswrapper[4820]: I0203 12:36:00.913446 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-44g2f\" (UniqueName: \"kubernetes.io/projected/be49e248-fd39-4289-8207-517fa3ec0d90-kube-api-access-44g2f\") pod \"be49e248-fd39-4289-8207-517fa3ec0d90\" (UID: \"be49e248-fd39-4289-8207-517fa3ec0d90\") " Feb 03 12:36:00 crc kubenswrapper[4820]: I0203 12:36:00.913504 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/be49e248-fd39-4289-8207-517fa3ec0d90-logs\") pod \"be49e248-fd39-4289-8207-517fa3ec0d90\" (UID: \"be49e248-fd39-4289-8207-517fa3ec0d90\") " Feb 03 12:36:00 crc kubenswrapper[4820]: I0203 12:36:00.913634 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be49e248-fd39-4289-8207-517fa3ec0d90-combined-ca-bundle\") pod \"be49e248-fd39-4289-8207-517fa3ec0d90\" (UID: \"be49e248-fd39-4289-8207-517fa3ec0d90\") " Feb 03 12:36:00 crc kubenswrapper[4820]: I0203 12:36:00.913674 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/be49e248-fd39-4289-8207-517fa3ec0d90-nova-metadata-tls-certs\") pod \"be49e248-fd39-4289-8207-517fa3ec0d90\" (UID: \"be49e248-fd39-4289-8207-517fa3ec0d90\") " Feb 03 12:36:00 crc kubenswrapper[4820]: I0203 12:36:00.916533 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/be49e248-fd39-4289-8207-517fa3ec0d90-logs" (OuterVolumeSpecName: "logs") pod "be49e248-fd39-4289-8207-517fa3ec0d90" (UID: "be49e248-fd39-4289-8207-517fa3ec0d90"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 12:36:00 crc kubenswrapper[4820]: I0203 12:36:00.920224 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be49e248-fd39-4289-8207-517fa3ec0d90-kube-api-access-44g2f" (OuterVolumeSpecName: "kube-api-access-44g2f") pod "be49e248-fd39-4289-8207-517fa3ec0d90" (UID: "be49e248-fd39-4289-8207-517fa3ec0d90"). InnerVolumeSpecName "kube-api-access-44g2f". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:36:00 crc kubenswrapper[4820]: I0203 12:36:00.945368 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be49e248-fd39-4289-8207-517fa3ec0d90-config-data" (OuterVolumeSpecName: "config-data") pod "be49e248-fd39-4289-8207-517fa3ec0d90" (UID: "be49e248-fd39-4289-8207-517fa3ec0d90"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:36:00 crc kubenswrapper[4820]: I0203 12:36:00.948267 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be49e248-fd39-4289-8207-517fa3ec0d90-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "be49e248-fd39-4289-8207-517fa3ec0d90" (UID: "be49e248-fd39-4289-8207-517fa3ec0d90"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:36:00 crc kubenswrapper[4820]: I0203 12:36:00.972939 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be49e248-fd39-4289-8207-517fa3ec0d90-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "be49e248-fd39-4289-8207-517fa3ec0d90" (UID: "be49e248-fd39-4289-8207-517fa3ec0d90"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:36:01 crc kubenswrapper[4820]: I0203 12:36:01.016816 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be49e248-fd39-4289-8207-517fa3ec0d90-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 12:36:01 crc kubenswrapper[4820]: I0203 12:36:01.016869 4820 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/be49e248-fd39-4289-8207-517fa3ec0d90-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 03 12:36:01 crc kubenswrapper[4820]: I0203 12:36:01.016909 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be49e248-fd39-4289-8207-517fa3ec0d90-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 12:36:01 crc kubenswrapper[4820]: I0203 12:36:01.016926 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-44g2f\" (UniqueName: \"kubernetes.io/projected/be49e248-fd39-4289-8207-517fa3ec0d90-kube-api-access-44g2f\") on node \"crc\" DevicePath \"\"" Feb 03 12:36:01 crc kubenswrapper[4820]: I0203 12:36:01.016953 4820 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/be49e248-fd39-4289-8207-517fa3ec0d90-logs\") on node \"crc\" DevicePath \"\"" Feb 03 12:36:01 crc kubenswrapper[4820]: I0203 12:36:01.830938 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 03 12:36:01 crc kubenswrapper[4820]: I0203 12:36:01.867814 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 03 12:36:01 crc kubenswrapper[4820]: I0203 12:36:01.892838 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 03 12:36:01 crc kubenswrapper[4820]: I0203 12:36:01.904581 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 03 12:36:01 crc kubenswrapper[4820]: E0203 12:36:01.905342 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be49e248-fd39-4289-8207-517fa3ec0d90" containerName="nova-metadata-metadata" Feb 03 12:36:01 crc kubenswrapper[4820]: I0203 12:36:01.905369 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="be49e248-fd39-4289-8207-517fa3ec0d90" containerName="nova-metadata-metadata" Feb 03 12:36:01 crc kubenswrapper[4820]: E0203 12:36:01.905405 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be49e248-fd39-4289-8207-517fa3ec0d90" containerName="nova-metadata-log" Feb 03 12:36:01 crc kubenswrapper[4820]: I0203 12:36:01.905417 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="be49e248-fd39-4289-8207-517fa3ec0d90" containerName="nova-metadata-log" Feb 03 12:36:01 crc kubenswrapper[4820]: I0203 12:36:01.905703 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="be49e248-fd39-4289-8207-517fa3ec0d90" containerName="nova-metadata-log" Feb 03 12:36:01 crc kubenswrapper[4820]: I0203 12:36:01.905757 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="be49e248-fd39-4289-8207-517fa3ec0d90" containerName="nova-metadata-metadata" Feb 03 12:36:01 crc kubenswrapper[4820]: I0203 12:36:01.907391 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 03 12:36:01 crc kubenswrapper[4820]: I0203 12:36:01.915523 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 03 12:36:01 crc kubenswrapper[4820]: I0203 12:36:01.916265 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 03 12:36:01 crc kubenswrapper[4820]: I0203 12:36:01.934333 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 03 12:36:02 crc kubenswrapper[4820]: I0203 12:36:02.038270 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m2jrx\" (UniqueName: \"kubernetes.io/projected/b2a1328f-2e2d-47e6-b07c-d0b70643e1aa-kube-api-access-m2jrx\") pod \"nova-metadata-0\" (UID: \"b2a1328f-2e2d-47e6-b07c-d0b70643e1aa\") " pod="openstack/nova-metadata-0" Feb 03 12:36:02 crc kubenswrapper[4820]: I0203 12:36:02.038342 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b2a1328f-2e2d-47e6-b07c-d0b70643e1aa-logs\") pod \"nova-metadata-0\" (UID: \"b2a1328f-2e2d-47e6-b07c-d0b70643e1aa\") " pod="openstack/nova-metadata-0" Feb 03 12:36:02 crc kubenswrapper[4820]: I0203 12:36:02.038517 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/b2a1328f-2e2d-47e6-b07c-d0b70643e1aa-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"b2a1328f-2e2d-47e6-b07c-d0b70643e1aa\") " pod="openstack/nova-metadata-0" Feb 03 12:36:02 crc kubenswrapper[4820]: I0203 12:36:02.038578 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2a1328f-2e2d-47e6-b07c-d0b70643e1aa-config-data\") pod \"nova-metadata-0\" (UID: \"b2a1328f-2e2d-47e6-b07c-d0b70643e1aa\") " pod="openstack/nova-metadata-0" Feb 03 12:36:02 crc kubenswrapper[4820]: I0203 12:36:02.038605 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2a1328f-2e2d-47e6-b07c-d0b70643e1aa-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"b2a1328f-2e2d-47e6-b07c-d0b70643e1aa\") " pod="openstack/nova-metadata-0" Feb 03 12:36:02 crc kubenswrapper[4820]: I0203 12:36:02.140722 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m2jrx\" (UniqueName: \"kubernetes.io/projected/b2a1328f-2e2d-47e6-b07c-d0b70643e1aa-kube-api-access-m2jrx\") pod \"nova-metadata-0\" (UID: \"b2a1328f-2e2d-47e6-b07c-d0b70643e1aa\") " pod="openstack/nova-metadata-0" Feb 03 12:36:02 crc kubenswrapper[4820]: I0203 12:36:02.140801 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b2a1328f-2e2d-47e6-b07c-d0b70643e1aa-logs\") pod \"nova-metadata-0\" (UID: \"b2a1328f-2e2d-47e6-b07c-d0b70643e1aa\") " pod="openstack/nova-metadata-0" Feb 03 12:36:02 crc kubenswrapper[4820]: I0203 12:36:02.141155 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/b2a1328f-2e2d-47e6-b07c-d0b70643e1aa-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"b2a1328f-2e2d-47e6-b07c-d0b70643e1aa\") " 
pod="openstack/nova-metadata-0" Feb 03 12:36:02 crc kubenswrapper[4820]: I0203 12:36:02.141377 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b2a1328f-2e2d-47e6-b07c-d0b70643e1aa-logs\") pod \"nova-metadata-0\" (UID: \"b2a1328f-2e2d-47e6-b07c-d0b70643e1aa\") " pod="openstack/nova-metadata-0" Feb 03 12:36:02 crc kubenswrapper[4820]: I0203 12:36:02.142063 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2a1328f-2e2d-47e6-b07c-d0b70643e1aa-config-data\") pod \"nova-metadata-0\" (UID: \"b2a1328f-2e2d-47e6-b07c-d0b70643e1aa\") " pod="openstack/nova-metadata-0" Feb 03 12:36:02 crc kubenswrapper[4820]: I0203 12:36:02.142100 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2a1328f-2e2d-47e6-b07c-d0b70643e1aa-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"b2a1328f-2e2d-47e6-b07c-d0b70643e1aa\") " pod="openstack/nova-metadata-0" Feb 03 12:36:02 crc kubenswrapper[4820]: I0203 12:36:02.155633 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/b2a1328f-2e2d-47e6-b07c-d0b70643e1aa-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"b2a1328f-2e2d-47e6-b07c-d0b70643e1aa\") " pod="openstack/nova-metadata-0" Feb 03 12:36:02 crc kubenswrapper[4820]: I0203 12:36:02.156694 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2a1328f-2e2d-47e6-b07c-d0b70643e1aa-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"b2a1328f-2e2d-47e6-b07c-d0b70643e1aa\") " pod="openstack/nova-metadata-0" Feb 03 12:36:02 crc kubenswrapper[4820]: I0203 12:36:02.160364 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2a1328f-2e2d-47e6-b07c-d0b70643e1aa-config-data\") pod \"nova-metadata-0\" (UID: \"b2a1328f-2e2d-47e6-b07c-d0b70643e1aa\") " pod="openstack/nova-metadata-0" Feb 03 12:36:02 crc kubenswrapper[4820]: I0203 12:36:02.160958 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m2jrx\" (UniqueName: \"kubernetes.io/projected/b2a1328f-2e2d-47e6-b07c-d0b70643e1aa-kube-api-access-m2jrx\") pod \"nova-metadata-0\" (UID: \"b2a1328f-2e2d-47e6-b07c-d0b70643e1aa\") " pod="openstack/nova-metadata-0" Feb 03 12:36:02 crc kubenswrapper[4820]: I0203 12:36:02.243841 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 03 12:36:02 crc kubenswrapper[4820]: I0203 12:36:02.427327 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 03 12:36:02 crc kubenswrapper[4820]: I0203 12:36:02.727832 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 03 12:36:02 crc kubenswrapper[4820]: W0203 12:36:02.728588 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb2a1328f_2e2d_47e6_b07c_d0b70643e1aa.slice/crio-6e184190967630c4d2954183f89b3f9af8a9d18cda78f37e66b29e5eddde35cd WatchSource:0}: Error finding container 6e184190967630c4d2954183f89b3f9af8a9d18cda78f37e66b29e5eddde35cd: Status 404 returned error can't find the container with id 6e184190967630c4d2954183f89b3f9af8a9d18cda78f37e66b29e5eddde35cd Feb 03 12:36:02 crc kubenswrapper[4820]: I0203 12:36:02.852090 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b2a1328f-2e2d-47e6-b07c-d0b70643e1aa","Type":"ContainerStarted","Data":"6e184190967630c4d2954183f89b3f9af8a9d18cda78f37e66b29e5eddde35cd"} Feb 03 12:36:03 crc kubenswrapper[4820]: I0203 12:36:03.129830 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-5fdc8588b4-jtjr8" podUID="17c371f7-f032-4444-8d4b-1183a224c7b0" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.165:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.165:8443: connect: connection refused" Feb 03 12:36:03 crc kubenswrapper[4820]: I0203 12:36:03.181631 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="be49e248-fd39-4289-8207-517fa3ec0d90" path="/var/lib/kubelet/pods/be49e248-fd39-4289-8207-517fa3ec0d90/volumes" Feb 03 12:36:03 crc kubenswrapper[4820]: I0203 12:36:03.622324 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-68b4df5bdd-tdb9h" Feb 03 12:36:03 crc kubenswrapper[4820]: I0203 12:36:03.622720 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-68b4df5bdd-tdb9h" Feb 03 12:36:03 crc kubenswrapper[4820]: I0203 12:36:03.626275 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-68b4df5bdd-tdb9h" podUID="308562dd-6078-4c1c-a4e0-c01a60a2d81d" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.166:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.166:8443: connect: connection refused" Feb 03 12:36:03 crc kubenswrapper[4820]: I0203 12:36:03.867392 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b2a1328f-2e2d-47e6-b07c-d0b70643e1aa","Type":"ContainerStarted","Data":"a561a95278dcf2c95856a72d1af1a4504cc462e2c52f315c73b5a94371153e89"} Feb 03 12:36:03 crc kubenswrapper[4820]: I0203 12:36:03.867724 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b2a1328f-2e2d-47e6-b07c-d0b70643e1aa","Type":"ContainerStarted","Data":"3b61a8e34ea42f3d1395d61d0488a65c9efbd5a2d1d87e81ebd423a441f4d005"} Feb 03 12:36:03 crc kubenswrapper[4820]: I0203 12:36:03.908033 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.907995814 podStartE2EDuration="2.907995814s" podCreationTimestamp="2026-02-03 12:36:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:36:03.896056311 +0000 UTC m=+1881.419132185" watchObservedRunningTime="2026-02-03 12:36:03.907995814 +0000 UTC m=+1881.431071698" Feb 03 12:36:07 crc kubenswrapper[4820]: I0203 12:36:07.244514 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 03 12:36:07 crc kubenswrapper[4820]: I0203 12:36:07.244902 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 03 12:36:07 crc kubenswrapper[4820]: I0203 12:36:07.426748 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 03 12:36:07 crc kubenswrapper[4820]: I0203 12:36:07.459273 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 03 12:36:07 crc kubenswrapper[4820]: I0203 12:36:07.947301 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 03 12:36:08 crc kubenswrapper[4820]: I0203 12:36:08.145123 4820 scope.go:117] "RemoveContainer" containerID="3cd22393026f08f47789b9666f306edb2325c4bd0fbfd405ee1636bfdfa661d3" Feb 03 12:36:08 crc kubenswrapper[4820]: E0203 12:36:08.145466 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qj7xr_openshift-machine-config-operator(2c02def6-29f2-448e-80ec-0c8ee290f053)\"" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" Feb 03 12:36:09 crc kubenswrapper[4820]: I0203 12:36:09.192177 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 03 12:36:09 crc kubenswrapper[4820]: I0203 12:36:09.192513 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 03 12:36:10 crc kubenswrapper[4820]: I0203 12:36:10.275269 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="42d6c88c-c4d6-4381-92a0-2d3e62bbad91" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.229:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 03 12:36:10 crc kubenswrapper[4820]: I0203 12:36:10.275724 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="42d6c88c-c4d6-4381-92a0-2d3e62bbad91" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.229:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 03 12:36:11 crc kubenswrapper[4820]: I0203 12:36:11.860477 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 03 12:36:12 crc kubenswrapper[4820]: I0203 12:36:12.245003 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 03 12:36:12 crc kubenswrapper[4820]: I0203 12:36:12.246839 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 03 12:36:13 crc kubenswrapper[4820]: I0203 12:36:13.253179 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="b2a1328f-2e2d-47e6-b07c-d0b70643e1aa" containerName="nova-metadata-log" probeResult="failure" output="Get 
\"https://10.217.0.230:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 03 12:36:13 crc kubenswrapper[4820]: I0203 12:36:13.259272 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="b2a1328f-2e2d-47e6-b07c-d0b70643e1aa" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.230:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 03 12:36:13 crc kubenswrapper[4820]: I0203 12:36:13.622028 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-68b4df5bdd-tdb9h" podUID="308562dd-6078-4c1c-a4e0-c01a60a2d81d" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.166:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.166:8443: connect: connection refused" Feb 03 12:36:17 crc kubenswrapper[4820]: I0203 12:36:17.996197 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-5fdc8588b4-jtjr8" Feb 03 12:36:19 crc kubenswrapper[4820]: I0203 12:36:19.198284 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 03 12:36:19 crc kubenswrapper[4820]: I0203 12:36:19.199077 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 03 12:36:19 crc kubenswrapper[4820]: I0203 12:36:19.202815 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 03 12:36:19 crc kubenswrapper[4820]: I0203 12:36:19.205276 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 03 12:36:19 crc kubenswrapper[4820]: I0203 12:36:19.440546 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 03 12:36:19 crc kubenswrapper[4820]: I0203 12:36:19.448647 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 03 12:36:19 crc kubenswrapper[4820]: I0203 12:36:19.742121 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-mrmgt"] Feb 03 12:36:19 crc kubenswrapper[4820]: I0203 12:36:19.744737 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-89c5cd4d5-mrmgt" Feb 03 12:36:19 crc kubenswrapper[4820]: I0203 12:36:19.762027 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-mrmgt"] Feb 03 12:36:19 crc kubenswrapper[4820]: I0203 12:36:19.849921 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a655153d-67ad-489c-b58e-3ddc02470bac-dns-swift-storage-0\") pod \"dnsmasq-dns-89c5cd4d5-mrmgt\" (UID: \"a655153d-67ad-489c-b58e-3ddc02470bac\") " pod="openstack/dnsmasq-dns-89c5cd4d5-mrmgt" Feb 03 12:36:19 crc kubenswrapper[4820]: I0203 12:36:19.850017 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a655153d-67ad-489c-b58e-3ddc02470bac-ovsdbserver-sb\") pod \"dnsmasq-dns-89c5cd4d5-mrmgt\" (UID: \"a655153d-67ad-489c-b58e-3ddc02470bac\") " pod="openstack/dnsmasq-dns-89c5cd4d5-mrmgt" Feb 03 12:36:19 crc kubenswrapper[4820]: I0203 12:36:19.850049 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ss4pj\" (UniqueName: \"kubernetes.io/projected/a655153d-67ad-489c-b58e-3ddc02470bac-kube-api-access-ss4pj\") pod \"dnsmasq-dns-89c5cd4d5-mrmgt\" (UID: \"a655153d-67ad-489c-b58e-3ddc02470bac\") " pod="openstack/dnsmasq-dns-89c5cd4d5-mrmgt" Feb 03 12:36:19 crc kubenswrapper[4820]: I0203 12:36:19.850130 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a655153d-67ad-489c-b58e-3ddc02470bac-config\") pod \"dnsmasq-dns-89c5cd4d5-mrmgt\" (UID: \"a655153d-67ad-489c-b58e-3ddc02470bac\") " pod="openstack/dnsmasq-dns-89c5cd4d5-mrmgt" Feb 03 12:36:19 crc kubenswrapper[4820]: I0203 12:36:19.850192 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a655153d-67ad-489c-b58e-3ddc02470bac-ovsdbserver-nb\") pod \"dnsmasq-dns-89c5cd4d5-mrmgt\" (UID: \"a655153d-67ad-489c-b58e-3ddc02470bac\") " pod="openstack/dnsmasq-dns-89c5cd4d5-mrmgt" Feb 03 12:36:19 crc kubenswrapper[4820]: I0203 12:36:19.850339 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a655153d-67ad-489c-b58e-3ddc02470bac-dns-svc\") pod \"dnsmasq-dns-89c5cd4d5-mrmgt\" (UID: \"a655153d-67ad-489c-b58e-3ddc02470bac\") " pod="openstack/dnsmasq-dns-89c5cd4d5-mrmgt" Feb 03 12:36:19 crc kubenswrapper[4820]: I0203 12:36:19.951485 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a655153d-67ad-489c-b58e-3ddc02470bac-dns-svc\") pod \"dnsmasq-dns-89c5cd4d5-mrmgt\" (UID: \"a655153d-67ad-489c-b58e-3ddc02470bac\") " pod="openstack/dnsmasq-dns-89c5cd4d5-mrmgt" Feb 03 12:36:19 crc kubenswrapper[4820]: I0203 12:36:19.951556 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a655153d-67ad-489c-b58e-3ddc02470bac-dns-swift-storage-0\") pod \"dnsmasq-dns-89c5cd4d5-mrmgt\" (UID: \"a655153d-67ad-489c-b58e-3ddc02470bac\") " pod="openstack/dnsmasq-dns-89c5cd4d5-mrmgt" Feb 03 12:36:19 crc kubenswrapper[4820]: I0203 12:36:19.951609 4820 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a655153d-67ad-489c-b58e-3ddc02470bac-ovsdbserver-sb\") pod \"dnsmasq-dns-89c5cd4d5-mrmgt\" (UID: \"a655153d-67ad-489c-b58e-3ddc02470bac\") " pod="openstack/dnsmasq-dns-89c5cd4d5-mrmgt" Feb 03 12:36:19 crc kubenswrapper[4820]: I0203 12:36:19.951633 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ss4pj\" (UniqueName: \"kubernetes.io/projected/a655153d-67ad-489c-b58e-3ddc02470bac-kube-api-access-ss4pj\") pod \"dnsmasq-dns-89c5cd4d5-mrmgt\" (UID: \"a655153d-67ad-489c-b58e-3ddc02470bac\") " pod="openstack/dnsmasq-dns-89c5cd4d5-mrmgt" Feb 03 12:36:19 crc kubenswrapper[4820]: I0203 12:36:19.951698 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a655153d-67ad-489c-b58e-3ddc02470bac-config\") pod \"dnsmasq-dns-89c5cd4d5-mrmgt\" (UID: \"a655153d-67ad-489c-b58e-3ddc02470bac\") " pod="openstack/dnsmasq-dns-89c5cd4d5-mrmgt" Feb 03 12:36:19 crc kubenswrapper[4820]: I0203 12:36:19.951739 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a655153d-67ad-489c-b58e-3ddc02470bac-ovsdbserver-nb\") pod \"dnsmasq-dns-89c5cd4d5-mrmgt\" (UID: \"a655153d-67ad-489c-b58e-3ddc02470bac\") " pod="openstack/dnsmasq-dns-89c5cd4d5-mrmgt" Feb 03 12:36:19 crc kubenswrapper[4820]: I0203 12:36:19.953012 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a655153d-67ad-489c-b58e-3ddc02470bac-dns-svc\") pod \"dnsmasq-dns-89c5cd4d5-mrmgt\" (UID: \"a655153d-67ad-489c-b58e-3ddc02470bac\") " pod="openstack/dnsmasq-dns-89c5cd4d5-mrmgt" Feb 03 12:36:19 crc kubenswrapper[4820]: I0203 12:36:19.954271 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a655153d-67ad-489c-b58e-3ddc02470bac-config\") pod \"dnsmasq-dns-89c5cd4d5-mrmgt\" (UID: \"a655153d-67ad-489c-b58e-3ddc02470bac\") " pod="openstack/dnsmasq-dns-89c5cd4d5-mrmgt" Feb 03 12:36:19 crc kubenswrapper[4820]: I0203 12:36:19.954345 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a655153d-67ad-489c-b58e-3ddc02470bac-ovsdbserver-nb\") pod \"dnsmasq-dns-89c5cd4d5-mrmgt\" (UID: \"a655153d-67ad-489c-b58e-3ddc02470bac\") " pod="openstack/dnsmasq-dns-89c5cd4d5-mrmgt" Feb 03 12:36:19 crc kubenswrapper[4820]: I0203 12:36:19.954367 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a655153d-67ad-489c-b58e-3ddc02470bac-dns-swift-storage-0\") pod \"dnsmasq-dns-89c5cd4d5-mrmgt\" (UID: \"a655153d-67ad-489c-b58e-3ddc02470bac\") " pod="openstack/dnsmasq-dns-89c5cd4d5-mrmgt" Feb 03 12:36:19 crc kubenswrapper[4820]: I0203 12:36:19.956190 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a655153d-67ad-489c-b58e-3ddc02470bac-ovsdbserver-sb\") pod \"dnsmasq-dns-89c5cd4d5-mrmgt\" (UID: \"a655153d-67ad-489c-b58e-3ddc02470bac\") " pod="openstack/dnsmasq-dns-89c5cd4d5-mrmgt" Feb 03 12:36:19 crc kubenswrapper[4820]: I0203 12:36:19.998997 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ss4pj\" (UniqueName: 
\"kubernetes.io/projected/a655153d-67ad-489c-b58e-3ddc02470bac-kube-api-access-ss4pj\") pod \"dnsmasq-dns-89c5cd4d5-mrmgt\" (UID: \"a655153d-67ad-489c-b58e-3ddc02470bac\") " pod="openstack/dnsmasq-dns-89c5cd4d5-mrmgt" Feb 03 12:36:20 crc kubenswrapper[4820]: I0203 12:36:20.388965 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-89c5cd4d5-mrmgt" Feb 03 12:36:21 crc kubenswrapper[4820]: I0203 12:36:21.068200 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-mrmgt"] Feb 03 12:36:21 crc kubenswrapper[4820]: I0203 12:36:21.486787 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-mrmgt" event={"ID":"a655153d-67ad-489c-b58e-3ddc02470bac","Type":"ContainerStarted","Data":"5add7ab283120cd1e1095871edb85855ad6ad74c3dadf9fd3296444c83ea2a38"} Feb 03 12:36:21 crc kubenswrapper[4820]: I0203 12:36:21.826344 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-5fdc8588b4-jtjr8" Feb 03 12:36:22 crc kubenswrapper[4820]: I0203 12:36:22.222026 4820 scope.go:117] "RemoveContainer" containerID="3cd22393026f08f47789b9666f306edb2325c4bd0fbfd405ee1636bfdfa661d3" Feb 03 12:36:22 crc kubenswrapper[4820]: E0203 12:36:22.222667 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qj7xr_openshift-machine-config-operator(2c02def6-29f2-448e-80ec-0c8ee290f053)\"" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" Feb 03 12:36:22 crc kubenswrapper[4820]: I0203 12:36:22.411552 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 03 12:36:22 crc kubenswrapper[4820]: I0203 12:36:22.502501 4820 generic.go:334] "Generic (PLEG): container finished" podID="a655153d-67ad-489c-b58e-3ddc02470bac" containerID="0d42bb656e77f0983241227008ea99a60d852dc79461d69e29b37ddb3135f109" exitCode=0 Feb 03 12:36:22 crc kubenswrapper[4820]: I0203 12:36:22.502561 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-mrmgt" event={"ID":"a655153d-67ad-489c-b58e-3ddc02470bac","Type":"ContainerDied","Data":"0d42bb656e77f0983241227008ea99a60d852dc79461d69e29b37ddb3135f109"} Feb 03 12:36:22 crc kubenswrapper[4820]: I0203 12:36:22.555848 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 03 12:36:22 crc kubenswrapper[4820]: I0203 12:36:22.566421 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 03 12:36:23 crc kubenswrapper[4820]: I0203 12:36:23.385939 4820 scope.go:117] "RemoveContainer" containerID="a1ff861ad6ee50e7673d412707100050bf5dc95a1a5eef2f3c9d1d19ec15a594" Feb 03 12:36:23 crc kubenswrapper[4820]: I0203 12:36:23.520336 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-mrmgt" event={"ID":"a655153d-67ad-489c-b58e-3ddc02470bac","Type":"ContainerStarted","Data":"aee9f76e0d96596fb9dd121e63a9139a9357ed5b155c56d68998e4d62a9bf664"} Feb 03 12:36:23 crc kubenswrapper[4820]: I0203 12:36:23.521965 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-89c5cd4d5-mrmgt" Feb 03 12:36:23 crc kubenswrapper[4820]: I0203 12:36:23.534634 4820 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 03 12:36:23 crc kubenswrapper[4820]: I0203 12:36:23.557596 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-89c5cd4d5-mrmgt" podStartSLOduration=4.55757116 podStartE2EDuration="4.55757116s" podCreationTimestamp="2026-02-03 12:36:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:36:23.545181424 +0000 UTC m=+1901.068257298" watchObservedRunningTime="2026-02-03 12:36:23.55757116 +0000 UTC m=+1901.080647024" Feb 03 12:36:24 crc kubenswrapper[4820]: I0203 12:36:24.282358 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 03 12:36:24 crc kubenswrapper[4820]: I0203 12:36:24.282931 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="42d6c88c-c4d6-4381-92a0-2d3e62bbad91" containerName="nova-api-log" containerID="cri-o://86cb6394b490532e1d9c43f493512e8e56bba1c58db4ffd036681d7e8a1a8dde" gracePeriod=30 Feb 03 12:36:24 crc kubenswrapper[4820]: I0203 12:36:24.283114 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="42d6c88c-c4d6-4381-92a0-2d3e62bbad91" containerName="nova-api-api" containerID="cri-o://957395278bc346b44c781bb57e59da6fa64fae2f6a60c0c35d3b0d1eb75658a3" gracePeriod=30 Feb 03 12:36:24 crc kubenswrapper[4820]: I0203 12:36:24.539249 4820 generic.go:334] "Generic (PLEG): container finished" podID="42d6c88c-c4d6-4381-92a0-2d3e62bbad91" containerID="86cb6394b490532e1d9c43f493512e8e56bba1c58db4ffd036681d7e8a1a8dde" exitCode=143 Feb 03 12:36:24 crc kubenswrapper[4820]: I0203 12:36:24.539327 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"42d6c88c-c4d6-4381-92a0-2d3e62bbad91","Type":"ContainerDied","Data":"86cb6394b490532e1d9c43f493512e8e56bba1c58db4ffd036681d7e8a1a8dde"} Feb 03 12:36:24 crc kubenswrapper[4820]: I0203 12:36:24.551978 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 03 12:36:24 crc kubenswrapper[4820]: I0203 12:36:24.552398 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a4d47fbc-d003-4831-81f0-e520d6a44602" containerName="ceilometer-central-agent" containerID="cri-o://fecc5af700498b446233045a239eaead6fa4571408cf67dd2867b5c7f3424988" gracePeriod=30 Feb 03 12:36:24 crc kubenswrapper[4820]: I0203 12:36:24.553190 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a4d47fbc-d003-4831-81f0-e520d6a44602" containerName="proxy-httpd" containerID="cri-o://501ee7872b55b08a6c826326153228e89f0aab9e4fb4c707b7ba0ba07f77376f" gracePeriod=30 Feb 03 12:36:24 crc kubenswrapper[4820]: I0203 12:36:24.553323 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a4d47fbc-d003-4831-81f0-e520d6a44602" containerName="ceilometer-notification-agent" containerID="cri-o://5c8d64c2bc98ffa16d4e444310a28ee0b766fa8ba3f048ce42692fd2f6c78200" gracePeriod=30 Feb 03 12:36:24 crc kubenswrapper[4820]: I0203 12:36:24.553393 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a4d47fbc-d003-4831-81f0-e520d6a44602" containerName="sg-core" 
containerID="cri-o://2765701c58e8af789fd970a737facfa2928f9f8296cf86663a9fdc4aaa69ace2" gracePeriod=30 Feb 03 12:36:25 crc kubenswrapper[4820]: I0203 12:36:25.560585 4820 generic.go:334] "Generic (PLEG): container finished" podID="a4d47fbc-d003-4831-81f0-e520d6a44602" containerID="501ee7872b55b08a6c826326153228e89f0aab9e4fb4c707b7ba0ba07f77376f" exitCode=0 Feb 03 12:36:25 crc kubenswrapper[4820]: I0203 12:36:25.560835 4820 generic.go:334] "Generic (PLEG): container finished" podID="a4d47fbc-d003-4831-81f0-e520d6a44602" containerID="2765701c58e8af789fd970a737facfa2928f9f8296cf86663a9fdc4aaa69ace2" exitCode=2 Feb 03 12:36:25 crc kubenswrapper[4820]: I0203 12:36:25.560846 4820 generic.go:334] "Generic (PLEG): container finished" podID="a4d47fbc-d003-4831-81f0-e520d6a44602" containerID="fecc5af700498b446233045a239eaead6fa4571408cf67dd2867b5c7f3424988" exitCode=0 Feb 03 12:36:25 crc kubenswrapper[4820]: I0203 12:36:25.560733 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a4d47fbc-d003-4831-81f0-e520d6a44602","Type":"ContainerDied","Data":"501ee7872b55b08a6c826326153228e89f0aab9e4fb4c707b7ba0ba07f77376f"} Feb 03 12:36:25 crc kubenswrapper[4820]: I0203 12:36:25.561039 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a4d47fbc-d003-4831-81f0-e520d6a44602","Type":"ContainerDied","Data":"2765701c58e8af789fd970a737facfa2928f9f8296cf86663a9fdc4aaa69ace2"} Feb 03 12:36:25 crc kubenswrapper[4820]: I0203 12:36:25.561066 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a4d47fbc-d003-4831-81f0-e520d6a44602","Type":"ContainerDied","Data":"fecc5af700498b446233045a239eaead6fa4571408cf67dd2867b5c7f3424988"} Feb 03 12:36:27 crc kubenswrapper[4820]: I0203 12:36:27.263259 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-68b4df5bdd-tdb9h" Feb 03 12:36:28 crc kubenswrapper[4820]: I0203 12:36:28.381859 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 03 12:36:28 crc kubenswrapper[4820]: I0203 12:36:28.477108 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/42d6c88c-c4d6-4381-92a0-2d3e62bbad91-logs\") pod \"42d6c88c-c4d6-4381-92a0-2d3e62bbad91\" (UID: \"42d6c88c-c4d6-4381-92a0-2d3e62bbad91\") " Feb 03 12:36:28 crc kubenswrapper[4820]: I0203 12:36:28.477345 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42d6c88c-c4d6-4381-92a0-2d3e62bbad91-combined-ca-bundle\") pod \"42d6c88c-c4d6-4381-92a0-2d3e62bbad91\" (UID: \"42d6c88c-c4d6-4381-92a0-2d3e62bbad91\") " Feb 03 12:36:28 crc kubenswrapper[4820]: I0203 12:36:28.477492 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42d6c88c-c4d6-4381-92a0-2d3e62bbad91-config-data\") pod \"42d6c88c-c4d6-4381-92a0-2d3e62bbad91\" (UID: \"42d6c88c-c4d6-4381-92a0-2d3e62bbad91\") " Feb 03 12:36:28 crc kubenswrapper[4820]: I0203 12:36:28.477531 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lgm5q\" (UniqueName: \"kubernetes.io/projected/42d6c88c-c4d6-4381-92a0-2d3e62bbad91-kube-api-access-lgm5q\") pod \"42d6c88c-c4d6-4381-92a0-2d3e62bbad91\" (UID: \"42d6c88c-c4d6-4381-92a0-2d3e62bbad91\") " Feb 03 12:36:28 crc kubenswrapper[4820]: I0203 12:36:28.478423 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/42d6c88c-c4d6-4381-92a0-2d3e62bbad91-logs" (OuterVolumeSpecName: "logs") pod "42d6c88c-c4d6-4381-92a0-2d3e62bbad91" (UID: "42d6c88c-c4d6-4381-92a0-2d3e62bbad91"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 12:36:28 crc kubenswrapper[4820]: I0203 12:36:28.499436 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42d6c88c-c4d6-4381-92a0-2d3e62bbad91-kube-api-access-lgm5q" (OuterVolumeSpecName: "kube-api-access-lgm5q") pod "42d6c88c-c4d6-4381-92a0-2d3e62bbad91" (UID: "42d6c88c-c4d6-4381-92a0-2d3e62bbad91"). InnerVolumeSpecName "kube-api-access-lgm5q". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:36:28 crc kubenswrapper[4820]: I0203 12:36:28.514457 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42d6c88c-c4d6-4381-92a0-2d3e62bbad91-config-data" (OuterVolumeSpecName: "config-data") pod "42d6c88c-c4d6-4381-92a0-2d3e62bbad91" (UID: "42d6c88c-c4d6-4381-92a0-2d3e62bbad91"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:36:28 crc kubenswrapper[4820]: I0203 12:36:28.523518 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42d6c88c-c4d6-4381-92a0-2d3e62bbad91-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "42d6c88c-c4d6-4381-92a0-2d3e62bbad91" (UID: "42d6c88c-c4d6-4381-92a0-2d3e62bbad91"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:36:28 crc kubenswrapper[4820]: I0203 12:36:28.581393 4820 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/42d6c88c-c4d6-4381-92a0-2d3e62bbad91-logs\") on node \"crc\" DevicePath \"\"" Feb 03 12:36:28 crc kubenswrapper[4820]: I0203 12:36:28.581442 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42d6c88c-c4d6-4381-92a0-2d3e62bbad91-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 12:36:28 crc kubenswrapper[4820]: I0203 12:36:28.581461 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42d6c88c-c4d6-4381-92a0-2d3e62bbad91-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 12:36:28 crc kubenswrapper[4820]: I0203 12:36:28.581475 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lgm5q\" (UniqueName: \"kubernetes.io/projected/42d6c88c-c4d6-4381-92a0-2d3e62bbad91-kube-api-access-lgm5q\") on node \"crc\" DevicePath \"\"" Feb 03 12:36:28 crc kubenswrapper[4820]: I0203 12:36:28.680750 4820 generic.go:334] "Generic (PLEG): container finished" podID="42d6c88c-c4d6-4381-92a0-2d3e62bbad91" containerID="957395278bc346b44c781bb57e59da6fa64fae2f6a60c0c35d3b0d1eb75658a3" exitCode=0 Feb 03 12:36:28 crc kubenswrapper[4820]: I0203 12:36:28.680864 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 03 12:36:28 crc kubenswrapper[4820]: I0203 12:36:28.681111 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"42d6c88c-c4d6-4381-92a0-2d3e62bbad91","Type":"ContainerDied","Data":"957395278bc346b44c781bb57e59da6fa64fae2f6a60c0c35d3b0d1eb75658a3"} Feb 03 12:36:28 crc kubenswrapper[4820]: I0203 12:36:28.682326 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"42d6c88c-c4d6-4381-92a0-2d3e62bbad91","Type":"ContainerDied","Data":"1780fa036efddadf860bb5c3d437ad28274ac10c7616ae73c130ab9b208f0542"} Feb 03 12:36:28 crc kubenswrapper[4820]: I0203 12:36:28.682437 4820 scope.go:117] "RemoveContainer" containerID="957395278bc346b44c781bb57e59da6fa64fae2f6a60c0c35d3b0d1eb75658a3" Feb 03 12:36:28 crc kubenswrapper[4820]: I0203 12:36:28.719062 4820 scope.go:117] "RemoveContainer" containerID="86cb6394b490532e1d9c43f493512e8e56bba1c58db4ffd036681d7e8a1a8dde" Feb 03 12:36:28 crc kubenswrapper[4820]: I0203 12:36:28.743445 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 03 12:36:28 crc kubenswrapper[4820]: I0203 12:36:28.762211 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 03 12:36:28 crc kubenswrapper[4820]: I0203 12:36:28.767167 4820 scope.go:117] "RemoveContainer" containerID="957395278bc346b44c781bb57e59da6fa64fae2f6a60c0c35d3b0d1eb75658a3" Feb 03 12:36:28 crc kubenswrapper[4820]: E0203 12:36:28.768681 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"957395278bc346b44c781bb57e59da6fa64fae2f6a60c0c35d3b0d1eb75658a3\": container with ID starting with 957395278bc346b44c781bb57e59da6fa64fae2f6a60c0c35d3b0d1eb75658a3 not found: ID does not exist" containerID="957395278bc346b44c781bb57e59da6fa64fae2f6a60c0c35d3b0d1eb75658a3" Feb 03 12:36:28 crc kubenswrapper[4820]: I0203 12:36:28.768748 4820 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"957395278bc346b44c781bb57e59da6fa64fae2f6a60c0c35d3b0d1eb75658a3"} err="failed to get container status \"957395278bc346b44c781bb57e59da6fa64fae2f6a60c0c35d3b0d1eb75658a3\": rpc error: code = NotFound desc = could not find container \"957395278bc346b44c781bb57e59da6fa64fae2f6a60c0c35d3b0d1eb75658a3\": container with ID starting with 957395278bc346b44c781bb57e59da6fa64fae2f6a60c0c35d3b0d1eb75658a3 not found: ID does not exist" Feb 03 12:36:28 crc kubenswrapper[4820]: I0203 12:36:28.768787 4820 scope.go:117] "RemoveContainer" containerID="86cb6394b490532e1d9c43f493512e8e56bba1c58db4ffd036681d7e8a1a8dde" Feb 03 12:36:28 crc kubenswrapper[4820]: E0203 12:36:28.769510 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"86cb6394b490532e1d9c43f493512e8e56bba1c58db4ffd036681d7e8a1a8dde\": container with ID starting with 86cb6394b490532e1d9c43f493512e8e56bba1c58db4ffd036681d7e8a1a8dde not found: ID does not exist" containerID="86cb6394b490532e1d9c43f493512e8e56bba1c58db4ffd036681d7e8a1a8dde" Feb 03 12:36:28 crc kubenswrapper[4820]: I0203 12:36:28.769543 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"86cb6394b490532e1d9c43f493512e8e56bba1c58db4ffd036681d7e8a1a8dde"} err="failed to get container status \"86cb6394b490532e1d9c43f493512e8e56bba1c58db4ffd036681d7e8a1a8dde\": rpc error: code = NotFound desc = could not find container \"86cb6394b490532e1d9c43f493512e8e56bba1c58db4ffd036681d7e8a1a8dde\": container with ID starting with 86cb6394b490532e1d9c43f493512e8e56bba1c58db4ffd036681d7e8a1a8dde not found: ID does not exist" Feb 03 12:36:28 crc kubenswrapper[4820]: I0203 12:36:28.778845 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 03 12:36:28 crc kubenswrapper[4820]: E0203 12:36:28.780236 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42d6c88c-c4d6-4381-92a0-2d3e62bbad91" containerName="nova-api-api" Feb 03 12:36:28 crc kubenswrapper[4820]: I0203 12:36:28.780274 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="42d6c88c-c4d6-4381-92a0-2d3e62bbad91" containerName="nova-api-api" Feb 03 12:36:28 crc kubenswrapper[4820]: E0203 12:36:28.780318 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42d6c88c-c4d6-4381-92a0-2d3e62bbad91" containerName="nova-api-log" Feb 03 12:36:28 crc kubenswrapper[4820]: I0203 12:36:28.780326 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="42d6c88c-c4d6-4381-92a0-2d3e62bbad91" containerName="nova-api-log" Feb 03 12:36:28 crc kubenswrapper[4820]: I0203 12:36:28.780844 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="42d6c88c-c4d6-4381-92a0-2d3e62bbad91" containerName="nova-api-log" Feb 03 12:36:28 crc kubenswrapper[4820]: I0203 12:36:28.780901 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="42d6c88c-c4d6-4381-92a0-2d3e62bbad91" containerName="nova-api-api" Feb 03 12:36:28 crc kubenswrapper[4820]: I0203 12:36:28.783216 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 03 12:36:28 crc kubenswrapper[4820]: I0203 12:36:28.785485 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/26398afc-04a6-4c1f-92bf-767a938debad-internal-tls-certs\") pod \"nova-api-0\" (UID: \"26398afc-04a6-4c1f-92bf-767a938debad\") " pod="openstack/nova-api-0" Feb 03 12:36:28 crc kubenswrapper[4820]: I0203 12:36:28.785550 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26398afc-04a6-4c1f-92bf-767a938debad-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"26398afc-04a6-4c1f-92bf-767a938debad\") " pod="openstack/nova-api-0" Feb 03 12:36:28 crc kubenswrapper[4820]: I0203 12:36:28.785615 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26398afc-04a6-4c1f-92bf-767a938debad-config-data\") pod \"nova-api-0\" (UID: \"26398afc-04a6-4c1f-92bf-767a938debad\") " pod="openstack/nova-api-0" Feb 03 12:36:28 crc kubenswrapper[4820]: I0203 12:36:28.785691 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/26398afc-04a6-4c1f-92bf-767a938debad-logs\") pod \"nova-api-0\" (UID: \"26398afc-04a6-4c1f-92bf-767a938debad\") " pod="openstack/nova-api-0" Feb 03 12:36:28 crc kubenswrapper[4820]: I0203 12:36:28.785721 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/26398afc-04a6-4c1f-92bf-767a938debad-public-tls-certs\") pod \"nova-api-0\" (UID: \"26398afc-04a6-4c1f-92bf-767a938debad\") " pod="openstack/nova-api-0" Feb 03 12:36:28 crc kubenswrapper[4820]: I0203 12:36:28.785839 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lpxjq\" (UniqueName: \"kubernetes.io/projected/26398afc-04a6-4c1f-92bf-767a938debad-kube-api-access-lpxjq\") pod \"nova-api-0\" (UID: \"26398afc-04a6-4c1f-92bf-767a938debad\") " pod="openstack/nova-api-0" Feb 03 12:36:28 crc kubenswrapper[4820]: I0203 12:36:28.788683 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Feb 03 12:36:28 crc kubenswrapper[4820]: I0203 12:36:28.788856 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 03 12:36:28 crc kubenswrapper[4820]: I0203 12:36:28.789087 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Feb 03 12:36:28 crc kubenswrapper[4820]: I0203 12:36:28.795835 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 03 12:36:29 crc kubenswrapper[4820]: I0203 12:36:29.039771 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26398afc-04a6-4c1f-92bf-767a938debad-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"26398afc-04a6-4c1f-92bf-767a938debad\") " pod="openstack/nova-api-0" Feb 03 12:36:29 crc kubenswrapper[4820]: I0203 12:36:29.040179 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26398afc-04a6-4c1f-92bf-767a938debad-config-data\") pod \"nova-api-0\" (UID: 
\"26398afc-04a6-4c1f-92bf-767a938debad\") " pod="openstack/nova-api-0" Feb 03 12:36:29 crc kubenswrapper[4820]: I0203 12:36:29.040560 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/26398afc-04a6-4c1f-92bf-767a938debad-logs\") pod \"nova-api-0\" (UID: \"26398afc-04a6-4c1f-92bf-767a938debad\") " pod="openstack/nova-api-0" Feb 03 12:36:29 crc kubenswrapper[4820]: I0203 12:36:29.040681 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/26398afc-04a6-4c1f-92bf-767a938debad-public-tls-certs\") pod \"nova-api-0\" (UID: \"26398afc-04a6-4c1f-92bf-767a938debad\") " pod="openstack/nova-api-0" Feb 03 12:36:29 crc kubenswrapper[4820]: I0203 12:36:29.041265 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lpxjq\" (UniqueName: \"kubernetes.io/projected/26398afc-04a6-4c1f-92bf-767a938debad-kube-api-access-lpxjq\") pod \"nova-api-0\" (UID: \"26398afc-04a6-4c1f-92bf-767a938debad\") " pod="openstack/nova-api-0" Feb 03 12:36:29 crc kubenswrapper[4820]: I0203 12:36:29.041351 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/26398afc-04a6-4c1f-92bf-767a938debad-internal-tls-certs\") pod \"nova-api-0\" (UID: \"26398afc-04a6-4c1f-92bf-767a938debad\") " pod="openstack/nova-api-0" Feb 03 12:36:29 crc kubenswrapper[4820]: I0203 12:36:29.042269 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/26398afc-04a6-4c1f-92bf-767a938debad-logs\") pod \"nova-api-0\" (UID: \"26398afc-04a6-4c1f-92bf-767a938debad\") " pod="openstack/nova-api-0" Feb 03 12:36:29 crc kubenswrapper[4820]: I0203 12:36:29.047998 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26398afc-04a6-4c1f-92bf-767a938debad-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"26398afc-04a6-4c1f-92bf-767a938debad\") " pod="openstack/nova-api-0" Feb 03 12:36:29 crc kubenswrapper[4820]: I0203 12:36:29.049223 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26398afc-04a6-4c1f-92bf-767a938debad-config-data\") pod \"nova-api-0\" (UID: \"26398afc-04a6-4c1f-92bf-767a938debad\") " pod="openstack/nova-api-0" Feb 03 12:36:29 crc kubenswrapper[4820]: I0203 12:36:29.053528 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/26398afc-04a6-4c1f-92bf-767a938debad-public-tls-certs\") pod \"nova-api-0\" (UID: \"26398afc-04a6-4c1f-92bf-767a938debad\") " pod="openstack/nova-api-0" Feb 03 12:36:29 crc kubenswrapper[4820]: I0203 12:36:29.056025 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/26398afc-04a6-4c1f-92bf-767a938debad-internal-tls-certs\") pod \"nova-api-0\" (UID: \"26398afc-04a6-4c1f-92bf-767a938debad\") " pod="openstack/nova-api-0" Feb 03 12:36:29 crc kubenswrapper[4820]: I0203 12:36:29.064315 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lpxjq\" (UniqueName: \"kubernetes.io/projected/26398afc-04a6-4c1f-92bf-767a938debad-kube-api-access-lpxjq\") pod \"nova-api-0\" (UID: \"26398afc-04a6-4c1f-92bf-767a938debad\") " pod="openstack/nova-api-0" Feb 
Feb 03 12:36:29 crc kubenswrapper[4820]: I0203 12:36:29.158857 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42d6c88c-c4d6-4381-92a0-2d3e62bbad91" path="/var/lib/kubelet/pods/42d6c88c-c4d6-4381-92a0-2d3e62bbad91/volumes"
Feb 03 12:36:29 crc kubenswrapper[4820]: I0203 12:36:29.341591 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 03 12:36:29 crc kubenswrapper[4820]: I0203 12:36:29.554029 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 03 12:36:29 crc kubenswrapper[4820]: I0203 12:36:29.655128 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a4d47fbc-d003-4831-81f0-e520d6a44602-config-data\") pod \"a4d47fbc-d003-4831-81f0-e520d6a44602\" (UID: \"a4d47fbc-d003-4831-81f0-e520d6a44602\") "
Feb 03 12:36:29 crc kubenswrapper[4820]: I0203 12:36:29.655600 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a4d47fbc-d003-4831-81f0-e520d6a44602-scripts\") pod \"a4d47fbc-d003-4831-81f0-e520d6a44602\" (UID: \"a4d47fbc-d003-4831-81f0-e520d6a44602\") "
Feb 03 12:36:29 crc kubenswrapper[4820]: I0203 12:36:29.655670 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a4d47fbc-d003-4831-81f0-e520d6a44602-run-httpd\") pod \"a4d47fbc-d003-4831-81f0-e520d6a44602\" (UID: \"a4d47fbc-d003-4831-81f0-e520d6a44602\") "
Feb 03 12:36:29 crc kubenswrapper[4820]: I0203 12:36:29.655721 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a4d47fbc-d003-4831-81f0-e520d6a44602-log-httpd\") pod \"a4d47fbc-d003-4831-81f0-e520d6a44602\" (UID: \"a4d47fbc-d003-4831-81f0-e520d6a44602\") "
Feb 03 12:36:29 crc kubenswrapper[4820]: I0203 12:36:29.655750 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a4d47fbc-d003-4831-81f0-e520d6a44602-sg-core-conf-yaml\") pod \"a4d47fbc-d003-4831-81f0-e520d6a44602\" (UID: \"a4d47fbc-d003-4831-81f0-e520d6a44602\") "
Feb 03 12:36:29 crc kubenswrapper[4820]: I0203 12:36:29.655879 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/a4d47fbc-d003-4831-81f0-e520d6a44602-ceilometer-tls-certs\") pod \"a4d47fbc-d003-4831-81f0-e520d6a44602\" (UID: \"a4d47fbc-d003-4831-81f0-e520d6a44602\") "
Feb 03 12:36:29 crc kubenswrapper[4820]: I0203 12:36:29.655953 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-whj5n\" (UniqueName: \"kubernetes.io/projected/a4d47fbc-d003-4831-81f0-e520d6a44602-kube-api-access-whj5n\") pod \"a4d47fbc-d003-4831-81f0-e520d6a44602\" (UID: \"a4d47fbc-d003-4831-81f0-e520d6a44602\") "
Feb 03 12:36:29 crc kubenswrapper[4820]: I0203 12:36:29.655989 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4d47fbc-d003-4831-81f0-e520d6a44602-combined-ca-bundle\") pod \"a4d47fbc-d003-4831-81f0-e520d6a44602\" (UID: \"a4d47fbc-d003-4831-81f0-e520d6a44602\") "
"kubernetes.io/empty-dir/a4d47fbc-d003-4831-81f0-e520d6a44602-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "a4d47fbc-d003-4831-81f0-e520d6a44602" (UID: "a4d47fbc-d003-4831-81f0-e520d6a44602"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 12:36:29 crc kubenswrapper[4820]: I0203 12:36:29.656727 4820 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a4d47fbc-d003-4831-81f0-e520d6a44602-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 03 12:36:29 crc kubenswrapper[4820]: I0203 12:36:29.657600 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a4d47fbc-d003-4831-81f0-e520d6a44602-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "a4d47fbc-d003-4831-81f0-e520d6a44602" (UID: "a4d47fbc-d003-4831-81f0-e520d6a44602"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 12:36:29 crc kubenswrapper[4820]: I0203 12:36:29.664181 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4d47fbc-d003-4831-81f0-e520d6a44602-scripts" (OuterVolumeSpecName: "scripts") pod "a4d47fbc-d003-4831-81f0-e520d6a44602" (UID: "a4d47fbc-d003-4831-81f0-e520d6a44602"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:36:29 crc kubenswrapper[4820]: I0203 12:36:29.667985 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4d47fbc-d003-4831-81f0-e520d6a44602-kube-api-access-whj5n" (OuterVolumeSpecName: "kube-api-access-whj5n") pod "a4d47fbc-d003-4831-81f0-e520d6a44602" (UID: "a4d47fbc-d003-4831-81f0-e520d6a44602"). InnerVolumeSpecName "kube-api-access-whj5n". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:36:29 crc kubenswrapper[4820]: I0203 12:36:29.714144 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4d47fbc-d003-4831-81f0-e520d6a44602-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "a4d47fbc-d003-4831-81f0-e520d6a44602" (UID: "a4d47fbc-d003-4831-81f0-e520d6a44602"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:36:29 crc kubenswrapper[4820]: I0203 12:36:29.721075 4820 generic.go:334] "Generic (PLEG): container finished" podID="a4d47fbc-d003-4831-81f0-e520d6a44602" containerID="5c8d64c2bc98ffa16d4e444310a28ee0b766fa8ba3f048ce42692fd2f6c78200" exitCode=0 Feb 03 12:36:29 crc kubenswrapper[4820]: I0203 12:36:29.721247 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 03 12:36:29 crc kubenswrapper[4820]: I0203 12:36:29.721365 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a4d47fbc-d003-4831-81f0-e520d6a44602","Type":"ContainerDied","Data":"5c8d64c2bc98ffa16d4e444310a28ee0b766fa8ba3f048ce42692fd2f6c78200"} Feb 03 12:36:29 crc kubenswrapper[4820]: I0203 12:36:29.721431 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a4d47fbc-d003-4831-81f0-e520d6a44602","Type":"ContainerDied","Data":"87d02b08c793b395765cbe147867162fe97a1224f84241481343e2f703ba703b"} Feb 03 12:36:29 crc kubenswrapper[4820]: I0203 12:36:29.721465 4820 scope.go:117] "RemoveContainer" containerID="501ee7872b55b08a6c826326153228e89f0aab9e4fb4c707b7ba0ba07f77376f" Feb 03 12:36:29 crc kubenswrapper[4820]: I0203 12:36:29.747398 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4d47fbc-d003-4831-81f0-e520d6a44602-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "a4d47fbc-d003-4831-81f0-e520d6a44602" (UID: "a4d47fbc-d003-4831-81f0-e520d6a44602"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:36:29 crc kubenswrapper[4820]: I0203 12:36:29.757440 4820 scope.go:117] "RemoveContainer" containerID="2765701c58e8af789fd970a737facfa2928f9f8296cf86663a9fdc4aaa69ace2" Feb 03 12:36:29 crc kubenswrapper[4820]: I0203 12:36:29.759171 4820 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a4d47fbc-d003-4831-81f0-e520d6a44602-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 03 12:36:29 crc kubenswrapper[4820]: I0203 12:36:29.759195 4820 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a4d47fbc-d003-4831-81f0-e520d6a44602-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 03 12:36:29 crc kubenswrapper[4820]: I0203 12:36:29.759204 4820 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/a4d47fbc-d003-4831-81f0-e520d6a44602-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 03 12:36:29 crc kubenswrapper[4820]: I0203 12:36:29.759213 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-whj5n\" (UniqueName: \"kubernetes.io/projected/a4d47fbc-d003-4831-81f0-e520d6a44602-kube-api-access-whj5n\") on node \"crc\" DevicePath \"\"" Feb 03 12:36:29 crc kubenswrapper[4820]: I0203 12:36:29.759222 4820 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a4d47fbc-d003-4831-81f0-e520d6a44602-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 12:36:29 crc kubenswrapper[4820]: I0203 12:36:29.782093 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4d47fbc-d003-4831-81f0-e520d6a44602-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a4d47fbc-d003-4831-81f0-e520d6a44602" (UID: "a4d47fbc-d003-4831-81f0-e520d6a44602"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:36:29 crc kubenswrapper[4820]: I0203 12:36:29.789189 4820 scope.go:117] "RemoveContainer" containerID="5c8d64c2bc98ffa16d4e444310a28ee0b766fa8ba3f048ce42692fd2f6c78200" Feb 03 12:36:29 crc kubenswrapper[4820]: I0203 12:36:29.819049 4820 scope.go:117] "RemoveContainer" containerID="fecc5af700498b446233045a239eaead6fa4571408cf67dd2867b5c7f3424988" Feb 03 12:36:29 crc kubenswrapper[4820]: I0203 12:36:29.819214 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4d47fbc-d003-4831-81f0-e520d6a44602-config-data" (OuterVolumeSpecName: "config-data") pod "a4d47fbc-d003-4831-81f0-e520d6a44602" (UID: "a4d47fbc-d003-4831-81f0-e520d6a44602"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:36:29 crc kubenswrapper[4820]: I0203 12:36:29.854117 4820 scope.go:117] "RemoveContainer" containerID="501ee7872b55b08a6c826326153228e89f0aab9e4fb4c707b7ba0ba07f77376f" Feb 03 12:36:29 crc kubenswrapper[4820]: E0203 12:36:29.855349 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"501ee7872b55b08a6c826326153228e89f0aab9e4fb4c707b7ba0ba07f77376f\": container with ID starting with 501ee7872b55b08a6c826326153228e89f0aab9e4fb4c707b7ba0ba07f77376f not found: ID does not exist" containerID="501ee7872b55b08a6c826326153228e89f0aab9e4fb4c707b7ba0ba07f77376f" Feb 03 12:36:29 crc kubenswrapper[4820]: I0203 12:36:29.855406 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"501ee7872b55b08a6c826326153228e89f0aab9e4fb4c707b7ba0ba07f77376f"} err="failed to get container status \"501ee7872b55b08a6c826326153228e89f0aab9e4fb4c707b7ba0ba07f77376f\": rpc error: code = NotFound desc = could not find container \"501ee7872b55b08a6c826326153228e89f0aab9e4fb4c707b7ba0ba07f77376f\": container with ID starting with 501ee7872b55b08a6c826326153228e89f0aab9e4fb4c707b7ba0ba07f77376f not found: ID does not exist" Feb 03 12:36:29 crc kubenswrapper[4820]: I0203 12:36:29.855440 4820 scope.go:117] "RemoveContainer" containerID="2765701c58e8af789fd970a737facfa2928f9f8296cf86663a9fdc4aaa69ace2" Feb 03 12:36:29 crc kubenswrapper[4820]: E0203 12:36:29.856440 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2765701c58e8af789fd970a737facfa2928f9f8296cf86663a9fdc4aaa69ace2\": container with ID starting with 2765701c58e8af789fd970a737facfa2928f9f8296cf86663a9fdc4aaa69ace2 not found: ID does not exist" containerID="2765701c58e8af789fd970a737facfa2928f9f8296cf86663a9fdc4aaa69ace2" Feb 03 12:36:29 crc kubenswrapper[4820]: I0203 12:36:29.856473 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2765701c58e8af789fd970a737facfa2928f9f8296cf86663a9fdc4aaa69ace2"} err="failed to get container status \"2765701c58e8af789fd970a737facfa2928f9f8296cf86663a9fdc4aaa69ace2\": rpc error: code = NotFound desc = could not find container \"2765701c58e8af789fd970a737facfa2928f9f8296cf86663a9fdc4aaa69ace2\": container with ID starting with 2765701c58e8af789fd970a737facfa2928f9f8296cf86663a9fdc4aaa69ace2 not found: ID does not exist" Feb 03 12:36:29 crc kubenswrapper[4820]: I0203 12:36:29.856488 4820 scope.go:117] "RemoveContainer" containerID="5c8d64c2bc98ffa16d4e444310a28ee0b766fa8ba3f048ce42692fd2f6c78200" Feb 03 12:36:29 crc kubenswrapper[4820]: E0203 12:36:29.856816 
4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5c8d64c2bc98ffa16d4e444310a28ee0b766fa8ba3f048ce42692fd2f6c78200\": container with ID starting with 5c8d64c2bc98ffa16d4e444310a28ee0b766fa8ba3f048ce42692fd2f6c78200 not found: ID does not exist" containerID="5c8d64c2bc98ffa16d4e444310a28ee0b766fa8ba3f048ce42692fd2f6c78200" Feb 03 12:36:29 crc kubenswrapper[4820]: I0203 12:36:29.856855 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5c8d64c2bc98ffa16d4e444310a28ee0b766fa8ba3f048ce42692fd2f6c78200"} err="failed to get container status \"5c8d64c2bc98ffa16d4e444310a28ee0b766fa8ba3f048ce42692fd2f6c78200\": rpc error: code = NotFound desc = could not find container \"5c8d64c2bc98ffa16d4e444310a28ee0b766fa8ba3f048ce42692fd2f6c78200\": container with ID starting with 5c8d64c2bc98ffa16d4e444310a28ee0b766fa8ba3f048ce42692fd2f6c78200 not found: ID does not exist" Feb 03 12:36:29 crc kubenswrapper[4820]: I0203 12:36:29.856875 4820 scope.go:117] "RemoveContainer" containerID="fecc5af700498b446233045a239eaead6fa4571408cf67dd2867b5c7f3424988" Feb 03 12:36:29 crc kubenswrapper[4820]: E0203 12:36:29.861234 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fecc5af700498b446233045a239eaead6fa4571408cf67dd2867b5c7f3424988\": container with ID starting with fecc5af700498b446233045a239eaead6fa4571408cf67dd2867b5c7f3424988 not found: ID does not exist" containerID="fecc5af700498b446233045a239eaead6fa4571408cf67dd2867b5c7f3424988" Feb 03 12:36:29 crc kubenswrapper[4820]: I0203 12:36:29.861293 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fecc5af700498b446233045a239eaead6fa4571408cf67dd2867b5c7f3424988"} err="failed to get container status \"fecc5af700498b446233045a239eaead6fa4571408cf67dd2867b5c7f3424988\": rpc error: code = NotFound desc = could not find container \"fecc5af700498b446233045a239eaead6fa4571408cf67dd2867b5c7f3424988\": container with ID starting with fecc5af700498b446233045a239eaead6fa4571408cf67dd2867b5c7f3424988 not found: ID does not exist" Feb 03 12:36:29 crc kubenswrapper[4820]: I0203 12:36:29.862420 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a4d47fbc-d003-4831-81f0-e520d6a44602-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 12:36:29 crc kubenswrapper[4820]: I0203 12:36:29.862452 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4d47fbc-d003-4831-81f0-e520d6a44602-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 12:36:29 crc kubenswrapper[4820]: I0203 12:36:29.911306 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 03 12:36:30 crc kubenswrapper[4820]: I0203 12:36:30.070335 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 03 12:36:30 crc kubenswrapper[4820]: I0203 12:36:30.079815 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-68b4df5bdd-tdb9h" Feb 03 12:36:30 crc kubenswrapper[4820]: I0203 12:36:30.081883 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 03 12:36:30 crc kubenswrapper[4820]: I0203 12:36:30.106059 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 03 12:36:30 crc 
Feb 03 12:36:30 crc kubenswrapper[4820]: E0203 12:36:30.106659 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4d47fbc-d003-4831-81f0-e520d6a44602" containerName="proxy-httpd"
Feb 03 12:36:30 crc kubenswrapper[4820]: I0203 12:36:30.106685 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4d47fbc-d003-4831-81f0-e520d6a44602" containerName="proxy-httpd"
Feb 03 12:36:30 crc kubenswrapper[4820]: E0203 12:36:30.106709 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4d47fbc-d003-4831-81f0-e520d6a44602" containerName="ceilometer-central-agent"
Feb 03 12:36:30 crc kubenswrapper[4820]: I0203 12:36:30.106720 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4d47fbc-d003-4831-81f0-e520d6a44602" containerName="ceilometer-central-agent"
Feb 03 12:36:30 crc kubenswrapper[4820]: E0203 12:36:30.106744 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4d47fbc-d003-4831-81f0-e520d6a44602" containerName="ceilometer-notification-agent"
Feb 03 12:36:30 crc kubenswrapper[4820]: I0203 12:36:30.106753 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4d47fbc-d003-4831-81f0-e520d6a44602" containerName="ceilometer-notification-agent"
Feb 03 12:36:30 crc kubenswrapper[4820]: E0203 12:36:30.106769 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4d47fbc-d003-4831-81f0-e520d6a44602" containerName="sg-core"
Feb 03 12:36:30 crc kubenswrapper[4820]: I0203 12:36:30.106777 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4d47fbc-d003-4831-81f0-e520d6a44602" containerName="sg-core"
Feb 03 12:36:30 crc kubenswrapper[4820]: I0203 12:36:30.116353 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4d47fbc-d003-4831-81f0-e520d6a44602" containerName="ceilometer-notification-agent"
Feb 03 12:36:30 crc kubenswrapper[4820]: I0203 12:36:30.116415 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4d47fbc-d003-4831-81f0-e520d6a44602" containerName="proxy-httpd"
Feb 03 12:36:30 crc kubenswrapper[4820]: I0203 12:36:30.116438 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4d47fbc-d003-4831-81f0-e520d6a44602" containerName="sg-core"
Feb 03 12:36:30 crc kubenswrapper[4820]: I0203 12:36:30.116459 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4d47fbc-d003-4831-81f0-e520d6a44602" containerName="ceilometer-central-agent"
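The cpu_manager/memory_manager lines above are resource-manager bookkeeping: before admitting the recreated ceilometer-0, the kubelet purges per-container CPU and memory assignments still keyed by the old pod UID. A toy model of that cleanup; the real state and checkpointing live under pkg/kubelet/cm, and this map is purely illustrative:

package main

import "fmt"

// state maps podUID -> containerName -> assigned CPU set (a string here).
type state map[string]map[string]string

// removeStaleState drops assignments for pods the kubelet no longer tracks.
func removeStaleState(s state, active map[string]bool) {
	for podUID, containers := range s {
		if active[podUID] {
			continue
		}
		for name := range containers {
			fmt.Printf("RemoveStaleState: removing container podUID=%q containerName=%q\n", podUID, name)
		}
		delete(s, podUID) // "Deleted CPUSet assignment"
	}
}

func main() {
	s := state{"a4d47fbc": {"sg-core": "2-3", "proxy-httpd": "4"}}
	removeStaleState(s, map[string]bool{})
	fmt.Println(len(s)) // 0
}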
Need to start a new one" pod="openstack/ceilometer-0" Feb 03 12:36:30 crc kubenswrapper[4820]: I0203 12:36:30.123087 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 03 12:36:30 crc kubenswrapper[4820]: I0203 12:36:30.126707 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 03 12:36:30 crc kubenswrapper[4820]: I0203 12:36:30.126974 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 03 12:36:30 crc kubenswrapper[4820]: I0203 12:36:30.127767 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 03 12:36:30 crc kubenswrapper[4820]: I0203 12:36:30.396175 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-89c5cd4d5-mrmgt" Feb 03 12:36:30 crc kubenswrapper[4820]: I0203 12:36:30.485552 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5fdc8588b4-jtjr8"] Feb 03 12:36:30 crc kubenswrapper[4820]: I0203 12:36:30.486207 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-5fdc8588b4-jtjr8" podUID="17c371f7-f032-4444-8d4b-1183a224c7b0" containerName="horizon-log" containerID="cri-o://6e25521b0d495326fa22bb05386fb22e76c170fdbbec9bdbeb0b2eb340a1829a" gracePeriod=30 Feb 03 12:36:30 crc kubenswrapper[4820]: I0203 12:36:30.486760 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-5fdc8588b4-jtjr8" podUID="17c371f7-f032-4444-8d4b-1183a224c7b0" containerName="horizon" containerID="cri-o://9bb0129c1c7f5e8bb1f63d803b792b6c1cd2c7a9cf979aa536548b3eb28e5f73" gracePeriod=30 Feb 03 12:36:30 crc kubenswrapper[4820]: I0203 12:36:30.500582 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fcf87510-64cf-492b-bd2c-560f6ddc0ee2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"fcf87510-64cf-492b-bd2c-560f6ddc0ee2\") " pod="openstack/ceilometer-0" Feb 03 12:36:30 crc kubenswrapper[4820]: I0203 12:36:30.500803 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tlqd4\" (UniqueName: \"kubernetes.io/projected/fcf87510-64cf-492b-bd2c-560f6ddc0ee2-kube-api-access-tlqd4\") pod \"ceilometer-0\" (UID: \"fcf87510-64cf-492b-bd2c-560f6ddc0ee2\") " pod="openstack/ceilometer-0" Feb 03 12:36:30 crc kubenswrapper[4820]: I0203 12:36:30.500862 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fcf87510-64cf-492b-bd2c-560f6ddc0ee2-log-httpd\") pod \"ceilometer-0\" (UID: \"fcf87510-64cf-492b-bd2c-560f6ddc0ee2\") " pod="openstack/ceilometer-0" Feb 03 12:36:30 crc kubenswrapper[4820]: I0203 12:36:30.501668 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fcf87510-64cf-492b-bd2c-560f6ddc0ee2-run-httpd\") pod \"ceilometer-0\" (UID: \"fcf87510-64cf-492b-bd2c-560f6ddc0ee2\") " pod="openstack/ceilometer-0" Feb 03 12:36:30 crc kubenswrapper[4820]: I0203 12:36:30.501739 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/fcf87510-64cf-492b-bd2c-560f6ddc0ee2-ceilometer-tls-certs\") pod 
\"ceilometer-0\" (UID: \"fcf87510-64cf-492b-bd2c-560f6ddc0ee2\") " pod="openstack/ceilometer-0" Feb 03 12:36:30 crc kubenswrapper[4820]: I0203 12:36:30.501777 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fcf87510-64cf-492b-bd2c-560f6ddc0ee2-config-data\") pod \"ceilometer-0\" (UID: \"fcf87510-64cf-492b-bd2c-560f6ddc0ee2\") " pod="openstack/ceilometer-0" Feb 03 12:36:30 crc kubenswrapper[4820]: I0203 12:36:30.501936 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fcf87510-64cf-492b-bd2c-560f6ddc0ee2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"fcf87510-64cf-492b-bd2c-560f6ddc0ee2\") " pod="openstack/ceilometer-0" Feb 03 12:36:30 crc kubenswrapper[4820]: I0203 12:36:30.501971 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fcf87510-64cf-492b-bd2c-560f6ddc0ee2-scripts\") pod \"ceilometer-0\" (UID: \"fcf87510-64cf-492b-bd2c-560f6ddc0ee2\") " pod="openstack/ceilometer-0" Feb 03 12:36:30 crc kubenswrapper[4820]: I0203 12:36:30.569613 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-mnlwd"] Feb 03 12:36:30 crc kubenswrapper[4820]: I0203 12:36:30.569868 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-757b4f8459-mnlwd" podUID="9627e225-fd7c-4d6c-bcf1-0434bfb15d22" containerName="dnsmasq-dns" containerID="cri-o://185ae363e0c2ef42e23e14dcac2896841679b18153b96a6e0ea1ecd99f11d620" gracePeriod=10 Feb 03 12:36:30 crc kubenswrapper[4820]: I0203 12:36:30.603784 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fcf87510-64cf-492b-bd2c-560f6ddc0ee2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"fcf87510-64cf-492b-bd2c-560f6ddc0ee2\") " pod="openstack/ceilometer-0" Feb 03 12:36:30 crc kubenswrapper[4820]: I0203 12:36:30.603970 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tlqd4\" (UniqueName: \"kubernetes.io/projected/fcf87510-64cf-492b-bd2c-560f6ddc0ee2-kube-api-access-tlqd4\") pod \"ceilometer-0\" (UID: \"fcf87510-64cf-492b-bd2c-560f6ddc0ee2\") " pod="openstack/ceilometer-0" Feb 03 12:36:30 crc kubenswrapper[4820]: I0203 12:36:30.604025 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fcf87510-64cf-492b-bd2c-560f6ddc0ee2-log-httpd\") pod \"ceilometer-0\" (UID: \"fcf87510-64cf-492b-bd2c-560f6ddc0ee2\") " pod="openstack/ceilometer-0" Feb 03 12:36:30 crc kubenswrapper[4820]: I0203 12:36:30.604083 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fcf87510-64cf-492b-bd2c-560f6ddc0ee2-run-httpd\") pod \"ceilometer-0\" (UID: \"fcf87510-64cf-492b-bd2c-560f6ddc0ee2\") " pod="openstack/ceilometer-0" Feb 03 12:36:30 crc kubenswrapper[4820]: I0203 12:36:30.604130 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/fcf87510-64cf-492b-bd2c-560f6ddc0ee2-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"fcf87510-64cf-492b-bd2c-560f6ddc0ee2\") " pod="openstack/ceilometer-0" Feb 03 12:36:30 crc 
kubenswrapper[4820]: I0203 12:36:30.604168 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fcf87510-64cf-492b-bd2c-560f6ddc0ee2-config-data\") pod \"ceilometer-0\" (UID: \"fcf87510-64cf-492b-bd2c-560f6ddc0ee2\") " pod="openstack/ceilometer-0" Feb 03 12:36:30 crc kubenswrapper[4820]: I0203 12:36:30.606198 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fcf87510-64cf-492b-bd2c-560f6ddc0ee2-run-httpd\") pod \"ceilometer-0\" (UID: \"fcf87510-64cf-492b-bd2c-560f6ddc0ee2\") " pod="openstack/ceilometer-0" Feb 03 12:36:30 crc kubenswrapper[4820]: I0203 12:36:30.606419 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fcf87510-64cf-492b-bd2c-560f6ddc0ee2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"fcf87510-64cf-492b-bd2c-560f6ddc0ee2\") " pod="openstack/ceilometer-0" Feb 03 12:36:30 crc kubenswrapper[4820]: I0203 12:36:30.606454 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fcf87510-64cf-492b-bd2c-560f6ddc0ee2-scripts\") pod \"ceilometer-0\" (UID: \"fcf87510-64cf-492b-bd2c-560f6ddc0ee2\") " pod="openstack/ceilometer-0" Feb 03 12:36:30 crc kubenswrapper[4820]: I0203 12:36:30.610726 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fcf87510-64cf-492b-bd2c-560f6ddc0ee2-log-httpd\") pod \"ceilometer-0\" (UID: \"fcf87510-64cf-492b-bd2c-560f6ddc0ee2\") " pod="openstack/ceilometer-0" Feb 03 12:36:30 crc kubenswrapper[4820]: I0203 12:36:30.614917 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fcf87510-64cf-492b-bd2c-560f6ddc0ee2-config-data\") pod \"ceilometer-0\" (UID: \"fcf87510-64cf-492b-bd2c-560f6ddc0ee2\") " pod="openstack/ceilometer-0" Feb 03 12:36:30 crc kubenswrapper[4820]: I0203 12:36:30.615738 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fcf87510-64cf-492b-bd2c-560f6ddc0ee2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"fcf87510-64cf-492b-bd2c-560f6ddc0ee2\") " pod="openstack/ceilometer-0" Feb 03 12:36:30 crc kubenswrapper[4820]: I0203 12:36:30.616292 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fcf87510-64cf-492b-bd2c-560f6ddc0ee2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"fcf87510-64cf-492b-bd2c-560f6ddc0ee2\") " pod="openstack/ceilometer-0" Feb 03 12:36:30 crc kubenswrapper[4820]: I0203 12:36:30.620155 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fcf87510-64cf-492b-bd2c-560f6ddc0ee2-scripts\") pod \"ceilometer-0\" (UID: \"fcf87510-64cf-492b-bd2c-560f6ddc0ee2\") " pod="openstack/ceilometer-0" Feb 03 12:36:30 crc kubenswrapper[4820]: I0203 12:36:30.631319 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/fcf87510-64cf-492b-bd2c-560f6ddc0ee2-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"fcf87510-64cf-492b-bd2c-560f6ddc0ee2\") " pod="openstack/ceilometer-0" Feb 03 12:36:30 crc kubenswrapper[4820]: I0203 12:36:30.634152 4820 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-tlqd4\" (UniqueName: \"kubernetes.io/projected/fcf87510-64cf-492b-bd2c-560f6ddc0ee2-kube-api-access-tlqd4\") pod \"ceilometer-0\" (UID: \"fcf87510-64cf-492b-bd2c-560f6ddc0ee2\") " pod="openstack/ceilometer-0" Feb 03 12:36:30 crc kubenswrapper[4820]: I0203 12:36:30.738909 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"26398afc-04a6-4c1f-92bf-767a938debad","Type":"ContainerStarted","Data":"8163217afa6859015d5d5e1834991992ad454d26264f016192e31f82acfc1146"} Feb 03 12:36:30 crc kubenswrapper[4820]: I0203 12:36:30.738960 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"26398afc-04a6-4c1f-92bf-767a938debad","Type":"ContainerStarted","Data":"e39b43d0ae6d2ca5ce7feba7d84ce26b8b86e10cba7e619e213b6a0d46a77f3e"} Feb 03 12:36:30 crc kubenswrapper[4820]: I0203 12:36:30.755247 4820 generic.go:334] "Generic (PLEG): container finished" podID="9627e225-fd7c-4d6c-bcf1-0434bfb15d22" containerID="185ae363e0c2ef42e23e14dcac2896841679b18153b96a6e0ea1ecd99f11d620" exitCode=0 Feb 03 12:36:30 crc kubenswrapper[4820]: I0203 12:36:30.755366 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-mnlwd" event={"ID":"9627e225-fd7c-4d6c-bcf1-0434bfb15d22","Type":"ContainerDied","Data":"185ae363e0c2ef42e23e14dcac2896841679b18153b96a6e0ea1ecd99f11d620"} Feb 03 12:36:30 crc kubenswrapper[4820]: I0203 12:36:30.906181 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 03 12:36:31 crc kubenswrapper[4820]: I0203 12:36:31.526698 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-mnlwd" Feb 03 12:36:31 crc kubenswrapper[4820]: I0203 12:36:31.581044 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a4d47fbc-d003-4831-81f0-e520d6a44602" path="/var/lib/kubelet/pods/a4d47fbc-d003-4831-81f0-e520d6a44602/volumes" Feb 03 12:36:31 crc kubenswrapper[4820]: I0203 12:36:31.607675 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xkqzc\" (UniqueName: \"kubernetes.io/projected/9627e225-fd7c-4d6c-bcf1-0434bfb15d22-kube-api-access-xkqzc\") pod \"9627e225-fd7c-4d6c-bcf1-0434bfb15d22\" (UID: \"9627e225-fd7c-4d6c-bcf1-0434bfb15d22\") " Feb 03 12:36:31 crc kubenswrapper[4820]: I0203 12:36:31.607766 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9627e225-fd7c-4d6c-bcf1-0434bfb15d22-ovsdbserver-sb\") pod \"9627e225-fd7c-4d6c-bcf1-0434bfb15d22\" (UID: \"9627e225-fd7c-4d6c-bcf1-0434bfb15d22\") " Feb 03 12:36:31 crc kubenswrapper[4820]: I0203 12:36:31.607878 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9627e225-fd7c-4d6c-bcf1-0434bfb15d22-ovsdbserver-nb\") pod \"9627e225-fd7c-4d6c-bcf1-0434bfb15d22\" (UID: \"9627e225-fd7c-4d6c-bcf1-0434bfb15d22\") " Feb 03 12:36:31 crc kubenswrapper[4820]: I0203 12:36:31.608045 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9627e225-fd7c-4d6c-bcf1-0434bfb15d22-dns-swift-storage-0\") pod \"9627e225-fd7c-4d6c-bcf1-0434bfb15d22\" (UID: \"9627e225-fd7c-4d6c-bcf1-0434bfb15d22\") " Feb 03 12:36:31 crc kubenswrapper[4820]: I0203 12:36:31.608163 4820 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9627e225-fd7c-4d6c-bcf1-0434bfb15d22-dns-svc\") pod \"9627e225-fd7c-4d6c-bcf1-0434bfb15d22\" (UID: \"9627e225-fd7c-4d6c-bcf1-0434bfb15d22\") " Feb 03 12:36:31 crc kubenswrapper[4820]: I0203 12:36:31.608226 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9627e225-fd7c-4d6c-bcf1-0434bfb15d22-config\") pod \"9627e225-fd7c-4d6c-bcf1-0434bfb15d22\" (UID: \"9627e225-fd7c-4d6c-bcf1-0434bfb15d22\") " Feb 03 12:36:31 crc kubenswrapper[4820]: I0203 12:36:31.618448 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9627e225-fd7c-4d6c-bcf1-0434bfb15d22-kube-api-access-xkqzc" (OuterVolumeSpecName: "kube-api-access-xkqzc") pod "9627e225-fd7c-4d6c-bcf1-0434bfb15d22" (UID: "9627e225-fd7c-4d6c-bcf1-0434bfb15d22"). InnerVolumeSpecName "kube-api-access-xkqzc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:36:31 crc kubenswrapper[4820]: I0203 12:36:31.673941 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9627e225-fd7c-4d6c-bcf1-0434bfb15d22-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "9627e225-fd7c-4d6c-bcf1-0434bfb15d22" (UID: "9627e225-fd7c-4d6c-bcf1-0434bfb15d22"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:36:31 crc kubenswrapper[4820]: I0203 12:36:31.682419 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9627e225-fd7c-4d6c-bcf1-0434bfb15d22-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "9627e225-fd7c-4d6c-bcf1-0434bfb15d22" (UID: "9627e225-fd7c-4d6c-bcf1-0434bfb15d22"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:36:31 crc kubenswrapper[4820]: I0203 12:36:31.689571 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9627e225-fd7c-4d6c-bcf1-0434bfb15d22-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "9627e225-fd7c-4d6c-bcf1-0434bfb15d22" (UID: "9627e225-fd7c-4d6c-bcf1-0434bfb15d22"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:36:31 crc kubenswrapper[4820]: I0203 12:36:31.696034 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9627e225-fd7c-4d6c-bcf1-0434bfb15d22-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "9627e225-fd7c-4d6c-bcf1-0434bfb15d22" (UID: "9627e225-fd7c-4d6c-bcf1-0434bfb15d22"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:36:31 crc kubenswrapper[4820]: I0203 12:36:31.706015 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9627e225-fd7c-4d6c-bcf1-0434bfb15d22-config" (OuterVolumeSpecName: "config") pod "9627e225-fd7c-4d6c-bcf1-0434bfb15d22" (UID: "9627e225-fd7c-4d6c-bcf1-0434bfb15d22"). InnerVolumeSpecName "config". 
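
The reconciler_common.go entries above and below record the kubelet's volume reconciliation loop: VerifyControllerAttachedVolume and MountVolume.SetUp for the ceilometer-0 pod being created, UnmountVolume and TearDown for the dnsmasq pod being deleted. A minimal sketch of that desired-state/actual-state pattern, with simplified stand-in types rather than the kubelet's real ones:

package main

import "fmt"

type volumeKey struct{ podUID, volName string }

type reconciler struct {
	desired map[volumeKey]bool // volumes the pod specs still want mounted
	actual  map[volumeKey]bool // volumes currently mounted on the node
}

func (r *reconciler) reconcile() {
	// Unmount pass: mounted but no longer desired
	// ("operationExecutor.UnmountVolume started" ... "TearDown succeeded").
	for k := range r.actual {
		if !r.desired[k] {
			fmt.Printf("unmount %s for pod %s\n", k.volName, k.podUID)
			delete(r.actual, k)
		}
	}
	// Mount pass: desired but not yet mounted
	// ("VerifyControllerAttachedVolume" then "MountVolume.SetUp succeeded").
	for k := range r.desired {
		if !r.actual[k] {
			fmt.Printf("mount %s for pod %s\n", k.volName, k.podUID)
			r.actual[k] = true
		}
	}
}

func main() {
	r := &reconciler{
		desired: map[volumeKey]bool{{"fcf87510", "config-data"}: true},
		actual:  map[volumeKey]bool{{"9627e225", "dns-svc"}: true},
	}
	r.reconcile() // prints one unmount (dnsmasq volume) and one mount (ceilometer volume)
}
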
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:36:31 crc kubenswrapper[4820]: I0203 12:36:31.711215 4820 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9627e225-fd7c-4d6c-bcf1-0434bfb15d22-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 03 12:36:31 crc kubenswrapper[4820]: I0203 12:36:31.711259 4820 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9627e225-fd7c-4d6c-bcf1-0434bfb15d22-config\") on node \"crc\" DevicePath \"\"" Feb 03 12:36:31 crc kubenswrapper[4820]: I0203 12:36:31.711276 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xkqzc\" (UniqueName: \"kubernetes.io/projected/9627e225-fd7c-4d6c-bcf1-0434bfb15d22-kube-api-access-xkqzc\") on node \"crc\" DevicePath \"\"" Feb 03 12:36:31 crc kubenswrapper[4820]: I0203 12:36:31.711293 4820 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9627e225-fd7c-4d6c-bcf1-0434bfb15d22-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 03 12:36:31 crc kubenswrapper[4820]: I0203 12:36:31.711307 4820 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9627e225-fd7c-4d6c-bcf1-0434bfb15d22-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 03 12:36:31 crc kubenswrapper[4820]: I0203 12:36:31.711319 4820 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9627e225-fd7c-4d6c-bcf1-0434bfb15d22-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 03 12:36:31 crc kubenswrapper[4820]: I0203 12:36:31.785420 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"26398afc-04a6-4c1f-92bf-767a938debad","Type":"ContainerStarted","Data":"21efd36c237e1b51d04c0ac43810036840423102ef015dc7d8362aa0885a3cb4"} Feb 03 12:36:31 crc kubenswrapper[4820]: I0203 12:36:31.790940 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-mnlwd" event={"ID":"9627e225-fd7c-4d6c-bcf1-0434bfb15d22","Type":"ContainerDied","Data":"e0251408713a3bee30fe6b97684f56007bcba7e0395d139db4f8edff75e29c4d"} Feb 03 12:36:31 crc kubenswrapper[4820]: I0203 12:36:31.791019 4820 scope.go:117] "RemoveContainer" containerID="185ae363e0c2ef42e23e14dcac2896841679b18153b96a6e0ea1ecd99f11d620" Feb 03 12:36:31 crc kubenswrapper[4820]: I0203 12:36:31.791126 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-mnlwd" Feb 03 12:36:31 crc kubenswrapper[4820]: I0203 12:36:31.857294 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.857254827 podStartE2EDuration="3.857254827s" podCreationTimestamp="2026-02-03 12:36:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:36:31.83042199 +0000 UTC m=+1909.353497864" watchObservedRunningTime="2026-02-03 12:36:31.857254827 +0000 UTC m=+1909.380330691" Feb 03 12:36:31 crc kubenswrapper[4820]: I0203 12:36:31.861990 4820 scope.go:117] "RemoveContainer" containerID="d3758e50d1f73ba436300e3091bc9d12790c408d343dc696e65a5675af65f800" Feb 03 12:36:31 crc kubenswrapper[4820]: I0203 12:36:31.892962 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 03 12:36:31 crc kubenswrapper[4820]: I0203 12:36:31.908850 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-mnlwd"] Feb 03 12:36:31 crc kubenswrapper[4820]: I0203 12:36:31.932143 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-mnlwd"] Feb 03 12:36:32 crc kubenswrapper[4820]: I0203 12:36:32.813367 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fcf87510-64cf-492b-bd2c-560f6ddc0ee2","Type":"ContainerStarted","Data":"cc8b798d38b03f98b46525bb4860c756b987cc680100a6cf60da10dedafead49"} Feb 03 12:36:33 crc kubenswrapper[4820]: I0203 12:36:33.157642 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9627e225-fd7c-4d6c-bcf1-0434bfb15d22" path="/var/lib/kubelet/pods/9627e225-fd7c-4d6c-bcf1-0434bfb15d22/volumes" Feb 03 12:36:33 crc kubenswrapper[4820]: I0203 12:36:33.835650 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fcf87510-64cf-492b-bd2c-560f6ddc0ee2","Type":"ContainerStarted","Data":"3909f27d57daf3f1761785d7f939dbc88895e9d904c4cd7acdb213f1d0f80f85"} Feb 03 12:36:33 crc kubenswrapper[4820]: I0203 12:36:33.896739 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-5fdc8588b4-jtjr8" podUID="17c371f7-f032-4444-8d4b-1183a224c7b0" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.165:8443/dashboard/auth/login/?next=/dashboard/\": read tcp 10.217.0.2:49138->10.217.0.165:8443: read: connection reset by peer" Feb 03 12:36:34 crc kubenswrapper[4820]: I0203 12:36:34.861619 4820 generic.go:334] "Generic (PLEG): container finished" podID="17c371f7-f032-4444-8d4b-1183a224c7b0" containerID="9bb0129c1c7f5e8bb1f63d803b792b6c1cd2c7a9cf979aa536548b3eb28e5f73" exitCode=0 Feb 03 12:36:34 crc kubenswrapper[4820]: I0203 12:36:34.861708 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5fdc8588b4-jtjr8" event={"ID":"17c371f7-f032-4444-8d4b-1183a224c7b0","Type":"ContainerDied","Data":"9bb0129c1c7f5e8bb1f63d803b792b6c1cd2c7a9cf979aa536548b3eb28e5f73"} Feb 03 12:36:34 crc kubenswrapper[4820]: I0203 12:36:34.863326 4820 scope.go:117] "RemoveContainer" containerID="76995f196246a725064eaf869384b250078a17273f52f37f37f976ac18b1ddc1" Feb 03 12:36:34 crc kubenswrapper[4820]: I0203 12:36:34.877356 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"fcf87510-64cf-492b-bd2c-560f6ddc0ee2","Type":"ContainerStarted","Data":"b8f329b2a5c79e4d2ac10930fd24944c0bea3b12e8d0792a23a7e7cf013733ea"} Feb 03 12:36:35 crc kubenswrapper[4820]: I0203 12:36:35.852789 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-h27kw"] Feb 03 12:36:35 crc kubenswrapper[4820]: E0203 12:36:35.853810 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9627e225-fd7c-4d6c-bcf1-0434bfb15d22" containerName="dnsmasq-dns" Feb 03 12:36:35 crc kubenswrapper[4820]: I0203 12:36:35.853847 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="9627e225-fd7c-4d6c-bcf1-0434bfb15d22" containerName="dnsmasq-dns" Feb 03 12:36:35 crc kubenswrapper[4820]: E0203 12:36:35.853913 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9627e225-fd7c-4d6c-bcf1-0434bfb15d22" containerName="init" Feb 03 12:36:35 crc kubenswrapper[4820]: I0203 12:36:35.853925 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="9627e225-fd7c-4d6c-bcf1-0434bfb15d22" containerName="init" Feb 03 12:36:35 crc kubenswrapper[4820]: I0203 12:36:35.855632 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="9627e225-fd7c-4d6c-bcf1-0434bfb15d22" containerName="dnsmasq-dns" Feb 03 12:36:35 crc kubenswrapper[4820]: I0203 12:36:35.860090 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-h27kw" Feb 03 12:36:35 crc kubenswrapper[4820]: I0203 12:36:35.872068 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9cb4d29-0ae3-4944-be74-dde6d01a37a7-catalog-content\") pod \"redhat-marketplace-h27kw\" (UID: \"f9cb4d29-0ae3-4944-be74-dde6d01a37a7\") " pod="openshift-marketplace/redhat-marketplace-h27kw" Feb 03 12:36:35 crc kubenswrapper[4820]: I0203 12:36:35.872122 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2b54c\" (UniqueName: \"kubernetes.io/projected/f9cb4d29-0ae3-4944-be74-dde6d01a37a7-kube-api-access-2b54c\") pod \"redhat-marketplace-h27kw\" (UID: \"f9cb4d29-0ae3-4944-be74-dde6d01a37a7\") " pod="openshift-marketplace/redhat-marketplace-h27kw" Feb 03 12:36:35 crc kubenswrapper[4820]: I0203 12:36:35.872803 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9cb4d29-0ae3-4944-be74-dde6d01a37a7-utilities\") pod \"redhat-marketplace-h27kw\" (UID: \"f9cb4d29-0ae3-4944-be74-dde6d01a37a7\") " pod="openshift-marketplace/redhat-marketplace-h27kw" Feb 03 12:36:35 crc kubenswrapper[4820]: I0203 12:36:35.876290 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-h27kw"] Feb 03 12:36:35 crc kubenswrapper[4820]: I0203 12:36:35.977465 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9cb4d29-0ae3-4944-be74-dde6d01a37a7-utilities\") pod \"redhat-marketplace-h27kw\" (UID: \"f9cb4d29-0ae3-4944-be74-dde6d01a37a7\") " pod="openshift-marketplace/redhat-marketplace-h27kw" Feb 03 12:36:35 crc kubenswrapper[4820]: I0203 12:36:35.977789 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9cb4d29-0ae3-4944-be74-dde6d01a37a7-catalog-content\") pod 
\"redhat-marketplace-h27kw\" (UID: \"f9cb4d29-0ae3-4944-be74-dde6d01a37a7\") " pod="openshift-marketplace/redhat-marketplace-h27kw" Feb 03 12:36:35 crc kubenswrapper[4820]: I0203 12:36:35.977830 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2b54c\" (UniqueName: \"kubernetes.io/projected/f9cb4d29-0ae3-4944-be74-dde6d01a37a7-kube-api-access-2b54c\") pod \"redhat-marketplace-h27kw\" (UID: \"f9cb4d29-0ae3-4944-be74-dde6d01a37a7\") " pod="openshift-marketplace/redhat-marketplace-h27kw" Feb 03 12:36:35 crc kubenswrapper[4820]: I0203 12:36:35.979523 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9cb4d29-0ae3-4944-be74-dde6d01a37a7-catalog-content\") pod \"redhat-marketplace-h27kw\" (UID: \"f9cb4d29-0ae3-4944-be74-dde6d01a37a7\") " pod="openshift-marketplace/redhat-marketplace-h27kw" Feb 03 12:36:35 crc kubenswrapper[4820]: I0203 12:36:35.994919 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9cb4d29-0ae3-4944-be74-dde6d01a37a7-utilities\") pod \"redhat-marketplace-h27kw\" (UID: \"f9cb4d29-0ae3-4944-be74-dde6d01a37a7\") " pod="openshift-marketplace/redhat-marketplace-h27kw" Feb 03 12:36:36 crc kubenswrapper[4820]: I0203 12:36:36.016829 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2b54c\" (UniqueName: \"kubernetes.io/projected/f9cb4d29-0ae3-4944-be74-dde6d01a37a7-kube-api-access-2b54c\") pod \"redhat-marketplace-h27kw\" (UID: \"f9cb4d29-0ae3-4944-be74-dde6d01a37a7\") " pod="openshift-marketplace/redhat-marketplace-h27kw" Feb 03 12:36:36 crc kubenswrapper[4820]: I0203 12:36:36.144275 4820 scope.go:117] "RemoveContainer" containerID="3cd22393026f08f47789b9666f306edb2325c4bd0fbfd405ee1636bfdfa661d3" Feb 03 12:36:36 crc kubenswrapper[4820]: I0203 12:36:36.539702 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-h27kw" Feb 03 12:36:37 crc kubenswrapper[4820]: I0203 12:36:37.680249 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-h27kw"] Feb 03 12:36:37 crc kubenswrapper[4820]: I0203 12:36:37.970830 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" event={"ID":"2c02def6-29f2-448e-80ec-0c8ee290f053","Type":"ContainerStarted","Data":"8f34688920b0d8f1ba8313bfd5660e1745625b1ce2d5d457facdc7ba2bbd910d"} Feb 03 12:36:37 crc kubenswrapper[4820]: I0203 12:36:37.974299 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-h27kw" event={"ID":"f9cb4d29-0ae3-4944-be74-dde6d01a37a7","Type":"ContainerStarted","Data":"5aeaa59907977c8a1b6b3b77092f89be158605b2db642a4b71fc6f07192c015d"} Feb 03 12:36:39 crc kubenswrapper[4820]: I0203 12:36:39.006397 4820 generic.go:334] "Generic (PLEG): container finished" podID="f9cb4d29-0ae3-4944-be74-dde6d01a37a7" containerID="1825996d7f9b43779cc31881300e0e1fed0111b416fe7a32829f27c44a2c5215" exitCode=0 Feb 03 12:36:39 crc kubenswrapper[4820]: I0203 12:36:39.011081 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-h27kw" event={"ID":"f9cb4d29-0ae3-4944-be74-dde6d01a37a7","Type":"ContainerDied","Data":"1825996d7f9b43779cc31881300e0e1fed0111b416fe7a32829f27c44a2c5215"} Feb 03 12:36:39 crc kubenswrapper[4820]: I0203 12:36:39.342980 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 03 12:36:39 crc kubenswrapper[4820]: I0203 12:36:39.344308 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 03 12:36:40 crc kubenswrapper[4820]: I0203 12:36:40.356266 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="26398afc-04a6-4c1f-92bf-767a938debad" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.232:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 03 12:36:40 crc kubenswrapper[4820]: I0203 12:36:40.356266 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="26398afc-04a6-4c1f-92bf-767a938debad" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.232:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 03 12:36:43 crc kubenswrapper[4820]: I0203 12:36:43.132182 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-5fdc8588b4-jtjr8" podUID="17c371f7-f032-4444-8d4b-1183a224c7b0" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.165:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.165:8443: connect: connection refused" Feb 03 12:36:44 crc kubenswrapper[4820]: I0203 12:36:44.183122 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fcf87510-64cf-492b-bd2c-560f6ddc0ee2","Type":"ContainerStarted","Data":"b86c5d49f9135b130bd5873fa624e9a81ac7b4a6d82dcf616c1c5cf07dffd027"} Feb 03 12:36:44 crc kubenswrapper[4820]: I0203 12:36:44.187437 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-h27kw" 
event={"ID":"f9cb4d29-0ae3-4944-be74-dde6d01a37a7","Type":"ContainerStarted","Data":"ed996dae76991db7772cfa6b3ca6a4bb6bdd6d31b39b909aa5519b7badfaa3af"} Feb 03 12:36:45 crc kubenswrapper[4820]: I0203 12:36:45.201654 4820 generic.go:334] "Generic (PLEG): container finished" podID="f9cb4d29-0ae3-4944-be74-dde6d01a37a7" containerID="ed996dae76991db7772cfa6b3ca6a4bb6bdd6d31b39b909aa5519b7badfaa3af" exitCode=0 Feb 03 12:36:45 crc kubenswrapper[4820]: I0203 12:36:45.201705 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-h27kw" event={"ID":"f9cb4d29-0ae3-4944-be74-dde6d01a37a7","Type":"ContainerDied","Data":"ed996dae76991db7772cfa6b3ca6a4bb6bdd6d31b39b909aa5519b7badfaa3af"} Feb 03 12:36:47 crc kubenswrapper[4820]: I0203 12:36:47.645516 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-h27kw" event={"ID":"f9cb4d29-0ae3-4944-be74-dde6d01a37a7","Type":"ContainerStarted","Data":"4460b1393c82cc8fd73ee46d6a4be7eeb84a7252b59f9f0afa82b5e04a9cbfde"} Feb 03 12:36:47 crc kubenswrapper[4820]: I0203 12:36:47.655266 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fcf87510-64cf-492b-bd2c-560f6ddc0ee2","Type":"ContainerStarted","Data":"e6ebf3b45781f0375108776db9a425fa7e51fe22020ac24666ae499976bdf625"} Feb 03 12:36:47 crc kubenswrapper[4820]: I0203 12:36:47.655966 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 03 12:36:47 crc kubenswrapper[4820]: I0203 12:36:47.703129 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-h27kw" podStartSLOduration=5.6474691230000005 podStartE2EDuration="12.703102793s" podCreationTimestamp="2026-02-03 12:36:35 +0000 UTC" firstStartedPulling="2026-02-03 12:36:39.015158287 +0000 UTC m=+1916.538234151" lastFinishedPulling="2026-02-03 12:36:46.070791957 +0000 UTC m=+1923.593867821" observedRunningTime="2026-02-03 12:36:47.677565831 +0000 UTC m=+1925.200641715" watchObservedRunningTime="2026-02-03 12:36:47.703102793 +0000 UTC m=+1925.226178667" Feb 03 12:36:47 crc kubenswrapper[4820]: I0203 12:36:47.732058 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.069508103 podStartE2EDuration="17.732030727s" podCreationTimestamp="2026-02-03 12:36:30 +0000 UTC" firstStartedPulling="2026-02-03 12:36:31.904568809 +0000 UTC m=+1909.427644673" lastFinishedPulling="2026-02-03 12:36:46.567091433 +0000 UTC m=+1924.090167297" observedRunningTime="2026-02-03 12:36:47.724445461 +0000 UTC m=+1925.247521345" watchObservedRunningTime="2026-02-03 12:36:47.732030727 +0000 UTC m=+1925.255106591" Feb 03 12:36:49 crc kubenswrapper[4820]: I0203 12:36:49.531854 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 03 12:36:49 crc kubenswrapper[4820]: I0203 12:36:49.532767 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 03 12:36:49 crc kubenswrapper[4820]: I0203 12:36:49.537790 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 03 12:36:49 crc kubenswrapper[4820]: I0203 12:36:49.543304 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 03 12:36:49 crc kubenswrapper[4820]: I0203 12:36:49.678334 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/nova-api-0" Feb 03 12:36:49 crc kubenswrapper[4820]: I0203 12:36:49.687921 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 03 12:36:53 crc kubenswrapper[4820]: I0203 12:36:53.128198 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-5fdc8588b4-jtjr8" podUID="17c371f7-f032-4444-8d4b-1183a224c7b0" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.165:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.165:8443: connect: connection refused" Feb 03 12:36:53 crc kubenswrapper[4820]: I0203 12:36:53.128735 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-5fdc8588b4-jtjr8" Feb 03 12:36:56 crc kubenswrapper[4820]: I0203 12:36:56.561654 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-h27kw" Feb 03 12:36:56 crc kubenswrapper[4820]: I0203 12:36:56.562395 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-h27kw" Feb 03 12:36:56 crc kubenswrapper[4820]: I0203 12:36:56.627716 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-h27kw" Feb 03 12:36:56 crc kubenswrapper[4820]: I0203 12:36:56.817378 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-h27kw" Feb 03 12:36:56 crc kubenswrapper[4820]: I0203 12:36:56.882524 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-h27kw"] Feb 03 12:36:58 crc kubenswrapper[4820]: I0203 12:36:58.785203 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-h27kw" podUID="f9cb4d29-0ae3-4944-be74-dde6d01a37a7" containerName="registry-server" containerID="cri-o://4460b1393c82cc8fd73ee46d6a4be7eeb84a7252b59f9f0afa82b5e04a9cbfde" gracePeriod=2 Feb 03 12:36:59 crc kubenswrapper[4820]: I0203 12:36:59.352479 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-h27kw" Feb 03 12:36:59 crc kubenswrapper[4820]: I0203 12:36:59.443245 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2b54c\" (UniqueName: \"kubernetes.io/projected/f9cb4d29-0ae3-4944-be74-dde6d01a37a7-kube-api-access-2b54c\") pod \"f9cb4d29-0ae3-4944-be74-dde6d01a37a7\" (UID: \"f9cb4d29-0ae3-4944-be74-dde6d01a37a7\") " Feb 03 12:36:59 crc kubenswrapper[4820]: I0203 12:36:59.443420 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9cb4d29-0ae3-4944-be74-dde6d01a37a7-utilities\") pod \"f9cb4d29-0ae3-4944-be74-dde6d01a37a7\" (UID: \"f9cb4d29-0ae3-4944-be74-dde6d01a37a7\") " Feb 03 12:36:59 crc kubenswrapper[4820]: I0203 12:36:59.443571 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9cb4d29-0ae3-4944-be74-dde6d01a37a7-catalog-content\") pod \"f9cb4d29-0ae3-4944-be74-dde6d01a37a7\" (UID: \"f9cb4d29-0ae3-4944-be74-dde6d01a37a7\") " Feb 03 12:36:59 crc kubenswrapper[4820]: I0203 12:36:59.445036 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f9cb4d29-0ae3-4944-be74-dde6d01a37a7-utilities" (OuterVolumeSpecName: "utilities") pod "f9cb4d29-0ae3-4944-be74-dde6d01a37a7" (UID: "f9cb4d29-0ae3-4944-be74-dde6d01a37a7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 12:36:59 crc kubenswrapper[4820]: I0203 12:36:59.453063 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9cb4d29-0ae3-4944-be74-dde6d01a37a7-kube-api-access-2b54c" (OuterVolumeSpecName: "kube-api-access-2b54c") pod "f9cb4d29-0ae3-4944-be74-dde6d01a37a7" (UID: "f9cb4d29-0ae3-4944-be74-dde6d01a37a7"). InnerVolumeSpecName "kube-api-access-2b54c". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:36:59 crc kubenswrapper[4820]: I0203 12:36:59.471451 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f9cb4d29-0ae3-4944-be74-dde6d01a37a7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f9cb4d29-0ae3-4944-be74-dde6d01a37a7" (UID: "f9cb4d29-0ae3-4944-be74-dde6d01a37a7"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 12:36:59 crc kubenswrapper[4820]: I0203 12:36:59.550859 4820 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9cb4d29-0ae3-4944-be74-dde6d01a37a7-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 03 12:36:59 crc kubenswrapper[4820]: I0203 12:36:59.551038 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2b54c\" (UniqueName: \"kubernetes.io/projected/f9cb4d29-0ae3-4944-be74-dde6d01a37a7-kube-api-access-2b54c\") on node \"crc\" DevicePath \"\"" Feb 03 12:36:59 crc kubenswrapper[4820]: I0203 12:36:59.551061 4820 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9cb4d29-0ae3-4944-be74-dde6d01a37a7-utilities\") on node \"crc\" DevicePath \"\"" Feb 03 12:36:59 crc kubenswrapper[4820]: I0203 12:36:59.803859 4820 generic.go:334] "Generic (PLEG): container finished" podID="f9cb4d29-0ae3-4944-be74-dde6d01a37a7" containerID="4460b1393c82cc8fd73ee46d6a4be7eeb84a7252b59f9f0afa82b5e04a9cbfde" exitCode=0 Feb 03 12:36:59 crc kubenswrapper[4820]: I0203 12:36:59.803928 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-h27kw" event={"ID":"f9cb4d29-0ae3-4944-be74-dde6d01a37a7","Type":"ContainerDied","Data":"4460b1393c82cc8fd73ee46d6a4be7eeb84a7252b59f9f0afa82b5e04a9cbfde"} Feb 03 12:36:59 crc kubenswrapper[4820]: I0203 12:36:59.803979 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-h27kw" event={"ID":"f9cb4d29-0ae3-4944-be74-dde6d01a37a7","Type":"ContainerDied","Data":"5aeaa59907977c8a1b6b3b77092f89be158605b2db642a4b71fc6f07192c015d"} Feb 03 12:36:59 crc kubenswrapper[4820]: I0203 12:36:59.803999 4820 scope.go:117] "RemoveContainer" containerID="4460b1393c82cc8fd73ee46d6a4be7eeb84a7252b59f9f0afa82b5e04a9cbfde" Feb 03 12:36:59 crc kubenswrapper[4820]: I0203 12:36:59.804229 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-h27kw" Feb 03 12:36:59 crc kubenswrapper[4820]: I0203 12:36:59.844356 4820 scope.go:117] "RemoveContainer" containerID="ed996dae76991db7772cfa6b3ca6a4bb6bdd6d31b39b909aa5519b7badfaa3af" Feb 03 12:36:59 crc kubenswrapper[4820]: I0203 12:36:59.852663 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-h27kw"] Feb 03 12:36:59 crc kubenswrapper[4820]: I0203 12:36:59.872777 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-h27kw"] Feb 03 12:36:59 crc kubenswrapper[4820]: I0203 12:36:59.877324 4820 scope.go:117] "RemoveContainer" containerID="1825996d7f9b43779cc31881300e0e1fed0111b416fe7a32829f27c44a2c5215" Feb 03 12:36:59 crc kubenswrapper[4820]: I0203 12:36:59.940578 4820 scope.go:117] "RemoveContainer" containerID="4460b1393c82cc8fd73ee46d6a4be7eeb84a7252b59f9f0afa82b5e04a9cbfde" Feb 03 12:36:59 crc kubenswrapper[4820]: E0203 12:36:59.941395 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4460b1393c82cc8fd73ee46d6a4be7eeb84a7252b59f9f0afa82b5e04a9cbfde\": container with ID starting with 4460b1393c82cc8fd73ee46d6a4be7eeb84a7252b59f9f0afa82b5e04a9cbfde not found: ID does not exist" containerID="4460b1393c82cc8fd73ee46d6a4be7eeb84a7252b59f9f0afa82b5e04a9cbfde" Feb 03 12:36:59 crc kubenswrapper[4820]: I0203 12:36:59.941475 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4460b1393c82cc8fd73ee46d6a4be7eeb84a7252b59f9f0afa82b5e04a9cbfde"} err="failed to get container status \"4460b1393c82cc8fd73ee46d6a4be7eeb84a7252b59f9f0afa82b5e04a9cbfde\": rpc error: code = NotFound desc = could not find container \"4460b1393c82cc8fd73ee46d6a4be7eeb84a7252b59f9f0afa82b5e04a9cbfde\": container with ID starting with 4460b1393c82cc8fd73ee46d6a4be7eeb84a7252b59f9f0afa82b5e04a9cbfde not found: ID does not exist" Feb 03 12:36:59 crc kubenswrapper[4820]: I0203 12:36:59.941531 4820 scope.go:117] "RemoveContainer" containerID="ed996dae76991db7772cfa6b3ca6a4bb6bdd6d31b39b909aa5519b7badfaa3af" Feb 03 12:36:59 crc kubenswrapper[4820]: E0203 12:36:59.942301 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ed996dae76991db7772cfa6b3ca6a4bb6bdd6d31b39b909aa5519b7badfaa3af\": container with ID starting with ed996dae76991db7772cfa6b3ca6a4bb6bdd6d31b39b909aa5519b7badfaa3af not found: ID does not exist" containerID="ed996dae76991db7772cfa6b3ca6a4bb6bdd6d31b39b909aa5519b7badfaa3af" Feb 03 12:36:59 crc kubenswrapper[4820]: I0203 12:36:59.942330 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed996dae76991db7772cfa6b3ca6a4bb6bdd6d31b39b909aa5519b7badfaa3af"} err="failed to get container status \"ed996dae76991db7772cfa6b3ca6a4bb6bdd6d31b39b909aa5519b7badfaa3af\": rpc error: code = NotFound desc = could not find container \"ed996dae76991db7772cfa6b3ca6a4bb6bdd6d31b39b909aa5519b7badfaa3af\": container with ID starting with ed996dae76991db7772cfa6b3ca6a4bb6bdd6d31b39b909aa5519b7badfaa3af not found: ID does not exist" Feb 03 12:36:59 crc kubenswrapper[4820]: I0203 12:36:59.942353 4820 scope.go:117] "RemoveContainer" containerID="1825996d7f9b43779cc31881300e0e1fed0111b416fe7a32829f27c44a2c5215" Feb 03 12:36:59 crc kubenswrapper[4820]: E0203 12:36:59.942694 4820 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"1825996d7f9b43779cc31881300e0e1fed0111b416fe7a32829f27c44a2c5215\": container with ID starting with 1825996d7f9b43779cc31881300e0e1fed0111b416fe7a32829f27c44a2c5215 not found: ID does not exist" containerID="1825996d7f9b43779cc31881300e0e1fed0111b416fe7a32829f27c44a2c5215" Feb 03 12:36:59 crc kubenswrapper[4820]: I0203 12:36:59.942721 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1825996d7f9b43779cc31881300e0e1fed0111b416fe7a32829f27c44a2c5215"} err="failed to get container status \"1825996d7f9b43779cc31881300e0e1fed0111b416fe7a32829f27c44a2c5215\": rpc error: code = NotFound desc = could not find container \"1825996d7f9b43779cc31881300e0e1fed0111b416fe7a32829f27c44a2c5215\": container with ID starting with 1825996d7f9b43779cc31881300e0e1fed0111b416fe7a32829f27c44a2c5215 not found: ID does not exist" Feb 03 12:37:01 crc kubenswrapper[4820]: I0203 12:37:01.170675 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f9cb4d29-0ae3-4944-be74-dde6d01a37a7" path="/var/lib/kubelet/pods/f9cb4d29-0ae3-4944-be74-dde6d01a37a7/volumes" Feb 03 12:37:01 crc kubenswrapper[4820]: I0203 12:37:01.180655 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 03 12:37:01 crc kubenswrapper[4820]: I0203 12:37:01.198826 4820 generic.go:334] "Generic (PLEG): container finished" podID="17c371f7-f032-4444-8d4b-1183a224c7b0" containerID="6e25521b0d495326fa22bb05386fb22e76c170fdbbec9bdbeb0b2eb340a1829a" exitCode=137 Feb 03 12:37:01 crc kubenswrapper[4820]: I0203 12:37:01.199007 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5fdc8588b4-jtjr8" event={"ID":"17c371f7-f032-4444-8d4b-1183a224c7b0","Type":"ContainerDied","Data":"6e25521b0d495326fa22bb05386fb22e76c170fdbbec9bdbeb0b2eb340a1829a"} Feb 03 12:37:01 crc kubenswrapper[4820]: I0203 12:37:01.334361 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5fdc8588b4-jtjr8" Feb 03 12:37:01 crc kubenswrapper[4820]: I0203 12:37:01.443179 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-td9pm\" (UniqueName: \"kubernetes.io/projected/17c371f7-f032-4444-8d4b-1183a224c7b0-kube-api-access-td9pm\") pod \"17c371f7-f032-4444-8d4b-1183a224c7b0\" (UID: \"17c371f7-f032-4444-8d4b-1183a224c7b0\") " Feb 03 12:37:01 crc kubenswrapper[4820]: I0203 12:37:01.443754 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/17c371f7-f032-4444-8d4b-1183a224c7b0-logs\") pod \"17c371f7-f032-4444-8d4b-1183a224c7b0\" (UID: \"17c371f7-f032-4444-8d4b-1183a224c7b0\") " Feb 03 12:37:01 crc kubenswrapper[4820]: I0203 12:37:01.443862 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/17c371f7-f032-4444-8d4b-1183a224c7b0-horizon-secret-key\") pod \"17c371f7-f032-4444-8d4b-1183a224c7b0\" (UID: \"17c371f7-f032-4444-8d4b-1183a224c7b0\") " Feb 03 12:37:01 crc kubenswrapper[4820]: I0203 12:37:01.443967 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/17c371f7-f032-4444-8d4b-1183a224c7b0-scripts\") pod \"17c371f7-f032-4444-8d4b-1183a224c7b0\" (UID: \"17c371f7-f032-4444-8d4b-1183a224c7b0\") " Feb 03 12:37:01 crc kubenswrapper[4820]: I0203 12:37:01.444053 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17c371f7-f032-4444-8d4b-1183a224c7b0-combined-ca-bundle\") pod \"17c371f7-f032-4444-8d4b-1183a224c7b0\" (UID: \"17c371f7-f032-4444-8d4b-1183a224c7b0\") " Feb 03 12:37:01 crc kubenswrapper[4820]: I0203 12:37:01.444178 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/17c371f7-f032-4444-8d4b-1183a224c7b0-config-data\") pod \"17c371f7-f032-4444-8d4b-1183a224c7b0\" (UID: \"17c371f7-f032-4444-8d4b-1183a224c7b0\") " Feb 03 12:37:01 crc kubenswrapper[4820]: I0203 12:37:01.444210 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/17c371f7-f032-4444-8d4b-1183a224c7b0-horizon-tls-certs\") pod \"17c371f7-f032-4444-8d4b-1183a224c7b0\" (UID: \"17c371f7-f032-4444-8d4b-1183a224c7b0\") " Feb 03 12:37:01 crc kubenswrapper[4820]: I0203 12:37:01.446719 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/17c371f7-f032-4444-8d4b-1183a224c7b0-logs" (OuterVolumeSpecName: "logs") pod "17c371f7-f032-4444-8d4b-1183a224c7b0" (UID: "17c371f7-f032-4444-8d4b-1183a224c7b0"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 12:37:01 crc kubenswrapper[4820]: I0203 12:37:01.453101 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/17c371f7-f032-4444-8d4b-1183a224c7b0-kube-api-access-td9pm" (OuterVolumeSpecName: "kube-api-access-td9pm") pod "17c371f7-f032-4444-8d4b-1183a224c7b0" (UID: "17c371f7-f032-4444-8d4b-1183a224c7b0"). InnerVolumeSpecName "kube-api-access-td9pm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:37:01 crc kubenswrapper[4820]: I0203 12:37:01.453686 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17c371f7-f032-4444-8d4b-1183a224c7b0-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "17c371f7-f032-4444-8d4b-1183a224c7b0" (UID: "17c371f7-f032-4444-8d4b-1183a224c7b0"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:37:01 crc kubenswrapper[4820]: I0203 12:37:01.476292 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/17c371f7-f032-4444-8d4b-1183a224c7b0-scripts" (OuterVolumeSpecName: "scripts") pod "17c371f7-f032-4444-8d4b-1183a224c7b0" (UID: "17c371f7-f032-4444-8d4b-1183a224c7b0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:37:01 crc kubenswrapper[4820]: I0203 12:37:01.480958 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/17c371f7-f032-4444-8d4b-1183a224c7b0-config-data" (OuterVolumeSpecName: "config-data") pod "17c371f7-f032-4444-8d4b-1183a224c7b0" (UID: "17c371f7-f032-4444-8d4b-1183a224c7b0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:37:01 crc kubenswrapper[4820]: I0203 12:37:01.501942 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17c371f7-f032-4444-8d4b-1183a224c7b0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "17c371f7-f032-4444-8d4b-1183a224c7b0" (UID: "17c371f7-f032-4444-8d4b-1183a224c7b0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:37:01 crc kubenswrapper[4820]: I0203 12:37:01.514136 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17c371f7-f032-4444-8d4b-1183a224c7b0-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "17c371f7-f032-4444-8d4b-1183a224c7b0" (UID: "17c371f7-f032-4444-8d4b-1183a224c7b0"). InnerVolumeSpecName "horizon-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:37:01 crc kubenswrapper[4820]: I0203 12:37:01.547980 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-td9pm\" (UniqueName: \"kubernetes.io/projected/17c371f7-f032-4444-8d4b-1183a224c7b0-kube-api-access-td9pm\") on node \"crc\" DevicePath \"\"" Feb 03 12:37:01 crc kubenswrapper[4820]: I0203 12:37:01.548021 4820 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/17c371f7-f032-4444-8d4b-1183a224c7b0-logs\") on node \"crc\" DevicePath \"\"" Feb 03 12:37:01 crc kubenswrapper[4820]: I0203 12:37:01.548034 4820 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/17c371f7-f032-4444-8d4b-1183a224c7b0-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Feb 03 12:37:01 crc kubenswrapper[4820]: I0203 12:37:01.548042 4820 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/17c371f7-f032-4444-8d4b-1183a224c7b0-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 12:37:01 crc kubenswrapper[4820]: I0203 12:37:01.548051 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17c371f7-f032-4444-8d4b-1183a224c7b0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 12:37:01 crc kubenswrapper[4820]: I0203 12:37:01.548061 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/17c371f7-f032-4444-8d4b-1183a224c7b0-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 12:37:01 crc kubenswrapper[4820]: I0203 12:37:01.548071 4820 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/17c371f7-f032-4444-8d4b-1183a224c7b0-horizon-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 03 12:37:02 crc kubenswrapper[4820]: I0203 12:37:02.217253 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5fdc8588b4-jtjr8" event={"ID":"17c371f7-f032-4444-8d4b-1183a224c7b0","Type":"ContainerDied","Data":"151bf3ee50bda7e46e0b38adbbd029a641e063485a5895b000d842c8672d576d"} Feb 03 12:37:02 crc kubenswrapper[4820]: I0203 12:37:02.217313 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5fdc8588b4-jtjr8" Feb 03 12:37:02 crc kubenswrapper[4820]: I0203 12:37:02.219050 4820 scope.go:117] "RemoveContainer" containerID="9bb0129c1c7f5e8bb1f63d803b792b6c1cd2c7a9cf979aa536548b3eb28e5f73" Feb 03 12:37:02 crc kubenswrapper[4820]: I0203 12:37:02.271278 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5fdc8588b4-jtjr8"] Feb 03 12:37:02 crc kubenswrapper[4820]: I0203 12:37:02.283003 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-5fdc8588b4-jtjr8"] Feb 03 12:37:02 crc kubenswrapper[4820]: I0203 12:37:02.414755 4820 scope.go:117] "RemoveContainer" containerID="6e25521b0d495326fa22bb05386fb22e76c170fdbbec9bdbeb0b2eb340a1829a" Feb 03 12:37:03 crc kubenswrapper[4820]: I0203 12:37:03.157424 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="17c371f7-f032-4444-8d4b-1183a224c7b0" path="/var/lib/kubelet/pods/17c371f7-f032-4444-8d4b-1183a224c7b0/volumes" Feb 03 12:37:12 crc kubenswrapper[4820]: I0203 12:37:12.219771 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 03 12:37:13 crc kubenswrapper[4820]: I0203 12:37:13.808642 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 03 12:37:18 crc kubenswrapper[4820]: I0203 12:37:18.866011 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="18ae976d-57fb-4c6e-8f3d-af9748d3058a" containerName="rabbitmq" containerID="cri-o://3c539d00fce621b18344b74f0c49d894626d0364db7118e68c2f1ca3ce327a39" gracePeriod=604794 Feb 03 12:37:19 crc kubenswrapper[4820]: I0203 12:37:19.388492 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="62eb6ec6-669b-476d-929f-919b7f533a5a" containerName="rabbitmq" containerID="cri-o://2f33cc05658334d8533fe376a75f11b566384089517933d61870c37049109c62" gracePeriod=604795 Feb 03 12:37:24 crc kubenswrapper[4820]: I0203 12:37:24.120858 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="18ae976d-57fb-4c6e-8f3d-af9748d3058a" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.104:5671: connect: connection refused" Feb 03 12:37:24 crc kubenswrapper[4820]: I0203 12:37:24.391231 4820 scope.go:117] "RemoveContainer" containerID="787a1b81b7bb69681d5b9957147401714226f456d02afe437024d5c4f7a745b3" Feb 03 12:37:24 crc kubenswrapper[4820]: I0203 12:37:24.425402 4820 scope.go:117] "RemoveContainer" containerID="360a324625000b7a0475ad7525e796f30c7043aacaa84aa615bc8a7ff9641dd4" Feb 03 12:37:25 crc kubenswrapper[4820]: I0203 12:37:25.751581 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="62eb6ec6-669b-476d-929f-919b7f533a5a" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.106:5671: connect: connection refused" Feb 03 12:37:25 crc kubenswrapper[4820]: I0203 12:37:25.833817 4820 generic.go:334] "Generic (PLEG): container finished" podID="62eb6ec6-669b-476d-929f-919b7f533a5a" containerID="2f33cc05658334d8533fe376a75f11b566384089517933d61870c37049109c62" exitCode=0 Feb 03 12:37:25 crc kubenswrapper[4820]: I0203 12:37:25.833952 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" 
event={"ID":"62eb6ec6-669b-476d-929f-919b7f533a5a","Type":"ContainerDied","Data":"2f33cc05658334d8533fe376a75f11b566384089517933d61870c37049109c62"} Feb 03 12:37:25 crc kubenswrapper[4820]: I0203 12:37:25.840916 4820 generic.go:334] "Generic (PLEG): container finished" podID="18ae976d-57fb-4c6e-8f3d-af9748d3058a" containerID="3c539d00fce621b18344b74f0c49d894626d0364db7118e68c2f1ca3ce327a39" exitCode=0 Feb 03 12:37:25 crc kubenswrapper[4820]: I0203 12:37:25.841094 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"18ae976d-57fb-4c6e-8f3d-af9748d3058a","Type":"ContainerDied","Data":"3c539d00fce621b18344b74f0c49d894626d0364db7118e68c2f1ca3ce327a39"} Feb 03 12:37:25 crc kubenswrapper[4820]: I0203 12:37:25.841381 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"18ae976d-57fb-4c6e-8f3d-af9748d3058a","Type":"ContainerDied","Data":"0ec9aa4fbf3266740919ca7ff7726b09f4b2eed9b434942733dc3eee4f3cc140"} Feb 03 12:37:25 crc kubenswrapper[4820]: I0203 12:37:25.841483 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0ec9aa4fbf3266740919ca7ff7726b09f4b2eed9b434942733dc3eee4f3cc140" Feb 03 12:37:25 crc kubenswrapper[4820]: I0203 12:37:25.970327 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 03 12:37:25 crc kubenswrapper[4820]: I0203 12:37:25.990933 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/18ae976d-57fb-4c6e-8f3d-af9748d3058a-rabbitmq-confd\") pod \"18ae976d-57fb-4c6e-8f3d-af9748d3058a\" (UID: \"18ae976d-57fb-4c6e-8f3d-af9748d3058a\") " Feb 03 12:37:25 crc kubenswrapper[4820]: I0203 12:37:25.990974 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"18ae976d-57fb-4c6e-8f3d-af9748d3058a\" (UID: \"18ae976d-57fb-4c6e-8f3d-af9748d3058a\") " Feb 03 12:37:25 crc kubenswrapper[4820]: I0203 12:37:25.991005 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/18ae976d-57fb-4c6e-8f3d-af9748d3058a-plugins-conf\") pod \"18ae976d-57fb-4c6e-8f3d-af9748d3058a\" (UID: \"18ae976d-57fb-4c6e-8f3d-af9748d3058a\") " Feb 03 12:37:25 crc kubenswrapper[4820]: I0203 12:37:25.991034 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6wpg2\" (UniqueName: \"kubernetes.io/projected/18ae976d-57fb-4c6e-8f3d-af9748d3058a-kube-api-access-6wpg2\") pod \"18ae976d-57fb-4c6e-8f3d-af9748d3058a\" (UID: \"18ae976d-57fb-4c6e-8f3d-af9748d3058a\") " Feb 03 12:37:25 crc kubenswrapper[4820]: I0203 12:37:25.991187 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/18ae976d-57fb-4c6e-8f3d-af9748d3058a-pod-info\") pod \"18ae976d-57fb-4c6e-8f3d-af9748d3058a\" (UID: \"18ae976d-57fb-4c6e-8f3d-af9748d3058a\") " Feb 03 12:37:25 crc kubenswrapper[4820]: I0203 12:37:25.991224 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/18ae976d-57fb-4c6e-8f3d-af9748d3058a-rabbitmq-erlang-cookie\") pod \"18ae976d-57fb-4c6e-8f3d-af9748d3058a\" (UID: \"18ae976d-57fb-4c6e-8f3d-af9748d3058a\") " Feb 03 12:37:25 crc kubenswrapper[4820]: 
I0203 12:37:25.991343 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/18ae976d-57fb-4c6e-8f3d-af9748d3058a-rabbitmq-plugins\") pod \"18ae976d-57fb-4c6e-8f3d-af9748d3058a\" (UID: \"18ae976d-57fb-4c6e-8f3d-af9748d3058a\") " Feb 03 12:37:25 crc kubenswrapper[4820]: I0203 12:37:25.991373 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/18ae976d-57fb-4c6e-8f3d-af9748d3058a-erlang-cookie-secret\") pod \"18ae976d-57fb-4c6e-8f3d-af9748d3058a\" (UID: \"18ae976d-57fb-4c6e-8f3d-af9748d3058a\") " Feb 03 12:37:25 crc kubenswrapper[4820]: I0203 12:37:25.991429 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/18ae976d-57fb-4c6e-8f3d-af9748d3058a-server-conf\") pod \"18ae976d-57fb-4c6e-8f3d-af9748d3058a\" (UID: \"18ae976d-57fb-4c6e-8f3d-af9748d3058a\") " Feb 03 12:37:25 crc kubenswrapper[4820]: I0203 12:37:25.991456 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/18ae976d-57fb-4c6e-8f3d-af9748d3058a-config-data\") pod \"18ae976d-57fb-4c6e-8f3d-af9748d3058a\" (UID: \"18ae976d-57fb-4c6e-8f3d-af9748d3058a\") " Feb 03 12:37:25 crc kubenswrapper[4820]: I0203 12:37:25.991485 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/18ae976d-57fb-4c6e-8f3d-af9748d3058a-rabbitmq-tls\") pod \"18ae976d-57fb-4c6e-8f3d-af9748d3058a\" (UID: \"18ae976d-57fb-4c6e-8f3d-af9748d3058a\") " Feb 03 12:37:25 crc kubenswrapper[4820]: I0203 12:37:25.993756 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/18ae976d-57fb-4c6e-8f3d-af9748d3058a-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "18ae976d-57fb-4c6e-8f3d-af9748d3058a" (UID: "18ae976d-57fb-4c6e-8f3d-af9748d3058a"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 12:37:25 crc kubenswrapper[4820]: I0203 12:37:25.995774 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/18ae976d-57fb-4c6e-8f3d-af9748d3058a-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "18ae976d-57fb-4c6e-8f3d-af9748d3058a" (UID: "18ae976d-57fb-4c6e-8f3d-af9748d3058a"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:37:25 crc kubenswrapper[4820]: I0203 12:37:25.998242 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/18ae976d-57fb-4c6e-8f3d-af9748d3058a-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "18ae976d-57fb-4c6e-8f3d-af9748d3058a" (UID: "18ae976d-57fb-4c6e-8f3d-af9748d3058a"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 12:37:26 crc kubenswrapper[4820]: I0203 12:37:26.005168 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage12-crc" (OuterVolumeSpecName: "persistence") pod "18ae976d-57fb-4c6e-8f3d-af9748d3058a" (UID: "18ae976d-57fb-4c6e-8f3d-af9748d3058a"). InnerVolumeSpecName "local-storage12-crc". 
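
A hedged reading of gracePeriod=604794 above: the SyncLoop DELETE for rabbitmq-server-0 arrived at 12:37:12 and the kill was issued at 12:37:18, so if the pod spec carries a 7-day terminationGracePeriodSeconds of 604800, the kubelet is passing along what remains after the 6 elapsed seconds (and 604795 for the cell1 pod, whose kill followed its DELETE by about 5 seconds). The spec value is an assumption inferred from the timestamps, not stated in the log:

package main

import "fmt"

func main() {
	const specGrace = 604800         // assumed terminationGracePeriodSeconds (7 days)
	deleteAt := 12*3600 + 37*60 + 12 // 12:37:12 SyncLoop DELETE
	killAt := 12*3600 + 37*60 + 18   // 12:37:18 Killing container
	fmt.Println(specGrace - (killAt - deleteAt)) // 604794, as logged
}
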
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 03 12:37:26 crc kubenswrapper[4820]: I0203 12:37:26.007199 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18ae976d-57fb-4c6e-8f3d-af9748d3058a-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "18ae976d-57fb-4c6e-8f3d-af9748d3058a" (UID: "18ae976d-57fb-4c6e-8f3d-af9748d3058a"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:37:26 crc kubenswrapper[4820]: I0203 12:37:26.007346 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18ae976d-57fb-4c6e-8f3d-af9748d3058a-kube-api-access-6wpg2" (OuterVolumeSpecName: "kube-api-access-6wpg2") pod "18ae976d-57fb-4c6e-8f3d-af9748d3058a" (UID: "18ae976d-57fb-4c6e-8f3d-af9748d3058a"). InnerVolumeSpecName "kube-api-access-6wpg2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:37:26 crc kubenswrapper[4820]: I0203 12:37:26.008042 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18ae976d-57fb-4c6e-8f3d-af9748d3058a-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "18ae976d-57fb-4c6e-8f3d-af9748d3058a" (UID: "18ae976d-57fb-4c6e-8f3d-af9748d3058a"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:37:26 crc kubenswrapper[4820]: I0203 12:37:26.028239 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/18ae976d-57fb-4c6e-8f3d-af9748d3058a-pod-info" (OuterVolumeSpecName: "pod-info") pod "18ae976d-57fb-4c6e-8f3d-af9748d3058a" (UID: "18ae976d-57fb-4c6e-8f3d-af9748d3058a"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 03 12:37:26 crc kubenswrapper[4820]: I0203 12:37:26.080857 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/18ae976d-57fb-4c6e-8f3d-af9748d3058a-config-data" (OuterVolumeSpecName: "config-data") pod "18ae976d-57fb-4c6e-8f3d-af9748d3058a" (UID: "18ae976d-57fb-4c6e-8f3d-af9748d3058a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:37:26 crc kubenswrapper[4820]: I0203 12:37:26.105778 4820 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/18ae976d-57fb-4c6e-8f3d-af9748d3058a-pod-info\") on node \"crc\" DevicePath \"\"" Feb 03 12:37:26 crc kubenswrapper[4820]: I0203 12:37:26.105825 4820 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/18ae976d-57fb-4c6e-8f3d-af9748d3058a-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Feb 03 12:37:26 crc kubenswrapper[4820]: I0203 12:37:26.105840 4820 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/18ae976d-57fb-4c6e-8f3d-af9748d3058a-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Feb 03 12:37:26 crc kubenswrapper[4820]: I0203 12:37:26.105849 4820 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/18ae976d-57fb-4c6e-8f3d-af9748d3058a-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Feb 03 12:37:26 crc kubenswrapper[4820]: I0203 12:37:26.105858 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/18ae976d-57fb-4c6e-8f3d-af9748d3058a-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 12:37:26 crc kubenswrapper[4820]: I0203 12:37:26.105869 4820 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/18ae976d-57fb-4c6e-8f3d-af9748d3058a-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Feb 03 12:37:26 crc kubenswrapper[4820]: I0203 12:37:26.105940 4820 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" " Feb 03 12:37:26 crc kubenswrapper[4820]: I0203 12:37:26.105955 4820 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/18ae976d-57fb-4c6e-8f3d-af9748d3058a-plugins-conf\") on node \"crc\" DevicePath \"\"" Feb 03 12:37:26 crc kubenswrapper[4820]: I0203 12:37:26.105966 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6wpg2\" (UniqueName: \"kubernetes.io/projected/18ae976d-57fb-4c6e-8f3d-af9748d3058a-kube-api-access-6wpg2\") on node \"crc\" DevicePath \"\"" Feb 03 12:37:26 crc kubenswrapper[4820]: I0203 12:37:26.170443 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/18ae976d-57fb-4c6e-8f3d-af9748d3058a-server-conf" (OuterVolumeSpecName: "server-conf") pod "18ae976d-57fb-4c6e-8f3d-af9748d3058a" (UID: "18ae976d-57fb-4c6e-8f3d-af9748d3058a"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:37:26 crc kubenswrapper[4820]: I0203 12:37:26.499243 4820 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage12-crc" (UniqueName: "kubernetes.io/local-volume/local-storage12-crc") on node "crc" Feb 03 12:37:26 crc kubenswrapper[4820]: I0203 12:37:26.530675 4820 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/18ae976d-57fb-4c6e-8f3d-af9748d3058a-server-conf\") on node \"crc\" DevicePath \"\"" Feb 03 12:37:26 crc kubenswrapper[4820]: I0203 12:37:26.650827 4820 reconciler_common.go:293] "Volume detached for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" DevicePath \"\"" Feb 03 12:37:26 crc kubenswrapper[4820]: I0203 12:37:26.726802 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 03 12:37:26 crc kubenswrapper[4820]: I0203 12:37:26.743661 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18ae976d-57fb-4c6e-8f3d-af9748d3058a-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "18ae976d-57fb-4c6e-8f3d-af9748d3058a" (UID: "18ae976d-57fb-4c6e-8f3d-af9748d3058a"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:37:26 crc kubenswrapper[4820]: I0203 12:37:26.763623 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"62eb6ec6-669b-476d-929f-919b7f533a5a\" (UID: \"62eb6ec6-669b-476d-929f-919b7f533a5a\") " Feb 03 12:37:26 crc kubenswrapper[4820]: I0203 12:37:26.763776 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/62eb6ec6-669b-476d-929f-919b7f533a5a-erlang-cookie-secret\") pod \"62eb6ec6-669b-476d-929f-919b7f533a5a\" (UID: \"62eb6ec6-669b-476d-929f-919b7f533a5a\") " Feb 03 12:37:26 crc kubenswrapper[4820]: I0203 12:37:26.763832 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/62eb6ec6-669b-476d-929f-919b7f533a5a-plugins-conf\") pod \"62eb6ec6-669b-476d-929f-919b7f533a5a\" (UID: \"62eb6ec6-669b-476d-929f-919b7f533a5a\") " Feb 03 12:37:26 crc kubenswrapper[4820]: I0203 12:37:26.763901 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/62eb6ec6-669b-476d-929f-919b7f533a5a-rabbitmq-confd\") pod \"62eb6ec6-669b-476d-929f-919b7f533a5a\" (UID: \"62eb6ec6-669b-476d-929f-919b7f533a5a\") " Feb 03 12:37:26 crc kubenswrapper[4820]: I0203 12:37:26.763936 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/62eb6ec6-669b-476d-929f-919b7f533a5a-server-conf\") pod \"62eb6ec6-669b-476d-929f-919b7f533a5a\" (UID: \"62eb6ec6-669b-476d-929f-919b7f533a5a\") " Feb 03 12:37:26 crc kubenswrapper[4820]: I0203 12:37:26.763960 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/62eb6ec6-669b-476d-929f-919b7f533a5a-rabbitmq-erlang-cookie\") pod \"62eb6ec6-669b-476d-929f-919b7f533a5a\" (UID: \"62eb6ec6-669b-476d-929f-919b7f533a5a\") " Feb 03 12:37:26 crc 
kubenswrapper[4820]: I0203 12:37:26.763991 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/62eb6ec6-669b-476d-929f-919b7f533a5a-pod-info\") pod \"62eb6ec6-669b-476d-929f-919b7f533a5a\" (UID: \"62eb6ec6-669b-476d-929f-919b7f533a5a\") " Feb 03 12:37:26 crc kubenswrapper[4820]: I0203 12:37:26.764017 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/62eb6ec6-669b-476d-929f-919b7f533a5a-config-data\") pod \"62eb6ec6-669b-476d-929f-919b7f533a5a\" (UID: \"62eb6ec6-669b-476d-929f-919b7f533a5a\") " Feb 03 12:37:26 crc kubenswrapper[4820]: I0203 12:37:26.764053 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k9bzp\" (UniqueName: \"kubernetes.io/projected/62eb6ec6-669b-476d-929f-919b7f533a5a-kube-api-access-k9bzp\") pod \"62eb6ec6-669b-476d-929f-919b7f533a5a\" (UID: \"62eb6ec6-669b-476d-929f-919b7f533a5a\") " Feb 03 12:37:26 crc kubenswrapper[4820]: I0203 12:37:26.764164 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/62eb6ec6-669b-476d-929f-919b7f533a5a-rabbitmq-tls\") pod \"62eb6ec6-669b-476d-929f-919b7f533a5a\" (UID: \"62eb6ec6-669b-476d-929f-919b7f533a5a\") " Feb 03 12:37:26 crc kubenswrapper[4820]: I0203 12:37:26.764209 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/62eb6ec6-669b-476d-929f-919b7f533a5a-rabbitmq-plugins\") pod \"62eb6ec6-669b-476d-929f-919b7f533a5a\" (UID: \"62eb6ec6-669b-476d-929f-919b7f533a5a\") " Feb 03 12:37:26 crc kubenswrapper[4820]: I0203 12:37:26.764654 4820 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/18ae976d-57fb-4c6e-8f3d-af9748d3058a-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Feb 03 12:37:26 crc kubenswrapper[4820]: I0203 12:37:26.768143 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/62eb6ec6-669b-476d-929f-919b7f533a5a-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "62eb6ec6-669b-476d-929f-919b7f533a5a" (UID: "62eb6ec6-669b-476d-929f-919b7f533a5a"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 12:37:26 crc kubenswrapper[4820]: I0203 12:37:26.768998 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/62eb6ec6-669b-476d-929f-919b7f533a5a-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "62eb6ec6-669b-476d-929f-919b7f533a5a" (UID: "62eb6ec6-669b-476d-929f-919b7f533a5a"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 12:37:26 crc kubenswrapper[4820]: I0203 12:37:26.771622 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/62eb6ec6-669b-476d-929f-919b7f533a5a-pod-info" (OuterVolumeSpecName: "pod-info") pod "62eb6ec6-669b-476d-929f-919b7f533a5a" (UID: "62eb6ec6-669b-476d-929f-919b7f533a5a"). InnerVolumeSpecName "pod-info". 
PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 03 12:37:26 crc kubenswrapper[4820]: I0203 12:37:26.771650 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62eb6ec6-669b-476d-929f-919b7f533a5a-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "62eb6ec6-669b-476d-929f-919b7f533a5a" (UID: "62eb6ec6-669b-476d-929f-919b7f533a5a"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:37:26 crc kubenswrapper[4820]: I0203 12:37:26.784513 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/62eb6ec6-669b-476d-929f-919b7f533a5a-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "62eb6ec6-669b-476d-929f-919b7f533a5a" (UID: "62eb6ec6-669b-476d-929f-919b7f533a5a"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:37:26 crc kubenswrapper[4820]: I0203 12:37:26.792540 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "persistence") pod "62eb6ec6-669b-476d-929f-919b7f533a5a" (UID: "62eb6ec6-669b-476d-929f-919b7f533a5a"). InnerVolumeSpecName "local-storage09-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 03 12:37:26 crc kubenswrapper[4820]: I0203 12:37:26.796124 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/62eb6ec6-669b-476d-929f-919b7f533a5a-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "62eb6ec6-669b-476d-929f-919b7f533a5a" (UID: "62eb6ec6-669b-476d-929f-919b7f533a5a"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:37:26 crc kubenswrapper[4820]: I0203 12:37:26.804659 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/62eb6ec6-669b-476d-929f-919b7f533a5a-kube-api-access-k9bzp" (OuterVolumeSpecName: "kube-api-access-k9bzp") pod "62eb6ec6-669b-476d-929f-919b7f533a5a" (UID: "62eb6ec6-669b-476d-929f-919b7f533a5a"). InnerVolumeSpecName "kube-api-access-k9bzp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:37:26 crc kubenswrapper[4820]: I0203 12:37:26.882209 4820 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/62eb6ec6-669b-476d-929f-919b7f533a5a-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Feb 03 12:37:26 crc kubenswrapper[4820]: I0203 12:37:26.882245 4820 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/62eb6ec6-669b-476d-929f-919b7f533a5a-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Feb 03 12:37:26 crc kubenswrapper[4820]: I0203 12:37:26.882274 4820 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" " Feb 03 12:37:26 crc kubenswrapper[4820]: I0203 12:37:26.882283 4820 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/62eb6ec6-669b-476d-929f-919b7f533a5a-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Feb 03 12:37:26 crc kubenswrapper[4820]: I0203 12:37:26.882293 4820 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/62eb6ec6-669b-476d-929f-919b7f533a5a-plugins-conf\") on node \"crc\" DevicePath \"\"" Feb 03 12:37:26 crc kubenswrapper[4820]: I0203 12:37:26.882303 4820 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/62eb6ec6-669b-476d-929f-919b7f533a5a-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Feb 03 12:37:26 crc kubenswrapper[4820]: I0203 12:37:26.882312 4820 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/62eb6ec6-669b-476d-929f-919b7f533a5a-pod-info\") on node \"crc\" DevicePath \"\"" Feb 03 12:37:26 crc kubenswrapper[4820]: I0203 12:37:26.882369 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k9bzp\" (UniqueName: \"kubernetes.io/projected/62eb6ec6-669b-476d-929f-919b7f533a5a-kube-api-access-k9bzp\") on node \"crc\" DevicePath \"\"" Feb 03 12:37:26 crc kubenswrapper[4820]: I0203 12:37:26.898189 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/62eb6ec6-669b-476d-929f-919b7f533a5a-server-conf" (OuterVolumeSpecName: "server-conf") pod "62eb6ec6-669b-476d-929f-919b7f533a5a" (UID: "62eb6ec6-669b-476d-929f-919b7f533a5a"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:37:26 crc kubenswrapper[4820]: I0203 12:37:26.904401 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 03 12:37:26 crc kubenswrapper[4820]: I0203 12:37:26.904477 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 03 12:37:26 crc kubenswrapper[4820]: I0203 12:37:26.904986 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"62eb6ec6-669b-476d-929f-919b7f533a5a","Type":"ContainerDied","Data":"60cb45ce8b0ac8b7a5df2126023b00d100c70dae307674c4acd2bc7c0b89995c"} Feb 03 12:37:26 crc kubenswrapper[4820]: I0203 12:37:26.905044 4820 scope.go:117] "RemoveContainer" containerID="2f33cc05658334d8533fe376a75f11b566384089517933d61870c37049109c62" Feb 03 12:37:26 crc kubenswrapper[4820]: I0203 12:37:26.919434 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/62eb6ec6-669b-476d-929f-919b7f533a5a-config-data" (OuterVolumeSpecName: "config-data") pod "62eb6ec6-669b-476d-929f-919b7f533a5a" (UID: "62eb6ec6-669b-476d-929f-919b7f533a5a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:37:26 crc kubenswrapper[4820]: I0203 12:37:26.925573 4820 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc" Feb 03 12:37:26 crc kubenswrapper[4820]: I0203 12:37:26.946052 4820 scope.go:117] "RemoveContainer" containerID="16baf9e9f5f87ab6f2f078df976f2bf016e5daf6fdba848fbad1d934eb79f9e2" Feb 03 12:37:27 crc kubenswrapper[4820]: I0203 12:37:27.285430 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/62eb6ec6-669b-476d-929f-919b7f533a5a-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "62eb6ec6-669b-476d-929f-919b7f533a5a" (UID: "62eb6ec6-669b-476d-929f-919b7f533a5a"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:37:27 crc kubenswrapper[4820]: I0203 12:37:27.286180 4820 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/62eb6ec6-669b-476d-929f-919b7f533a5a-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Feb 03 12:37:27 crc kubenswrapper[4820]: I0203 12:37:27.286216 4820 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/62eb6ec6-669b-476d-929f-919b7f533a5a-server-conf\") on node \"crc\" DevicePath \"\"" Feb 03 12:37:27 crc kubenswrapper[4820]: I0203 12:37:27.286256 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/62eb6ec6-669b-476d-929f-919b7f533a5a-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 12:37:27 crc kubenswrapper[4820]: I0203 12:37:27.286269 4820 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\"" Feb 03 12:37:27 crc kubenswrapper[4820]: I0203 12:37:27.379952 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 03 12:37:27 crc kubenswrapper[4820]: I0203 12:37:27.403411 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 03 12:37:27 crc kubenswrapper[4820]: I0203 12:37:27.437544 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Feb 03 12:37:27 crc kubenswrapper[4820]: E0203 12:37:27.438209 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18ae976d-57fb-4c6e-8f3d-af9748d3058a" containerName="rabbitmq" Feb 03 12:37:27 crc 
kubenswrapper[4820]: I0203 12:37:27.438236 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="18ae976d-57fb-4c6e-8f3d-af9748d3058a" containerName="rabbitmq" Feb 03 12:37:27 crc kubenswrapper[4820]: E0203 12:37:27.438256 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17c371f7-f032-4444-8d4b-1183a224c7b0" containerName="horizon" Feb 03 12:37:27 crc kubenswrapper[4820]: I0203 12:37:27.438263 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="17c371f7-f032-4444-8d4b-1183a224c7b0" containerName="horizon" Feb 03 12:37:27 crc kubenswrapper[4820]: E0203 12:37:27.438273 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9cb4d29-0ae3-4944-be74-dde6d01a37a7" containerName="extract-utilities" Feb 03 12:37:27 crc kubenswrapper[4820]: I0203 12:37:27.438284 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9cb4d29-0ae3-4944-be74-dde6d01a37a7" containerName="extract-utilities" Feb 03 12:37:27 crc kubenswrapper[4820]: E0203 12:37:27.438297 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17c371f7-f032-4444-8d4b-1183a224c7b0" containerName="horizon" Feb 03 12:37:27 crc kubenswrapper[4820]: I0203 12:37:27.438302 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="17c371f7-f032-4444-8d4b-1183a224c7b0" containerName="horizon" Feb 03 12:37:27 crc kubenswrapper[4820]: E0203 12:37:27.438313 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9cb4d29-0ae3-4944-be74-dde6d01a37a7" containerName="extract-content" Feb 03 12:37:27 crc kubenswrapper[4820]: I0203 12:37:27.438320 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9cb4d29-0ae3-4944-be74-dde6d01a37a7" containerName="extract-content" Feb 03 12:37:27 crc kubenswrapper[4820]: E0203 12:37:27.438327 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62eb6ec6-669b-476d-929f-919b7f533a5a" containerName="setup-container" Feb 03 12:37:27 crc kubenswrapper[4820]: I0203 12:37:27.438333 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="62eb6ec6-669b-476d-929f-919b7f533a5a" containerName="setup-container" Feb 03 12:37:27 crc kubenswrapper[4820]: E0203 12:37:27.438353 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18ae976d-57fb-4c6e-8f3d-af9748d3058a" containerName="setup-container" Feb 03 12:37:27 crc kubenswrapper[4820]: I0203 12:37:27.438359 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="18ae976d-57fb-4c6e-8f3d-af9748d3058a" containerName="setup-container" Feb 03 12:37:27 crc kubenswrapper[4820]: E0203 12:37:27.438375 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62eb6ec6-669b-476d-929f-919b7f533a5a" containerName="rabbitmq" Feb 03 12:37:27 crc kubenswrapper[4820]: I0203 12:37:27.438381 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="62eb6ec6-669b-476d-929f-919b7f533a5a" containerName="rabbitmq" Feb 03 12:37:27 crc kubenswrapper[4820]: E0203 12:37:27.438394 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17c371f7-f032-4444-8d4b-1183a224c7b0" containerName="horizon-log" Feb 03 12:37:27 crc kubenswrapper[4820]: I0203 12:37:27.438399 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="17c371f7-f032-4444-8d4b-1183a224c7b0" containerName="horizon-log" Feb 03 12:37:27 crc kubenswrapper[4820]: E0203 12:37:27.438414 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9cb4d29-0ae3-4944-be74-dde6d01a37a7" containerName="registry-server" Feb 03 12:37:27 crc kubenswrapper[4820]: I0203 
12:37:27.438420 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9cb4d29-0ae3-4944-be74-dde6d01a37a7" containerName="registry-server" Feb 03 12:37:27 crc kubenswrapper[4820]: E0203 12:37:27.438429 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17c371f7-f032-4444-8d4b-1183a224c7b0" containerName="horizon" Feb 03 12:37:27 crc kubenswrapper[4820]: I0203 12:37:27.438434 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="17c371f7-f032-4444-8d4b-1183a224c7b0" containerName="horizon" Feb 03 12:37:27 crc kubenswrapper[4820]: I0203 12:37:27.438622 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="17c371f7-f032-4444-8d4b-1183a224c7b0" containerName="horizon" Feb 03 12:37:27 crc kubenswrapper[4820]: I0203 12:37:27.438635 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="17c371f7-f032-4444-8d4b-1183a224c7b0" containerName="horizon" Feb 03 12:37:27 crc kubenswrapper[4820]: I0203 12:37:27.438648 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="17c371f7-f032-4444-8d4b-1183a224c7b0" containerName="horizon-log" Feb 03 12:37:27 crc kubenswrapper[4820]: I0203 12:37:27.438657 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="f9cb4d29-0ae3-4944-be74-dde6d01a37a7" containerName="registry-server" Feb 03 12:37:27 crc kubenswrapper[4820]: I0203 12:37:27.438666 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="62eb6ec6-669b-476d-929f-919b7f533a5a" containerName="rabbitmq" Feb 03 12:37:27 crc kubenswrapper[4820]: I0203 12:37:27.438674 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="17c371f7-f032-4444-8d4b-1183a224c7b0" containerName="horizon" Feb 03 12:37:27 crc kubenswrapper[4820]: I0203 12:37:27.438685 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="17c371f7-f032-4444-8d4b-1183a224c7b0" containerName="horizon" Feb 03 12:37:27 crc kubenswrapper[4820]: I0203 12:37:27.438692 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="18ae976d-57fb-4c6e-8f3d-af9748d3058a" containerName="rabbitmq" Feb 03 12:37:27 crc kubenswrapper[4820]: E0203 12:37:27.438966 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17c371f7-f032-4444-8d4b-1183a224c7b0" containerName="horizon" Feb 03 12:37:27 crc kubenswrapper[4820]: I0203 12:37:27.438978 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="17c371f7-f032-4444-8d4b-1183a224c7b0" containerName="horizon" Feb 03 12:37:27 crc kubenswrapper[4820]: E0203 12:37:27.438999 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17c371f7-f032-4444-8d4b-1183a224c7b0" containerName="horizon" Feb 03 12:37:27 crc kubenswrapper[4820]: I0203 12:37:27.439007 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="17c371f7-f032-4444-8d4b-1183a224c7b0" containerName="horizon" Feb 03 12:37:27 crc kubenswrapper[4820]: I0203 12:37:27.439225 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="17c371f7-f032-4444-8d4b-1183a224c7b0" containerName="horizon" Feb 03 12:37:27 crc kubenswrapper[4820]: I0203 12:37:27.440123 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 03 12:37:27 crc kubenswrapper[4820]: I0203 12:37:27.442758 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Feb 03 12:37:27 crc kubenswrapper[4820]: I0203 12:37:27.442819 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-6q5vv" Feb 03 12:37:27 crc kubenswrapper[4820]: I0203 12:37:27.446030 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Feb 03 12:37:27 crc kubenswrapper[4820]: I0203 12:37:27.446422 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Feb 03 12:37:27 crc kubenswrapper[4820]: I0203 12:37:27.446575 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Feb 03 12:37:27 crc kubenswrapper[4820]: I0203 12:37:27.446607 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Feb 03 12:37:27 crc kubenswrapper[4820]: I0203 12:37:27.446733 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Feb 03 12:37:27 crc kubenswrapper[4820]: I0203 12:37:27.480031 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 03 12:37:27 crc kubenswrapper[4820]: I0203 12:37:27.541287 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 03 12:37:27 crc kubenswrapper[4820]: I0203 12:37:27.554962 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 03 12:37:27 crc kubenswrapper[4820]: I0203 12:37:27.580078 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 03 12:37:27 crc kubenswrapper[4820]: I0203 12:37:27.585438 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 03 12:37:27 crc kubenswrapper[4820]: I0203 12:37:27.588644 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Feb 03 12:37:27 crc kubenswrapper[4820]: I0203 12:37:27.591332 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-jmczl" Feb 03 12:37:27 crc kubenswrapper[4820]: I0203 12:37:27.591598 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Feb 03 12:37:27 crc kubenswrapper[4820]: I0203 12:37:27.591815 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Feb 03 12:37:27 crc kubenswrapper[4820]: I0203 12:37:27.592064 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Feb 03 12:37:27 crc kubenswrapper[4820]: I0203 12:37:27.592265 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Feb 03 12:37:27 crc kubenswrapper[4820]: I0203 12:37:27.593319 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Feb 03 12:37:27 crc kubenswrapper[4820]: I0203 12:37:27.594478 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ed109a9d-a703-4fa2-b7b3-0b96760d52b1-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"ed109a9d-a703-4fa2-b7b3-0b96760d52b1\") " pod="openstack/rabbitmq-server-0" Feb 03 12:37:27 crc kubenswrapper[4820]: I0203 12:37:27.594563 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ed109a9d-a703-4fa2-b7b3-0b96760d52b1-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"ed109a9d-a703-4fa2-b7b3-0b96760d52b1\") " pod="openstack/rabbitmq-server-0" Feb 03 12:37:27 crc kubenswrapper[4820]: I0203 12:37:27.594585 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ed109a9d-a703-4fa2-b7b3-0b96760d52b1-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"ed109a9d-a703-4fa2-b7b3-0b96760d52b1\") " pod="openstack/rabbitmq-server-0" Feb 03 12:37:27 crc kubenswrapper[4820]: I0203 12:37:27.594606 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"rabbitmq-server-0\" (UID: \"ed109a9d-a703-4fa2-b7b3-0b96760d52b1\") " pod="openstack/rabbitmq-server-0" Feb 03 12:37:27 crc kubenswrapper[4820]: I0203 12:37:27.594631 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ed109a9d-a703-4fa2-b7b3-0b96760d52b1-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"ed109a9d-a703-4fa2-b7b3-0b96760d52b1\") " pod="openstack/rabbitmq-server-0" Feb 03 12:37:27 crc kubenswrapper[4820]: I0203 12:37:27.594686 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cr5bd\" (UniqueName: \"kubernetes.io/projected/ed109a9d-a703-4fa2-b7b3-0b96760d52b1-kube-api-access-cr5bd\") pod \"rabbitmq-server-0\" (UID: 
\"ed109a9d-a703-4fa2-b7b3-0b96760d52b1\") " pod="openstack/rabbitmq-server-0" Feb 03 12:37:27 crc kubenswrapper[4820]: I0203 12:37:27.594712 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/ed109a9d-a703-4fa2-b7b3-0b96760d52b1-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"ed109a9d-a703-4fa2-b7b3-0b96760d52b1\") " pod="openstack/rabbitmq-server-0" Feb 03 12:37:27 crc kubenswrapper[4820]: I0203 12:37:27.594729 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ed109a9d-a703-4fa2-b7b3-0b96760d52b1-server-conf\") pod \"rabbitmq-server-0\" (UID: \"ed109a9d-a703-4fa2-b7b3-0b96760d52b1\") " pod="openstack/rabbitmq-server-0" Feb 03 12:37:27 crc kubenswrapper[4820]: I0203 12:37:27.594756 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ed109a9d-a703-4fa2-b7b3-0b96760d52b1-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"ed109a9d-a703-4fa2-b7b3-0b96760d52b1\") " pod="openstack/rabbitmq-server-0" Feb 03 12:37:27 crc kubenswrapper[4820]: I0203 12:37:27.594831 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ed109a9d-a703-4fa2-b7b3-0b96760d52b1-pod-info\") pod \"rabbitmq-server-0\" (UID: \"ed109a9d-a703-4fa2-b7b3-0b96760d52b1\") " pod="openstack/rabbitmq-server-0" Feb 03 12:37:27 crc kubenswrapper[4820]: I0203 12:37:27.594868 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ed109a9d-a703-4fa2-b7b3-0b96760d52b1-config-data\") pod \"rabbitmq-server-0\" (UID: \"ed109a9d-a703-4fa2-b7b3-0b96760d52b1\") " pod="openstack/rabbitmq-server-0" Feb 03 12:37:27 crc kubenswrapper[4820]: I0203 12:37:27.595948 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 03 12:37:27 crc kubenswrapper[4820]: I0203 12:37:27.696470 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/c2cfe24f-4614-4f48-867c-722af03baad7-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"c2cfe24f-4614-4f48-867c-722af03baad7\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 12:37:27 crc kubenswrapper[4820]: I0203 12:37:27.696530 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ed109a9d-a703-4fa2-b7b3-0b96760d52b1-config-data\") pod \"rabbitmq-server-0\" (UID: \"ed109a9d-a703-4fa2-b7b3-0b96760d52b1\") " pod="openstack/rabbitmq-server-0" Feb 03 12:37:27 crc kubenswrapper[4820]: I0203 12:37:27.696577 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ed109a9d-a703-4fa2-b7b3-0b96760d52b1-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"ed109a9d-a703-4fa2-b7b3-0b96760d52b1\") " pod="openstack/rabbitmq-server-0" Feb 03 12:37:27 crc kubenswrapper[4820]: I0203 12:37:27.696602 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: 
\"kubernetes.io/downward-api/c2cfe24f-4614-4f48-867c-722af03baad7-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"c2cfe24f-4614-4f48-867c-722af03baad7\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 12:37:27 crc kubenswrapper[4820]: I0203 12:37:27.696643 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nrqhm\" (UniqueName: \"kubernetes.io/projected/c2cfe24f-4614-4f48-867c-722af03baad7-kube-api-access-nrqhm\") pod \"rabbitmq-cell1-server-0\" (UID: \"c2cfe24f-4614-4f48-867c-722af03baad7\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 12:37:27 crc kubenswrapper[4820]: I0203 12:37:27.696665 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ed109a9d-a703-4fa2-b7b3-0b96760d52b1-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"ed109a9d-a703-4fa2-b7b3-0b96760d52b1\") " pod="openstack/rabbitmq-server-0" Feb 03 12:37:27 crc kubenswrapper[4820]: I0203 12:37:27.696683 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ed109a9d-a703-4fa2-b7b3-0b96760d52b1-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"ed109a9d-a703-4fa2-b7b3-0b96760d52b1\") " pod="openstack/rabbitmq-server-0" Feb 03 12:37:27 crc kubenswrapper[4820]: I0203 12:37:27.696700 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/c2cfe24f-4614-4f48-867c-722af03baad7-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"c2cfe24f-4614-4f48-867c-722af03baad7\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 12:37:27 crc kubenswrapper[4820]: I0203 12:37:27.696720 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"rabbitmq-server-0\" (UID: \"ed109a9d-a703-4fa2-b7b3-0b96760d52b1\") " pod="openstack/rabbitmq-server-0" Feb 03 12:37:27 crc kubenswrapper[4820]: I0203 12:37:27.696741 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ed109a9d-a703-4fa2-b7b3-0b96760d52b1-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"ed109a9d-a703-4fa2-b7b3-0b96760d52b1\") " pod="openstack/rabbitmq-server-0" Feb 03 12:37:27 crc kubenswrapper[4820]: I0203 12:37:27.696766 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"c2cfe24f-4614-4f48-867c-722af03baad7\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 12:37:27 crc kubenswrapper[4820]: I0203 12:37:27.696794 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/c2cfe24f-4614-4f48-867c-722af03baad7-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"c2cfe24f-4614-4f48-867c-722af03baad7\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 12:37:27 crc kubenswrapper[4820]: I0203 12:37:27.696813 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/c2cfe24f-4614-4f48-867c-722af03baad7-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"c2cfe24f-4614-4f48-867c-722af03baad7\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 12:37:27 crc kubenswrapper[4820]: I0203 12:37:27.696843 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cr5bd\" (UniqueName: \"kubernetes.io/projected/ed109a9d-a703-4fa2-b7b3-0b96760d52b1-kube-api-access-cr5bd\") pod \"rabbitmq-server-0\" (UID: \"ed109a9d-a703-4fa2-b7b3-0b96760d52b1\") " pod="openstack/rabbitmq-server-0" Feb 03 12:37:27 crc kubenswrapper[4820]: I0203 12:37:27.696878 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/ed109a9d-a703-4fa2-b7b3-0b96760d52b1-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"ed109a9d-a703-4fa2-b7b3-0b96760d52b1\") " pod="openstack/rabbitmq-server-0" Feb 03 12:37:27 crc kubenswrapper[4820]: I0203 12:37:27.696920 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ed109a9d-a703-4fa2-b7b3-0b96760d52b1-server-conf\") pod \"rabbitmq-server-0\" (UID: \"ed109a9d-a703-4fa2-b7b3-0b96760d52b1\") " pod="openstack/rabbitmq-server-0" Feb 03 12:37:27 crc kubenswrapper[4820]: I0203 12:37:27.696956 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ed109a9d-a703-4fa2-b7b3-0b96760d52b1-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"ed109a9d-a703-4fa2-b7b3-0b96760d52b1\") " pod="openstack/rabbitmq-server-0" Feb 03 12:37:27 crc kubenswrapper[4820]: I0203 12:37:27.696990 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/c2cfe24f-4614-4f48-867c-722af03baad7-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"c2cfe24f-4614-4f48-867c-722af03baad7\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 12:37:27 crc kubenswrapper[4820]: I0203 12:37:27.697076 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ed109a9d-a703-4fa2-b7b3-0b96760d52b1-pod-info\") pod \"rabbitmq-server-0\" (UID: \"ed109a9d-a703-4fa2-b7b3-0b96760d52b1\") " pod="openstack/rabbitmq-server-0" Feb 03 12:37:27 crc kubenswrapper[4820]: I0203 12:37:27.697110 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/c2cfe24f-4614-4f48-867c-722af03baad7-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"c2cfe24f-4614-4f48-867c-722af03baad7\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 12:37:27 crc kubenswrapper[4820]: I0203 12:37:27.697140 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c2cfe24f-4614-4f48-867c-722af03baad7-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"c2cfe24f-4614-4f48-867c-722af03baad7\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 12:37:27 crc kubenswrapper[4820]: I0203 12:37:27.697166 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/c2cfe24f-4614-4f48-867c-722af03baad7-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"c2cfe24f-4614-4f48-867c-722af03baad7\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 12:37:27 crc 
kubenswrapper[4820]: I0203 12:37:27.697389 4820 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"rabbitmq-server-0\" (UID: \"ed109a9d-a703-4fa2-b7b3-0b96760d52b1\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/rabbitmq-server-0" Feb 03 12:37:27 crc kubenswrapper[4820]: I0203 12:37:27.698312 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ed109a9d-a703-4fa2-b7b3-0b96760d52b1-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"ed109a9d-a703-4fa2-b7b3-0b96760d52b1\") " pod="openstack/rabbitmq-server-0" Feb 03 12:37:27 crc kubenswrapper[4820]: I0203 12:37:27.698336 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ed109a9d-a703-4fa2-b7b3-0b96760d52b1-config-data\") pod \"rabbitmq-server-0\" (UID: \"ed109a9d-a703-4fa2-b7b3-0b96760d52b1\") " pod="openstack/rabbitmq-server-0" Feb 03 12:37:27 crc kubenswrapper[4820]: I0203 12:37:27.698701 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ed109a9d-a703-4fa2-b7b3-0b96760d52b1-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"ed109a9d-a703-4fa2-b7b3-0b96760d52b1\") " pod="openstack/rabbitmq-server-0" Feb 03 12:37:27 crc kubenswrapper[4820]: I0203 12:37:27.699933 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ed109a9d-a703-4fa2-b7b3-0b96760d52b1-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"ed109a9d-a703-4fa2-b7b3-0b96760d52b1\") " pod="openstack/rabbitmq-server-0" Feb 03 12:37:27 crc kubenswrapper[4820]: I0203 12:37:27.700488 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ed109a9d-a703-4fa2-b7b3-0b96760d52b1-server-conf\") pod \"rabbitmq-server-0\" (UID: \"ed109a9d-a703-4fa2-b7b3-0b96760d52b1\") " pod="openstack/rabbitmq-server-0" Feb 03 12:37:27 crc kubenswrapper[4820]: I0203 12:37:27.715309 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ed109a9d-a703-4fa2-b7b3-0b96760d52b1-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"ed109a9d-a703-4fa2-b7b3-0b96760d52b1\") " pod="openstack/rabbitmq-server-0" Feb 03 12:37:27 crc kubenswrapper[4820]: I0203 12:37:27.719421 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ed109a9d-a703-4fa2-b7b3-0b96760d52b1-pod-info\") pod \"rabbitmq-server-0\" (UID: \"ed109a9d-a703-4fa2-b7b3-0b96760d52b1\") " pod="openstack/rabbitmq-server-0" Feb 03 12:37:27 crc kubenswrapper[4820]: I0203 12:37:27.720158 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ed109a9d-a703-4fa2-b7b3-0b96760d52b1-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"ed109a9d-a703-4fa2-b7b3-0b96760d52b1\") " pod="openstack/rabbitmq-server-0" Feb 03 12:37:27 crc kubenswrapper[4820]: I0203 12:37:27.723486 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/ed109a9d-a703-4fa2-b7b3-0b96760d52b1-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: 
\"ed109a9d-a703-4fa2-b7b3-0b96760d52b1\") " pod="openstack/rabbitmq-server-0" Feb 03 12:37:27 crc kubenswrapper[4820]: I0203 12:37:27.734626 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cr5bd\" (UniqueName: \"kubernetes.io/projected/ed109a9d-a703-4fa2-b7b3-0b96760d52b1-kube-api-access-cr5bd\") pod \"rabbitmq-server-0\" (UID: \"ed109a9d-a703-4fa2-b7b3-0b96760d52b1\") " pod="openstack/rabbitmq-server-0" Feb 03 12:37:27 crc kubenswrapper[4820]: I0203 12:37:27.785295 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"rabbitmq-server-0\" (UID: \"ed109a9d-a703-4fa2-b7b3-0b96760d52b1\") " pod="openstack/rabbitmq-server-0" Feb 03 12:37:27 crc kubenswrapper[4820]: I0203 12:37:27.800113 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/c2cfe24f-4614-4f48-867c-722af03baad7-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"c2cfe24f-4614-4f48-867c-722af03baad7\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 12:37:27 crc kubenswrapper[4820]: I0203 12:37:27.800212 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c2cfe24f-4614-4f48-867c-722af03baad7-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"c2cfe24f-4614-4f48-867c-722af03baad7\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 12:37:27 crc kubenswrapper[4820]: I0203 12:37:27.800241 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/c2cfe24f-4614-4f48-867c-722af03baad7-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"c2cfe24f-4614-4f48-867c-722af03baad7\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 12:37:27 crc kubenswrapper[4820]: I0203 12:37:27.800287 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/c2cfe24f-4614-4f48-867c-722af03baad7-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"c2cfe24f-4614-4f48-867c-722af03baad7\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 12:37:27 crc kubenswrapper[4820]: I0203 12:37:27.800784 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/c2cfe24f-4614-4f48-867c-722af03baad7-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"c2cfe24f-4614-4f48-867c-722af03baad7\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 12:37:27 crc kubenswrapper[4820]: I0203 12:37:27.801183 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nrqhm\" (UniqueName: \"kubernetes.io/projected/c2cfe24f-4614-4f48-867c-722af03baad7-kube-api-access-nrqhm\") pod \"rabbitmq-cell1-server-0\" (UID: \"c2cfe24f-4614-4f48-867c-722af03baad7\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 12:37:27 crc kubenswrapper[4820]: I0203 12:37:27.801238 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/c2cfe24f-4614-4f48-867c-722af03baad7-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"c2cfe24f-4614-4f48-867c-722af03baad7\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 12:37:27 crc kubenswrapper[4820]: I0203 12:37:27.801431 4820 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"c2cfe24f-4614-4f48-867c-722af03baad7\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 12:37:27 crc kubenswrapper[4820]: I0203 12:37:27.801506 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/c2cfe24f-4614-4f48-867c-722af03baad7-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"c2cfe24f-4614-4f48-867c-722af03baad7\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 12:37:27 crc kubenswrapper[4820]: I0203 12:37:27.801542 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/c2cfe24f-4614-4f48-867c-722af03baad7-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"c2cfe24f-4614-4f48-867c-722af03baad7\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 12:37:27 crc kubenswrapper[4820]: I0203 12:37:27.801869 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/c2cfe24f-4614-4f48-867c-722af03baad7-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"c2cfe24f-4614-4f48-867c-722af03baad7\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 12:37:27 crc kubenswrapper[4820]: I0203 12:37:27.804354 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/c2cfe24f-4614-4f48-867c-722af03baad7-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"c2cfe24f-4614-4f48-867c-722af03baad7\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 12:37:27 crc kubenswrapper[4820]: I0203 12:37:27.806093 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/c2cfe24f-4614-4f48-867c-722af03baad7-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"c2cfe24f-4614-4f48-867c-722af03baad7\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 12:37:27 crc kubenswrapper[4820]: I0203 12:37:27.806122 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/c2cfe24f-4614-4f48-867c-722af03baad7-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"c2cfe24f-4614-4f48-867c-722af03baad7\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 12:37:27 crc kubenswrapper[4820]: I0203 12:37:27.806300 4820 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"c2cfe24f-4614-4f48-867c-722af03baad7\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/rabbitmq-cell1-server-0" Feb 03 12:37:27 crc kubenswrapper[4820]: I0203 12:37:27.806522 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c2cfe24f-4614-4f48-867c-722af03baad7-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"c2cfe24f-4614-4f48-867c-722af03baad7\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 12:37:27 crc kubenswrapper[4820]: I0203 12:37:27.810242 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/c2cfe24f-4614-4f48-867c-722af03baad7-erlang-cookie-secret\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"c2cfe24f-4614-4f48-867c-722af03baad7\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 12:37:27 crc kubenswrapper[4820]: I0203 12:37:27.811529 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/c2cfe24f-4614-4f48-867c-722af03baad7-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"c2cfe24f-4614-4f48-867c-722af03baad7\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 12:37:27 crc kubenswrapper[4820]: I0203 12:37:27.811603 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/c2cfe24f-4614-4f48-867c-722af03baad7-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"c2cfe24f-4614-4f48-867c-722af03baad7\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 12:37:27 crc kubenswrapper[4820]: I0203 12:37:27.816831 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/c2cfe24f-4614-4f48-867c-722af03baad7-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"c2cfe24f-4614-4f48-867c-722af03baad7\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 12:37:27 crc kubenswrapper[4820]: I0203 12:37:27.821558 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/c2cfe24f-4614-4f48-867c-722af03baad7-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"c2cfe24f-4614-4f48-867c-722af03baad7\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 12:37:27 crc kubenswrapper[4820]: I0203 12:37:27.828738 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nrqhm\" (UniqueName: \"kubernetes.io/projected/c2cfe24f-4614-4f48-867c-722af03baad7-kube-api-access-nrqhm\") pod \"rabbitmq-cell1-server-0\" (UID: \"c2cfe24f-4614-4f48-867c-722af03baad7\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 12:37:27 crc kubenswrapper[4820]: I0203 12:37:27.871367 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"c2cfe24f-4614-4f48-867c-722af03baad7\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 12:37:27 crc kubenswrapper[4820]: I0203 12:37:27.910697 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 03 12:37:28 crc kubenswrapper[4820]: I0203 12:37:28.078309 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 03 12:37:28 crc kubenswrapper[4820]: I0203 12:37:28.542936 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 03 12:37:28 crc kubenswrapper[4820]: W0203 12:37:28.578415 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc2cfe24f_4614_4f48_867c_722af03baad7.slice/crio-34714c0b8c9e066f201b0d096b4db22fbdadd89ec7cc17bcf4ca0f7a76fd7ec8 WatchSource:0}: Error finding container 34714c0b8c9e066f201b0d096b4db22fbdadd89ec7cc17bcf4ca0f7a76fd7ec8: Status 404 returned error can't find the container with id 34714c0b8c9e066f201b0d096b4db22fbdadd89ec7cc17bcf4ca0f7a76fd7ec8 Feb 03 12:37:28 crc kubenswrapper[4820]: I0203 12:37:28.933904 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"c2cfe24f-4614-4f48-867c-722af03baad7","Type":"ContainerStarted","Data":"34714c0b8c9e066f201b0d096b4db22fbdadd89ec7cc17bcf4ca0f7a76fd7ec8"} Feb 03 12:37:28 crc kubenswrapper[4820]: I0203 12:37:28.963961 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 03 12:37:28 crc kubenswrapper[4820]: W0203 12:37:28.980654 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poded109a9d_a703_4fa2_b7b3_0b96760d52b1.slice/crio-068a8b8cc86736410a5392fcc0b7e156432e744aa6c27ddb67de053007aa816a WatchSource:0}: Error finding container 068a8b8cc86736410a5392fcc0b7e156432e744aa6c27ddb67de053007aa816a: Status 404 returned error can't find the container with id 068a8b8cc86736410a5392fcc0b7e156432e744aa6c27ddb67de053007aa816a Feb 03 12:37:29 crc kubenswrapper[4820]: I0203 12:37:29.191628 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="18ae976d-57fb-4c6e-8f3d-af9748d3058a" path="/var/lib/kubelet/pods/18ae976d-57fb-4c6e-8f3d-af9748d3058a/volumes" Feb 03 12:37:29 crc kubenswrapper[4820]: I0203 12:37:29.192846 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="62eb6ec6-669b-476d-929f-919b7f533a5a" path="/var/lib/kubelet/pods/62eb6ec6-669b-476d-929f-919b7f533a5a/volumes" Feb 03 12:37:29 crc kubenswrapper[4820]: I0203 12:37:29.945864 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"ed109a9d-a703-4fa2-b7b3-0b96760d52b1","Type":"ContainerStarted","Data":"068a8b8cc86736410a5392fcc0b7e156432e744aa6c27ddb67de053007aa816a"} Feb 03 12:37:30 crc kubenswrapper[4820]: I0203 12:37:30.910953 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-mb2zc"] Feb 03 12:37:30 crc kubenswrapper[4820]: I0203 12:37:30.913682 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-79bd4cc8c9-mb2zc" Feb 03 12:37:30 crc kubenswrapper[4820]: I0203 12:37:30.923775 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Feb 03 12:37:30 crc kubenswrapper[4820]: I0203 12:37:30.931706 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-mb2zc"] Feb 03 12:37:31 crc kubenswrapper[4820]: I0203 12:37:31.092863 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/d2c69005-c424-404e-9087-43c0fa3ca83c-openstack-edpm-ipam\") pod \"dnsmasq-dns-79bd4cc8c9-mb2zc\" (UID: \"d2c69005-c424-404e-9087-43c0fa3ca83c\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-mb2zc" Feb 03 12:37:31 crc kubenswrapper[4820]: I0203 12:37:31.093238 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d2c69005-c424-404e-9087-43c0fa3ca83c-ovsdbserver-nb\") pod \"dnsmasq-dns-79bd4cc8c9-mb2zc\" (UID: \"d2c69005-c424-404e-9087-43c0fa3ca83c\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-mb2zc" Feb 03 12:37:31 crc kubenswrapper[4820]: I0203 12:37:31.093360 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d2c69005-c424-404e-9087-43c0fa3ca83c-dns-swift-storage-0\") pod \"dnsmasq-dns-79bd4cc8c9-mb2zc\" (UID: \"d2c69005-c424-404e-9087-43c0fa3ca83c\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-mb2zc" Feb 03 12:37:31 crc kubenswrapper[4820]: I0203 12:37:31.094542 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d2c69005-c424-404e-9087-43c0fa3ca83c-ovsdbserver-sb\") pod \"dnsmasq-dns-79bd4cc8c9-mb2zc\" (UID: \"d2c69005-c424-404e-9087-43c0fa3ca83c\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-mb2zc" Feb 03 12:37:31 crc kubenswrapper[4820]: I0203 12:37:31.094738 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d2c69005-c424-404e-9087-43c0fa3ca83c-dns-svc\") pod \"dnsmasq-dns-79bd4cc8c9-mb2zc\" (UID: \"d2c69005-c424-404e-9087-43c0fa3ca83c\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-mb2zc" Feb 03 12:37:31 crc kubenswrapper[4820]: I0203 12:37:31.094833 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2c69005-c424-404e-9087-43c0fa3ca83c-config\") pod \"dnsmasq-dns-79bd4cc8c9-mb2zc\" (UID: \"d2c69005-c424-404e-9087-43c0fa3ca83c\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-mb2zc" Feb 03 12:37:31 crc kubenswrapper[4820]: I0203 12:37:31.095042 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v42cq\" (UniqueName: \"kubernetes.io/projected/d2c69005-c424-404e-9087-43c0fa3ca83c-kube-api-access-v42cq\") pod \"dnsmasq-dns-79bd4cc8c9-mb2zc\" (UID: \"d2c69005-c424-404e-9087-43c0fa3ca83c\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-mb2zc" Feb 03 12:37:31 crc kubenswrapper[4820]: I0203 12:37:31.196908 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/d2c69005-c424-404e-9087-43c0fa3ca83c-openstack-edpm-ipam\") pod 
\"dnsmasq-dns-79bd4cc8c9-mb2zc\" (UID: \"d2c69005-c424-404e-9087-43c0fa3ca83c\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-mb2zc" Feb 03 12:37:31 crc kubenswrapper[4820]: I0203 12:37:31.197050 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d2c69005-c424-404e-9087-43c0fa3ca83c-ovsdbserver-nb\") pod \"dnsmasq-dns-79bd4cc8c9-mb2zc\" (UID: \"d2c69005-c424-404e-9087-43c0fa3ca83c\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-mb2zc" Feb 03 12:37:31 crc kubenswrapper[4820]: I0203 12:37:31.197072 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d2c69005-c424-404e-9087-43c0fa3ca83c-dns-swift-storage-0\") pod \"dnsmasq-dns-79bd4cc8c9-mb2zc\" (UID: \"d2c69005-c424-404e-9087-43c0fa3ca83c\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-mb2zc" Feb 03 12:37:31 crc kubenswrapper[4820]: I0203 12:37:31.197148 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d2c69005-c424-404e-9087-43c0fa3ca83c-ovsdbserver-sb\") pod \"dnsmasq-dns-79bd4cc8c9-mb2zc\" (UID: \"d2c69005-c424-404e-9087-43c0fa3ca83c\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-mb2zc" Feb 03 12:37:31 crc kubenswrapper[4820]: I0203 12:37:31.197195 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d2c69005-c424-404e-9087-43c0fa3ca83c-dns-svc\") pod \"dnsmasq-dns-79bd4cc8c9-mb2zc\" (UID: \"d2c69005-c424-404e-9087-43c0fa3ca83c\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-mb2zc" Feb 03 12:37:31 crc kubenswrapper[4820]: I0203 12:37:31.197221 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2c69005-c424-404e-9087-43c0fa3ca83c-config\") pod \"dnsmasq-dns-79bd4cc8c9-mb2zc\" (UID: \"d2c69005-c424-404e-9087-43c0fa3ca83c\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-mb2zc" Feb 03 12:37:31 crc kubenswrapper[4820]: I0203 12:37:31.197259 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v42cq\" (UniqueName: \"kubernetes.io/projected/d2c69005-c424-404e-9087-43c0fa3ca83c-kube-api-access-v42cq\") pod \"dnsmasq-dns-79bd4cc8c9-mb2zc\" (UID: \"d2c69005-c424-404e-9087-43c0fa3ca83c\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-mb2zc" Feb 03 12:37:31 crc kubenswrapper[4820]: I0203 12:37:31.198722 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/d2c69005-c424-404e-9087-43c0fa3ca83c-openstack-edpm-ipam\") pod \"dnsmasq-dns-79bd4cc8c9-mb2zc\" (UID: \"d2c69005-c424-404e-9087-43c0fa3ca83c\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-mb2zc" Feb 03 12:37:31 crc kubenswrapper[4820]: I0203 12:37:31.198813 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d2c69005-c424-404e-9087-43c0fa3ca83c-dns-swift-storage-0\") pod \"dnsmasq-dns-79bd4cc8c9-mb2zc\" (UID: \"d2c69005-c424-404e-9087-43c0fa3ca83c\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-mb2zc" Feb 03 12:37:31 crc kubenswrapper[4820]: I0203 12:37:31.199062 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2c69005-c424-404e-9087-43c0fa3ca83c-config\") pod \"dnsmasq-dns-79bd4cc8c9-mb2zc\" (UID: 
\"d2c69005-c424-404e-9087-43c0fa3ca83c\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-mb2zc" Feb 03 12:37:31 crc kubenswrapper[4820]: I0203 12:37:31.199256 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d2c69005-c424-404e-9087-43c0fa3ca83c-dns-svc\") pod \"dnsmasq-dns-79bd4cc8c9-mb2zc\" (UID: \"d2c69005-c424-404e-9087-43c0fa3ca83c\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-mb2zc" Feb 03 12:37:31 crc kubenswrapper[4820]: I0203 12:37:31.199294 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d2c69005-c424-404e-9087-43c0fa3ca83c-ovsdbserver-sb\") pod \"dnsmasq-dns-79bd4cc8c9-mb2zc\" (UID: \"d2c69005-c424-404e-9087-43c0fa3ca83c\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-mb2zc" Feb 03 12:37:31 crc kubenswrapper[4820]: I0203 12:37:31.200206 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d2c69005-c424-404e-9087-43c0fa3ca83c-ovsdbserver-nb\") pod \"dnsmasq-dns-79bd4cc8c9-mb2zc\" (UID: \"d2c69005-c424-404e-9087-43c0fa3ca83c\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-mb2zc" Feb 03 12:37:31 crc kubenswrapper[4820]: I0203 12:37:31.219623 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v42cq\" (UniqueName: \"kubernetes.io/projected/d2c69005-c424-404e-9087-43c0fa3ca83c-kube-api-access-v42cq\") pod \"dnsmasq-dns-79bd4cc8c9-mb2zc\" (UID: \"d2c69005-c424-404e-9087-43c0fa3ca83c\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-mb2zc" Feb 03 12:37:31 crc kubenswrapper[4820]: I0203 12:37:31.247224 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-79bd4cc8c9-mb2zc" Feb 03 12:37:31 crc kubenswrapper[4820]: W0203 12:37:31.772451 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd2c69005_c424_404e_9087_43c0fa3ca83c.slice/crio-3bb7597d3b0a1c74280395dd91539f3078038270b264b0a6923f0434262216a7 WatchSource:0}: Error finding container 3bb7597d3b0a1c74280395dd91539f3078038270b264b0a6923f0434262216a7: Status 404 returned error can't find the container with id 3bb7597d3b0a1c74280395dd91539f3078038270b264b0a6923f0434262216a7 Feb 03 12:37:31 crc kubenswrapper[4820]: I0203 12:37:31.774363 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-mb2zc"] Feb 03 12:37:31 crc kubenswrapper[4820]: I0203 12:37:31.980363 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79bd4cc8c9-mb2zc" event={"ID":"d2c69005-c424-404e-9087-43c0fa3ca83c","Type":"ContainerStarted","Data":"3bb7597d3b0a1c74280395dd91539f3078038270b264b0a6923f0434262216a7"} Feb 03 12:37:31 crc kubenswrapper[4820]: I0203 12:37:31.983854 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"ed109a9d-a703-4fa2-b7b3-0b96760d52b1","Type":"ContainerStarted","Data":"ac56fdc5c4e07f25525dd55e5a65c532ec636f22354acd0a0dd73348ce469f84"} Feb 03 12:37:32 crc kubenswrapper[4820]: I0203 12:37:32.999049 4820 generic.go:334] "Generic (PLEG): container finished" podID="d2c69005-c424-404e-9087-43c0fa3ca83c" containerID="a06fb59a6f171eb6ce50ab06db17de93df8a12e440bc9f1995ff8ecaa05e7118" exitCode=0 Feb 03 12:37:32 crc kubenswrapper[4820]: I0203 12:37:32.999156 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79bd4cc8c9-mb2zc" 
event={"ID":"d2c69005-c424-404e-9087-43c0fa3ca83c","Type":"ContainerDied","Data":"a06fb59a6f171eb6ce50ab06db17de93df8a12e440bc9f1995ff8ecaa05e7118"} Feb 03 12:37:34 crc kubenswrapper[4820]: I0203 12:37:34.132882 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79bd4cc8c9-mb2zc" event={"ID":"d2c69005-c424-404e-9087-43c0fa3ca83c","Type":"ContainerStarted","Data":"5417406d09f89a7f8feac81e024bc5637dee5d513ed83d5634da02ba7e34c49b"} Feb 03 12:37:34 crc kubenswrapper[4820]: I0203 12:37:34.133363 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-79bd4cc8c9-mb2zc" Feb 03 12:37:41 crc kubenswrapper[4820]: I0203 12:37:41.248062 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-79bd4cc8c9-mb2zc" Feb 03 12:37:41 crc kubenswrapper[4820]: I0203 12:37:41.279773 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-79bd4cc8c9-mb2zc" podStartSLOduration=11.279749707 podStartE2EDuration="11.279749707s" podCreationTimestamp="2026-02-03 12:37:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:37:34.176015056 +0000 UTC m=+1971.699090940" watchObservedRunningTime="2026-02-03 12:37:41.279749707 +0000 UTC m=+1978.802825571" Feb 03 12:37:41 crc kubenswrapper[4820]: I0203 12:37:41.341280 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-mrmgt"] Feb 03 12:37:41 crc kubenswrapper[4820]: I0203 12:37:41.341598 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-89c5cd4d5-mrmgt" podUID="a655153d-67ad-489c-b58e-3ddc02470bac" containerName="dnsmasq-dns" containerID="cri-o://aee9f76e0d96596fb9dd121e63a9139a9357ed5b155c56d68998e4d62a9bf664" gracePeriod=10 Feb 03 12:37:41 crc kubenswrapper[4820]: I0203 12:37:41.532549 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6cd9bffc9-kz5f5"] Feb 03 12:37:41 crc kubenswrapper[4820]: I0203 12:37:41.534520 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6cd9bffc9-kz5f5" Feb 03 12:37:41 crc kubenswrapper[4820]: I0203 12:37:41.550798 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6cd9bffc9-kz5f5"] Feb 03 12:37:41 crc kubenswrapper[4820]: I0203 12:37:41.689459 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a42a2742-e704-482e-ac37-5c948277f576-config\") pod \"dnsmasq-dns-6cd9bffc9-kz5f5\" (UID: \"a42a2742-e704-482e-ac37-5c948277f576\") " pod="openstack/dnsmasq-dns-6cd9bffc9-kz5f5" Feb 03 12:37:41 crc kubenswrapper[4820]: I0203 12:37:41.689602 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a42a2742-e704-482e-ac37-5c948277f576-dns-svc\") pod \"dnsmasq-dns-6cd9bffc9-kz5f5\" (UID: \"a42a2742-e704-482e-ac37-5c948277f576\") " pod="openstack/dnsmasq-dns-6cd9bffc9-kz5f5" Feb 03 12:37:41 crc kubenswrapper[4820]: I0203 12:37:41.689656 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xlxgz\" (UniqueName: \"kubernetes.io/projected/a42a2742-e704-482e-ac37-5c948277f576-kube-api-access-xlxgz\") pod \"dnsmasq-dns-6cd9bffc9-kz5f5\" (UID: \"a42a2742-e704-482e-ac37-5c948277f576\") " pod="openstack/dnsmasq-dns-6cd9bffc9-kz5f5" Feb 03 12:37:41 crc kubenswrapper[4820]: I0203 12:37:41.689694 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a42a2742-e704-482e-ac37-5c948277f576-ovsdbserver-nb\") pod \"dnsmasq-dns-6cd9bffc9-kz5f5\" (UID: \"a42a2742-e704-482e-ac37-5c948277f576\") " pod="openstack/dnsmasq-dns-6cd9bffc9-kz5f5" Feb 03 12:37:41 crc kubenswrapper[4820]: I0203 12:37:41.689772 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/a42a2742-e704-482e-ac37-5c948277f576-openstack-edpm-ipam\") pod \"dnsmasq-dns-6cd9bffc9-kz5f5\" (UID: \"a42a2742-e704-482e-ac37-5c948277f576\") " pod="openstack/dnsmasq-dns-6cd9bffc9-kz5f5" Feb 03 12:37:41 crc kubenswrapper[4820]: I0203 12:37:41.689911 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a42a2742-e704-482e-ac37-5c948277f576-ovsdbserver-sb\") pod \"dnsmasq-dns-6cd9bffc9-kz5f5\" (UID: \"a42a2742-e704-482e-ac37-5c948277f576\") " pod="openstack/dnsmasq-dns-6cd9bffc9-kz5f5" Feb 03 12:37:41 crc kubenswrapper[4820]: I0203 12:37:41.689950 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a42a2742-e704-482e-ac37-5c948277f576-dns-swift-storage-0\") pod \"dnsmasq-dns-6cd9bffc9-kz5f5\" (UID: \"a42a2742-e704-482e-ac37-5c948277f576\") " pod="openstack/dnsmasq-dns-6cd9bffc9-kz5f5" Feb 03 12:37:41 crc kubenswrapper[4820]: I0203 12:37:41.792626 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a42a2742-e704-482e-ac37-5c948277f576-ovsdbserver-sb\") pod \"dnsmasq-dns-6cd9bffc9-kz5f5\" (UID: \"a42a2742-e704-482e-ac37-5c948277f576\") " pod="openstack/dnsmasq-dns-6cd9bffc9-kz5f5" Feb 03 12:37:41 crc kubenswrapper[4820]: I0203 12:37:41.792682 4820 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a42a2742-e704-482e-ac37-5c948277f576-dns-swift-storage-0\") pod \"dnsmasq-dns-6cd9bffc9-kz5f5\" (UID: \"a42a2742-e704-482e-ac37-5c948277f576\") " pod="openstack/dnsmasq-dns-6cd9bffc9-kz5f5" Feb 03 12:37:41 crc kubenswrapper[4820]: I0203 12:37:41.792728 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a42a2742-e704-482e-ac37-5c948277f576-config\") pod \"dnsmasq-dns-6cd9bffc9-kz5f5\" (UID: \"a42a2742-e704-482e-ac37-5c948277f576\") " pod="openstack/dnsmasq-dns-6cd9bffc9-kz5f5" Feb 03 12:37:41 crc kubenswrapper[4820]: I0203 12:37:41.792828 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a42a2742-e704-482e-ac37-5c948277f576-dns-svc\") pod \"dnsmasq-dns-6cd9bffc9-kz5f5\" (UID: \"a42a2742-e704-482e-ac37-5c948277f576\") " pod="openstack/dnsmasq-dns-6cd9bffc9-kz5f5" Feb 03 12:37:41 crc kubenswrapper[4820]: I0203 12:37:41.792866 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xlxgz\" (UniqueName: \"kubernetes.io/projected/a42a2742-e704-482e-ac37-5c948277f576-kube-api-access-xlxgz\") pod \"dnsmasq-dns-6cd9bffc9-kz5f5\" (UID: \"a42a2742-e704-482e-ac37-5c948277f576\") " pod="openstack/dnsmasq-dns-6cd9bffc9-kz5f5" Feb 03 12:37:41 crc kubenswrapper[4820]: I0203 12:37:41.792904 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a42a2742-e704-482e-ac37-5c948277f576-ovsdbserver-nb\") pod \"dnsmasq-dns-6cd9bffc9-kz5f5\" (UID: \"a42a2742-e704-482e-ac37-5c948277f576\") " pod="openstack/dnsmasq-dns-6cd9bffc9-kz5f5" Feb 03 12:37:41 crc kubenswrapper[4820]: I0203 12:37:41.792964 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/a42a2742-e704-482e-ac37-5c948277f576-openstack-edpm-ipam\") pod \"dnsmasq-dns-6cd9bffc9-kz5f5\" (UID: \"a42a2742-e704-482e-ac37-5c948277f576\") " pod="openstack/dnsmasq-dns-6cd9bffc9-kz5f5" Feb 03 12:37:41 crc kubenswrapper[4820]: I0203 12:37:41.793998 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/a42a2742-e704-482e-ac37-5c948277f576-openstack-edpm-ipam\") pod \"dnsmasq-dns-6cd9bffc9-kz5f5\" (UID: \"a42a2742-e704-482e-ac37-5c948277f576\") " pod="openstack/dnsmasq-dns-6cd9bffc9-kz5f5" Feb 03 12:37:41 crc kubenswrapper[4820]: I0203 12:37:41.794055 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a42a2742-e704-482e-ac37-5c948277f576-dns-swift-storage-0\") pod \"dnsmasq-dns-6cd9bffc9-kz5f5\" (UID: \"a42a2742-e704-482e-ac37-5c948277f576\") " pod="openstack/dnsmasq-dns-6cd9bffc9-kz5f5" Feb 03 12:37:41 crc kubenswrapper[4820]: I0203 12:37:41.794051 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a42a2742-e704-482e-ac37-5c948277f576-config\") pod \"dnsmasq-dns-6cd9bffc9-kz5f5\" (UID: \"a42a2742-e704-482e-ac37-5c948277f576\") " pod="openstack/dnsmasq-dns-6cd9bffc9-kz5f5" Feb 03 12:37:41 crc kubenswrapper[4820]: I0203 12:37:41.794857 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" 
(UniqueName: \"kubernetes.io/configmap/a42a2742-e704-482e-ac37-5c948277f576-dns-svc\") pod \"dnsmasq-dns-6cd9bffc9-kz5f5\" (UID: \"a42a2742-e704-482e-ac37-5c948277f576\") " pod="openstack/dnsmasq-dns-6cd9bffc9-kz5f5" Feb 03 12:37:41 crc kubenswrapper[4820]: I0203 12:37:41.795002 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a42a2742-e704-482e-ac37-5c948277f576-ovsdbserver-nb\") pod \"dnsmasq-dns-6cd9bffc9-kz5f5\" (UID: \"a42a2742-e704-482e-ac37-5c948277f576\") " pod="openstack/dnsmasq-dns-6cd9bffc9-kz5f5" Feb 03 12:37:41 crc kubenswrapper[4820]: I0203 12:37:41.795670 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a42a2742-e704-482e-ac37-5c948277f576-ovsdbserver-sb\") pod \"dnsmasq-dns-6cd9bffc9-kz5f5\" (UID: \"a42a2742-e704-482e-ac37-5c948277f576\") " pod="openstack/dnsmasq-dns-6cd9bffc9-kz5f5" Feb 03 12:37:41 crc kubenswrapper[4820]: I0203 12:37:41.821592 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xlxgz\" (UniqueName: \"kubernetes.io/projected/a42a2742-e704-482e-ac37-5c948277f576-kube-api-access-xlxgz\") pod \"dnsmasq-dns-6cd9bffc9-kz5f5\" (UID: \"a42a2742-e704-482e-ac37-5c948277f576\") " pod="openstack/dnsmasq-dns-6cd9bffc9-kz5f5" Feb 03 12:37:41 crc kubenswrapper[4820]: I0203 12:37:41.891441 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6cd9bffc9-kz5f5" Feb 03 12:37:42 crc kubenswrapper[4820]: I0203 12:37:42.000265 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-89c5cd4d5-mrmgt" Feb 03 12:37:42 crc kubenswrapper[4820]: I0203 12:37:42.203388 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ss4pj\" (UniqueName: \"kubernetes.io/projected/a655153d-67ad-489c-b58e-3ddc02470bac-kube-api-access-ss4pj\") pod \"a655153d-67ad-489c-b58e-3ddc02470bac\" (UID: \"a655153d-67ad-489c-b58e-3ddc02470bac\") " Feb 03 12:37:42 crc kubenswrapper[4820]: I0203 12:37:42.203787 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a655153d-67ad-489c-b58e-3ddc02470bac-config\") pod \"a655153d-67ad-489c-b58e-3ddc02470bac\" (UID: \"a655153d-67ad-489c-b58e-3ddc02470bac\") " Feb 03 12:37:42 crc kubenswrapper[4820]: I0203 12:37:42.203919 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a655153d-67ad-489c-b58e-3ddc02470bac-ovsdbserver-sb\") pod \"a655153d-67ad-489c-b58e-3ddc02470bac\" (UID: \"a655153d-67ad-489c-b58e-3ddc02470bac\") " Feb 03 12:37:42 crc kubenswrapper[4820]: I0203 12:37:42.203978 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a655153d-67ad-489c-b58e-3ddc02470bac-dns-swift-storage-0\") pod \"a655153d-67ad-489c-b58e-3ddc02470bac\" (UID: \"a655153d-67ad-489c-b58e-3ddc02470bac\") " Feb 03 12:37:42 crc kubenswrapper[4820]: I0203 12:37:42.204079 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a655153d-67ad-489c-b58e-3ddc02470bac-dns-svc\") pod \"a655153d-67ad-489c-b58e-3ddc02470bac\" (UID: \"a655153d-67ad-489c-b58e-3ddc02470bac\") " Feb 03 12:37:42 crc kubenswrapper[4820]: I0203 
12:37:42.204126 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a655153d-67ad-489c-b58e-3ddc02470bac-ovsdbserver-nb\") pod \"a655153d-67ad-489c-b58e-3ddc02470bac\" (UID: \"a655153d-67ad-489c-b58e-3ddc02470bac\") " Feb 03 12:37:42 crc kubenswrapper[4820]: I0203 12:37:42.211810 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a655153d-67ad-489c-b58e-3ddc02470bac-kube-api-access-ss4pj" (OuterVolumeSpecName: "kube-api-access-ss4pj") pod "a655153d-67ad-489c-b58e-3ddc02470bac" (UID: "a655153d-67ad-489c-b58e-3ddc02470bac"). InnerVolumeSpecName "kube-api-access-ss4pj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:37:42 crc kubenswrapper[4820]: I0203 12:37:42.269304 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a655153d-67ad-489c-b58e-3ddc02470bac-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "a655153d-67ad-489c-b58e-3ddc02470bac" (UID: "a655153d-67ad-489c-b58e-3ddc02470bac"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:37:42 crc kubenswrapper[4820]: I0203 12:37:42.292215 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a655153d-67ad-489c-b58e-3ddc02470bac-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "a655153d-67ad-489c-b58e-3ddc02470bac" (UID: "a655153d-67ad-489c-b58e-3ddc02470bac"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:37:42 crc kubenswrapper[4820]: I0203 12:37:42.293043 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a655153d-67ad-489c-b58e-3ddc02470bac-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "a655153d-67ad-489c-b58e-3ddc02470bac" (UID: "a655153d-67ad-489c-b58e-3ddc02470bac"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:37:42 crc kubenswrapper[4820]: I0203 12:37:42.295740 4820 generic.go:334] "Generic (PLEG): container finished" podID="a655153d-67ad-489c-b58e-3ddc02470bac" containerID="aee9f76e0d96596fb9dd121e63a9139a9357ed5b155c56d68998e4d62a9bf664" exitCode=0 Feb 03 12:37:42 crc kubenswrapper[4820]: I0203 12:37:42.295807 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-mrmgt" event={"ID":"a655153d-67ad-489c-b58e-3ddc02470bac","Type":"ContainerDied","Data":"aee9f76e0d96596fb9dd121e63a9139a9357ed5b155c56d68998e4d62a9bf664"} Feb 03 12:37:42 crc kubenswrapper[4820]: I0203 12:37:42.295844 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-mrmgt" event={"ID":"a655153d-67ad-489c-b58e-3ddc02470bac","Type":"ContainerDied","Data":"5add7ab283120cd1e1095871edb85855ad6ad74c3dadf9fd3296444c83ea2a38"} Feb 03 12:37:42 crc kubenswrapper[4820]: I0203 12:37:42.295902 4820 scope.go:117] "RemoveContainer" containerID="aee9f76e0d96596fb9dd121e63a9139a9357ed5b155c56d68998e4d62a9bf664" Feb 03 12:37:42 crc kubenswrapper[4820]: I0203 12:37:42.296057 4820 util.go:48] "No ready sandbox for pod can be found. 
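
The "SyncLoop (PLEG): event for pod" entries above carry an event payload that happens to be valid JSON. A small Go sketch that decodes one such payload; the field names are taken from the log text, not from the kubelet's internal types:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    type plegEvent struct {
        ID   string // pod UID
        Type string // e.g. ContainerStarted, ContainerDied
        Data string // container or sandbox ID
    }

    func main() {
        raw := `{"ID":"a655153d-67ad-489c-b58e-3ddc02470bac","Type":"ContainerDied","Data":"aee9f76e0d96596fb9dd121e63a9139a9357ed5b155c56d68998e4d62a9bf664"}`
        var ev plegEvent
        if err := json.Unmarshal([]byte(raw), &ev); err != nil {
            panic(err)
        }
        fmt.Printf("pod %s: %s %s\n", ev.ID, ev.Type, ev.Data)
    }

Two ContainerDied events per pod are normal here: one for the dnsmasq-dns container, one for the pod sandbox itself.
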
Need to start a new one" pod="openstack/dnsmasq-dns-89c5cd4d5-mrmgt" Feb 03 12:37:42 crc kubenswrapper[4820]: I0203 12:37:42.297579 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a655153d-67ad-489c-b58e-3ddc02470bac-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a655153d-67ad-489c-b58e-3ddc02470bac" (UID: "a655153d-67ad-489c-b58e-3ddc02470bac"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:37:42 crc kubenswrapper[4820]: I0203 12:37:42.306738 4820 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a655153d-67ad-489c-b58e-3ddc02470bac-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 03 12:37:42 crc kubenswrapper[4820]: I0203 12:37:42.306775 4820 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a655153d-67ad-489c-b58e-3ddc02470bac-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 03 12:37:42 crc kubenswrapper[4820]: I0203 12:37:42.306787 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ss4pj\" (UniqueName: \"kubernetes.io/projected/a655153d-67ad-489c-b58e-3ddc02470bac-kube-api-access-ss4pj\") on node \"crc\" DevicePath \"\"" Feb 03 12:37:42 crc kubenswrapper[4820]: I0203 12:37:42.306797 4820 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a655153d-67ad-489c-b58e-3ddc02470bac-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 03 12:37:42 crc kubenswrapper[4820]: I0203 12:37:42.306805 4820 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a655153d-67ad-489c-b58e-3ddc02470bac-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 03 12:37:42 crc kubenswrapper[4820]: I0203 12:37:42.329071 4820 scope.go:117] "RemoveContainer" containerID="0d42bb656e77f0983241227008ea99a60d852dc79461d69e29b37ddb3135f109" Feb 03 12:37:42 crc kubenswrapper[4820]: I0203 12:37:42.342436 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a655153d-67ad-489c-b58e-3ddc02470bac-config" (OuterVolumeSpecName: "config") pod "a655153d-67ad-489c-b58e-3ddc02470bac" (UID: "a655153d-67ad-489c-b58e-3ddc02470bac"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:37:42 crc kubenswrapper[4820]: I0203 12:37:42.382796 4820 scope.go:117] "RemoveContainer" containerID="aee9f76e0d96596fb9dd121e63a9139a9357ed5b155c56d68998e4d62a9bf664" Feb 03 12:37:42 crc kubenswrapper[4820]: E0203 12:37:42.383528 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aee9f76e0d96596fb9dd121e63a9139a9357ed5b155c56d68998e4d62a9bf664\": container with ID starting with aee9f76e0d96596fb9dd121e63a9139a9357ed5b155c56d68998e4d62a9bf664 not found: ID does not exist" containerID="aee9f76e0d96596fb9dd121e63a9139a9357ed5b155c56d68998e4d62a9bf664" Feb 03 12:37:42 crc kubenswrapper[4820]: I0203 12:37:42.383572 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aee9f76e0d96596fb9dd121e63a9139a9357ed5b155c56d68998e4d62a9bf664"} err="failed to get container status \"aee9f76e0d96596fb9dd121e63a9139a9357ed5b155c56d68998e4d62a9bf664\": rpc error: code = NotFound desc = could not find container \"aee9f76e0d96596fb9dd121e63a9139a9357ed5b155c56d68998e4d62a9bf664\": container with ID starting with aee9f76e0d96596fb9dd121e63a9139a9357ed5b155c56d68998e4d62a9bf664 not found: ID does not exist" Feb 03 12:37:42 crc kubenswrapper[4820]: I0203 12:37:42.383604 4820 scope.go:117] "RemoveContainer" containerID="0d42bb656e77f0983241227008ea99a60d852dc79461d69e29b37ddb3135f109" Feb 03 12:37:42 crc kubenswrapper[4820]: E0203 12:37:42.384469 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0d42bb656e77f0983241227008ea99a60d852dc79461d69e29b37ddb3135f109\": container with ID starting with 0d42bb656e77f0983241227008ea99a60d852dc79461d69e29b37ddb3135f109 not found: ID does not exist" containerID="0d42bb656e77f0983241227008ea99a60d852dc79461d69e29b37ddb3135f109" Feb 03 12:37:42 crc kubenswrapper[4820]: I0203 12:37:42.384563 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0d42bb656e77f0983241227008ea99a60d852dc79461d69e29b37ddb3135f109"} err="failed to get container status \"0d42bb656e77f0983241227008ea99a60d852dc79461d69e29b37ddb3135f109\": rpc error: code = NotFound desc = could not find container \"0d42bb656e77f0983241227008ea99a60d852dc79461d69e29b37ddb3135f109\": container with ID starting with 0d42bb656e77f0983241227008ea99a60d852dc79461d69e29b37ddb3135f109 not found: ID does not exist" Feb 03 12:37:42 crc kubenswrapper[4820]: I0203 12:37:42.408696 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6cd9bffc9-kz5f5"] Feb 03 12:37:42 crc kubenswrapper[4820]: I0203 12:37:42.410416 4820 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a655153d-67ad-489c-b58e-3ddc02470bac-config\") on node \"crc\" DevicePath \"\"" Feb 03 12:37:43 crc kubenswrapper[4820]: I0203 12:37:43.070057 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-mrmgt"] Feb 03 12:37:43 crc kubenswrapper[4820]: I0203 12:37:43.091215 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-mrmgt"] Feb 03 12:37:43 crc kubenswrapper[4820]: I0203 12:37:43.190829 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a655153d-67ad-489c-b58e-3ddc02470bac" path="/var/lib/kubelet/pods/a655153d-67ad-489c-b58e-3ddc02470bac/volumes" Feb 03 12:37:43 crc 
kubenswrapper[4820]: I0203 12:37:43.349584 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6cd9bffc9-kz5f5" event={"ID":"a42a2742-e704-482e-ac37-5c948277f576","Type":"ContainerStarted","Data":"a9e65e488a031726fbc59ed641a0809c32ba554f2986644763e1458984f271d2"} Feb 03 12:37:44 crc kubenswrapper[4820]: I0203 12:37:44.361030 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"c2cfe24f-4614-4f48-867c-722af03baad7","Type":"ContainerStarted","Data":"74562e43cb301c62795a310ba490032eab426a206ce4e7ad31217ed4c501dce6"} Feb 03 12:37:44 crc kubenswrapper[4820]: I0203 12:37:44.365525 4820 generic.go:334] "Generic (PLEG): container finished" podID="a42a2742-e704-482e-ac37-5c948277f576" containerID="4487e6e6f4dd3beb90e34e76cdeb9b3683e613cf1ae19811103672810191ef76" exitCode=0 Feb 03 12:37:44 crc kubenswrapper[4820]: I0203 12:37:44.365575 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6cd9bffc9-kz5f5" event={"ID":"a42a2742-e704-482e-ac37-5c948277f576","Type":"ContainerDied","Data":"4487e6e6f4dd3beb90e34e76cdeb9b3683e613cf1ae19811103672810191ef76"} Feb 03 12:37:45 crc kubenswrapper[4820]: I0203 12:37:45.379063 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6cd9bffc9-kz5f5" event={"ID":"a42a2742-e704-482e-ac37-5c948277f576","Type":"ContainerStarted","Data":"989fdd32fe97aa32cb4d5c2a9566a9141dc5fab566a65e477510d719c106987e"} Feb 03 12:37:45 crc kubenswrapper[4820]: I0203 12:37:45.408532 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6cd9bffc9-kz5f5" podStartSLOduration=4.408507973 podStartE2EDuration="4.408507973s" podCreationTimestamp="2026-02-03 12:37:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:37:45.402591244 +0000 UTC m=+1982.925667128" watchObservedRunningTime="2026-02-03 12:37:45.408507973 +0000 UTC m=+1982.931583837" Feb 03 12:37:46 crc kubenswrapper[4820]: I0203 12:37:46.390142 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6cd9bffc9-kz5f5" Feb 03 12:37:51 crc kubenswrapper[4820]: I0203 12:37:51.893155 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6cd9bffc9-kz5f5" Feb 03 12:37:51 crc kubenswrapper[4820]: I0203 12:37:51.958089 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-mb2zc"] Feb 03 12:37:51 crc kubenswrapper[4820]: I0203 12:37:51.958471 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-79bd4cc8c9-mb2zc" podUID="d2c69005-c424-404e-9087-43c0fa3ca83c" containerName="dnsmasq-dns" containerID="cri-o://5417406d09f89a7f8feac81e024bc5637dee5d513ed83d5634da02ba7e34c49b" gracePeriod=10 Feb 03 12:37:52 crc kubenswrapper[4820]: I0203 12:37:52.558574 4820 generic.go:334] "Generic (PLEG): container finished" podID="d2c69005-c424-404e-9087-43c0fa3ca83c" containerID="5417406d09f89a7f8feac81e024bc5637dee5d513ed83d5634da02ba7e34c49b" exitCode=0 Feb 03 12:37:52 crc kubenswrapper[4820]: I0203 12:37:52.558977 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79bd4cc8c9-mb2zc" event={"ID":"d2c69005-c424-404e-9087-43c0fa3ca83c","Type":"ContainerDied","Data":"5417406d09f89a7f8feac81e024bc5637dee5d513ed83d5634da02ba7e34c49b"} Feb 03 12:37:52 crc kubenswrapper[4820]: I0203 
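
The paired "ContainerStatus from runtime service failed ... code = NotFound" errors above are a benign race: by the time the deletor re-queried the container it had already been removed, so the kubelet logs the error and moves on. A typical gRPC-client pattern for that (a sketch, not kubelet code; it uses the standard google.golang.org/grpc status helpers):

    package main

    import (
        "errors"
        "fmt"

        "google.golang.org/grpc/codes"
        "google.golang.org/grpc/status"
    )

    // alreadyGone treats a NotFound response as "the container is already
    // removed" rather than a hard failure.
    func alreadyGone(err error) bool {
        return status.Code(err) == codes.NotFound
    }

    func main() {
        err := status.Error(codes.NotFound, "could not find container")
        fmt.Println(alreadyGone(err))                 // true
        fmt.Println(alreadyGone(errors.New("other"))) // false: a real failure
    }
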
Feb 03 12:37:53 crc kubenswrapper[4820]: I0203 12:37:53.006321 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v42cq\" (UniqueName: \"kubernetes.io/projected/d2c69005-c424-404e-9087-43c0fa3ca83c-kube-api-access-v42cq\") pod \"d2c69005-c424-404e-9087-43c0fa3ca83c\" (UID: \"d2c69005-c424-404e-9087-43c0fa3ca83c\") "
Feb 03 12:37:53 crc kubenswrapper[4820]: I0203 12:37:53.007171 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2c69005-c424-404e-9087-43c0fa3ca83c-config\") pod \"d2c69005-c424-404e-9087-43c0fa3ca83c\" (UID: \"d2c69005-c424-404e-9087-43c0fa3ca83c\") "
Feb 03 12:37:53 crc kubenswrapper[4820]: I0203 12:37:53.007240 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d2c69005-c424-404e-9087-43c0fa3ca83c-ovsdbserver-sb\") pod \"d2c69005-c424-404e-9087-43c0fa3ca83c\" (UID: \"d2c69005-c424-404e-9087-43c0fa3ca83c\") "
Feb 03 12:37:53 crc kubenswrapper[4820]: I0203 12:37:53.007261 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d2c69005-c424-404e-9087-43c0fa3ca83c-dns-svc\") pod \"d2c69005-c424-404e-9087-43c0fa3ca83c\" (UID: \"d2c69005-c424-404e-9087-43c0fa3ca83c\") "
Feb 03 12:37:53 crc kubenswrapper[4820]: I0203 12:37:53.007311 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d2c69005-c424-404e-9087-43c0fa3ca83c-ovsdbserver-nb\") pod \"d2c69005-c424-404e-9087-43c0fa3ca83c\" (UID: \"d2c69005-c424-404e-9087-43c0fa3ca83c\") "
Feb 03 12:37:53 crc kubenswrapper[4820]: I0203 12:37:53.007382 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/d2c69005-c424-404e-9087-43c0fa3ca83c-openstack-edpm-ipam\") pod \"d2c69005-c424-404e-9087-43c0fa3ca83c\" (UID: \"d2c69005-c424-404e-9087-43c0fa3ca83c\") "
Feb 03 12:37:53 crc kubenswrapper[4820]: I0203 12:37:53.007424 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d2c69005-c424-404e-9087-43c0fa3ca83c-dns-swift-storage-0\") pod \"d2c69005-c424-404e-9087-43c0fa3ca83c\" (UID: \"d2c69005-c424-404e-9087-43c0fa3ca83c\") "
Feb 03 12:37:53 crc kubenswrapper[4820]: I0203 12:37:53.014641 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d2c69005-c424-404e-9087-43c0fa3ca83c-kube-api-access-v42cq" (OuterVolumeSpecName: "kube-api-access-v42cq") pod "d2c69005-c424-404e-9087-43c0fa3ca83c" (UID: "d2c69005-c424-404e-9087-43c0fa3ca83c"). InnerVolumeSpecName "kube-api-access-v42cq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 12:37:53 crc kubenswrapper[4820]: I0203 12:37:53.376409 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v42cq\" (UniqueName: \"kubernetes.io/projected/d2c69005-c424-404e-9087-43c0fa3ca83c-kube-api-access-v42cq\") on node \"crc\" DevicePath \"\""
Feb 03 12:37:53 crc kubenswrapper[4820]: I0203 12:37:53.445439 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d2c69005-c424-404e-9087-43c0fa3ca83c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d2c69005-c424-404e-9087-43c0fa3ca83c" (UID: "d2c69005-c424-404e-9087-43c0fa3ca83c"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 03 12:37:53 crc kubenswrapper[4820]: I0203 12:37:53.453293 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d2c69005-c424-404e-9087-43c0fa3ca83c-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d2c69005-c424-404e-9087-43c0fa3ca83c" (UID: "d2c69005-c424-404e-9087-43c0fa3ca83c"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 03 12:37:53 crc kubenswrapper[4820]: I0203 12:37:53.458768 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d2c69005-c424-404e-9087-43c0fa3ca83c-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "d2c69005-c424-404e-9087-43c0fa3ca83c" (UID: "d2c69005-c424-404e-9087-43c0fa3ca83c"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 03 12:37:53 crc kubenswrapper[4820]: I0203 12:37:53.474466 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d2c69005-c424-404e-9087-43c0fa3ca83c-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d2c69005-c424-404e-9087-43c0fa3ca83c" (UID: "d2c69005-c424-404e-9087-43c0fa3ca83c"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 03 12:37:53 crc kubenswrapper[4820]: I0203 12:37:53.475705 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d2c69005-c424-404e-9087-43c0fa3ca83c-config" (OuterVolumeSpecName: "config") pod "d2c69005-c424-404e-9087-43c0fa3ca83c" (UID: "d2c69005-c424-404e-9087-43c0fa3ca83c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 03 12:37:53 crc kubenswrapper[4820]: I0203 12:37:53.480434 4820 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2c69005-c424-404e-9087-43c0fa3ca83c-config\") on node \"crc\" DevicePath \"\""
Feb 03 12:37:53 crc kubenswrapper[4820]: I0203 12:37:53.480471 4820 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d2c69005-c424-404e-9087-43c0fa3ca83c-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Feb 03 12:37:53 crc kubenswrapper[4820]: I0203 12:37:53.480489 4820 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d2c69005-c424-404e-9087-43c0fa3ca83c-dns-svc\") on node \"crc\" DevicePath \"\""
Feb 03 12:37:53 crc kubenswrapper[4820]: I0203 12:37:53.480504 4820 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d2c69005-c424-404e-9087-43c0fa3ca83c-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Feb 03 12:37:53 crc kubenswrapper[4820]: I0203 12:37:53.480516 4820 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d2c69005-c424-404e-9087-43c0fa3ca83c-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Feb 03 12:37:53 crc kubenswrapper[4820]: I0203 12:37:53.492500 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d2c69005-c424-404e-9087-43c0fa3ca83c-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "d2c69005-c424-404e-9087-43c0fa3ca83c" (UID: "d2c69005-c424-404e-9087-43c0fa3ca83c"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 03 12:37:53 crc kubenswrapper[4820]: I0203 12:37:53.573177 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79bd4cc8c9-mb2zc" event={"ID":"d2c69005-c424-404e-9087-43c0fa3ca83c","Type":"ContainerDied","Data":"3bb7597d3b0a1c74280395dd91539f3078038270b264b0a6923f0434262216a7"}
Feb 03 12:37:53 crc kubenswrapper[4820]: I0203 12:37:53.573234 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-79bd4cc8c9-mb2zc"
Need to start a new one" pod="openstack/dnsmasq-dns-79bd4cc8c9-mb2zc" Feb 03 12:37:53 crc kubenswrapper[4820]: I0203 12:37:53.573280 4820 scope.go:117] "RemoveContainer" containerID="5417406d09f89a7f8feac81e024bc5637dee5d513ed83d5634da02ba7e34c49b" Feb 03 12:37:53 crc kubenswrapper[4820]: I0203 12:37:53.582461 4820 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/d2c69005-c424-404e-9087-43c0fa3ca83c-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 03 12:37:53 crc kubenswrapper[4820]: I0203 12:37:53.604260 4820 scope.go:117] "RemoveContainer" containerID="a06fb59a6f171eb6ce50ab06db17de93df8a12e440bc9f1995ff8ecaa05e7118" Feb 03 12:37:53 crc kubenswrapper[4820]: I0203 12:37:53.614077 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-mb2zc"] Feb 03 12:37:53 crc kubenswrapper[4820]: I0203 12:37:53.630670 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-mb2zc"] Feb 03 12:37:55 crc kubenswrapper[4820]: I0203 12:37:55.157761 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d2c69005-c424-404e-9087-43c0fa3ca83c" path="/var/lib/kubelet/pods/d2c69005-c424-404e-9087-43c0fa3ca83c/volumes" Feb 03 12:38:04 crc kubenswrapper[4820]: I0203 12:38:04.371048 4820 generic.go:334] "Generic (PLEG): container finished" podID="ed109a9d-a703-4fa2-b7b3-0b96760d52b1" containerID="ac56fdc5c4e07f25525dd55e5a65c532ec636f22354acd0a0dd73348ce469f84" exitCode=0 Feb 03 12:38:04 crc kubenswrapper[4820]: I0203 12:38:04.371181 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"ed109a9d-a703-4fa2-b7b3-0b96760d52b1","Type":"ContainerDied","Data":"ac56fdc5c4e07f25525dd55e5a65c532ec636f22354acd0a0dd73348ce469f84"} Feb 03 12:38:05 crc kubenswrapper[4820]: I0203 12:38:05.396624 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"ed109a9d-a703-4fa2-b7b3-0b96760d52b1","Type":"ContainerStarted","Data":"e38f30c1c8ce823b0c731af923bce724086a4ad4f2278c83b9811d8df093cf70"} Feb 03 12:38:05 crc kubenswrapper[4820]: I0203 12:38:05.400282 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Feb 03 12:38:05 crc kubenswrapper[4820]: I0203 12:38:05.440264 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=38.440224239 podStartE2EDuration="38.440224239s" podCreationTimestamp="2026-02-03 12:37:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:38:05.434692559 +0000 UTC m=+2002.957768453" watchObservedRunningTime="2026-02-03 12:38:05.440224239 +0000 UTC m=+2002.963300133" Feb 03 12:38:10 crc kubenswrapper[4820]: I0203 12:38:10.587210 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-dkdr4"] Feb 03 12:38:10 crc kubenswrapper[4820]: E0203 12:38:10.588425 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2c69005-c424-404e-9087-43c0fa3ca83c" containerName="init" Feb 03 12:38:10 crc kubenswrapper[4820]: I0203 12:38:10.588456 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2c69005-c424-404e-9087-43c0fa3ca83c" containerName="init" Feb 03 12:38:10 crc kubenswrapper[4820]: E0203 12:38:10.588481 4820 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="d2c69005-c424-404e-9087-43c0fa3ca83c" containerName="dnsmasq-dns" Feb 03 12:38:10 crc kubenswrapper[4820]: I0203 12:38:10.588489 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2c69005-c424-404e-9087-43c0fa3ca83c" containerName="dnsmasq-dns" Feb 03 12:38:10 crc kubenswrapper[4820]: E0203 12:38:10.588513 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a655153d-67ad-489c-b58e-3ddc02470bac" containerName="init" Feb 03 12:38:10 crc kubenswrapper[4820]: I0203 12:38:10.588521 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="a655153d-67ad-489c-b58e-3ddc02470bac" containerName="init" Feb 03 12:38:10 crc kubenswrapper[4820]: E0203 12:38:10.588553 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a655153d-67ad-489c-b58e-3ddc02470bac" containerName="dnsmasq-dns" Feb 03 12:38:10 crc kubenswrapper[4820]: I0203 12:38:10.588561 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="a655153d-67ad-489c-b58e-3ddc02470bac" containerName="dnsmasq-dns" Feb 03 12:38:10 crc kubenswrapper[4820]: I0203 12:38:10.588855 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="d2c69005-c424-404e-9087-43c0fa3ca83c" containerName="dnsmasq-dns" Feb 03 12:38:10 crc kubenswrapper[4820]: I0203 12:38:10.588922 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="a655153d-67ad-489c-b58e-3ddc02470bac" containerName="dnsmasq-dns" Feb 03 12:38:10 crc kubenswrapper[4820]: I0203 12:38:10.590014 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-dkdr4" Feb 03 12:38:10 crc kubenswrapper[4820]: I0203 12:38:10.595045 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 03 12:38:10 crc kubenswrapper[4820]: I0203 12:38:10.595264 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 03 12:38:10 crc kubenswrapper[4820]: I0203 12:38:10.596591 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-q6jrk" Feb 03 12:38:10 crc kubenswrapper[4820]: I0203 12:38:10.596726 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 03 12:38:10 crc kubenswrapper[4820]: I0203 12:38:10.607685 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-dkdr4"] Feb 03 12:38:10 crc kubenswrapper[4820]: I0203 12:38:10.720596 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d8d69bce-1404-4fce-ab56-a8d4c9f46b28-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-dkdr4\" (UID: \"d8d69bce-1404-4fce-ab56-a8d4c9f46b28\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-dkdr4" Feb 03 12:38:10 crc kubenswrapper[4820]: I0203 12:38:10.720851 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8d69bce-1404-4fce-ab56-a8d4c9f46b28-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-dkdr4\" (UID: \"d8d69bce-1404-4fce-ab56-a8d4c9f46b28\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-dkdr4" Feb 03 12:38:10 crc kubenswrapper[4820]: 
I0203 12:38:10.721214 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xcpfd\" (UniqueName: \"kubernetes.io/projected/d8d69bce-1404-4fce-ab56-a8d4c9f46b28-kube-api-access-xcpfd\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-dkdr4\" (UID: \"d8d69bce-1404-4fce-ab56-a8d4c9f46b28\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-dkdr4" Feb 03 12:38:10 crc kubenswrapper[4820]: I0203 12:38:10.721455 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d8d69bce-1404-4fce-ab56-a8d4c9f46b28-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-dkdr4\" (UID: \"d8d69bce-1404-4fce-ab56-a8d4c9f46b28\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-dkdr4" Feb 03 12:38:10 crc kubenswrapper[4820]: I0203 12:38:10.823294 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xcpfd\" (UniqueName: \"kubernetes.io/projected/d8d69bce-1404-4fce-ab56-a8d4c9f46b28-kube-api-access-xcpfd\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-dkdr4\" (UID: \"d8d69bce-1404-4fce-ab56-a8d4c9f46b28\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-dkdr4" Feb 03 12:38:10 crc kubenswrapper[4820]: I0203 12:38:10.823428 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d8d69bce-1404-4fce-ab56-a8d4c9f46b28-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-dkdr4\" (UID: \"d8d69bce-1404-4fce-ab56-a8d4c9f46b28\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-dkdr4" Feb 03 12:38:10 crc kubenswrapper[4820]: I0203 12:38:10.823570 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d8d69bce-1404-4fce-ab56-a8d4c9f46b28-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-dkdr4\" (UID: \"d8d69bce-1404-4fce-ab56-a8d4c9f46b28\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-dkdr4" Feb 03 12:38:10 crc kubenswrapper[4820]: I0203 12:38:10.823725 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8d69bce-1404-4fce-ab56-a8d4c9f46b28-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-dkdr4\" (UID: \"d8d69bce-1404-4fce-ab56-a8d4c9f46b28\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-dkdr4" Feb 03 12:38:10 crc kubenswrapper[4820]: I0203 12:38:10.830178 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d8d69bce-1404-4fce-ab56-a8d4c9f46b28-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-dkdr4\" (UID: \"d8d69bce-1404-4fce-ab56-a8d4c9f46b28\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-dkdr4" Feb 03 12:38:10 crc kubenswrapper[4820]: I0203 12:38:10.830522 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d8d69bce-1404-4fce-ab56-a8d4c9f46b28-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-dkdr4\" (UID: \"d8d69bce-1404-4fce-ab56-a8d4c9f46b28\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-dkdr4" Feb 03 12:38:10 
Feb 03 12:38:10 crc kubenswrapper[4820]: I0203 12:38:10.850614 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xcpfd\" (UniqueName: \"kubernetes.io/projected/d8d69bce-1404-4fce-ab56-a8d4c9f46b28-kube-api-access-xcpfd\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-dkdr4\" (UID: \"d8d69bce-1404-4fce-ab56-a8d4c9f46b28\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-dkdr4"
Feb 03 12:38:10 crc kubenswrapper[4820]: I0203 12:38:10.913450 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-dkdr4"
Feb 03 12:38:11 crc kubenswrapper[4820]: I0203 12:38:11.708529 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-dkdr4"]
Feb 03 12:38:12 crc kubenswrapper[4820]: I0203 12:38:12.473153 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-dkdr4" event={"ID":"d8d69bce-1404-4fce-ab56-a8d4c9f46b28","Type":"ContainerStarted","Data":"9aff887094e045a46a9357ea548f387e1d1e6bab8fafabe6cf0c261101e430a2"}
Feb 03 12:38:17 crc kubenswrapper[4820]: I0203 12:38:17.548136 4820 generic.go:334] "Generic (PLEG): container finished" podID="c2cfe24f-4614-4f48-867c-722af03baad7" containerID="74562e43cb301c62795a310ba490032eab426a206ce4e7ad31217ed4c501dce6" exitCode=0
Feb 03 12:38:17 crc kubenswrapper[4820]: I0203 12:38:17.548238 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"c2cfe24f-4614-4f48-867c-722af03baad7","Type":"ContainerDied","Data":"74562e43cb301c62795a310ba490032eab426a206ce4e7ad31217ed4c501dce6"}
Feb 03 12:38:18 crc kubenswrapper[4820]: I0203 12:38:18.082321 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0"
Feb 03 12:38:24 crc kubenswrapper[4820]: I0203 12:38:24.845081 4820 scope.go:117] "RemoveContainer" containerID="4a53813e41c254410fc09ac01ef8c86d93edb0f17f0e4248ee1cd9f77a8c295a"
Feb 03 12:38:26 crc kubenswrapper[4820]: E0203 12:38:26.622815 4820 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/openstack-ansibleee-runner:latest"
Feb 03 12:38:26 crc kubenswrapper[4820]: I0203 12:38:26.623146 4820 scope.go:117] "RemoveContainer" containerID="3c539d00fce621b18344b74f0c49d894626d0364db7118e68c2f1ca3ce327a39"
Feb 03 12:38:26 crc kubenswrapper[4820]: E0203 12:38:26.623426 4820 kuberuntime_manager.go:1274] "Unhandled Error" err=<
Feb 03 12:38:26 crc kubenswrapper[4820]: container &Container{Name:repo-setup-edpm-deployment-openstack-edpm-ipam,Image:quay.io/openstack-k8s-operators/openstack-ansibleee-runner:latest,Command:[],Args:[ansible-runner run /runner -p playbook.yaml -i repo-setup-edpm-deployment-openstack-edpm-ipam],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ANSIBLE_VERBOSITY,Value:2,ValueFrom:nil,},EnvVar{Name:RUNNER_PLAYBOOK,Value:
Feb 03 12:38:26 crc kubenswrapper[4820]: - hosts: all
Feb 03 12:38:26 crc kubenswrapper[4820]:   strategy: linear
Feb 03 12:38:26 crc kubenswrapper[4820]:   tasks:
Feb 03 12:38:26 crc kubenswrapper[4820]:     - name: Enable podified-repos
Feb 03 12:38:26 crc kubenswrapper[4820]:       become: true
Feb 03 12:38:26 crc kubenswrapper[4820]:       ansible.builtin.shell: |
Feb 03 12:38:26 crc kubenswrapper[4820]:         set -euxo pipefail
Feb 03 12:38:26 crc kubenswrapper[4820]:         pushd /var/tmp
Feb 03 12:38:26 crc kubenswrapper[4820]:         curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz
Feb 03 12:38:26 crc kubenswrapper[4820]:         pushd repo-setup-main
Feb 03 12:38:26 crc kubenswrapper[4820]:         python3 -m venv ./venv
Feb 03 12:38:26 crc kubenswrapper[4820]:         PBR_VERSION=0.0.0 ./venv/bin/pip install ./
Feb 03 12:38:26 crc kubenswrapper[4820]:         ./venv/bin/repo-setup current-podified -b antelope
Feb 03 12:38:26 crc kubenswrapper[4820]:         popd
Feb 03 12:38:26 crc kubenswrapper[4820]:         rm -rf repo-setup-main
Feb 03 12:38:26 crc kubenswrapper[4820]:
Feb 03 12:38:26 crc kubenswrapper[4820]:
Feb 03 12:38:26 crc kubenswrapper[4820]: ,ValueFrom:nil,},EnvVar{Name:RUNNER_EXTRA_VARS,Value:
Feb 03 12:38:26 crc kubenswrapper[4820]: edpm_override_hosts: openstack-edpm-ipam
Feb 03 12:38:26 crc kubenswrapper[4820]: edpm_service_type: repo-setup
Feb 03 12:38:26 crc kubenswrapper[4820]:
Feb 03 12:38:26 crc kubenswrapper[4820]:
Feb 03 12:38:26 crc kubenswrapper[4820]: ,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:repo-setup-combined-ca-bundle,ReadOnly:false,MountPath:/var/lib/openstack/cacerts/repo-setup,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key-openstack-edpm-ipam,ReadOnly:false,MountPath:/runner/env/ssh_key/ssh_key_openstack-edpm-ipam,SubPath:ssh_key_openstack-edpm-ipam,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:inventory,ReadOnly:false,MountPath:/runner/inventory/hosts,SubPath:inventory,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xcpfd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:openstack-aee-default-env,},Optional:*true,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod repo-setup-edpm-deployment-openstack-edpm-ipam-dkdr4_openstack(d8d69bce-1404-4fce-ab56-a8d4c9f46b28): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled
Feb 03 12:38:26 crc kubenswrapper[4820]: > logger="UnhandledError"
Feb 03 12:38:26 crc kubenswrapper[4820]: E0203 12:38:26.624576 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"repo-setup-edpm-deployment-openstack-edpm-ipam\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-dkdr4" podUID="d8d69bce-1404-4fce-ab56-a8d4c9f46b28"
Feb 03 12:38:26 crc kubenswrapper[4820]: E0203 12:38:26.713050 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"repo-setup-edpm-deployment-openstack-edpm-ipam\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-ansibleee-runner:latest\\\"\"" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-dkdr4" podUID="d8d69bce-1404-4fce-ab56-a8d4c9f46b28"
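[Editor's note] The two pod_workers errors above show the kubelet's image-pull retry path: the first sync fails with ErrImagePull (the CRI pull was canceled mid-copy), and the very next sync fails with ImagePullBackOff because retries for that image are now rate-limited. The kubelet applies a capped exponential backoff per image; the sketch below illustrates the pattern, with the 10-second initial delay and 5-minute cap being kubelet's commonly cited defaults rather than values taken from this log:

```go
// Minimal sketch of the capped exponential backoff behind ImagePullBackOff.
// Treat the exact delay values as assumptions, not a contract.
package main

import (
	"fmt"
	"time"
)

func main() {
	delay, maxDelay := 10*time.Second, 5*time.Minute
	for attempt := 1; attempt <= 6; attempt++ {
		fmt.Printf("attempt %d: back off %v before retrying the pull\n", attempt, delay)
		delay *= 2 // double on each failure...
		if delay > maxDelay {
			delay = maxDelay // ...up to the cap
		}
	}
}
```

In this trace the backoff resolves on its own: the pull eventually succeeds and the container starts at 12:38:39 below.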
\"repo-setup-edpm-deployment-openstack-edpm-ipam\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-dkdr4" podUID="d8d69bce-1404-4fce-ab56-a8d4c9f46b28" Feb 03 12:38:26 crc kubenswrapper[4820]: E0203 12:38:26.713050 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"repo-setup-edpm-deployment-openstack-edpm-ipam\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-ansibleee-runner:latest\\\"\"" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-dkdr4" podUID="d8d69bce-1404-4fce-ab56-a8d4c9f46b28" Feb 03 12:38:26 crc kubenswrapper[4820]: I0203 12:38:26.835791 4820 scope.go:117] "RemoveContainer" containerID="c00ad844e32c489174381c6526e5130ed9386fb05d2b64492830635833bef5b3" Feb 03 12:38:26 crc kubenswrapper[4820]: I0203 12:38:26.926012 4820 scope.go:117] "RemoveContainer" containerID="01f33736537d1d861f9cbc69acabf2f7c348172743098ad018a815dcacf58cfe" Feb 03 12:38:26 crc kubenswrapper[4820]: I0203 12:38:26.952682 4820 scope.go:117] "RemoveContainer" containerID="618cfceb67ae402aadfac828e372715348e71b2bceffa2d52373046aba9a6cca" Feb 03 12:38:26 crc kubenswrapper[4820]: I0203 12:38:26.978512 4820 scope.go:117] "RemoveContainer" containerID="c10ede440eb95f68092eed228fe7a5cbe1cfc99cc437c5af3a0964e3f2b6c398" Feb 03 12:38:27 crc kubenswrapper[4820]: I0203 12:38:27.007144 4820 scope.go:117] "RemoveContainer" containerID="d612338d650e9185ebb3ad6d02cb11504c7cbe592261cf4c4e977d3faf21db66" Feb 03 12:38:27 crc kubenswrapper[4820]: I0203 12:38:27.729247 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"c2cfe24f-4614-4f48-867c-722af03baad7","Type":"ContainerStarted","Data":"805780706fc53ed508408f9772b287086c220c82c9672d2897e9f93752caf6a5"} Feb 03 12:38:27 crc kubenswrapper[4820]: I0203 12:38:27.730619 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Feb 03 12:38:27 crc kubenswrapper[4820]: I0203 12:38:27.779953 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=60.779932327 podStartE2EDuration="1m0.779932327s" podCreationTimestamp="2026-02-03 12:37:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:38:27.7662948 +0000 UTC m=+2025.289370684" watchObservedRunningTime="2026-02-03 12:38:27.779932327 +0000 UTC m=+2025.303008191" Feb 03 12:38:37 crc kubenswrapper[4820]: I0203 12:38:37.914151 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Feb 03 12:38:39 crc kubenswrapper[4820]: I0203 12:38:39.864652 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-dkdr4" event={"ID":"d8d69bce-1404-4fce-ab56-a8d4c9f46b28","Type":"ContainerStarted","Data":"54bf82c9a842c9db3e5e1497b0f5b1769a62eee0149fe246e6ce3e0dc70138c5"} Feb 03 12:38:39 crc kubenswrapper[4820]: I0203 12:38:39.899444 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-dkdr4" podStartSLOduration=2.926259371 podStartE2EDuration="29.899407405s" podCreationTimestamp="2026-02-03 12:38:10 +0000 UTC" firstStartedPulling="2026-02-03 12:38:11.717921381 +0000 UTC 
Feb 03 12:38:49 crc kubenswrapper[4820]: I0203 12:38:49.058834 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-49a9-account-create-update-fpqm2"]
Feb 03 12:38:49 crc kubenswrapper[4820]: I0203 12:38:49.076620 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-49a9-account-create-update-fpqm2"]
Feb 03 12:38:49 crc kubenswrapper[4820]: I0203 12:38:49.153866 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb6cc3c6-65e8-4def-9490-6d1a4a5f13eb" path="/var/lib/kubelet/pods/eb6cc3c6-65e8-4def-9490-6d1a4a5f13eb/volumes"
Feb 03 12:38:50 crc kubenswrapper[4820]: I0203 12:38:50.037637 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-db-create-6fs49"]
Feb 03 12:38:50 crc kubenswrapper[4820]: I0203 12:38:50.051539 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-863d-account-create-update-sj894"]
Feb 03 12:38:50 crc kubenswrapper[4820]: I0203 12:38:50.069498 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-db-create-6fs49"]
Feb 03 12:38:50 crc kubenswrapper[4820]: I0203 12:38:50.081345 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-863d-account-create-update-sj894"]
Feb 03 12:38:51 crc kubenswrapper[4820]: I0203 12:38:51.173391 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bbdce215-5dd4-4a45-a099-ac2b51edf843" path="/var/lib/kubelet/pods/bbdce215-5dd4-4a45-a099-ac2b51edf843/volumes"
Feb 03 12:38:51 crc kubenswrapper[4820]: I0203 12:38:51.175279 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e928b295-8806-4a21-aaf7-d59749562244" path="/var/lib/kubelet/pods/e928b295-8806-4a21-aaf7-d59749562244/volumes"
Feb 03 12:38:52 crc kubenswrapper[4820]: I0203 12:38:52.059782 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-6g6lc"]
Feb 03 12:38:52 crc kubenswrapper[4820]: I0203 12:38:52.070659 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-6g6lc"]
Feb 03 12:38:53 crc kubenswrapper[4820]: I0203 12:38:53.008641 4820 generic.go:334] "Generic (PLEG): container finished" podID="d8d69bce-1404-4fce-ab56-a8d4c9f46b28" containerID="54bf82c9a842c9db3e5e1497b0f5b1769a62eee0149fe246e6ce3e0dc70138c5" exitCode=0
Feb 03 12:38:53 crc kubenswrapper[4820]: I0203 12:38:53.008768 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-dkdr4" event={"ID":"d8d69bce-1404-4fce-ab56-a8d4c9f46b28","Type":"ContainerDied","Data":"54bf82c9a842c9db3e5e1497b0f5b1769a62eee0149fe246e6ce3e0dc70138c5"}
Feb 03 12:38:53 crc kubenswrapper[4820]: I0203 12:38:53.155327 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1336b61c-ed56-40f3-b2cd-1d476b33459b" path="/var/lib/kubelet/pods/1336b61c-ed56-40f3-b2cd-1d476b33459b/volumes"
Feb 03 12:38:55 crc kubenswrapper[4820]: I0203 12:38:55.036131 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-dkdr4" event={"ID":"d8d69bce-1404-4fce-ab56-a8d4c9f46b28","Type":"ContainerDied","Data":"9aff887094e045a46a9357ea548f387e1d1e6bab8fafabe6cf0c261101e430a2"}
Feb 03 12:38:55 crc kubenswrapper[4820]: I0203 12:38:55.036524 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9aff887094e045a46a9357ea548f387e1d1e6bab8fafabe6cf0c261101e430a2"
Feb 03 12:38:55 crc kubenswrapper[4820]: I0203 12:38:55.120666 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-dkdr4"
Feb 03 12:38:55 crc kubenswrapper[4820]: I0203 12:38:55.283357 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d8d69bce-1404-4fce-ab56-a8d4c9f46b28-ssh-key-openstack-edpm-ipam\") pod \"d8d69bce-1404-4fce-ab56-a8d4c9f46b28\" (UID: \"d8d69bce-1404-4fce-ab56-a8d4c9f46b28\") "
Feb 03 12:38:55 crc kubenswrapper[4820]: I0203 12:38:55.283499 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcpfd\" (UniqueName: \"kubernetes.io/projected/d8d69bce-1404-4fce-ab56-a8d4c9f46b28-kube-api-access-xcpfd\") pod \"d8d69bce-1404-4fce-ab56-a8d4c9f46b28\" (UID: \"d8d69bce-1404-4fce-ab56-a8d4c9f46b28\") "
Feb 03 12:38:55 crc kubenswrapper[4820]: I0203 12:38:55.283572 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8d69bce-1404-4fce-ab56-a8d4c9f46b28-repo-setup-combined-ca-bundle\") pod \"d8d69bce-1404-4fce-ab56-a8d4c9f46b28\" (UID: \"d8d69bce-1404-4fce-ab56-a8d4c9f46b28\") "
Feb 03 12:38:55 crc kubenswrapper[4820]: I0203 12:38:55.283744 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d8d69bce-1404-4fce-ab56-a8d4c9f46b28-inventory\") pod \"d8d69bce-1404-4fce-ab56-a8d4c9f46b28\" (UID: \"d8d69bce-1404-4fce-ab56-a8d4c9f46b28\") "
Feb 03 12:38:55 crc kubenswrapper[4820]: I0203 12:38:55.289284 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8d69bce-1404-4fce-ab56-a8d4c9f46b28-kube-api-access-xcpfd" (OuterVolumeSpecName: "kube-api-access-xcpfd") pod "d8d69bce-1404-4fce-ab56-a8d4c9f46b28" (UID: "d8d69bce-1404-4fce-ab56-a8d4c9f46b28"). InnerVolumeSpecName "kube-api-access-xcpfd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 12:38:55 crc kubenswrapper[4820]: I0203 12:38:55.291097 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8d69bce-1404-4fce-ab56-a8d4c9f46b28-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "d8d69bce-1404-4fce-ab56-a8d4c9f46b28" (UID: "d8d69bce-1404-4fce-ab56-a8d4c9f46b28"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 12:38:55 crc kubenswrapper[4820]: I0203 12:38:55.316761 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8d69bce-1404-4fce-ab56-a8d4c9f46b28-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "d8d69bce-1404-4fce-ab56-a8d4c9f46b28" (UID: "d8d69bce-1404-4fce-ab56-a8d4c9f46b28"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 12:38:55 crc kubenswrapper[4820]: I0203 12:38:55.318646 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8d69bce-1404-4fce-ab56-a8d4c9f46b28-inventory" (OuterVolumeSpecName: "inventory") pod "d8d69bce-1404-4fce-ab56-a8d4c9f46b28" (UID: "d8d69bce-1404-4fce-ab56-a8d4c9f46b28"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 12:38:55 crc kubenswrapper[4820]: I0203 12:38:55.389688 4820 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d8d69bce-1404-4fce-ab56-a8d4c9f46b28-inventory\") on node \"crc\" DevicePath \"\""
Feb 03 12:38:55 crc kubenswrapper[4820]: I0203 12:38:55.389738 4820 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d8d69bce-1404-4fce-ab56-a8d4c9f46b28-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Feb 03 12:38:55 crc kubenswrapper[4820]: I0203 12:38:55.389757 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcpfd\" (UniqueName: \"kubernetes.io/projected/d8d69bce-1404-4fce-ab56-a8d4c9f46b28-kube-api-access-xcpfd\") on node \"crc\" DevicePath \"\""
Feb 03 12:38:55 crc kubenswrapper[4820]: I0203 12:38:55.389768 4820 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8d69bce-1404-4fce-ab56-a8d4c9f46b28-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 03 12:38:56 crc kubenswrapper[4820]: I0203 12:38:56.047516 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-dkdr4"
Feb 03 12:38:56 crc kubenswrapper[4820]: I0203 12:38:56.253247 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-5gdkr"]
Feb 03 12:38:56 crc kubenswrapper[4820]: E0203 12:38:56.253951 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8d69bce-1404-4fce-ab56-a8d4c9f46b28" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam"
Feb 03 12:38:56 crc kubenswrapper[4820]: I0203 12:38:56.253979 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8d69bce-1404-4fce-ab56-a8d4c9f46b28" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam"
Feb 03 12:38:56 crc kubenswrapper[4820]: I0203 12:38:56.254219 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8d69bce-1404-4fce-ab56-a8d4c9f46b28" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam"
Feb 03 12:38:56 crc kubenswrapper[4820]: I0203 12:38:56.255000 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-5gdkr"
Feb 03 12:38:56 crc kubenswrapper[4820]: I0203 12:38:56.262963 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Feb 03 12:38:56 crc kubenswrapper[4820]: I0203 12:38:56.263517 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Feb 03 12:38:56 crc kubenswrapper[4820]: I0203 12:38:56.267063 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-q6jrk"
Feb 03 12:38:56 crc kubenswrapper[4820]: I0203 12:38:56.267207 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Feb 03 12:38:56 crc kubenswrapper[4820]: I0203 12:38:56.268152 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-5gdkr"]
Feb 03 12:38:56 crc kubenswrapper[4820]: I0203 12:38:56.309580 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a7717d9c-63f8-493f-be01-0fdea46ef053-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-5gdkr\" (UID: \"a7717d9c-63f8-493f-be01-0fdea46ef053\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-5gdkr"
Feb 03 12:38:56 crc kubenswrapper[4820]: I0203 12:38:56.309787 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a7717d9c-63f8-493f-be01-0fdea46ef053-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-5gdkr\" (UID: \"a7717d9c-63f8-493f-be01-0fdea46ef053\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-5gdkr"
Feb 03 12:38:56 crc kubenswrapper[4820]: I0203 12:38:56.309818 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wjklb\" (UniqueName: \"kubernetes.io/projected/a7717d9c-63f8-493f-be01-0fdea46ef053-kube-api-access-wjklb\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-5gdkr\" (UID: \"a7717d9c-63f8-493f-be01-0fdea46ef053\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-5gdkr"
Feb 03 12:38:56 crc kubenswrapper[4820]: I0203 12:38:56.412015 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a7717d9c-63f8-493f-be01-0fdea46ef053-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-5gdkr\" (UID: \"a7717d9c-63f8-493f-be01-0fdea46ef053\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-5gdkr"
Feb 03 12:38:56 crc kubenswrapper[4820]: I0203 12:38:56.412157 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a7717d9c-63f8-493f-be01-0fdea46ef053-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-5gdkr\" (UID: \"a7717d9c-63f8-493f-be01-0fdea46ef053\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-5gdkr"
Feb 03 12:38:56 crc kubenswrapper[4820]: I0203 12:38:56.412199 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wjklb\" (UniqueName: \"kubernetes.io/projected/a7717d9c-63f8-493f-be01-0fdea46ef053-kube-api-access-wjklb\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-5gdkr\" (UID: \"a7717d9c-63f8-493f-be01-0fdea46ef053\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-5gdkr"
Feb 03 12:38:56 crc kubenswrapper[4820]: I0203 12:38:56.417704 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a7717d9c-63f8-493f-be01-0fdea46ef053-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-5gdkr\" (UID: \"a7717d9c-63f8-493f-be01-0fdea46ef053\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-5gdkr"
Feb 03 12:38:56 crc kubenswrapper[4820]: I0203 12:38:56.419740 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a7717d9c-63f8-493f-be01-0fdea46ef053-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-5gdkr\" (UID: \"a7717d9c-63f8-493f-be01-0fdea46ef053\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-5gdkr"
Feb 03 12:38:56 crc kubenswrapper[4820]: I0203 12:38:56.435070 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wjklb\" (UniqueName: \"kubernetes.io/projected/a7717d9c-63f8-493f-be01-0fdea46ef053-kube-api-access-wjklb\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-5gdkr\" (UID: \"a7717d9c-63f8-493f-be01-0fdea46ef053\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-5gdkr"
Feb 03 12:38:56 crc kubenswrapper[4820]: I0203 12:38:56.601226 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-5gdkr"
Feb 03 12:38:57 crc kubenswrapper[4820]: I0203 12:38:57.198973 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-5gdkr"]
Feb 03 12:38:58 crc kubenswrapper[4820]: I0203 12:38:58.038608 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-shb82"]
Feb 03 12:38:58 crc kubenswrapper[4820]: I0203 12:38:58.051423 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-119e-account-create-update-gctg8"]
Feb 03 12:38:58 crc kubenswrapper[4820]: I0203 12:38:58.068876 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-shb82"]
Feb 03 12:38:58 crc kubenswrapper[4820]: I0203 12:38:58.072450 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-5gdkr" event={"ID":"a7717d9c-63f8-493f-be01-0fdea46ef053","Type":"ContainerStarted","Data":"7097313d92f92970966576c8a5520e05c68a9e371d13d0b3f8f9c70cb91ebcce"}
Feb 03 12:38:58 crc kubenswrapper[4820]: I0203 12:38:58.082212 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-119e-account-create-update-gctg8"]
Feb 03 12:38:59 crc kubenswrapper[4820]: I0203 12:38:59.033768 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-97lpw"]
Feb 03 12:38:59 crc kubenswrapper[4820]: I0203 12:38:59.045686 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-ec37-account-create-update-9qk8m"]
Feb 03 12:38:59 crc kubenswrapper[4820]: I0203 12:38:59.055957 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-ec37-account-create-update-9qk8m"]
Feb 03 12:38:59 crc kubenswrapper[4820]: I0203 12:38:59.066079 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-97lpw"]
Feb 03 12:38:59 crc kubenswrapper[4820]: I0203 12:38:59.089875 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-5gdkr" event={"ID":"a7717d9c-63f8-493f-be01-0fdea46ef053","Type":"ContainerStarted","Data":"e7f7b843246ea7e557038ee0099f3063d31ef9aa1138e74490edbab332568469"}
Feb 03 12:38:59 crc kubenswrapper[4820]: I0203 12:38:59.116391 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-5gdkr" podStartSLOduration=2.208349268 podStartE2EDuration="3.116367906s" podCreationTimestamp="2026-02-03 12:38:56 +0000 UTC" firstStartedPulling="2026-02-03 12:38:57.214328793 +0000 UTC m=+2054.737404657" lastFinishedPulling="2026-02-03 12:38:58.122347431 +0000 UTC m=+2055.645423295" observedRunningTime="2026-02-03 12:38:59.109183923 +0000 UTC m=+2056.632259787" watchObservedRunningTime="2026-02-03 12:38:59.116367906 +0000 UTC m=+2056.639443770"
Feb 03 12:38:59 crc kubenswrapper[4820]: I0203 12:38:59.165702 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="32fc4e30-d6f9-431f-a147-b54659c292f4" path="/var/lib/kubelet/pods/32fc4e30-d6f9-431f-a147-b54659c292f4/volumes"
Feb 03 12:38:59 crc kubenswrapper[4820]: I0203 12:38:59.167376 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="343cdd64-3829-4d0b-bbac-d220e5442ee0" path="/var/lib/kubelet/pods/343cdd64-3829-4d0b-bbac-d220e5442ee0/volumes"
Feb 03 12:38:59 crc kubenswrapper[4820]: I0203 12:38:59.168552 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ab7fb74b-aa61-420d-b013-f663b159cf8b" path="/var/lib/kubelet/pods/ab7fb74b-aa61-420d-b013-f663b159cf8b/volumes"
Feb 03 12:38:59 crc kubenswrapper[4820]: I0203 12:38:59.169526 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fea163e7-ea8b-4888-8634-18323a2dfc2d" path="/var/lib/kubelet/pods/fea163e7-ea8b-4888-8634-18323a2dfc2d/volumes"
Feb 03 12:39:01 crc kubenswrapper[4820]: I0203 12:39:01.111702 4820 generic.go:334] "Generic (PLEG): container finished" podID="a7717d9c-63f8-493f-be01-0fdea46ef053" containerID="e7f7b843246ea7e557038ee0099f3063d31ef9aa1138e74490edbab332568469" exitCode=0
Feb 03 12:39:01 crc kubenswrapper[4820]: I0203 12:39:01.111823 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-5gdkr" event={"ID":"a7717d9c-63f8-493f-be01-0fdea46ef053","Type":"ContainerDied","Data":"e7f7b843246ea7e557038ee0099f3063d31ef9aa1138e74490edbab332568469"}
Feb 03 12:39:01 crc kubenswrapper[4820]: I0203 12:39:01.366090 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qj7xr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 03 12:39:01 crc kubenswrapper[4820]: I0203 12:39:01.366212 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 03 12:39:02 crc kubenswrapper[4820]: I0203 12:39:02.566176 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-5gdkr"
Feb 03 12:39:02 crc kubenswrapper[4820]: I0203 12:39:02.694236 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a7717d9c-63f8-493f-be01-0fdea46ef053-inventory\") pod \"a7717d9c-63f8-493f-be01-0fdea46ef053\" (UID: \"a7717d9c-63f8-493f-be01-0fdea46ef053\") "
Feb 03 12:39:02 crc kubenswrapper[4820]: I0203 12:39:02.694357 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wjklb\" (UniqueName: \"kubernetes.io/projected/a7717d9c-63f8-493f-be01-0fdea46ef053-kube-api-access-wjklb\") pod \"a7717d9c-63f8-493f-be01-0fdea46ef053\" (UID: \"a7717d9c-63f8-493f-be01-0fdea46ef053\") "
Feb 03 12:39:02 crc kubenswrapper[4820]: I0203 12:39:02.694633 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a7717d9c-63f8-493f-be01-0fdea46ef053-ssh-key-openstack-edpm-ipam\") pod \"a7717d9c-63f8-493f-be01-0fdea46ef053\" (UID: \"a7717d9c-63f8-493f-be01-0fdea46ef053\") "
Feb 03 12:39:02 crc kubenswrapper[4820]: I0203 12:39:02.700349 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7717d9c-63f8-493f-be01-0fdea46ef053-kube-api-access-wjklb" (OuterVolumeSpecName: "kube-api-access-wjklb") pod "a7717d9c-63f8-493f-be01-0fdea46ef053" (UID: "a7717d9c-63f8-493f-be01-0fdea46ef053"). InnerVolumeSpecName "kube-api-access-wjklb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 12:39:02 crc kubenswrapper[4820]: I0203 12:39:02.724732 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7717d9c-63f8-493f-be01-0fdea46ef053-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "a7717d9c-63f8-493f-be01-0fdea46ef053" (UID: "a7717d9c-63f8-493f-be01-0fdea46ef053"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 12:39:02 crc kubenswrapper[4820]: I0203 12:39:02.725773 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7717d9c-63f8-493f-be01-0fdea46ef053-inventory" (OuterVolumeSpecName: "inventory") pod "a7717d9c-63f8-493f-be01-0fdea46ef053" (UID: "a7717d9c-63f8-493f-be01-0fdea46ef053"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 12:39:02 crc kubenswrapper[4820]: I0203 12:39:02.797341 4820 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a7717d9c-63f8-493f-be01-0fdea46ef053-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Feb 03 12:39:02 crc kubenswrapper[4820]: I0203 12:39:02.797375 4820 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a7717d9c-63f8-493f-be01-0fdea46ef053-inventory\") on node \"crc\" DevicePath \"\""
Feb 03 12:39:02 crc kubenswrapper[4820]: I0203 12:39:02.797385 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wjklb\" (UniqueName: \"kubernetes.io/projected/a7717d9c-63f8-493f-be01-0fdea46ef053-kube-api-access-wjklb\") on node \"crc\" DevicePath \"\""
Feb 03 12:39:03 crc kubenswrapper[4820]: I0203 12:39:03.134867 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-5gdkr" event={"ID":"a7717d9c-63f8-493f-be01-0fdea46ef053","Type":"ContainerDied","Data":"7097313d92f92970966576c8a5520e05c68a9e371d13d0b3f8f9c70cb91ebcce"}
Feb 03 12:39:03 crc kubenswrapper[4820]: I0203 12:39:03.134935 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7097313d92f92970966576c8a5520e05c68a9e371d13d0b3f8f9c70cb91ebcce"
Feb 03 12:39:03 crc kubenswrapper[4820]: I0203 12:39:03.134989 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-5gdkr"
Feb 03 12:39:03 crc kubenswrapper[4820]: I0203 12:39:03.227200 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-mvssl"]
Feb 03 12:39:03 crc kubenswrapper[4820]: E0203 12:39:03.227855 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7717d9c-63f8-493f-be01-0fdea46ef053" containerName="redhat-edpm-deployment-openstack-edpm-ipam"
Feb 03 12:39:03 crc kubenswrapper[4820]: I0203 12:39:03.227899 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7717d9c-63f8-493f-be01-0fdea46ef053" containerName="redhat-edpm-deployment-openstack-edpm-ipam"
Feb 03 12:39:03 crc kubenswrapper[4820]: I0203 12:39:03.228204 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7717d9c-63f8-493f-be01-0fdea46ef053" containerName="redhat-edpm-deployment-openstack-edpm-ipam"
Feb 03 12:39:03 crc kubenswrapper[4820]: I0203 12:39:03.229162 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-mvssl"
Feb 03 12:39:03 crc kubenswrapper[4820]: I0203 12:39:03.235511 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Feb 03 12:39:03 crc kubenswrapper[4820]: I0203 12:39:03.235618 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Feb 03 12:39:03 crc kubenswrapper[4820]: I0203 12:39:03.235855 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-q6jrk"
Feb 03 12:39:03 crc kubenswrapper[4820]: I0203 12:39:03.235957 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Feb 03 12:39:03 crc kubenswrapper[4820]: I0203 12:39:03.242434 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-mvssl"]
Feb 03 12:39:03 crc kubenswrapper[4820]: I0203 12:39:03.411002 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24c4a250-4fa9-42c6-a3bd-e626d0adc807-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-mvssl\" (UID: \"24c4a250-4fa9-42c6-a3bd-e626d0adc807\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-mvssl"
Feb 03 12:39:03 crc kubenswrapper[4820]: I0203 12:39:03.411093 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h6mkc\" (UniqueName: \"kubernetes.io/projected/24c4a250-4fa9-42c6-a3bd-e626d0adc807-kube-api-access-h6mkc\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-mvssl\" (UID: \"24c4a250-4fa9-42c6-a3bd-e626d0adc807\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-mvssl"
Feb 03 12:39:03 crc kubenswrapper[4820]: I0203 12:39:03.411141 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/24c4a250-4fa9-42c6-a3bd-e626d0adc807-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-mvssl\" (UID: \"24c4a250-4fa9-42c6-a3bd-e626d0adc807\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-mvssl"
Feb 03 12:39:03 crc kubenswrapper[4820]: I0203 12:39:03.411177 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/24c4a250-4fa9-42c6-a3bd-e626d0adc807-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-mvssl\" (UID: \"24c4a250-4fa9-42c6-a3bd-e626d0adc807\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-mvssl"
Feb 03 12:39:03 crc kubenswrapper[4820]: I0203 12:39:03.513642 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h6mkc\" (UniqueName: \"kubernetes.io/projected/24c4a250-4fa9-42c6-a3bd-e626d0adc807-kube-api-access-h6mkc\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-mvssl\" (UID: \"24c4a250-4fa9-42c6-a3bd-e626d0adc807\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-mvssl"
Feb 03 12:39:03 crc kubenswrapper[4820]: I0203 12:39:03.513702 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/24c4a250-4fa9-42c6-a3bd-e626d0adc807-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-mvssl\" (UID: \"24c4a250-4fa9-42c6-a3bd-e626d0adc807\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-mvssl"
Feb 03 12:39:03 crc kubenswrapper[4820]: I0203 12:39:03.513749 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/24c4a250-4fa9-42c6-a3bd-e626d0adc807-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-mvssl\" (UID: \"24c4a250-4fa9-42c6-a3bd-e626d0adc807\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-mvssl"
Feb 03 12:39:03 crc kubenswrapper[4820]: I0203 12:39:03.513937 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24c4a250-4fa9-42c6-a3bd-e626d0adc807-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-mvssl\" (UID: \"24c4a250-4fa9-42c6-a3bd-e626d0adc807\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-mvssl"
Feb 03 12:39:03 crc kubenswrapper[4820]: I0203 12:39:03.518108 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/24c4a250-4fa9-42c6-a3bd-e626d0adc807-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-mvssl\" (UID: \"24c4a250-4fa9-42c6-a3bd-e626d0adc807\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-mvssl"
Feb 03 12:39:03 crc kubenswrapper[4820]: I0203 12:39:03.518148 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/24c4a250-4fa9-42c6-a3bd-e626d0adc807-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-mvssl\" (UID: \"24c4a250-4fa9-42c6-a3bd-e626d0adc807\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-mvssl"
Feb 03 12:39:03 crc kubenswrapper[4820]: I0203 12:39:03.526820 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24c4a250-4fa9-42c6-a3bd-e626d0adc807-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-mvssl\" (UID: \"24c4a250-4fa9-42c6-a3bd-e626d0adc807\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-mvssl"
Feb 03 12:39:03 crc kubenswrapper[4820]: I0203 12:39:03.531196 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h6mkc\" (UniqueName: \"kubernetes.io/projected/24c4a250-4fa9-42c6-a3bd-e626d0adc807-kube-api-access-h6mkc\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-mvssl\" (UID: \"24c4a250-4fa9-42c6-a3bd-e626d0adc807\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-mvssl"
Feb 03 12:39:03 crc kubenswrapper[4820]: I0203 12:39:03.554294 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-mvssl"
Feb 03 12:39:04 crc kubenswrapper[4820]: I0203 12:39:04.145188 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-mvssl"]
Feb 03 12:39:05 crc kubenswrapper[4820]: I0203 12:39:05.170862 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-mvssl" event={"ID":"24c4a250-4fa9-42c6-a3bd-e626d0adc807","Type":"ContainerStarted","Data":"347094bdb9fc3e921791b590ab9c712adf27aa1044004cf3d39ff283859b9239"}
Feb 03 12:39:05 crc kubenswrapper[4820]: I0203 12:39:05.171197 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-mvssl" event={"ID":"24c4a250-4fa9-42c6-a3bd-e626d0adc807","Type":"ContainerStarted","Data":"f42758c6bb854f4bae945c642fc7c5c8de2a3815e6d7dcf9d9b543f987803c0e"}
Feb 03 12:39:05 crc kubenswrapper[4820]: I0203 12:39:05.193911 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-mvssl" podStartSLOduration=1.78031677 podStartE2EDuration="2.193861684s" podCreationTimestamp="2026-02-03 12:39:03 +0000 UTC" firstStartedPulling="2026-02-03 12:39:04.163946191 +0000 UTC m=+2061.687022055" lastFinishedPulling="2026-02-03 12:39:04.577491105 +0000 UTC m=+2062.100566969" observedRunningTime="2026-02-03 12:39:05.189935458 +0000 UTC m=+2062.713011322" watchObservedRunningTime="2026-02-03 12:39:05.193861684 +0000 UTC m=+2062.716937548"
Feb 03 12:39:16 crc kubenswrapper[4820]: I0203 12:39:16.316219 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-rg9bk"]
Feb 03 12:39:16 crc kubenswrapper[4820]: I0203 12:39:16.331941 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-rg9bk"]
Feb 03 12:39:17 crc kubenswrapper[4820]: I0203 12:39:17.315746 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8ebed9e0-c26c-435a-b024-b9e768922743" path="/var/lib/kubelet/pods/8ebed9e0-c26c-435a-b024-b9e768922743/volumes"
Feb 03 12:39:24 crc kubenswrapper[4820]: I0203 12:39:24.067000 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-gdddx"]
Feb 03 12:39:24 crc kubenswrapper[4820]: I0203 12:39:24.079309 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-gdddx"]
Feb 03 12:39:25 crc kubenswrapper[4820]: I0203 12:39:25.037301 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-zb5gb"]
Feb 03 12:39:25 crc kubenswrapper[4820]: I0203 12:39:25.047352 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-lhh84"]
Feb 03 12:39:25 crc kubenswrapper[4820]: I0203 12:39:25.059414 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-zb5gb"]
Feb 03 12:39:25 crc kubenswrapper[4820]: I0203 12:39:25.069060 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-lhh84"]
Feb 03 12:39:25 crc kubenswrapper[4820]: I0203 12:39:25.161553 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="217d8f0c-123f-42da-b679-dbefeac99a4f" path="/var/lib/kubelet/pods/217d8f0c-123f-42da-b679-dbefeac99a4f/volumes"
Feb 03 12:39:25 crc kubenswrapper[4820]: I0203 12:39:25.162513 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8b750b09-8d9b-49f8-bed1-b20fa047bbc4" path="/var/lib/kubelet/pods/8b750b09-8d9b-49f8-bed1-b20fa047bbc4/volumes"
Feb 03 12:39:25 crc kubenswrapper[4820]: I0203 12:39:25.163380 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e85a9b64-cf9e-4f04-9adc-2500e3f8df60" path="/var/lib/kubelet/pods/e85a9b64-cf9e-4f04-9adc-2500e3f8df60/volumes"
Feb 03 12:39:26 crc kubenswrapper[4820]: I0203 12:39:26.030037 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-0eaa-account-create-update-r999r"]
Feb 03 12:39:26 crc kubenswrapper[4820]: I0203 12:39:26.039116 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-0eaa-account-create-update-r999r"]
Feb 03 12:39:27 crc kubenswrapper[4820]: I0203 12:39:27.036322 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-0156-account-create-update-7k4px"]
Feb 03 12:39:27 crc kubenswrapper[4820]: I0203 12:39:27.046430 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-3a2c-account-create-update-7wqgs"]
Feb 03 12:39:27 crc kubenswrapper[4820]: I0203 12:39:27.059133 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-3a2c-account-create-update-7wqgs"]
Feb 03 12:39:27 crc kubenswrapper[4820]: I0203 12:39:27.069749 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-0156-account-create-update-7k4px"]
Feb 03 12:39:27 crc kubenswrapper[4820]: I0203 12:39:27.156567 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0a7242ff-a34b-4b5f-8200-026040ca1c5d" path="/var/lib/kubelet/pods/0a7242ff-a34b-4b5f-8200-026040ca1c5d/volumes"
Feb 03 12:39:27 crc kubenswrapper[4820]: I0203 12:39:27.157469 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="33378f72-8501-4ce0-bafe-a2584fd27c90" path="/var/lib/kubelet/pods/33378f72-8501-4ce0-bafe-a2584fd27c90/volumes"
Feb 03 12:39:27 crc kubenswrapper[4820]: I0203 12:39:27.158192 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7ca31e7-36f8-449b-b4ca-fca64c76bf77" path="/var/lib/kubelet/pods/f7ca31e7-36f8-449b-b4ca-fca64c76bf77/volumes"
Feb 03 12:39:27 crc kubenswrapper[4820]: I0203 12:39:27.247313 4820 scope.go:117] "RemoveContainer" containerID="d55ffb6b15094fbc988abed6f57d4a1ed290a6ca4b32e5599918774b9cb47431"
Feb 03 12:39:27 crc kubenswrapper[4820]: I0203 12:39:27.280961 4820 scope.go:117] "RemoveContainer" containerID="08da51c59a4a0840a2624deec089ea2c857bcd5d44291be7cd6ad4f51bc9054c"
Feb 03 12:39:27 crc kubenswrapper[4820]: I0203 12:39:27.342417 4820 scope.go:117] "RemoveContainer" containerID="d65ca91f700c095370cf5118669f990b2805250b56afc7b52597433b31db82d6"
Feb 03 12:39:27 crc kubenswrapper[4820]: I0203 12:39:27.398170 4820 scope.go:117] "RemoveContainer" containerID="7da12677698e9a0b53785292061dc4075549e9c6c10f7643a560f36be07e6991"
Feb 03 12:39:27 crc kubenswrapper[4820]: I0203 12:39:27.457723 4820 scope.go:117] "RemoveContainer" containerID="8674220873c0d60a64cf2e9ce9f44eb5bf1ab3d9ea0043e909c64e75272b07cc"
Feb 03 12:39:27 crc kubenswrapper[4820]: I0203 12:39:27.492071 4820 scope.go:117] "RemoveContainer" containerID="60cd6834439e6ed28d66d9928c52e93817c9636b3a847b4168bd9de8b8d74bd4"
Feb 03 12:39:27 crc kubenswrapper[4820]: I0203 12:39:27.547001 4820 scope.go:117] "RemoveContainer" containerID="6e980d3275a16e52b1522dec77770d6ff0c67de8bf3a8fe55d7ac1e0451dc9c9"
Feb 03 12:39:27 crc kubenswrapper[4820]: I0203 12:39:27.612357 4820 scope.go:117] "RemoveContainer" containerID="4c18f9f3d5bde2a0f49e729b1cdca1a0258ceca11a7d7acf5f0d3b2546c20880"
Feb 03 12:39:27 crc kubenswrapper[4820]: I0203 12:39:27.643171 4820 scope.go:117] "RemoveContainer" containerID="76a5d0070436fa815366c35c8fed6f14bebe1d478f6f1f7a9ead7dca3640ce85"
Feb 03 12:39:27 crc kubenswrapper[4820]: I0203 12:39:27.674695 4820 scope.go:117] "RemoveContainer" containerID="93ead090a8653d9ab82ded32d8eb3d77fab8ab0ee9ffa5746d05c40c40fe3593"
Feb 03 12:39:27 crc kubenswrapper[4820]: I0203 12:39:27.702543 4820 scope.go:117] "RemoveContainer" containerID="43e1b45809d7f422866afa961cd6e49b33b85adb0eb6b43b3637b1ea4e3a0d81"
Feb 03 12:39:27 crc kubenswrapper[4820]: I0203 12:39:27.731612 4820 scope.go:117] "RemoveContainer" containerID="e569fa1a6a713eafd9673f3b2544d9e53f295e77ec2a9dea32740d6348894412"
Feb 03 12:39:27 crc kubenswrapper[4820]: I0203 12:39:27.762696 4820 scope.go:117] "RemoveContainer" containerID="6fe5b7f0a2310a30738bfb5c2b610a72a585291d5dfce440e596fd4d2d057993"
Feb 03 12:39:27 crc kubenswrapper[4820]: I0203 12:39:27.792602 4820 scope.go:117] "RemoveContainer" containerID="6b5221a92ea33b1d7d4489dc4f2347b465cc9b83af893aeae66736934a699433"
Feb 03 12:39:27 crc kubenswrapper[4820]: I0203 12:39:27.815773 4820 scope.go:117] "RemoveContainer" containerID="7d40e515e2bf3126f5f536a3bf5ef5f28153e44e74e1741015b2ade2574386fb"
Feb 03 12:39:27 crc kubenswrapper[4820]: I0203 12:39:27.838439 4820 scope.go:117] "RemoveContainer" containerID="b170f9e5a1a3a837671d75b70c9f289a68468a57e434be4bccc0cfa39c9c916b"
Feb 03 12:39:27 crc kubenswrapper[4820]: I0203 12:39:27.868082 4820 scope.go:117] "RemoveContainer" containerID="b192be8b6941658dab09f8ef7e7430547416dbd8d6c44b705738f8bfe4a09bf1"
Feb 03 12:39:27 crc kubenswrapper[4820]: I0203 12:39:27.891821 4820 scope.go:117] "RemoveContainer" containerID="55a86545657197ec8f90a7dbf82f8fdc4bfeb41ed556ff8fd366c49474254679"
Feb 03 12:39:31 crc kubenswrapper[4820]: I0203 12:39:31.365813 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qj7xr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 03 12:39:31 crc kubenswrapper[4820]: I0203 12:39:31.366269 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 03 12:39:42 crc kubenswrapper[4820]: I0203 12:39:42.040779 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-zlksq"]
Feb 03 12:39:42 crc kubenswrapper[4820]: I0203 12:39:42.053361 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-zlksq"]
Feb 03 12:39:43 crc kubenswrapper[4820]: I0203 12:39:43.174649 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b897af0d-2b67-45c6-b17f-3686d5a419c0" path="/var/lib/kubelet/pods/b897af0d-2b67-45c6-b17f-3686d5a419c0/volumes"
Feb 03 12:40:01 crc kubenswrapper[4820]: I0203 12:40:01.365851 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qj7xr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 03 12:40:01 crc kubenswrapper[4820]: I0203 12:40:01.366406 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 03 12:40:01 crc kubenswrapper[4820]: I0203 12:40:01.366470 4820 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr"
Feb 03 12:40:01 crc kubenswrapper[4820]: I0203 12:40:01.367486 4820 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8f34688920b0d8f1ba8313bfd5660e1745625b1ce2d5d457facdc7ba2bbd910d"} pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 03 12:40:01 crc kubenswrapper[4820]: I0203 12:40:01.367563 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" containerName="machine-config-daemon" containerID="cri-o://8f34688920b0d8f1ba8313bfd5660e1745625b1ce2d5d457facdc7ba2bbd910d" gracePeriod=600
Feb 03 12:40:02 crc kubenswrapper[4820]: I0203 12:40:02.338434 4820 generic.go:334] "Generic (PLEG): container finished" podID="2c02def6-29f2-448e-80ec-0c8ee290f053" containerID="8f34688920b0d8f1ba8313bfd5660e1745625b1ce2d5d457facdc7ba2bbd910d" exitCode=0
Feb 03 12:40:02 crc kubenswrapper[4820]: I0203 12:40:02.338521 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" event={"ID":"2c02def6-29f2-448e-80ec-0c8ee290f053","Type":"ContainerDied","Data":"8f34688920b0d8f1ba8313bfd5660e1745625b1ce2d5d457facdc7ba2bbd910d"}
Feb 03 12:40:02 crc kubenswrapper[4820]: I0203 12:40:02.338974 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" event={"ID":"2c02def6-29f2-448e-80ec-0c8ee290f053","Type":"ContainerStarted","Data":"245331ecea7ae2a2547766477b0478b53c635e1745e222cb8a06ff036be8bc77"}
Feb 03 12:40:02 crc kubenswrapper[4820]: I0203 12:40:02.339021 4820 scope.go:117] "RemoveContainer" containerID="3cd22393026f08f47789b9666f306edb2325c4bd0fbfd405ee1636bfdfa661d3"
Feb 03 12:40:09 crc kubenswrapper[4820]: I0203 12:40:09.057628 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-gvhr2"]
Feb 03 12:40:09 crc kubenswrapper[4820]: I0203 12:40:09.067439 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-gvhr2"]
Feb 03 12:40:09 crc kubenswrapper[4820]: I0203 12:40:09.156519 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="66f26be7-edcc-4d55-b7e4-d5f4d16cf58e" path="/var/lib/kubelet/pods/66f26be7-edcc-4d55-b7e4-d5f4d16cf58e/volumes"
Feb 03 12:40:20 crc kubenswrapper[4820]: I0203 12:40:20.038274 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-db-sync-g8wq4"]
Feb 03 12:40:20 crc kubenswrapper[4820]: I0203 12:40:20.051353 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-db-sync-g8wq4"]
Feb 03 12:40:21 crc kubenswrapper[4820]: I0203 12:40:21.157038 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b594ebbd-4a60-46ca-92f6-0e4869499849" path="/var/lib/kubelet/pods/b594ebbd-4a60-46ca-92f6-0e4869499849/volumes"
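[Editor's note] The 12:40:01 entries above capture the full liveness-failure path for machine-config-daemon-qj7xr: the prober records the refused HTTP GET on 127.0.0.1:8798, SyncLoop flags the container unhealthy, kuberuntime kills it with the pod's 600-second grace period, and PLEG then reports ContainerDied followed by ContainerStarted for the replacement, with the old container ID garbage-collected via RemoveContainer. A toy version of such a probe loop, illustrative only; the endpoint is taken from the log output while the period and failure threshold are assumptions:

```go
// Illustrative liveness-probe loop, not kubelet's implementation.
package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	const failureThreshold = 3 // assumed; kubelet reads this from the pod spec
	failures := 0
	for range time.Tick(10 * time.Second) { // assumed periodSeconds
		resp, err := http.Get("http://127.0.0.1:8798/health")
		healthy := err == nil && resp.StatusCode < 400
		if resp != nil {
			resp.Body.Close()
		}
		if healthy {
			failures = 0
			continue
		}
		failures++
		fmt.Println("Probe failed:", err)
		if failures >= failureThreshold {
			fmt.Println("Container failed liveness probe, will be restarted")
			failures = 0 // kubelet would kill the container with its grace period here
		}
	}
}
```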
podUID="b594ebbd-4a60-46ca-92f6-0e4869499849" path="/var/lib/kubelet/pods/b594ebbd-4a60-46ca-92f6-0e4869499849/volumes" Feb 03 12:40:28 crc kubenswrapper[4820]: I0203 12:40:28.259609 4820 scope.go:117] "RemoveContainer" containerID="d3fee94a4fab8c8fed28cce3e70b696c0512dd6b3cb9216afa62e9ed717bb306" Feb 03 12:40:28 crc kubenswrapper[4820]: I0203 12:40:28.309225 4820 scope.go:117] "RemoveContainer" containerID="f949cbac1cb513adc0dfabe66243b81171539e8a232158bf45721e44acbbb55a" Feb 03 12:40:28 crc kubenswrapper[4820]: I0203 12:40:28.352497 4820 scope.go:117] "RemoveContainer" containerID="a2a2b1cfcfc6537c32cd1a307233a174652c5c647ea10b719894c0502b78b49d" Feb 03 12:40:28 crc kubenswrapper[4820]: I0203 12:40:28.375961 4820 scope.go:117] "RemoveContainer" containerID="ebce3229a465f44af1a54bb582fc434d596fd8f0c376b0bba7fccd7dea94fc98" Feb 03 12:40:28 crc kubenswrapper[4820]: I0203 12:40:28.401923 4820 scope.go:117] "RemoveContainer" containerID="25fe8caa2ebf0b88444055d54c8b1bbf17afd4176480ee015377919752186d34" Feb 03 12:40:28 crc kubenswrapper[4820]: I0203 12:40:28.425847 4820 scope.go:117] "RemoveContainer" containerID="4d846a574f926a8cb91628cc3125d07e9c8f1a3178d0302150efc81f28ba7de0" Feb 03 12:40:28 crc kubenswrapper[4820]: I0203 12:40:28.782337 4820 scope.go:117] "RemoveContainer" containerID="8df804dfd8e904c3d0861dd203d9e73de473141a0420f782c3aa592df09a484d" Feb 03 12:41:12 crc kubenswrapper[4820]: I0203 12:41:12.087667 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-lb2jr"] Feb 03 12:41:12 crc kubenswrapper[4820]: I0203 12:41:12.097265 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-lb2jr"] Feb 03 12:41:13 crc kubenswrapper[4820]: I0203 12:41:13.159734 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="06a34b04-0b0b-41bc-bfa4-17ef6e1fbddb" path="/var/lib/kubelet/pods/06a34b04-0b0b-41bc-bfa4-17ef6e1fbddb/volumes" Feb 03 12:41:29 crc kubenswrapper[4820]: I0203 12:41:29.042920 4820 scope.go:117] "RemoveContainer" containerID="465ff03b2ca70fbdd3d10d95bd4c4be128cc1c465b3b4a0fad006b81b4bd36be" Feb 03 12:41:29 crc kubenswrapper[4820]: I0203 12:41:29.073946 4820 scope.go:117] "RemoveContainer" containerID="2593d50af9746ad6d6d1a970a01c1509755c275f1991b4dec341cd3a990e342f" Feb 03 12:41:29 crc kubenswrapper[4820]: I0203 12:41:29.096328 4820 scope.go:117] "RemoveContainer" containerID="f4fc507cd388efa2a49573fdcbfa7bf757e12ec2c473b2be28ae886e813ba750" Feb 03 12:41:29 crc kubenswrapper[4820]: I0203 12:41:29.115710 4820 scope.go:117] "RemoveContainer" containerID="0b1773f7ae32ea07b74ba0043ee70a6621c9a349fe4ccc6d5db6db768e0be7fb" Feb 03 12:41:29 crc kubenswrapper[4820]: I0203 12:41:29.137487 4820 scope.go:117] "RemoveContainer" containerID="34d519c618eba6d21f8bdc59e5fbc6e2f30a0da9b52b4c66f1dcbedd3137aa91" Feb 03 12:41:44 crc kubenswrapper[4820]: I0203 12:41:44.059155 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-xsjm7"] Feb 03 12:41:44 crc kubenswrapper[4820]: I0203 12:41:44.070297 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-t4pzw"] Feb 03 12:41:44 crc kubenswrapper[4820]: I0203 12:41:44.081853 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-xsjm7"] Feb 03 12:41:44 crc kubenswrapper[4820]: I0203 12:41:44.092761 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-t4pzw"] Feb 03 12:41:45 crc kubenswrapper[4820]: I0203 12:41:45.156474 4820 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d6da87e1-3451-48c6-b2ad-368bf3139a57" path="/var/lib/kubelet/pods/d6da87e1-3451-48c6-b2ad-368bf3139a57/volumes" Feb 03 12:41:45 crc kubenswrapper[4820]: I0203 12:41:45.157311 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4116aff-b63f-47f1-b4bd-5bde84226d87" path="/var/lib/kubelet/pods/f4116aff-b63f-47f1-b4bd-5bde84226d87/volumes" Feb 03 12:41:57 crc kubenswrapper[4820]: I0203 12:41:57.050262 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-9csj4"] Feb 03 12:41:57 crc kubenswrapper[4820]: I0203 12:41:57.061508 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-9csj4"] Feb 03 12:41:57 crc kubenswrapper[4820]: I0203 12:41:57.156737 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="470b8f27-2959-4890-aed3-361530b83b73" path="/var/lib/kubelet/pods/470b8f27-2959-4890-aed3-361530b83b73/volumes" Feb 03 12:42:01 crc kubenswrapper[4820]: I0203 12:42:01.365350 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qj7xr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 12:42:01 crc kubenswrapper[4820]: I0203 12:42:01.366014 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 12:42:07 crc kubenswrapper[4820]: I0203 12:42:07.035716 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-b4rms"] Feb 03 12:42:07 crc kubenswrapper[4820]: I0203 12:42:07.044811 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-b4rms"] Feb 03 12:42:07 crc kubenswrapper[4820]: I0203 12:42:07.154705 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4b8f1b47-829d-4da3-a6a9-b73bbe7d20c0" path="/var/lib/kubelet/pods/4b8f1b47-829d-4da3-a6a9-b73bbe7d20c0/volumes" Feb 03 12:42:21 crc kubenswrapper[4820]: I0203 12:42:21.059046 4820 generic.go:334] "Generic (PLEG): container finished" podID="24c4a250-4fa9-42c6-a3bd-e626d0adc807" containerID="347094bdb9fc3e921791b590ab9c712adf27aa1044004cf3d39ff283859b9239" exitCode=0 Feb 03 12:42:21 crc kubenswrapper[4820]: I0203 12:42:21.059145 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-mvssl" event={"ID":"24c4a250-4fa9-42c6-a3bd-e626d0adc807","Type":"ContainerDied","Data":"347094bdb9fc3e921791b590ab9c712adf27aa1044004cf3d39ff283859b9239"} Feb 03 12:42:22 crc kubenswrapper[4820]: I0203 12:42:22.521082 4820 util.go:48] "No ready sandbox for pod can be found. 
Feb 03 12:42:22 crc kubenswrapper[4820]: I0203 12:42:22.521082 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-mvssl"
Feb 03 12:42:22 crc kubenswrapper[4820]: I0203 12:42:22.627625 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h6mkc\" (UniqueName: \"kubernetes.io/projected/24c4a250-4fa9-42c6-a3bd-e626d0adc807-kube-api-access-h6mkc\") pod \"24c4a250-4fa9-42c6-a3bd-e626d0adc807\" (UID: \"24c4a250-4fa9-42c6-a3bd-e626d0adc807\") "
Feb 03 12:42:22 crc kubenswrapper[4820]: I0203 12:42:22.627797 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/24c4a250-4fa9-42c6-a3bd-e626d0adc807-inventory\") pod \"24c4a250-4fa9-42c6-a3bd-e626d0adc807\" (UID: \"24c4a250-4fa9-42c6-a3bd-e626d0adc807\") "
Feb 03 12:42:22 crc kubenswrapper[4820]: I0203 12:42:22.627950 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/24c4a250-4fa9-42c6-a3bd-e626d0adc807-ssh-key-openstack-edpm-ipam\") pod \"24c4a250-4fa9-42c6-a3bd-e626d0adc807\" (UID: \"24c4a250-4fa9-42c6-a3bd-e626d0adc807\") "
Feb 03 12:42:22 crc kubenswrapper[4820]: I0203 12:42:22.628127 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24c4a250-4fa9-42c6-a3bd-e626d0adc807-bootstrap-combined-ca-bundle\") pod \"24c4a250-4fa9-42c6-a3bd-e626d0adc807\" (UID: \"24c4a250-4fa9-42c6-a3bd-e626d0adc807\") "
Feb 03 12:42:22 crc kubenswrapper[4820]: I0203 12:42:22.636177 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24c4a250-4fa9-42c6-a3bd-e626d0adc807-kube-api-access-h6mkc" (OuterVolumeSpecName: "kube-api-access-h6mkc") pod "24c4a250-4fa9-42c6-a3bd-e626d0adc807" (UID: "24c4a250-4fa9-42c6-a3bd-e626d0adc807"). InnerVolumeSpecName "kube-api-access-h6mkc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 12:42:22 crc kubenswrapper[4820]: I0203 12:42:22.637125 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24c4a250-4fa9-42c6-a3bd-e626d0adc807-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "24c4a250-4fa9-42c6-a3bd-e626d0adc807" (UID: "24c4a250-4fa9-42c6-a3bd-e626d0adc807"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 12:42:22 crc kubenswrapper[4820]: I0203 12:42:22.661622 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24c4a250-4fa9-42c6-a3bd-e626d0adc807-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "24c4a250-4fa9-42c6-a3bd-e626d0adc807" (UID: "24c4a250-4fa9-42c6-a3bd-e626d0adc807"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 12:42:22 crc kubenswrapper[4820]: I0203 12:42:22.663648 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24c4a250-4fa9-42c6-a3bd-e626d0adc807-inventory" (OuterVolumeSpecName: "inventory") pod "24c4a250-4fa9-42c6-a3bd-e626d0adc807" (UID: "24c4a250-4fa9-42c6-a3bd-e626d0adc807"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 12:42:22 crc kubenswrapper[4820]: I0203 12:42:22.731453 4820 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/24c4a250-4fa9-42c6-a3bd-e626d0adc807-inventory\") on node \"crc\" DevicePath \"\""
Feb 03 12:42:22 crc kubenswrapper[4820]: I0203 12:42:22.731495 4820 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/24c4a250-4fa9-42c6-a3bd-e626d0adc807-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Feb 03 12:42:22 crc kubenswrapper[4820]: I0203 12:42:22.731507 4820 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24c4a250-4fa9-42c6-a3bd-e626d0adc807-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 03 12:42:22 crc kubenswrapper[4820]: I0203 12:42:22.731518 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h6mkc\" (UniqueName: \"kubernetes.io/projected/24c4a250-4fa9-42c6-a3bd-e626d0adc807-kube-api-access-h6mkc\") on node \"crc\" DevicePath \"\""
Feb 03 12:42:23 crc kubenswrapper[4820]: I0203 12:42:23.084169 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-mvssl" event={"ID":"24c4a250-4fa9-42c6-a3bd-e626d0adc807","Type":"ContainerDied","Data":"f42758c6bb854f4bae945c642fc7c5c8de2a3815e6d7dcf9d9b543f987803c0e"}
Feb 03 12:42:23 crc kubenswrapper[4820]: I0203 12:42:23.084238 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f42758c6bb854f4bae945c642fc7c5c8de2a3815e6d7dcf9d9b543f987803c0e"
Feb 03 12:42:23 crc kubenswrapper[4820]: I0203 12:42:23.084319 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-mvssl"
Feb 03 12:42:23 crc kubenswrapper[4820]: I0203 12:42:23.199766 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hwmx7"]
Feb 03 12:42:23 crc kubenswrapper[4820]: E0203 12:42:23.200361 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24c4a250-4fa9-42c6-a3bd-e626d0adc807" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam"
Feb 03 12:42:23 crc kubenswrapper[4820]: I0203 12:42:23.200407 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="24c4a250-4fa9-42c6-a3bd-e626d0adc807" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam"
Feb 03 12:42:23 crc kubenswrapper[4820]: I0203 12:42:23.200753 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="24c4a250-4fa9-42c6-a3bd-e626d0adc807" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam"
Feb 03 12:42:23 crc kubenswrapper[4820]: I0203 12:42:23.202005 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hwmx7"
Feb 03 12:42:23 crc kubenswrapper[4820]: I0203 12:42:23.204004 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Feb 03 12:42:23 crc kubenswrapper[4820]: I0203 12:42:23.204033 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Feb 03 12:42:23 crc kubenswrapper[4820]: I0203 12:42:23.204405 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Feb 03 12:42:23 crc kubenswrapper[4820]: I0203 12:42:23.204769 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-q6jrk"
Feb 03 12:42:23 crc kubenswrapper[4820]: I0203 12:42:23.221409 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hwmx7"]
Feb 03 12:42:23 crc kubenswrapper[4820]: I0203 12:42:23.253689 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dpk7c\" (UniqueName: \"kubernetes.io/projected/c7b75829-d001-4e04-9850-44e986677f48-kube-api-access-dpk7c\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-hwmx7\" (UID: \"c7b75829-d001-4e04-9850-44e986677f48\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hwmx7"
Feb 03 12:42:23 crc kubenswrapper[4820]: I0203 12:42:23.254068 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c7b75829-d001-4e04-9850-44e986677f48-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-hwmx7\" (UID: \"c7b75829-d001-4e04-9850-44e986677f48\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hwmx7"
Feb 03 12:42:23 crc kubenswrapper[4820]: I0203 12:42:23.255395 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c7b75829-d001-4e04-9850-44e986677f48-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-hwmx7\" (UID: \"c7b75829-d001-4e04-9850-44e986677f48\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hwmx7"
Feb 03 12:42:23 crc kubenswrapper[4820]: I0203 12:42:23.357214 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dpk7c\" (UniqueName: \"kubernetes.io/projected/c7b75829-d001-4e04-9850-44e986677f48-kube-api-access-dpk7c\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-hwmx7\" (UID: \"c7b75829-d001-4e04-9850-44e986677f48\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hwmx7"
Feb 03 12:42:23 crc kubenswrapper[4820]: I0203 12:42:23.357591 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c7b75829-d001-4e04-9850-44e986677f48-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-hwmx7\" (UID: \"c7b75829-d001-4e04-9850-44e986677f48\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hwmx7"
Feb 03 12:42:23 crc kubenswrapper[4820]: I0203 12:42:23.357794 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c7b75829-d001-4e04-9850-44e986677f48-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-hwmx7\" (UID: \"c7b75829-d001-4e04-9850-44e986677f48\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hwmx7"
Feb 03 12:42:23 crc kubenswrapper[4820]: I0203 12:42:23.364750 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c7b75829-d001-4e04-9850-44e986677f48-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-hwmx7\" (UID: \"c7b75829-d001-4e04-9850-44e986677f48\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hwmx7"
Feb 03 12:42:23 crc kubenswrapper[4820]: I0203 12:42:23.365831 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c7b75829-d001-4e04-9850-44e986677f48-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-hwmx7\" (UID: \"c7b75829-d001-4e04-9850-44e986677f48\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hwmx7"
Feb 03 12:42:23 crc kubenswrapper[4820]: I0203 12:42:23.384771 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dpk7c\" (UniqueName: \"kubernetes.io/projected/c7b75829-d001-4e04-9850-44e986677f48-kube-api-access-dpk7c\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-hwmx7\" (UID: \"c7b75829-d001-4e04-9850-44e986677f48\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hwmx7"
Feb 03 12:42:23 crc kubenswrapper[4820]: I0203 12:42:23.523242 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hwmx7"
Feb 03 12:42:24 crc kubenswrapper[4820]: I0203 12:42:24.054157 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hwmx7"]
Feb 03 12:42:24 crc kubenswrapper[4820]: I0203 12:42:24.060718 4820 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 03 12:42:24 crc kubenswrapper[4820]: I0203 12:42:24.101643 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hwmx7" event={"ID":"c7b75829-d001-4e04-9850-44e986677f48","Type":"ContainerStarted","Data":"73ee6548b5eba8f61263379a332b2be1e24a5b9225cdee813887ab46c2e63019"}
Feb 03 12:42:25 crc kubenswrapper[4820]: I0203 12:42:25.112740 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hwmx7" event={"ID":"c7b75829-d001-4e04-9850-44e986677f48","Type":"ContainerStarted","Data":"45f8aee4cf498534c4e0f242f562ba87435d11ef6a16513da50789336ac6f8e0"}
Feb 03 12:42:25 crc kubenswrapper[4820]: I0203 12:42:25.134579 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hwmx7" podStartSLOduration=1.5687744829999999 podStartE2EDuration="2.134531326s" podCreationTimestamp="2026-02-03 12:42:23 +0000 UTC" firstStartedPulling="2026-02-03 12:42:24.060299953 +0000 UTC m=+2261.583375827" lastFinishedPulling="2026-02-03 12:42:24.626056806 +0000 UTC m=+2262.149132670" observedRunningTime="2026-02-03 12:42:25.129464159 +0000 UTC m=+2262.652540043" watchObservedRunningTime="2026-02-03 12:42:25.134531326 +0000 UTC m=+2262.657607210"
Feb 03 12:42:29 crc kubenswrapper[4820]: I0203 12:42:29.259988 4820 scope.go:117] "RemoveContainer" containerID="6af213d8f362ef87845889f66d36bd8215aa0184e66434cb42d8b8540181c65b"
Feb 03 12:42:29 crc kubenswrapper[4820]: I0203 12:42:29.305825 4820 scope.go:117] "RemoveContainer" containerID="f74d0b94426904787f61655b3b50e75153fd10c33f2fb6331a01e7bb2c173b9c"
Feb 03 12:42:29 crc kubenswrapper[4820]: I0203 12:42:29.374347 4820 scope.go:117] "RemoveContainer" containerID="c449ebc96060118b22140617d1169269446fc93d45e0810fa81e53cb1c180aea"
Feb 03 12:42:29 crc kubenswrapper[4820]: I0203 12:42:29.399390 4820 scope.go:117] "RemoveContainer" containerID="10efacc21caf698fb5a3a65a239aca041e4d8cd493b7d1a84de2d1c346e3e9a8"
Feb 03 12:42:29 crc kubenswrapper[4820]: I0203 12:42:29.457868 4820 scope.go:117] "RemoveContainer" containerID="b9c654f89c5faf8645b86bb21d48eed1f5fc4a23ad64e36037645d99f54d6462"
Feb 03 12:42:29 crc kubenswrapper[4820]: I0203 12:42:29.489000 4820 scope.go:117] "RemoveContainer" containerID="d9c184037d477a29fe6fd8c82acb14ac963207c270f2fa51dff1d8c2fbd30627"
Feb 03 12:42:31 crc kubenswrapper[4820]: I0203 12:42:31.365948 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qj7xr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 03 12:42:31 crc kubenswrapper[4820]: I0203 12:42:31.366223 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 03 12:42:41 crc kubenswrapper[4820]: I0203 12:42:41.095546 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-428hj"]
Feb 03 12:42:41 crc kubenswrapper[4820]: I0203 12:42:41.099015 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-428hj"
Feb 03 12:42:41 crc kubenswrapper[4820]: I0203 12:42:41.105219 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-428hj"]
Feb 03 12:42:41 crc kubenswrapper[4820]: I0203 12:42:41.257809 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/03e26919-4005-4ccc-b6e0-b93824b5bb3a-catalog-content\") pod \"community-operators-428hj\" (UID: \"03e26919-4005-4ccc-b6e0-b93824b5bb3a\") " pod="openshift-marketplace/community-operators-428hj"
Feb 03 12:42:41 crc kubenswrapper[4820]: I0203 12:42:41.257963 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/03e26919-4005-4ccc-b6e0-b93824b5bb3a-utilities\") pod \"community-operators-428hj\" (UID: \"03e26919-4005-4ccc-b6e0-b93824b5bb3a\") " pod="openshift-marketplace/community-operators-428hj"
Feb 03 12:42:41 crc kubenswrapper[4820]: I0203 12:42:41.258193 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mbs7d\" (UniqueName: \"kubernetes.io/projected/03e26919-4005-4ccc-b6e0-b93824b5bb3a-kube-api-access-mbs7d\") pod \"community-operators-428hj\" (UID: \"03e26919-4005-4ccc-b6e0-b93824b5bb3a\") " pod="openshift-marketplace/community-operators-428hj"
Feb 03 12:42:41 crc kubenswrapper[4820]: I0203 12:42:41.360485 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mbs7d\" (UniqueName: \"kubernetes.io/projected/03e26919-4005-4ccc-b6e0-b93824b5bb3a-kube-api-access-mbs7d\") pod \"community-operators-428hj\" (UID: \"03e26919-4005-4ccc-b6e0-b93824b5bb3a\") " pod="openshift-marketplace/community-operators-428hj"
Feb 03 12:42:41 crc kubenswrapper[4820]: I0203 12:42:41.361079 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/03e26919-4005-4ccc-b6e0-b93824b5bb3a-catalog-content\") pod \"community-operators-428hj\" (UID: \"03e26919-4005-4ccc-b6e0-b93824b5bb3a\") " pod="openshift-marketplace/community-operators-428hj"
Feb 03 12:42:41 crc kubenswrapper[4820]: I0203 12:42:41.361146 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/03e26919-4005-4ccc-b6e0-b93824b5bb3a-utilities\") pod \"community-operators-428hj\" (UID: \"03e26919-4005-4ccc-b6e0-b93824b5bb3a\") " pod="openshift-marketplace/community-operators-428hj"
Feb 03 12:42:41 crc kubenswrapper[4820]: I0203 12:42:41.361637 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/03e26919-4005-4ccc-b6e0-b93824b5bb3a-utilities\") pod \"community-operators-428hj\" (UID: \"03e26919-4005-4ccc-b6e0-b93824b5bb3a\") " pod="openshift-marketplace/community-operators-428hj"
Feb 03 12:42:41 crc kubenswrapper[4820]: I0203 12:42:41.362711 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/03e26919-4005-4ccc-b6e0-b93824b5bb3a-catalog-content\") pod \"community-operators-428hj\" (UID: \"03e26919-4005-4ccc-b6e0-b93824b5bb3a\") " pod="openshift-marketplace/community-operators-428hj"
Feb 03 12:42:41 crc kubenswrapper[4820]: I0203 12:42:41.383743 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mbs7d\" (UniqueName: \"kubernetes.io/projected/03e26919-4005-4ccc-b6e0-b93824b5bb3a-kube-api-access-mbs7d\") pod \"community-operators-428hj\" (UID: \"03e26919-4005-4ccc-b6e0-b93824b5bb3a\") " pod="openshift-marketplace/community-operators-428hj"
Feb 03 12:42:41 crc kubenswrapper[4820]: I0203 12:42:41.420137 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-428hj"
Feb 03 12:42:41 crc kubenswrapper[4820]: I0203 12:42:41.990867 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-428hj"]
Feb 03 12:42:42 crc kubenswrapper[4820]: I0203 12:42:42.275775 4820 generic.go:334] "Generic (PLEG): container finished" podID="03e26919-4005-4ccc-b6e0-b93824b5bb3a" containerID="fd72edbf6767a4370b293bb0f139568b0d0e20c1ffe3e9bed01f974308d454f1" exitCode=0
Feb 03 12:42:42 crc kubenswrapper[4820]: I0203 12:42:42.276063 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-428hj" event={"ID":"03e26919-4005-4ccc-b6e0-b93824b5bb3a","Type":"ContainerDied","Data":"fd72edbf6767a4370b293bb0f139568b0d0e20c1ffe3e9bed01f974308d454f1"}
Feb 03 12:42:42 crc kubenswrapper[4820]: I0203 12:42:42.276191 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-428hj" event={"ID":"03e26919-4005-4ccc-b6e0-b93824b5bb3a","Type":"ContainerStarted","Data":"0b0177e6459181ab5aafb86a75b97664456335907283042f7033643fcc360e67"}
Feb 03 12:42:43 crc kubenswrapper[4820]: I0203 12:42:43.290980 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-428hj" event={"ID":"03e26919-4005-4ccc-b6e0-b93824b5bb3a","Type":"ContainerStarted","Data":"e18b947ea84d453c5336699d75c704760e6fddedc99e507cb2b82a1bcb1d2344"}
Feb 03 12:42:44 crc kubenswrapper[4820]: I0203 12:42:44.302628 4820 generic.go:334] "Generic (PLEG): container finished" podID="03e26919-4005-4ccc-b6e0-b93824b5bb3a" containerID="e18b947ea84d453c5336699d75c704760e6fddedc99e507cb2b82a1bcb1d2344" exitCode=0
Feb 03 12:42:44 crc kubenswrapper[4820]: I0203 12:42:44.302693 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-428hj" event={"ID":"03e26919-4005-4ccc-b6e0-b93824b5bb3a","Type":"ContainerDied","Data":"e18b947ea84d453c5336699d75c704760e6fddedc99e507cb2b82a1bcb1d2344"}
Feb 03 12:42:44 crc kubenswrapper[4820]: I0203 12:42:44.686279 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-6rh9w"]
Feb 03 12:42:44 crc kubenswrapper[4820]: I0203 12:42:44.689019 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6rh9w"
Feb 03 12:42:44 crc kubenswrapper[4820]: I0203 12:42:44.715993 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6rh9w"]
Feb 03 12:42:44 crc kubenswrapper[4820]: I0203 12:42:44.855111 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0ccef75a-47bb-41de-9476-130d4cc20f53-utilities\") pod \"redhat-operators-6rh9w\" (UID: \"0ccef75a-47bb-41de-9476-130d4cc20f53\") " pod="openshift-marketplace/redhat-operators-6rh9w"
Feb 03 12:42:44 crc kubenswrapper[4820]: I0203 12:42:44.855177 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0ccef75a-47bb-41de-9476-130d4cc20f53-catalog-content\") pod \"redhat-operators-6rh9w\" (UID: \"0ccef75a-47bb-41de-9476-130d4cc20f53\") " pod="openshift-marketplace/redhat-operators-6rh9w"
Feb 03 12:42:44 crc kubenswrapper[4820]: I0203 12:42:44.855219 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mdd5d\" (UniqueName: \"kubernetes.io/projected/0ccef75a-47bb-41de-9476-130d4cc20f53-kube-api-access-mdd5d\") pod \"redhat-operators-6rh9w\" (UID: \"0ccef75a-47bb-41de-9476-130d4cc20f53\") " pod="openshift-marketplace/redhat-operators-6rh9w"
Feb 03 12:42:44 crc kubenswrapper[4820]: I0203 12:42:44.957295 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0ccef75a-47bb-41de-9476-130d4cc20f53-utilities\") pod \"redhat-operators-6rh9w\" (UID: \"0ccef75a-47bb-41de-9476-130d4cc20f53\") " pod="openshift-marketplace/redhat-operators-6rh9w"
Feb 03 12:42:44 crc kubenswrapper[4820]: I0203 12:42:44.957375 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0ccef75a-47bb-41de-9476-130d4cc20f53-catalog-content\") pod \"redhat-operators-6rh9w\" (UID: \"0ccef75a-47bb-41de-9476-130d4cc20f53\") " pod="openshift-marketplace/redhat-operators-6rh9w"
Feb 03 12:42:44 crc kubenswrapper[4820]: I0203 12:42:44.957434 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mdd5d\" (UniqueName: \"kubernetes.io/projected/0ccef75a-47bb-41de-9476-130d4cc20f53-kube-api-access-mdd5d\") pod \"redhat-operators-6rh9w\" (UID: \"0ccef75a-47bb-41de-9476-130d4cc20f53\") " pod="openshift-marketplace/redhat-operators-6rh9w"
Feb 03 12:42:44 crc kubenswrapper[4820]: I0203 12:42:44.958180 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0ccef75a-47bb-41de-9476-130d4cc20f53-utilities\") pod \"redhat-operators-6rh9w\" (UID: \"0ccef75a-47bb-41de-9476-130d4cc20f53\") " pod="openshift-marketplace/redhat-operators-6rh9w"
Feb 03 12:42:44 crc kubenswrapper[4820]: I0203 12:42:44.958476 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0ccef75a-47bb-41de-9476-130d4cc20f53-catalog-content\") pod \"redhat-operators-6rh9w\" (UID: \"0ccef75a-47bb-41de-9476-130d4cc20f53\") " pod="openshift-marketplace/redhat-operators-6rh9w"
Feb 03 12:42:44 crc kubenswrapper[4820]: I0203 12:42:44.991980 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mdd5d\" (UniqueName: \"kubernetes.io/projected/0ccef75a-47bb-41de-9476-130d4cc20f53-kube-api-access-mdd5d\") pod \"redhat-operators-6rh9w\" (UID: \"0ccef75a-47bb-41de-9476-130d4cc20f53\") " pod="openshift-marketplace/redhat-operators-6rh9w"
Feb 03 12:42:45 crc kubenswrapper[4820]: I0203 12:42:45.015593 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6rh9w"
Feb 03 12:42:45 crc kubenswrapper[4820]: I0203 12:42:45.319614 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-428hj" event={"ID":"03e26919-4005-4ccc-b6e0-b93824b5bb3a","Type":"ContainerStarted","Data":"2dc55aca1fb8c5ef237f955de124b8c151359096c6caea7cce22a07bde97c2ed"}
Feb 03 12:42:45 crc kubenswrapper[4820]: I0203 12:42:45.345943 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-428hj" podStartSLOduration=1.9063607930000002 podStartE2EDuration="4.345922726s" podCreationTimestamp="2026-02-03 12:42:41 +0000 UTC" firstStartedPulling="2026-02-03 12:42:42.277875342 +0000 UTC m=+2279.800951206" lastFinishedPulling="2026-02-03 12:42:44.717437285 +0000 UTC m=+2282.240513139" observedRunningTime="2026-02-03 12:42:45.340215103 +0000 UTC m=+2282.863290977" watchObservedRunningTime="2026-02-03 12:42:45.345922726 +0000 UTC m=+2282.868998590"
Feb 03 12:42:45 crc kubenswrapper[4820]: I0203 12:42:45.523113 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6rh9w"]
Feb 03 12:42:46 crc kubenswrapper[4820]: I0203 12:42:46.331959 4820 generic.go:334] "Generic (PLEG): container finished" podID="0ccef75a-47bb-41de-9476-130d4cc20f53" containerID="ba47d1a722d31eaafad816f29f32d05932d657548b4d7ea353a553ae62a0699c" exitCode=0
Feb 03 12:42:46 crc kubenswrapper[4820]: I0203 12:42:46.332176 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6rh9w" event={"ID":"0ccef75a-47bb-41de-9476-130d4cc20f53","Type":"ContainerDied","Data":"ba47d1a722d31eaafad816f29f32d05932d657548b4d7ea353a553ae62a0699c"}
Feb 03 12:42:46 crc kubenswrapper[4820]: I0203 12:42:46.333485 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6rh9w" event={"ID":"0ccef75a-47bb-41de-9476-130d4cc20f53","Type":"ContainerStarted","Data":"5fb9bca521fd66970fbad28269a4197646812fc421bdbdb7e21a3a9d4dcc6b24"}
Feb 03 12:42:47 crc kubenswrapper[4820]: I0203 12:42:47.344345 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6rh9w" event={"ID":"0ccef75a-47bb-41de-9476-130d4cc20f53","Type":"ContainerStarted","Data":"9792b3cfd960048de08b8dd62ec1b8421c7220ccbb2d5ab9711b2e64a032cf62"}
Feb 03 12:42:50 crc kubenswrapper[4820]: I0203 12:42:50.377053 4820 generic.go:334] "Generic (PLEG): container finished" podID="0ccef75a-47bb-41de-9476-130d4cc20f53" containerID="9792b3cfd960048de08b8dd62ec1b8421c7220ccbb2d5ab9711b2e64a032cf62" exitCode=0
Feb 03 12:42:50 crc kubenswrapper[4820]: I0203 12:42:50.377100 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6rh9w" event={"ID":"0ccef75a-47bb-41de-9476-130d4cc20f53","Type":"ContainerDied","Data":"9792b3cfd960048de08b8dd62ec1b8421c7220ccbb2d5ab9711b2e64a032cf62"}
event={"ID":"0ccef75a-47bb-41de-9476-130d4cc20f53","Type":"ContainerStarted","Data":"dacf7ba131127fb5cc893d97e055f0d423d479705e4e39ade4153aaed5340669"} Feb 03 12:42:51 crc kubenswrapper[4820]: I0203 12:42:51.420652 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-428hj" Feb 03 12:42:51 crc kubenswrapper[4820]: I0203 12:42:51.420703 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-428hj" Feb 03 12:42:51 crc kubenswrapper[4820]: I0203 12:42:51.424728 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-6rh9w" podStartSLOduration=2.894020779 podStartE2EDuration="7.424714085s" podCreationTimestamp="2026-02-03 12:42:44 +0000 UTC" firstStartedPulling="2026-02-03 12:42:46.334997247 +0000 UTC m=+2283.858073111" lastFinishedPulling="2026-02-03 12:42:50.865690553 +0000 UTC m=+2288.388766417" observedRunningTime="2026-02-03 12:42:51.42267354 +0000 UTC m=+2288.945749414" watchObservedRunningTime="2026-02-03 12:42:51.424714085 +0000 UTC m=+2288.947789949" Feb 03 12:42:51 crc kubenswrapper[4820]: I0203 12:42:51.476485 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-428hj" Feb 03 12:42:52 crc kubenswrapper[4820]: I0203 12:42:52.466717 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-428hj" Feb 03 12:42:53 crc kubenswrapper[4820]: I0203 12:42:53.291658 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-428hj"] Feb 03 12:42:55 crc kubenswrapper[4820]: I0203 12:42:55.016487 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-6rh9w" Feb 03 12:42:55 crc kubenswrapper[4820]: I0203 12:42:55.016864 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-6rh9w" Feb 03 12:42:55 crc kubenswrapper[4820]: I0203 12:42:55.183752 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-428hj" podUID="03e26919-4005-4ccc-b6e0-b93824b5bb3a" containerName="registry-server" containerID="cri-o://2dc55aca1fb8c5ef237f955de124b8c151359096c6caea7cce22a07bde97c2ed" gracePeriod=2 Feb 03 12:42:55 crc kubenswrapper[4820]: E0203 12:42:55.576720 4820 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod03e26919_4005_4ccc_b6e0_b93824b5bb3a.slice/crio-2dc55aca1fb8c5ef237f955de124b8c151359096c6caea7cce22a07bde97c2ed.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod03e26919_4005_4ccc_b6e0_b93824b5bb3a.slice/crio-conmon-2dc55aca1fb8c5ef237f955de124b8c151359096c6caea7cce22a07bde97c2ed.scope\": RecentStats: unable to find data in memory cache]" Feb 03 12:42:55 crc kubenswrapper[4820]: I0203 12:42:55.855770 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-428hj" Feb 03 12:42:56 crc kubenswrapper[4820]: I0203 12:42:56.005655 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/03e26919-4005-4ccc-b6e0-b93824b5bb3a-catalog-content\") pod \"03e26919-4005-4ccc-b6e0-b93824b5bb3a\" (UID: \"03e26919-4005-4ccc-b6e0-b93824b5bb3a\") " Feb 03 12:42:56 crc kubenswrapper[4820]: I0203 12:42:56.005708 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/03e26919-4005-4ccc-b6e0-b93824b5bb3a-utilities\") pod \"03e26919-4005-4ccc-b6e0-b93824b5bb3a\" (UID: \"03e26919-4005-4ccc-b6e0-b93824b5bb3a\") " Feb 03 12:42:56 crc kubenswrapper[4820]: I0203 12:42:56.005822 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mbs7d\" (UniqueName: \"kubernetes.io/projected/03e26919-4005-4ccc-b6e0-b93824b5bb3a-kube-api-access-mbs7d\") pod \"03e26919-4005-4ccc-b6e0-b93824b5bb3a\" (UID: \"03e26919-4005-4ccc-b6e0-b93824b5bb3a\") " Feb 03 12:42:56 crc kubenswrapper[4820]: I0203 12:42:56.006543 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/03e26919-4005-4ccc-b6e0-b93824b5bb3a-utilities" (OuterVolumeSpecName: "utilities") pod "03e26919-4005-4ccc-b6e0-b93824b5bb3a" (UID: "03e26919-4005-4ccc-b6e0-b93824b5bb3a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 12:42:56 crc kubenswrapper[4820]: I0203 12:42:56.013015 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03e26919-4005-4ccc-b6e0-b93824b5bb3a-kube-api-access-mbs7d" (OuterVolumeSpecName: "kube-api-access-mbs7d") pod "03e26919-4005-4ccc-b6e0-b93824b5bb3a" (UID: "03e26919-4005-4ccc-b6e0-b93824b5bb3a"). InnerVolumeSpecName "kube-api-access-mbs7d". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:42:56 crc kubenswrapper[4820]: I0203 12:42:56.079699 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/03e26919-4005-4ccc-b6e0-b93824b5bb3a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "03e26919-4005-4ccc-b6e0-b93824b5bb3a" (UID: "03e26919-4005-4ccc-b6e0-b93824b5bb3a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 12:42:56 crc kubenswrapper[4820]: I0203 12:42:56.108856 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mbs7d\" (UniqueName: \"kubernetes.io/projected/03e26919-4005-4ccc-b6e0-b93824b5bb3a-kube-api-access-mbs7d\") on node \"crc\" DevicePath \"\"" Feb 03 12:42:56 crc kubenswrapper[4820]: I0203 12:42:56.108908 4820 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/03e26919-4005-4ccc-b6e0-b93824b5bb3a-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 03 12:42:56 crc kubenswrapper[4820]: I0203 12:42:56.108918 4820 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/03e26919-4005-4ccc-b6e0-b93824b5bb3a-utilities\") on node \"crc\" DevicePath \"\"" Feb 03 12:42:56 crc kubenswrapper[4820]: I0203 12:42:56.194454 4820 generic.go:334] "Generic (PLEG): container finished" podID="03e26919-4005-4ccc-b6e0-b93824b5bb3a" containerID="2dc55aca1fb8c5ef237f955de124b8c151359096c6caea7cce22a07bde97c2ed" exitCode=0 Feb 03 12:42:56 crc kubenswrapper[4820]: I0203 12:42:56.194501 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-428hj" event={"ID":"03e26919-4005-4ccc-b6e0-b93824b5bb3a","Type":"ContainerDied","Data":"2dc55aca1fb8c5ef237f955de124b8c151359096c6caea7cce22a07bde97c2ed"} Feb 03 12:42:56 crc kubenswrapper[4820]: I0203 12:42:56.194520 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-428hj" Feb 03 12:42:56 crc kubenswrapper[4820]: I0203 12:42:56.194528 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-428hj" event={"ID":"03e26919-4005-4ccc-b6e0-b93824b5bb3a","Type":"ContainerDied","Data":"0b0177e6459181ab5aafb86a75b97664456335907283042f7033643fcc360e67"} Feb 03 12:42:56 crc kubenswrapper[4820]: I0203 12:42:56.194546 4820 scope.go:117] "RemoveContainer" containerID="2dc55aca1fb8c5ef237f955de124b8c151359096c6caea7cce22a07bde97c2ed" Feb 03 12:42:56 crc kubenswrapper[4820]: I0203 12:42:56.215664 4820 scope.go:117] "RemoveContainer" containerID="e18b947ea84d453c5336699d75c704760e6fddedc99e507cb2b82a1bcb1d2344" Feb 03 12:42:56 crc kubenswrapper[4820]: I0203 12:42:56.240170 4820 scope.go:117] "RemoveContainer" containerID="fd72edbf6767a4370b293bb0f139568b0d0e20c1ffe3e9bed01f974308d454f1" Feb 03 12:42:56 crc kubenswrapper[4820]: I0203 12:42:56.250094 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-428hj"] Feb 03 12:42:56 crc kubenswrapper[4820]: I0203 12:42:56.258586 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-428hj"] Feb 03 12:42:56 crc kubenswrapper[4820]: I0203 12:42:56.258581 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-6rh9w" podUID="0ccef75a-47bb-41de-9476-130d4cc20f53" containerName="registry-server" probeResult="failure" output=< Feb 03 12:42:56 crc kubenswrapper[4820]: timeout: failed to connect service ":50051" within 1s Feb 03 12:42:56 crc kubenswrapper[4820]: > Feb 03 12:42:56 crc kubenswrapper[4820]: I0203 12:42:56.292107 4820 scope.go:117] "RemoveContainer" containerID="2dc55aca1fb8c5ef237f955de124b8c151359096c6caea7cce22a07bde97c2ed" Feb 03 12:42:56 crc kubenswrapper[4820]: E0203 12:42:56.292782 4820 log.go:32] "ContainerStatus from runtime 
service failed" err="rpc error: code = NotFound desc = could not find container \"2dc55aca1fb8c5ef237f955de124b8c151359096c6caea7cce22a07bde97c2ed\": container with ID starting with 2dc55aca1fb8c5ef237f955de124b8c151359096c6caea7cce22a07bde97c2ed not found: ID does not exist" containerID="2dc55aca1fb8c5ef237f955de124b8c151359096c6caea7cce22a07bde97c2ed" Feb 03 12:42:56 crc kubenswrapper[4820]: I0203 12:42:56.292835 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2dc55aca1fb8c5ef237f955de124b8c151359096c6caea7cce22a07bde97c2ed"} err="failed to get container status \"2dc55aca1fb8c5ef237f955de124b8c151359096c6caea7cce22a07bde97c2ed\": rpc error: code = NotFound desc = could not find container \"2dc55aca1fb8c5ef237f955de124b8c151359096c6caea7cce22a07bde97c2ed\": container with ID starting with 2dc55aca1fb8c5ef237f955de124b8c151359096c6caea7cce22a07bde97c2ed not found: ID does not exist" Feb 03 12:42:56 crc kubenswrapper[4820]: I0203 12:42:56.292867 4820 scope.go:117] "RemoveContainer" containerID="e18b947ea84d453c5336699d75c704760e6fddedc99e507cb2b82a1bcb1d2344" Feb 03 12:42:56 crc kubenswrapper[4820]: E0203 12:42:56.293343 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e18b947ea84d453c5336699d75c704760e6fddedc99e507cb2b82a1bcb1d2344\": container with ID starting with e18b947ea84d453c5336699d75c704760e6fddedc99e507cb2b82a1bcb1d2344 not found: ID does not exist" containerID="e18b947ea84d453c5336699d75c704760e6fddedc99e507cb2b82a1bcb1d2344" Feb 03 12:42:56 crc kubenswrapper[4820]: I0203 12:42:56.293398 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e18b947ea84d453c5336699d75c704760e6fddedc99e507cb2b82a1bcb1d2344"} err="failed to get container status \"e18b947ea84d453c5336699d75c704760e6fddedc99e507cb2b82a1bcb1d2344\": rpc error: code = NotFound desc = could not find container \"e18b947ea84d453c5336699d75c704760e6fddedc99e507cb2b82a1bcb1d2344\": container with ID starting with e18b947ea84d453c5336699d75c704760e6fddedc99e507cb2b82a1bcb1d2344 not found: ID does not exist" Feb 03 12:42:56 crc kubenswrapper[4820]: I0203 12:42:56.293415 4820 scope.go:117] "RemoveContainer" containerID="fd72edbf6767a4370b293bb0f139568b0d0e20c1ffe3e9bed01f974308d454f1" Feb 03 12:42:56 crc kubenswrapper[4820]: E0203 12:42:56.293717 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fd72edbf6767a4370b293bb0f139568b0d0e20c1ffe3e9bed01f974308d454f1\": container with ID starting with fd72edbf6767a4370b293bb0f139568b0d0e20c1ffe3e9bed01f974308d454f1 not found: ID does not exist" containerID="fd72edbf6767a4370b293bb0f139568b0d0e20c1ffe3e9bed01f974308d454f1" Feb 03 12:42:56 crc kubenswrapper[4820]: I0203 12:42:56.293758 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd72edbf6767a4370b293bb0f139568b0d0e20c1ffe3e9bed01f974308d454f1"} err="failed to get container status \"fd72edbf6767a4370b293bb0f139568b0d0e20c1ffe3e9bed01f974308d454f1\": rpc error: code = NotFound desc = could not find container \"fd72edbf6767a4370b293bb0f139568b0d0e20c1ffe3e9bed01f974308d454f1\": container with ID starting with fd72edbf6767a4370b293bb0f139568b0d0e20c1ffe3e9bed01f974308d454f1 not found: ID does not exist" Feb 03 12:42:57 crc kubenswrapper[4820]: I0203 12:42:57.158489 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="03e26919-4005-4ccc-b6e0-b93824b5bb3a" path="/var/lib/kubelet/pods/03e26919-4005-4ccc-b6e0-b93824b5bb3a/volumes" Feb 03 12:43:01 crc kubenswrapper[4820]: I0203 12:43:01.366036 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qj7xr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 12:43:01 crc kubenswrapper[4820]: I0203 12:43:01.366655 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 12:43:01 crc kubenswrapper[4820]: I0203 12:43:01.366722 4820 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" Feb 03 12:43:01 crc kubenswrapper[4820]: I0203 12:43:01.367881 4820 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"245331ecea7ae2a2547766477b0478b53c635e1745e222cb8a06ff036be8bc77"} pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 03 12:43:01 crc kubenswrapper[4820]: I0203 12:43:01.367988 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" containerName="machine-config-daemon" containerID="cri-o://245331ecea7ae2a2547766477b0478b53c635e1745e222cb8a06ff036be8bc77" gracePeriod=600 Feb 03 12:43:01 crc kubenswrapper[4820]: E0203 12:43:01.492827 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qj7xr_openshift-machine-config-operator(2c02def6-29f2-448e-80ec-0c8ee290f053)\"" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" Feb 03 12:43:02 crc kubenswrapper[4820]: I0203 12:43:02.266665 4820 generic.go:334] "Generic (PLEG): container finished" podID="2c02def6-29f2-448e-80ec-0c8ee290f053" containerID="245331ecea7ae2a2547766477b0478b53c635e1745e222cb8a06ff036be8bc77" exitCode=0 Feb 03 12:43:02 crc kubenswrapper[4820]: I0203 12:43:02.266710 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" event={"ID":"2c02def6-29f2-448e-80ec-0c8ee290f053","Type":"ContainerDied","Data":"245331ecea7ae2a2547766477b0478b53c635e1745e222cb8a06ff036be8bc77"} Feb 03 12:43:02 crc kubenswrapper[4820]: I0203 12:43:02.266791 4820 scope.go:117] "RemoveContainer" containerID="8f34688920b0d8f1ba8313bfd5660e1745625b1ce2d5d457facdc7ba2bbd910d" Feb 03 12:43:02 crc kubenswrapper[4820]: I0203 12:43:02.267641 4820 scope.go:117] "RemoveContainer" containerID="245331ecea7ae2a2547766477b0478b53c635e1745e222cb8a06ff036be8bc77" Feb 03 12:43:02 crc kubenswrapper[4820]: E0203 12:43:02.267974 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qj7xr_openshift-machine-config-operator(2c02def6-29f2-448e-80ec-0c8ee290f053)\"" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" Feb 03 12:43:05 crc kubenswrapper[4820]: I0203 12:43:05.123498 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-6rh9w" Feb 03 12:43:05 crc kubenswrapper[4820]: I0203 12:43:05.174418 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-6rh9w" Feb 03 12:43:08 crc kubenswrapper[4820]: I0203 12:43:08.007938 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6rh9w"] Feb 03 12:43:08 crc kubenswrapper[4820]: I0203 12:43:08.010657 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-6rh9w" podUID="0ccef75a-47bb-41de-9476-130d4cc20f53" containerName="registry-server" containerID="cri-o://dacf7ba131127fb5cc893d97e055f0d423d479705e4e39ade4153aaed5340669" gracePeriod=2 Feb 03 12:43:08 crc kubenswrapper[4820]: I0203 12:43:08.490711 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6rh9w" Feb 03 12:43:08 crc kubenswrapper[4820]: I0203 12:43:08.506667 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0ccef75a-47bb-41de-9476-130d4cc20f53-catalog-content\") pod \"0ccef75a-47bb-41de-9476-130d4cc20f53\" (UID: \"0ccef75a-47bb-41de-9476-130d4cc20f53\") " Feb 03 12:43:08 crc kubenswrapper[4820]: I0203 12:43:08.506786 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0ccef75a-47bb-41de-9476-130d4cc20f53-utilities\") pod \"0ccef75a-47bb-41de-9476-130d4cc20f53\" (UID: \"0ccef75a-47bb-41de-9476-130d4cc20f53\") " Feb 03 12:43:08 crc kubenswrapper[4820]: I0203 12:43:08.506962 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mdd5d\" (UniqueName: \"kubernetes.io/projected/0ccef75a-47bb-41de-9476-130d4cc20f53-kube-api-access-mdd5d\") pod \"0ccef75a-47bb-41de-9476-130d4cc20f53\" (UID: \"0ccef75a-47bb-41de-9476-130d4cc20f53\") " Feb 03 12:43:08 crc kubenswrapper[4820]: I0203 12:43:08.509835 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0ccef75a-47bb-41de-9476-130d4cc20f53-utilities" (OuterVolumeSpecName: "utilities") pod "0ccef75a-47bb-41de-9476-130d4cc20f53" (UID: "0ccef75a-47bb-41de-9476-130d4cc20f53"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 12:43:08 crc kubenswrapper[4820]: I0203 12:43:08.749927 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ccef75a-47bb-41de-9476-130d4cc20f53-kube-api-access-mdd5d" (OuterVolumeSpecName: "kube-api-access-mdd5d") pod "0ccef75a-47bb-41de-9476-130d4cc20f53" (UID: "0ccef75a-47bb-41de-9476-130d4cc20f53"). InnerVolumeSpecName "kube-api-access-mdd5d". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:43:08 crc kubenswrapper[4820]: I0203 12:43:08.756512 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mdd5d\" (UniqueName: \"kubernetes.io/projected/0ccef75a-47bb-41de-9476-130d4cc20f53-kube-api-access-mdd5d\") on node \"crc\" DevicePath \"\"" Feb 03 12:43:08 crc kubenswrapper[4820]: I0203 12:43:08.756703 4820 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0ccef75a-47bb-41de-9476-130d4cc20f53-utilities\") on node \"crc\" DevicePath \"\"" Feb 03 12:43:08 crc kubenswrapper[4820]: I0203 12:43:08.767571 4820 generic.go:334] "Generic (PLEG): container finished" podID="0ccef75a-47bb-41de-9476-130d4cc20f53" containerID="dacf7ba131127fb5cc893d97e055f0d423d479705e4e39ade4153aaed5340669" exitCode=0 Feb 03 12:43:08 crc kubenswrapper[4820]: I0203 12:43:08.767643 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6rh9w" event={"ID":"0ccef75a-47bb-41de-9476-130d4cc20f53","Type":"ContainerDied","Data":"dacf7ba131127fb5cc893d97e055f0d423d479705e4e39ade4153aaed5340669"} Feb 03 12:43:08 crc kubenswrapper[4820]: I0203 12:43:08.767679 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6rh9w" event={"ID":"0ccef75a-47bb-41de-9476-130d4cc20f53","Type":"ContainerDied","Data":"5fb9bca521fd66970fbad28269a4197646812fc421bdbdb7e21a3a9d4dcc6b24"} Feb 03 12:43:08 crc kubenswrapper[4820]: I0203 12:43:08.767703 4820 scope.go:117] "RemoveContainer" containerID="dacf7ba131127fb5cc893d97e055f0d423d479705e4e39ade4153aaed5340669" Feb 03 12:43:08 crc kubenswrapper[4820]: I0203 12:43:08.767949 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6rh9w" Feb 03 12:43:08 crc kubenswrapper[4820]: I0203 12:43:08.817779 4820 scope.go:117] "RemoveContainer" containerID="9792b3cfd960048de08b8dd62ec1b8421c7220ccbb2d5ab9711b2e64a032cf62" Feb 03 12:43:08 crc kubenswrapper[4820]: I0203 12:43:08.854732 4820 scope.go:117] "RemoveContainer" containerID="ba47d1a722d31eaafad816f29f32d05932d657548b4d7ea353a553ae62a0699c" Feb 03 12:43:08 crc kubenswrapper[4820]: I0203 12:43:08.882979 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0ccef75a-47bb-41de-9476-130d4cc20f53-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0ccef75a-47bb-41de-9476-130d4cc20f53" (UID: "0ccef75a-47bb-41de-9476-130d4cc20f53"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 12:43:08 crc kubenswrapper[4820]: I0203 12:43:08.899453 4820 scope.go:117] "RemoveContainer" containerID="dacf7ba131127fb5cc893d97e055f0d423d479705e4e39ade4153aaed5340669" Feb 03 12:43:08 crc kubenswrapper[4820]: E0203 12:43:08.900157 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dacf7ba131127fb5cc893d97e055f0d423d479705e4e39ade4153aaed5340669\": container with ID starting with dacf7ba131127fb5cc893d97e055f0d423d479705e4e39ade4153aaed5340669 not found: ID does not exist" containerID="dacf7ba131127fb5cc893d97e055f0d423d479705e4e39ade4153aaed5340669" Feb 03 12:43:08 crc kubenswrapper[4820]: I0203 12:43:08.900211 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dacf7ba131127fb5cc893d97e055f0d423d479705e4e39ade4153aaed5340669"} err="failed to get container status \"dacf7ba131127fb5cc893d97e055f0d423d479705e4e39ade4153aaed5340669\": rpc error: code = NotFound desc = could not find container \"dacf7ba131127fb5cc893d97e055f0d423d479705e4e39ade4153aaed5340669\": container with ID starting with dacf7ba131127fb5cc893d97e055f0d423d479705e4e39ade4153aaed5340669 not found: ID does not exist" Feb 03 12:43:08 crc kubenswrapper[4820]: I0203 12:43:08.900248 4820 scope.go:117] "RemoveContainer" containerID="9792b3cfd960048de08b8dd62ec1b8421c7220ccbb2d5ab9711b2e64a032cf62" Feb 03 12:43:08 crc kubenswrapper[4820]: E0203 12:43:08.900735 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9792b3cfd960048de08b8dd62ec1b8421c7220ccbb2d5ab9711b2e64a032cf62\": container with ID starting with 9792b3cfd960048de08b8dd62ec1b8421c7220ccbb2d5ab9711b2e64a032cf62 not found: ID does not exist" containerID="9792b3cfd960048de08b8dd62ec1b8421c7220ccbb2d5ab9711b2e64a032cf62" Feb 03 12:43:08 crc kubenswrapper[4820]: I0203 12:43:08.900765 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9792b3cfd960048de08b8dd62ec1b8421c7220ccbb2d5ab9711b2e64a032cf62"} err="failed to get container status \"9792b3cfd960048de08b8dd62ec1b8421c7220ccbb2d5ab9711b2e64a032cf62\": rpc error: code = NotFound desc = could not find container \"9792b3cfd960048de08b8dd62ec1b8421c7220ccbb2d5ab9711b2e64a032cf62\": container with ID starting with 9792b3cfd960048de08b8dd62ec1b8421c7220ccbb2d5ab9711b2e64a032cf62 not found: ID does not exist" Feb 03 12:43:08 crc kubenswrapper[4820]: I0203 12:43:08.900814 4820 scope.go:117] "RemoveContainer" containerID="ba47d1a722d31eaafad816f29f32d05932d657548b4d7ea353a553ae62a0699c" Feb 03 12:43:08 crc kubenswrapper[4820]: E0203 12:43:08.901350 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ba47d1a722d31eaafad816f29f32d05932d657548b4d7ea353a553ae62a0699c\": container with ID starting with ba47d1a722d31eaafad816f29f32d05932d657548b4d7ea353a553ae62a0699c not found: ID does not exist" containerID="ba47d1a722d31eaafad816f29f32d05932d657548b4d7ea353a553ae62a0699c" Feb 03 12:43:08 crc kubenswrapper[4820]: I0203 12:43:08.901391 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba47d1a722d31eaafad816f29f32d05932d657548b4d7ea353a553ae62a0699c"} err="failed to get container status \"ba47d1a722d31eaafad816f29f32d05932d657548b4d7ea353a553ae62a0699c\": rpc error: code = NotFound desc = could not 
find container \"ba47d1a722d31eaafad816f29f32d05932d657548b4d7ea353a553ae62a0699c\": container with ID starting with ba47d1a722d31eaafad816f29f32d05932d657548b4d7ea353a553ae62a0699c not found: ID does not exist" Feb 03 12:43:08 crc kubenswrapper[4820]: I0203 12:43:08.961566 4820 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0ccef75a-47bb-41de-9476-130d4cc20f53-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 03 12:43:09 crc kubenswrapper[4820]: I0203 12:43:09.104597 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6rh9w"] Feb 03 12:43:09 crc kubenswrapper[4820]: I0203 12:43:09.112812 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-6rh9w"] Feb 03 12:43:09 crc kubenswrapper[4820]: I0203 12:43:09.157140 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0ccef75a-47bb-41de-9476-130d4cc20f53" path="/var/lib/kubelet/pods/0ccef75a-47bb-41de-9476-130d4cc20f53/volumes" Feb 03 12:43:15 crc kubenswrapper[4820]: I0203 12:43:15.142948 4820 scope.go:117] "RemoveContainer" containerID="245331ecea7ae2a2547766477b0478b53c635e1745e222cb8a06ff036be8bc77" Feb 03 12:43:15 crc kubenswrapper[4820]: E0203 12:43:15.143753 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qj7xr_openshift-machine-config-operator(2c02def6-29f2-448e-80ec-0c8ee290f053)\"" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" Feb 03 12:43:24 crc kubenswrapper[4820]: I0203 12:43:24.051374 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-thjnl"] Feb 03 12:43:24 crc kubenswrapper[4820]: I0203 12:43:24.061259 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-thjnl"] Feb 03 12:43:25 crc kubenswrapper[4820]: I0203 12:43:25.154861 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="476be9fa-ea08-41c6-b804-37c313076dce" path="/var/lib/kubelet/pods/476be9fa-ea08-41c6-b804-37c313076dce/volumes" Feb 03 12:43:26 crc kubenswrapper[4820]: I0203 12:43:26.059806 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-6c6a-account-create-update-h45fh"] Feb 03 12:43:26 crc kubenswrapper[4820]: I0203 12:43:26.072904 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-7f1d-account-create-update-wjmlt"] Feb 03 12:43:26 crc kubenswrapper[4820]: I0203 12:43:26.088876 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-knmtw"] Feb 03 12:43:26 crc kubenswrapper[4820]: I0203 12:43:26.099010 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-trc87"] Feb 03 12:43:26 crc kubenswrapper[4820]: I0203 12:43:26.109275 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-6c6a-account-create-update-h45fh"] Feb 03 12:43:26 crc kubenswrapper[4820]: I0203 12:43:26.119478 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-knmtw"] Feb 03 12:43:26 crc kubenswrapper[4820]: I0203 12:43:26.132957 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-trc87"] Feb 03 12:43:26 crc kubenswrapper[4820]: I0203 
12:43:26.138424 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-7f1d-account-create-update-wjmlt"] Feb 03 12:43:26 crc kubenswrapper[4820]: I0203 12:43:26.143834 4820 scope.go:117] "RemoveContainer" containerID="245331ecea7ae2a2547766477b0478b53c635e1745e222cb8a06ff036be8bc77" Feb 03 12:43:26 crc kubenswrapper[4820]: E0203 12:43:26.144088 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qj7xr_openshift-machine-config-operator(2c02def6-29f2-448e-80ec-0c8ee290f053)\"" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" Feb 03 12:43:27 crc kubenswrapper[4820]: I0203 12:43:27.039437 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-ad72-account-create-update-6ngd9"] Feb 03 12:43:27 crc kubenswrapper[4820]: I0203 12:43:27.051096 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-ad72-account-create-update-6ngd9"] Feb 03 12:43:27 crc kubenswrapper[4820]: I0203 12:43:27.155043 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="34f8614d-0d83-4dc9-80cb-12e0d2672b13" path="/var/lib/kubelet/pods/34f8614d-0d83-4dc9-80cb-12e0d2672b13/volumes" Feb 03 12:43:27 crc kubenswrapper[4820]: I0203 12:43:27.155708 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="353ec1c9-2e22-4116-b0d7-7d215237a58f" path="/var/lib/kubelet/pods/353ec1c9-2e22-4116-b0d7-7d215237a58f/volumes" Feb 03 12:43:27 crc kubenswrapper[4820]: I0203 12:43:27.156344 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="38ac594b-e515-44b4-856f-b57f5f6d5049" path="/var/lib/kubelet/pods/38ac594b-e515-44b4-856f-b57f5f6d5049/volumes" Feb 03 12:43:27 crc kubenswrapper[4820]: I0203 12:43:27.157075 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b8d524a9-aabb-4d3a-a443-e4de8a5ababc" path="/var/lib/kubelet/pods/b8d524a9-aabb-4d3a-a443-e4de8a5ababc/volumes" Feb 03 12:43:27 crc kubenswrapper[4820]: I0203 12:43:27.158453 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e43cf7d4-e153-434c-a76e-96e2cc27316e" path="/var/lib/kubelet/pods/e43cf7d4-e153-434c-a76e-96e2cc27316e/volumes" Feb 03 12:43:29 crc kubenswrapper[4820]: I0203 12:43:29.630810 4820 scope.go:117] "RemoveContainer" containerID="982e73684ac883c8859e9043aa66f3a24513cc4cac4c16f08d6b209bc5be7713" Feb 03 12:43:29 crc kubenswrapper[4820]: I0203 12:43:29.659766 4820 scope.go:117] "RemoveContainer" containerID="66aa0e67796be02e8d215b4c5293be1484e4dc2d58ee0866551f95ee84156d46" Feb 03 12:43:29 crc kubenswrapper[4820]: I0203 12:43:29.720846 4820 scope.go:117] "RemoveContainer" containerID="6be7235e46990b33d87d4a223f1f2835db0e767cc5efd18fdf5e16c86908905c" Feb 03 12:43:29 crc kubenswrapper[4820]: I0203 12:43:29.778778 4820 scope.go:117] "RemoveContainer" containerID="dc0725989584c1def9cf3a5c11d2f816571c066143ababc3fe22badb7117401e" Feb 03 12:43:29 crc kubenswrapper[4820]: I0203 12:43:29.826380 4820 scope.go:117] "RemoveContainer" containerID="48992d5877b5656608d8f408a9e84644b6e3478f3e022688db1fd710cf9340e3" Feb 03 12:43:29 crc kubenswrapper[4820]: I0203 12:43:29.937784 4820 scope.go:117] "RemoveContainer" containerID="ac4dedfd5518233f049989a6c43d31fa77608ad03f38d183d70a2195be319f0c" Feb 03 12:43:37 crc kubenswrapper[4820]: I0203 12:43:37.145320 4820 
scope.go:117] "RemoveContainer" containerID="245331ecea7ae2a2547766477b0478b53c635e1745e222cb8a06ff036be8bc77" Feb 03 12:43:37 crc kubenswrapper[4820]: E0203 12:43:37.146123 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qj7xr_openshift-machine-config-operator(2c02def6-29f2-448e-80ec-0c8ee290f053)\"" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" Feb 03 12:43:52 crc kubenswrapper[4820]: I0203 12:43:52.143953 4820 scope.go:117] "RemoveContainer" containerID="245331ecea7ae2a2547766477b0478b53c635e1745e222cb8a06ff036be8bc77" Feb 03 12:43:52 crc kubenswrapper[4820]: E0203 12:43:52.146151 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qj7xr_openshift-machine-config-operator(2c02def6-29f2-448e-80ec-0c8ee290f053)\"" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" Feb 03 12:43:58 crc kubenswrapper[4820]: I0203 12:43:58.141869 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-ggt64"] Feb 03 12:43:58 crc kubenswrapper[4820]: E0203 12:43:58.143025 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ccef75a-47bb-41de-9476-130d4cc20f53" containerName="extract-content" Feb 03 12:43:58 crc kubenswrapper[4820]: I0203 12:43:58.143060 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ccef75a-47bb-41de-9476-130d4cc20f53" containerName="extract-content" Feb 03 12:43:58 crc kubenswrapper[4820]: E0203 12:43:58.143077 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03e26919-4005-4ccc-b6e0-b93824b5bb3a" containerName="extract-utilities" Feb 03 12:43:58 crc kubenswrapper[4820]: I0203 12:43:58.143086 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="03e26919-4005-4ccc-b6e0-b93824b5bb3a" containerName="extract-utilities" Feb 03 12:43:58 crc kubenswrapper[4820]: E0203 12:43:58.143107 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ccef75a-47bb-41de-9476-130d4cc20f53" containerName="extract-utilities" Feb 03 12:43:58 crc kubenswrapper[4820]: I0203 12:43:58.143115 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ccef75a-47bb-41de-9476-130d4cc20f53" containerName="extract-utilities" Feb 03 12:43:58 crc kubenswrapper[4820]: E0203 12:43:58.143135 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03e26919-4005-4ccc-b6e0-b93824b5bb3a" containerName="registry-server" Feb 03 12:43:58 crc kubenswrapper[4820]: I0203 12:43:58.143143 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="03e26919-4005-4ccc-b6e0-b93824b5bb3a" containerName="registry-server" Feb 03 12:43:58 crc kubenswrapper[4820]: E0203 12:43:58.143157 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ccef75a-47bb-41de-9476-130d4cc20f53" containerName="registry-server" Feb 03 12:43:58 crc kubenswrapper[4820]: I0203 12:43:58.143165 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ccef75a-47bb-41de-9476-130d4cc20f53" containerName="registry-server" Feb 03 12:43:58 crc kubenswrapper[4820]: E0203 12:43:58.143199 4820 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="03e26919-4005-4ccc-b6e0-b93824b5bb3a" containerName="extract-content" Feb 03 12:43:58 crc kubenswrapper[4820]: I0203 12:43:58.143208 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="03e26919-4005-4ccc-b6e0-b93824b5bb3a" containerName="extract-content" Feb 03 12:43:58 crc kubenswrapper[4820]: I0203 12:43:58.143469 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="03e26919-4005-4ccc-b6e0-b93824b5bb3a" containerName="registry-server" Feb 03 12:43:58 crc kubenswrapper[4820]: I0203 12:43:58.143494 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ccef75a-47bb-41de-9476-130d4cc20f53" containerName="registry-server" Feb 03 12:43:58 crc kubenswrapper[4820]: I0203 12:43:58.145217 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ggt64" Feb 03 12:43:58 crc kubenswrapper[4820]: I0203 12:43:58.154194 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ggt64"] Feb 03 12:43:58 crc kubenswrapper[4820]: I0203 12:43:58.170847 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9aae7a2-1328-46d4-b599-fca3f15e85f0-utilities\") pod \"certified-operators-ggt64\" (UID: \"e9aae7a2-1328-46d4-b599-fca3f15e85f0\") " pod="openshift-marketplace/certified-operators-ggt64" Feb 03 12:43:58 crc kubenswrapper[4820]: I0203 12:43:58.170926 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9aae7a2-1328-46d4-b599-fca3f15e85f0-catalog-content\") pod \"certified-operators-ggt64\" (UID: \"e9aae7a2-1328-46d4-b599-fca3f15e85f0\") " pod="openshift-marketplace/certified-operators-ggt64" Feb 03 12:43:58 crc kubenswrapper[4820]: I0203 12:43:58.171094 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2n548\" (UniqueName: \"kubernetes.io/projected/e9aae7a2-1328-46d4-b599-fca3f15e85f0-kube-api-access-2n548\") pod \"certified-operators-ggt64\" (UID: \"e9aae7a2-1328-46d4-b599-fca3f15e85f0\") " pod="openshift-marketplace/certified-operators-ggt64" Feb 03 12:43:58 crc kubenswrapper[4820]: I0203 12:43:58.272927 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9aae7a2-1328-46d4-b599-fca3f15e85f0-utilities\") pod \"certified-operators-ggt64\" (UID: \"e9aae7a2-1328-46d4-b599-fca3f15e85f0\") " pod="openshift-marketplace/certified-operators-ggt64" Feb 03 12:43:58 crc kubenswrapper[4820]: I0203 12:43:58.272983 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9aae7a2-1328-46d4-b599-fca3f15e85f0-catalog-content\") pod \"certified-operators-ggt64\" (UID: \"e9aae7a2-1328-46d4-b599-fca3f15e85f0\") " pod="openshift-marketplace/certified-operators-ggt64" Feb 03 12:43:58 crc kubenswrapper[4820]: I0203 12:43:58.273081 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2n548\" (UniqueName: \"kubernetes.io/projected/e9aae7a2-1328-46d4-b599-fca3f15e85f0-kube-api-access-2n548\") pod \"certified-operators-ggt64\" (UID: \"e9aae7a2-1328-46d4-b599-fca3f15e85f0\") " pod="openshift-marketplace/certified-operators-ggt64" Feb 03 12:43:58 crc kubenswrapper[4820]: I0203 12:43:58.273567 
4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9aae7a2-1328-46d4-b599-fca3f15e85f0-utilities\") pod \"certified-operators-ggt64\" (UID: \"e9aae7a2-1328-46d4-b599-fca3f15e85f0\") " pod="openshift-marketplace/certified-operators-ggt64" Feb 03 12:43:58 crc kubenswrapper[4820]: I0203 12:43:58.273782 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9aae7a2-1328-46d4-b599-fca3f15e85f0-catalog-content\") pod \"certified-operators-ggt64\" (UID: \"e9aae7a2-1328-46d4-b599-fca3f15e85f0\") " pod="openshift-marketplace/certified-operators-ggt64" Feb 03 12:43:58 crc kubenswrapper[4820]: I0203 12:43:58.307706 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2n548\" (UniqueName: \"kubernetes.io/projected/e9aae7a2-1328-46d4-b599-fca3f15e85f0-kube-api-access-2n548\") pod \"certified-operators-ggt64\" (UID: \"e9aae7a2-1328-46d4-b599-fca3f15e85f0\") " pod="openshift-marketplace/certified-operators-ggt64" Feb 03 12:43:58 crc kubenswrapper[4820]: I0203 12:43:58.473800 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ggt64" Feb 03 12:43:59 crc kubenswrapper[4820]: I0203 12:43:59.293160 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ggt64"] Feb 03 12:43:59 crc kubenswrapper[4820]: I0203 12:43:59.475488 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ggt64" event={"ID":"e9aae7a2-1328-46d4-b599-fca3f15e85f0","Type":"ContainerStarted","Data":"fbf0d0c9704933cef9de49256178a185ae30e83dc7d4e9c8c5b8affe7816695c"} Feb 03 12:44:00 crc kubenswrapper[4820]: I0203 12:44:00.486386 4820 generic.go:334] "Generic (PLEG): container finished" podID="e9aae7a2-1328-46d4-b599-fca3f15e85f0" containerID="2bf1b055465ea5c7212e39fc1631a88de7304e0e0da6edffc155265aea42b513" exitCode=0 Feb 03 12:44:00 crc kubenswrapper[4820]: I0203 12:44:00.486602 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ggt64" event={"ID":"e9aae7a2-1328-46d4-b599-fca3f15e85f0","Type":"ContainerDied","Data":"2bf1b055465ea5c7212e39fc1631a88de7304e0e0da6edffc155265aea42b513"} Feb 03 12:44:01 crc kubenswrapper[4820]: I0203 12:44:01.499334 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ggt64" event={"ID":"e9aae7a2-1328-46d4-b599-fca3f15e85f0","Type":"ContainerStarted","Data":"f4a442054002d262b5d93f5578cf7d3a07f56a6d0d7cb3d34349f27a3547be36"} Feb 03 12:44:03 crc kubenswrapper[4820]: I0203 12:44:03.532288 4820 generic.go:334] "Generic (PLEG): container finished" podID="e9aae7a2-1328-46d4-b599-fca3f15e85f0" containerID="f4a442054002d262b5d93f5578cf7d3a07f56a6d0d7cb3d34349f27a3547be36" exitCode=0 Feb 03 12:44:03 crc kubenswrapper[4820]: I0203 12:44:03.532589 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ggt64" event={"ID":"e9aae7a2-1328-46d4-b599-fca3f15e85f0","Type":"ContainerDied","Data":"f4a442054002d262b5d93f5578cf7d3a07f56a6d0d7cb3d34349f27a3547be36"} Feb 03 12:44:04 crc kubenswrapper[4820]: I0203 12:44:04.654365 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ggt64" 
event={"ID":"e9aae7a2-1328-46d4-b599-fca3f15e85f0","Type":"ContainerStarted","Data":"4da91f52ea2da30b829b069cd51e4d164a0756fb55832d727da3c212683c3823"} Feb 03 12:44:04 crc kubenswrapper[4820]: I0203 12:44:04.674629 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-ggt64" podStartSLOduration=2.973900383 podStartE2EDuration="6.674605952s" podCreationTimestamp="2026-02-03 12:43:58 +0000 UTC" firstStartedPulling="2026-02-03 12:44:00.488491573 +0000 UTC m=+2358.011567437" lastFinishedPulling="2026-02-03 12:44:04.189197142 +0000 UTC m=+2361.712273006" observedRunningTime="2026-02-03 12:44:04.674081687 +0000 UTC m=+2362.197157571" watchObservedRunningTime="2026-02-03 12:44:04.674605952 +0000 UTC m=+2362.197681816" Feb 03 12:44:06 crc kubenswrapper[4820]: I0203 12:44:06.143098 4820 scope.go:117] "RemoveContainer" containerID="245331ecea7ae2a2547766477b0478b53c635e1745e222cb8a06ff036be8bc77" Feb 03 12:44:06 crc kubenswrapper[4820]: E0203 12:44:06.143734 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qj7xr_openshift-machine-config-operator(2c02def6-29f2-448e-80ec-0c8ee290f053)\"" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" Feb 03 12:44:08 crc kubenswrapper[4820]: I0203 12:44:08.475716 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-ggt64" Feb 03 12:44:08 crc kubenswrapper[4820]: I0203 12:44:08.476052 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-ggt64" Feb 03 12:44:08 crc kubenswrapper[4820]: I0203 12:44:08.528424 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-ggt64" Feb 03 12:44:10 crc kubenswrapper[4820]: I0203 12:44:10.896254 4820 generic.go:334] "Generic (PLEG): container finished" podID="c7b75829-d001-4e04-9850-44e986677f48" containerID="45f8aee4cf498534c4e0f242f562ba87435d11ef6a16513da50789336ac6f8e0" exitCode=0 Feb 03 12:44:10 crc kubenswrapper[4820]: I0203 12:44:10.896426 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hwmx7" event={"ID":"c7b75829-d001-4e04-9850-44e986677f48","Type":"ContainerDied","Data":"45f8aee4cf498534c4e0f242f562ba87435d11ef6a16513da50789336ac6f8e0"} Feb 03 12:44:12 crc kubenswrapper[4820]: I0203 12:44:12.753137 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hwmx7" Feb 03 12:44:12 crc kubenswrapper[4820]: I0203 12:44:12.891157 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c7b75829-d001-4e04-9850-44e986677f48-inventory\") pod \"c7b75829-d001-4e04-9850-44e986677f48\" (UID: \"c7b75829-d001-4e04-9850-44e986677f48\") " Feb 03 12:44:12 crc kubenswrapper[4820]: I0203 12:44:12.891259 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dpk7c\" (UniqueName: \"kubernetes.io/projected/c7b75829-d001-4e04-9850-44e986677f48-kube-api-access-dpk7c\") pod \"c7b75829-d001-4e04-9850-44e986677f48\" (UID: \"c7b75829-d001-4e04-9850-44e986677f48\") " Feb 03 12:44:12 crc kubenswrapper[4820]: I0203 12:44:12.891440 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c7b75829-d001-4e04-9850-44e986677f48-ssh-key-openstack-edpm-ipam\") pod \"c7b75829-d001-4e04-9850-44e986677f48\" (UID: \"c7b75829-d001-4e04-9850-44e986677f48\") " Feb 03 12:44:12 crc kubenswrapper[4820]: I0203 12:44:12.898362 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c7b75829-d001-4e04-9850-44e986677f48-kube-api-access-dpk7c" (OuterVolumeSpecName: "kube-api-access-dpk7c") pod "c7b75829-d001-4e04-9850-44e986677f48" (UID: "c7b75829-d001-4e04-9850-44e986677f48"). InnerVolumeSpecName "kube-api-access-dpk7c". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:44:12 crc kubenswrapper[4820]: I0203 12:44:12.921418 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hwmx7" event={"ID":"c7b75829-d001-4e04-9850-44e986677f48","Type":"ContainerDied","Data":"73ee6548b5eba8f61263379a332b2be1e24a5b9225cdee813887ab46c2e63019"} Feb 03 12:44:12 crc kubenswrapper[4820]: I0203 12:44:12.921460 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="73ee6548b5eba8f61263379a332b2be1e24a5b9225cdee813887ab46c2e63019" Feb 03 12:44:12 crc kubenswrapper[4820]: I0203 12:44:12.921493 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hwmx7" Feb 03 12:44:12 crc kubenswrapper[4820]: I0203 12:44:12.923182 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7b75829-d001-4e04-9850-44e986677f48-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "c7b75829-d001-4e04-9850-44e986677f48" (UID: "c7b75829-d001-4e04-9850-44e986677f48"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:44:12 crc kubenswrapper[4820]: I0203 12:44:12.930808 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7b75829-d001-4e04-9850-44e986677f48-inventory" (OuterVolumeSpecName: "inventory") pod "c7b75829-d001-4e04-9850-44e986677f48" (UID: "c7b75829-d001-4e04-9850-44e986677f48"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:44:12 crc kubenswrapper[4820]: I0203 12:44:12.993972 4820 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c7b75829-d001-4e04-9850-44e986677f48-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 03 12:44:12 crc kubenswrapper[4820]: I0203 12:44:12.994268 4820 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c7b75829-d001-4e04-9850-44e986677f48-inventory\") on node \"crc\" DevicePath \"\"" Feb 03 12:44:12 crc kubenswrapper[4820]: I0203 12:44:12.994282 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dpk7c\" (UniqueName: \"kubernetes.io/projected/c7b75829-d001-4e04-9850-44e986677f48-kube-api-access-dpk7c\") on node \"crc\" DevicePath \"\"" Feb 03 12:44:13 crc kubenswrapper[4820]: I0203 12:44:13.029704 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7c9gw"] Feb 03 12:44:13 crc kubenswrapper[4820]: E0203 12:44:13.030254 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7b75829-d001-4e04-9850-44e986677f48" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 03 12:44:13 crc kubenswrapper[4820]: I0203 12:44:13.030270 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7b75829-d001-4e04-9850-44e986677f48" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 03 12:44:13 crc kubenswrapper[4820]: I0203 12:44:13.030475 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7b75829-d001-4e04-9850-44e986677f48" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 03 12:44:13 crc kubenswrapper[4820]: I0203 12:44:13.031954 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7c9gw" Feb 03 12:44:13 crc kubenswrapper[4820]: I0203 12:44:13.044824 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7c9gw"] Feb 03 12:44:13 crc kubenswrapper[4820]: I0203 12:44:13.197916 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dczvq\" (UniqueName: \"kubernetes.io/projected/fc5454df-b4c1-45f5-9021-a70a13b47b37-kube-api-access-dczvq\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-7c9gw\" (UID: \"fc5454df-b4c1-45f5-9021-a70a13b47b37\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7c9gw" Feb 03 12:44:13 crc kubenswrapper[4820]: I0203 12:44:13.197973 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fc5454df-b4c1-45f5-9021-a70a13b47b37-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-7c9gw\" (UID: \"fc5454df-b4c1-45f5-9021-a70a13b47b37\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7c9gw" Feb 03 12:44:13 crc kubenswrapper[4820]: I0203 12:44:13.198003 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fc5454df-b4c1-45f5-9021-a70a13b47b37-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-7c9gw\" (UID: \"fc5454df-b4c1-45f5-9021-a70a13b47b37\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7c9gw" Feb 03 12:44:13 crc kubenswrapper[4820]: I0203 12:44:13.301150 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dczvq\" (UniqueName: \"kubernetes.io/projected/fc5454df-b4c1-45f5-9021-a70a13b47b37-kube-api-access-dczvq\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-7c9gw\" (UID: \"fc5454df-b4c1-45f5-9021-a70a13b47b37\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7c9gw" Feb 03 12:44:13 crc kubenswrapper[4820]: I0203 12:44:13.301229 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fc5454df-b4c1-45f5-9021-a70a13b47b37-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-7c9gw\" (UID: \"fc5454df-b4c1-45f5-9021-a70a13b47b37\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7c9gw" Feb 03 12:44:13 crc kubenswrapper[4820]: I0203 12:44:13.301264 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fc5454df-b4c1-45f5-9021-a70a13b47b37-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-7c9gw\" (UID: \"fc5454df-b4c1-45f5-9021-a70a13b47b37\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7c9gw" Feb 03 12:44:13 crc kubenswrapper[4820]: I0203 12:44:13.305512 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fc5454df-b4c1-45f5-9021-a70a13b47b37-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-7c9gw\" (UID: \"fc5454df-b4c1-45f5-9021-a70a13b47b37\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7c9gw" Feb 03 12:44:13 
crc kubenswrapper[4820]: I0203 12:44:13.305783 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fc5454df-b4c1-45f5-9021-a70a13b47b37-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-7c9gw\" (UID: \"fc5454df-b4c1-45f5-9021-a70a13b47b37\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7c9gw" Feb 03 12:44:13 crc kubenswrapper[4820]: I0203 12:44:13.325515 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dczvq\" (UniqueName: \"kubernetes.io/projected/fc5454df-b4c1-45f5-9021-a70a13b47b37-kube-api-access-dczvq\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-7c9gw\" (UID: \"fc5454df-b4c1-45f5-9021-a70a13b47b37\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7c9gw" Feb 03 12:44:13 crc kubenswrapper[4820]: I0203 12:44:13.404455 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7c9gw" Feb 03 12:44:13 crc kubenswrapper[4820]: I0203 12:44:13.962487 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7c9gw"] Feb 03 12:44:14 crc kubenswrapper[4820]: I0203 12:44:14.944234 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7c9gw" event={"ID":"fc5454df-b4c1-45f5-9021-a70a13b47b37","Type":"ContainerStarted","Data":"4b8f16a03082e10346c8f1cd2f3bbea580d47a9afc8728bd3cf66725d248abd0"} Feb 03 12:44:14 crc kubenswrapper[4820]: I0203 12:44:14.944963 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7c9gw" event={"ID":"fc5454df-b4c1-45f5-9021-a70a13b47b37","Type":"ContainerStarted","Data":"e73368956e9d782635e7818d624ef1c83bdabb683241584226f4b71bb28a4434"} Feb 03 12:44:14 crc kubenswrapper[4820]: I0203 12:44:14.971117 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7c9gw" podStartSLOduration=1.52428637 podStartE2EDuration="1.97109313s" podCreationTimestamp="2026-02-03 12:44:13 +0000 UTC" firstStartedPulling="2026-02-03 12:44:13.958967419 +0000 UTC m=+2371.482043303" lastFinishedPulling="2026-02-03 12:44:14.405774199 +0000 UTC m=+2371.928850063" observedRunningTime="2026-02-03 12:44:14.965673884 +0000 UTC m=+2372.488749758" watchObservedRunningTime="2026-02-03 12:44:14.97109313 +0000 UTC m=+2372.494169004" Feb 03 12:44:18 crc kubenswrapper[4820]: I0203 12:44:18.523979 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-ggt64" Feb 03 12:44:18 crc kubenswrapper[4820]: I0203 12:44:18.586824 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-ggt64"] Feb 03 12:44:19 crc kubenswrapper[4820]: I0203 12:44:19.327200 4820 scope.go:117] "RemoveContainer" containerID="245331ecea7ae2a2547766477b0478b53c635e1745e222cb8a06ff036be8bc77" Feb 03 12:44:19 crc kubenswrapper[4820]: E0203 12:44:19.327521 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-qj7xr_openshift-machine-config-operator(2c02def6-29f2-448e-80ec-0c8ee290f053)\"" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" Feb 03 12:44:19 crc kubenswrapper[4820]: I0203 12:44:19.342042 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-ggt64" podUID="e9aae7a2-1328-46d4-b599-fca3f15e85f0" containerName="registry-server" containerID="cri-o://4da91f52ea2da30b829b069cd51e4d164a0756fb55832d727da3c212683c3823" gracePeriod=2 Feb 03 12:44:19 crc kubenswrapper[4820]: I0203 12:44:19.793414 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ggt64" Feb 03 12:44:19 crc kubenswrapper[4820]: I0203 12:44:19.940448 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9aae7a2-1328-46d4-b599-fca3f15e85f0-catalog-content\") pod \"e9aae7a2-1328-46d4-b599-fca3f15e85f0\" (UID: \"e9aae7a2-1328-46d4-b599-fca3f15e85f0\") " Feb 03 12:44:19 crc kubenswrapper[4820]: I0203 12:44:19.940575 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2n548\" (UniqueName: \"kubernetes.io/projected/e9aae7a2-1328-46d4-b599-fca3f15e85f0-kube-api-access-2n548\") pod \"e9aae7a2-1328-46d4-b599-fca3f15e85f0\" (UID: \"e9aae7a2-1328-46d4-b599-fca3f15e85f0\") " Feb 03 12:44:19 crc kubenswrapper[4820]: I0203 12:44:19.940636 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9aae7a2-1328-46d4-b599-fca3f15e85f0-utilities\") pod \"e9aae7a2-1328-46d4-b599-fca3f15e85f0\" (UID: \"e9aae7a2-1328-46d4-b599-fca3f15e85f0\") " Feb 03 12:44:19 crc kubenswrapper[4820]: I0203 12:44:19.941840 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e9aae7a2-1328-46d4-b599-fca3f15e85f0-utilities" (OuterVolumeSpecName: "utilities") pod "e9aae7a2-1328-46d4-b599-fca3f15e85f0" (UID: "e9aae7a2-1328-46d4-b599-fca3f15e85f0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 12:44:19 crc kubenswrapper[4820]: I0203 12:44:19.946535 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e9aae7a2-1328-46d4-b599-fca3f15e85f0-kube-api-access-2n548" (OuterVolumeSpecName: "kube-api-access-2n548") pod "e9aae7a2-1328-46d4-b599-fca3f15e85f0" (UID: "e9aae7a2-1328-46d4-b599-fca3f15e85f0"). InnerVolumeSpecName "kube-api-access-2n548". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:44:20 crc kubenswrapper[4820]: I0203 12:44:20.006637 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e9aae7a2-1328-46d4-b599-fca3f15e85f0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e9aae7a2-1328-46d4-b599-fca3f15e85f0" (UID: "e9aae7a2-1328-46d4-b599-fca3f15e85f0"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 12:44:20 crc kubenswrapper[4820]: I0203 12:44:20.042980 4820 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9aae7a2-1328-46d4-b599-fca3f15e85f0-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 03 12:44:20 crc kubenswrapper[4820]: I0203 12:44:20.043024 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2n548\" (UniqueName: \"kubernetes.io/projected/e9aae7a2-1328-46d4-b599-fca3f15e85f0-kube-api-access-2n548\") on node \"crc\" DevicePath \"\"" Feb 03 12:44:20 crc kubenswrapper[4820]: I0203 12:44:20.043039 4820 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9aae7a2-1328-46d4-b599-fca3f15e85f0-utilities\") on node \"crc\" DevicePath \"\"" Feb 03 12:44:20 crc kubenswrapper[4820]: I0203 12:44:20.361337 4820 generic.go:334] "Generic (PLEG): container finished" podID="e9aae7a2-1328-46d4-b599-fca3f15e85f0" containerID="4da91f52ea2da30b829b069cd51e4d164a0756fb55832d727da3c212683c3823" exitCode=0 Feb 03 12:44:20 crc kubenswrapper[4820]: I0203 12:44:20.361404 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ggt64" event={"ID":"e9aae7a2-1328-46d4-b599-fca3f15e85f0","Type":"ContainerDied","Data":"4da91f52ea2da30b829b069cd51e4d164a0756fb55832d727da3c212683c3823"} Feb 03 12:44:20 crc kubenswrapper[4820]: I0203 12:44:20.361445 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ggt64" event={"ID":"e9aae7a2-1328-46d4-b599-fca3f15e85f0","Type":"ContainerDied","Data":"fbf0d0c9704933cef9de49256178a185ae30e83dc7d4e9c8c5b8affe7816695c"} Feb 03 12:44:20 crc kubenswrapper[4820]: I0203 12:44:20.361483 4820 scope.go:117] "RemoveContainer" containerID="4da91f52ea2da30b829b069cd51e4d164a0756fb55832d727da3c212683c3823" Feb 03 12:44:20 crc kubenswrapper[4820]: I0203 12:44:20.361719 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-ggt64" Feb 03 12:44:20 crc kubenswrapper[4820]: I0203 12:44:20.387197 4820 scope.go:117] "RemoveContainer" containerID="f4a442054002d262b5d93f5578cf7d3a07f56a6d0d7cb3d34349f27a3547be36" Feb 03 12:44:20 crc kubenswrapper[4820]: I0203 12:44:20.406249 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-ggt64"] Feb 03 12:44:20 crc kubenswrapper[4820]: I0203 12:44:20.413355 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-ggt64"] Feb 03 12:44:20 crc kubenswrapper[4820]: I0203 12:44:20.439329 4820 scope.go:117] "RemoveContainer" containerID="2bf1b055465ea5c7212e39fc1631a88de7304e0e0da6edffc155265aea42b513" Feb 03 12:44:20 crc kubenswrapper[4820]: I0203 12:44:20.475466 4820 scope.go:117] "RemoveContainer" containerID="4da91f52ea2da30b829b069cd51e4d164a0756fb55832d727da3c212683c3823" Feb 03 12:44:20 crc kubenswrapper[4820]: E0203 12:44:20.476209 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4da91f52ea2da30b829b069cd51e4d164a0756fb55832d727da3c212683c3823\": container with ID starting with 4da91f52ea2da30b829b069cd51e4d164a0756fb55832d727da3c212683c3823 not found: ID does not exist" containerID="4da91f52ea2da30b829b069cd51e4d164a0756fb55832d727da3c212683c3823" Feb 03 12:44:20 crc kubenswrapper[4820]: I0203 12:44:20.476370 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4da91f52ea2da30b829b069cd51e4d164a0756fb55832d727da3c212683c3823"} err="failed to get container status \"4da91f52ea2da30b829b069cd51e4d164a0756fb55832d727da3c212683c3823\": rpc error: code = NotFound desc = could not find container \"4da91f52ea2da30b829b069cd51e4d164a0756fb55832d727da3c212683c3823\": container with ID starting with 4da91f52ea2da30b829b069cd51e4d164a0756fb55832d727da3c212683c3823 not found: ID does not exist" Feb 03 12:44:20 crc kubenswrapper[4820]: I0203 12:44:20.476540 4820 scope.go:117] "RemoveContainer" containerID="f4a442054002d262b5d93f5578cf7d3a07f56a6d0d7cb3d34349f27a3547be36" Feb 03 12:44:20 crc kubenswrapper[4820]: E0203 12:44:20.477194 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f4a442054002d262b5d93f5578cf7d3a07f56a6d0d7cb3d34349f27a3547be36\": container with ID starting with f4a442054002d262b5d93f5578cf7d3a07f56a6d0d7cb3d34349f27a3547be36 not found: ID does not exist" containerID="f4a442054002d262b5d93f5578cf7d3a07f56a6d0d7cb3d34349f27a3547be36" Feb 03 12:44:20 crc kubenswrapper[4820]: I0203 12:44:20.477241 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f4a442054002d262b5d93f5578cf7d3a07f56a6d0d7cb3d34349f27a3547be36"} err="failed to get container status \"f4a442054002d262b5d93f5578cf7d3a07f56a6d0d7cb3d34349f27a3547be36\": rpc error: code = NotFound desc = could not find container \"f4a442054002d262b5d93f5578cf7d3a07f56a6d0d7cb3d34349f27a3547be36\": container with ID starting with f4a442054002d262b5d93f5578cf7d3a07f56a6d0d7cb3d34349f27a3547be36 not found: ID does not exist" Feb 03 12:44:20 crc kubenswrapper[4820]: I0203 12:44:20.477271 4820 scope.go:117] "RemoveContainer" containerID="2bf1b055465ea5c7212e39fc1631a88de7304e0e0da6edffc155265aea42b513" Feb 03 12:44:20 crc kubenswrapper[4820]: E0203 12:44:20.477583 4820 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"2bf1b055465ea5c7212e39fc1631a88de7304e0e0da6edffc155265aea42b513\": container with ID starting with 2bf1b055465ea5c7212e39fc1631a88de7304e0e0da6edffc155265aea42b513 not found: ID does not exist" containerID="2bf1b055465ea5c7212e39fc1631a88de7304e0e0da6edffc155265aea42b513" Feb 03 12:44:20 crc kubenswrapper[4820]: I0203 12:44:20.477609 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2bf1b055465ea5c7212e39fc1631a88de7304e0e0da6edffc155265aea42b513"} err="failed to get container status \"2bf1b055465ea5c7212e39fc1631a88de7304e0e0da6edffc155265aea42b513\": rpc error: code = NotFound desc = could not find container \"2bf1b055465ea5c7212e39fc1631a88de7304e0e0da6edffc155265aea42b513\": container with ID starting with 2bf1b055465ea5c7212e39fc1631a88de7304e0e0da6edffc155265aea42b513 not found: ID does not exist" Feb 03 12:44:21 crc kubenswrapper[4820]: I0203 12:44:21.322373 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e9aae7a2-1328-46d4-b599-fca3f15e85f0" path="/var/lib/kubelet/pods/e9aae7a2-1328-46d4-b599-fca3f15e85f0/volumes" Feb 03 12:44:34 crc kubenswrapper[4820]: I0203 12:44:34.143274 4820 scope.go:117] "RemoveContainer" containerID="245331ecea7ae2a2547766477b0478b53c635e1745e222cb8a06ff036be8bc77" Feb 03 12:44:34 crc kubenswrapper[4820]: E0203 12:44:34.144090 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qj7xr_openshift-machine-config-operator(2c02def6-29f2-448e-80ec-0c8ee290f053)\"" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" Feb 03 12:44:40 crc kubenswrapper[4820]: I0203 12:44:40.053663 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-46nxk"] Feb 03 12:44:40 crc kubenswrapper[4820]: I0203 12:44:40.065527 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-46nxk"] Feb 03 12:44:41 crc kubenswrapper[4820]: I0203 12:44:41.159646 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d4619454-1fe5-4b0c-8fae-1ffc6b92cfbc" path="/var/lib/kubelet/pods/d4619454-1fe5-4b0c-8fae-1ffc6b92cfbc/volumes" Feb 03 12:44:47 crc kubenswrapper[4820]: I0203 12:44:47.143390 4820 scope.go:117] "RemoveContainer" containerID="245331ecea7ae2a2547766477b0478b53c635e1745e222cb8a06ff036be8bc77" Feb 03 12:44:47 crc kubenswrapper[4820]: E0203 12:44:47.144189 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qj7xr_openshift-machine-config-operator(2c02def6-29f2-448e-80ec-0c8ee290f053)\"" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" Feb 03 12:44:58 crc kubenswrapper[4820]: I0203 12:44:58.143724 4820 scope.go:117] "RemoveContainer" containerID="245331ecea7ae2a2547766477b0478b53c635e1745e222cb8a06ff036be8bc77" Feb 03 12:44:58 crc kubenswrapper[4820]: E0203 12:44:58.144498 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-qj7xr_openshift-machine-config-operator(2c02def6-29f2-448e-80ec-0c8ee290f053)\"" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" Feb 03 12:45:00 crc kubenswrapper[4820]: I0203 12:45:00.171447 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29502045-tgglm"] Feb 03 12:45:00 crc kubenswrapper[4820]: E0203 12:45:00.172296 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9aae7a2-1328-46d4-b599-fca3f15e85f0" containerName="extract-content" Feb 03 12:45:00 crc kubenswrapper[4820]: I0203 12:45:00.172326 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9aae7a2-1328-46d4-b599-fca3f15e85f0" containerName="extract-content" Feb 03 12:45:00 crc kubenswrapper[4820]: E0203 12:45:00.172349 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9aae7a2-1328-46d4-b599-fca3f15e85f0" containerName="registry-server" Feb 03 12:45:00 crc kubenswrapper[4820]: I0203 12:45:00.172357 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9aae7a2-1328-46d4-b599-fca3f15e85f0" containerName="registry-server" Feb 03 12:45:00 crc kubenswrapper[4820]: E0203 12:45:00.172378 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9aae7a2-1328-46d4-b599-fca3f15e85f0" containerName="extract-utilities" Feb 03 12:45:00 crc kubenswrapper[4820]: I0203 12:45:00.172386 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9aae7a2-1328-46d4-b599-fca3f15e85f0" containerName="extract-utilities" Feb 03 12:45:00 crc kubenswrapper[4820]: I0203 12:45:00.172771 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="e9aae7a2-1328-46d4-b599-fca3f15e85f0" containerName="registry-server" Feb 03 12:45:00 crc kubenswrapper[4820]: I0203 12:45:00.190346 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29502045-tgglm"] Feb 03 12:45:00 crc kubenswrapper[4820]: I0203 12:45:00.190491 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29502045-tgglm" Feb 03 12:45:00 crc kubenswrapper[4820]: I0203 12:45:00.193466 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 03 12:45:00 crc kubenswrapper[4820]: I0203 12:45:00.193960 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 03 12:45:00 crc kubenswrapper[4820]: I0203 12:45:00.371688 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/08459177-65bc-4cf2-850b-3d8db214d191-config-volume\") pod \"collect-profiles-29502045-tgglm\" (UID: \"08459177-65bc-4cf2-850b-3d8db214d191\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29502045-tgglm" Feb 03 12:45:00 crc kubenswrapper[4820]: I0203 12:45:00.372435 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/08459177-65bc-4cf2-850b-3d8db214d191-secret-volume\") pod \"collect-profiles-29502045-tgglm\" (UID: \"08459177-65bc-4cf2-850b-3d8db214d191\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29502045-tgglm" Feb 03 12:45:00 crc kubenswrapper[4820]: I0203 12:45:00.372578 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zss2n\" (UniqueName: \"kubernetes.io/projected/08459177-65bc-4cf2-850b-3d8db214d191-kube-api-access-zss2n\") pod \"collect-profiles-29502045-tgglm\" (UID: \"08459177-65bc-4cf2-850b-3d8db214d191\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29502045-tgglm" Feb 03 12:45:00 crc kubenswrapper[4820]: I0203 12:45:00.474577 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/08459177-65bc-4cf2-850b-3d8db214d191-config-volume\") pod \"collect-profiles-29502045-tgglm\" (UID: \"08459177-65bc-4cf2-850b-3d8db214d191\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29502045-tgglm" Feb 03 12:45:00 crc kubenswrapper[4820]: I0203 12:45:00.474768 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/08459177-65bc-4cf2-850b-3d8db214d191-secret-volume\") pod \"collect-profiles-29502045-tgglm\" (UID: \"08459177-65bc-4cf2-850b-3d8db214d191\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29502045-tgglm" Feb 03 12:45:00 crc kubenswrapper[4820]: I0203 12:45:00.474807 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zss2n\" (UniqueName: \"kubernetes.io/projected/08459177-65bc-4cf2-850b-3d8db214d191-kube-api-access-zss2n\") pod \"collect-profiles-29502045-tgglm\" (UID: \"08459177-65bc-4cf2-850b-3d8db214d191\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29502045-tgglm" Feb 03 12:45:00 crc kubenswrapper[4820]: I0203 12:45:00.475811 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/08459177-65bc-4cf2-850b-3d8db214d191-config-volume\") pod \"collect-profiles-29502045-tgglm\" (UID: \"08459177-65bc-4cf2-850b-3d8db214d191\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29502045-tgglm" Feb 03 12:45:00 crc 
kubenswrapper[4820]: I0203 12:45:00.480983 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/08459177-65bc-4cf2-850b-3d8db214d191-secret-volume\") pod \"collect-profiles-29502045-tgglm\" (UID: \"08459177-65bc-4cf2-850b-3d8db214d191\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29502045-tgglm" Feb 03 12:45:00 crc kubenswrapper[4820]: I0203 12:45:00.491931 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zss2n\" (UniqueName: \"kubernetes.io/projected/08459177-65bc-4cf2-850b-3d8db214d191-kube-api-access-zss2n\") pod \"collect-profiles-29502045-tgglm\" (UID: \"08459177-65bc-4cf2-850b-3d8db214d191\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29502045-tgglm" Feb 03 12:45:00 crc kubenswrapper[4820]: I0203 12:45:00.519985 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29502045-tgglm" Feb 03 12:45:01 crc kubenswrapper[4820]: I0203 12:45:01.014029 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29502045-tgglm"] Feb 03 12:45:01 crc kubenswrapper[4820]: I0203 12:45:01.084921 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29502045-tgglm" event={"ID":"08459177-65bc-4cf2-850b-3d8db214d191","Type":"ContainerStarted","Data":"2df6bc586a9f2c52212fcb162e768253a06c86ecc309829ceaaed2a11a93acbe"} Feb 03 12:45:02 crc kubenswrapper[4820]: I0203 12:45:02.094994 4820 generic.go:334] "Generic (PLEG): container finished" podID="08459177-65bc-4cf2-850b-3d8db214d191" containerID="c2fad438c4736c8b6f67398598140c6ea893222685c54fe567bc3793d381c751" exitCode=0 Feb 03 12:45:02 crc kubenswrapper[4820]: I0203 12:45:02.095086 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29502045-tgglm" event={"ID":"08459177-65bc-4cf2-850b-3d8db214d191","Type":"ContainerDied","Data":"c2fad438c4736c8b6f67398598140c6ea893222685c54fe567bc3793d381c751"} Feb 03 12:45:03 crc kubenswrapper[4820]: I0203 12:45:03.474778 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29502045-tgglm" Feb 03 12:45:03 crc kubenswrapper[4820]: I0203 12:45:03.598534 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/08459177-65bc-4cf2-850b-3d8db214d191-config-volume\") pod \"08459177-65bc-4cf2-850b-3d8db214d191\" (UID: \"08459177-65bc-4cf2-850b-3d8db214d191\") " Feb 03 12:45:03 crc kubenswrapper[4820]: I0203 12:45:03.598725 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zss2n\" (UniqueName: \"kubernetes.io/projected/08459177-65bc-4cf2-850b-3d8db214d191-kube-api-access-zss2n\") pod \"08459177-65bc-4cf2-850b-3d8db214d191\" (UID: \"08459177-65bc-4cf2-850b-3d8db214d191\") " Feb 03 12:45:03 crc kubenswrapper[4820]: I0203 12:45:03.598779 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/08459177-65bc-4cf2-850b-3d8db214d191-secret-volume\") pod \"08459177-65bc-4cf2-850b-3d8db214d191\" (UID: \"08459177-65bc-4cf2-850b-3d8db214d191\") " Feb 03 12:45:03 crc kubenswrapper[4820]: I0203 12:45:03.601421 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/08459177-65bc-4cf2-850b-3d8db214d191-config-volume" (OuterVolumeSpecName: "config-volume") pod "08459177-65bc-4cf2-850b-3d8db214d191" (UID: "08459177-65bc-4cf2-850b-3d8db214d191"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:45:03 crc kubenswrapper[4820]: I0203 12:45:03.609915 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/08459177-65bc-4cf2-850b-3d8db214d191-kube-api-access-zss2n" (OuterVolumeSpecName: "kube-api-access-zss2n") pod "08459177-65bc-4cf2-850b-3d8db214d191" (UID: "08459177-65bc-4cf2-850b-3d8db214d191"). InnerVolumeSpecName "kube-api-access-zss2n". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:45:03 crc kubenswrapper[4820]: I0203 12:45:03.609923 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08459177-65bc-4cf2-850b-3d8db214d191-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "08459177-65bc-4cf2-850b-3d8db214d191" (UID: "08459177-65bc-4cf2-850b-3d8db214d191"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:45:03 crc kubenswrapper[4820]: I0203 12:45:03.700872 4820 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/08459177-65bc-4cf2-850b-3d8db214d191-config-volume\") on node \"crc\" DevicePath \"\"" Feb 03 12:45:03 crc kubenswrapper[4820]: I0203 12:45:03.700931 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zss2n\" (UniqueName: \"kubernetes.io/projected/08459177-65bc-4cf2-850b-3d8db214d191-kube-api-access-zss2n\") on node \"crc\" DevicePath \"\"" Feb 03 12:45:03 crc kubenswrapper[4820]: I0203 12:45:03.700947 4820 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/08459177-65bc-4cf2-850b-3d8db214d191-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 03 12:45:04 crc kubenswrapper[4820]: I0203 12:45:04.203213 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29502045-tgglm" event={"ID":"08459177-65bc-4cf2-850b-3d8db214d191","Type":"ContainerDied","Data":"2df6bc586a9f2c52212fcb162e768253a06c86ecc309829ceaaed2a11a93acbe"} Feb 03 12:45:04 crc kubenswrapper[4820]: I0203 12:45:04.203271 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29502045-tgglm" Feb 03 12:45:04 crc kubenswrapper[4820]: I0203 12:45:04.203290 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2df6bc586a9f2c52212fcb162e768253a06c86ecc309829ceaaed2a11a93acbe" Feb 03 12:45:04 crc kubenswrapper[4820]: I0203 12:45:04.553320 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29502000-hcscr"] Feb 03 12:45:04 crc kubenswrapper[4820]: I0203 12:45:04.563396 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29502000-hcscr"] Feb 03 12:45:05 crc kubenswrapper[4820]: I0203 12:45:05.174126 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b9d628ea-493d-4b0c-b4a2-194cef62a08e" path="/var/lib/kubelet/pods/b9d628ea-493d-4b0c-b4a2-194cef62a08e/volumes" Feb 03 12:45:12 crc kubenswrapper[4820]: I0203 12:45:12.147452 4820 scope.go:117] "RemoveContainer" containerID="245331ecea7ae2a2547766477b0478b53c635e1745e222cb8a06ff036be8bc77" Feb 03 12:45:12 crc kubenswrapper[4820]: E0203 12:45:12.149078 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qj7xr_openshift-machine-config-operator(2c02def6-29f2-448e-80ec-0c8ee290f053)\"" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" Feb 03 12:45:27 crc kubenswrapper[4820]: I0203 12:45:27.143117 4820 scope.go:117] "RemoveContainer" containerID="245331ecea7ae2a2547766477b0478b53c635e1745e222cb8a06ff036be8bc77" Feb 03 12:45:27 crc kubenswrapper[4820]: E0203 12:45:27.143865 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qj7xr_openshift-machine-config-operator(2c02def6-29f2-448e-80ec-0c8ee290f053)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" Feb 03 12:45:28 crc kubenswrapper[4820]: I0203 12:45:28.219608 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-qmdlk"] Feb 03 12:45:28 crc kubenswrapper[4820]: I0203 12:45:28.226787 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-qmdlk"] Feb 03 12:45:28 crc kubenswrapper[4820]: I0203 12:45:28.436502 4820 generic.go:334] "Generic (PLEG): container finished" podID="fc5454df-b4c1-45f5-9021-a70a13b47b37" containerID="4b8f16a03082e10346c8f1cd2f3bbea580d47a9afc8728bd3cf66725d248abd0" exitCode=0 Feb 03 12:45:28 crc kubenswrapper[4820]: I0203 12:45:28.436563 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7c9gw" event={"ID":"fc5454df-b4c1-45f5-9021-a70a13b47b37","Type":"ContainerDied","Data":"4b8f16a03082e10346c8f1cd2f3bbea580d47a9afc8728bd3cf66725d248abd0"} Feb 03 12:45:29 crc kubenswrapper[4820]: I0203 12:45:29.155300 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96e51574-4c0f-449e-99c9-f71651ddf08e" path="/var/lib/kubelet/pods/96e51574-4c0f-449e-99c9-f71651ddf08e/volumes" Feb 03 12:45:29 crc kubenswrapper[4820]: I0203 12:45:29.943571 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7c9gw" Feb 03 12:45:30 crc kubenswrapper[4820]: I0203 12:45:30.110151 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fc5454df-b4c1-45f5-9021-a70a13b47b37-inventory\") pod \"fc5454df-b4c1-45f5-9021-a70a13b47b37\" (UID: \"fc5454df-b4c1-45f5-9021-a70a13b47b37\") " Feb 03 12:45:30 crc kubenswrapper[4820]: I0203 12:45:30.110216 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fc5454df-b4c1-45f5-9021-a70a13b47b37-ssh-key-openstack-edpm-ipam\") pod \"fc5454df-b4c1-45f5-9021-a70a13b47b37\" (UID: \"fc5454df-b4c1-45f5-9021-a70a13b47b37\") " Feb 03 12:45:30 crc kubenswrapper[4820]: I0203 12:45:30.110239 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dczvq\" (UniqueName: \"kubernetes.io/projected/fc5454df-b4c1-45f5-9021-a70a13b47b37-kube-api-access-dczvq\") pod \"fc5454df-b4c1-45f5-9021-a70a13b47b37\" (UID: \"fc5454df-b4c1-45f5-9021-a70a13b47b37\") " Feb 03 12:45:30 crc kubenswrapper[4820]: I0203 12:45:30.125844 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc5454df-b4c1-45f5-9021-a70a13b47b37-kube-api-access-dczvq" (OuterVolumeSpecName: "kube-api-access-dczvq") pod "fc5454df-b4c1-45f5-9021-a70a13b47b37" (UID: "fc5454df-b4c1-45f5-9021-a70a13b47b37"). InnerVolumeSpecName "kube-api-access-dczvq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:45:30 crc kubenswrapper[4820]: I0203 12:45:30.148425 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc5454df-b4c1-45f5-9021-a70a13b47b37-inventory" (OuterVolumeSpecName: "inventory") pod "fc5454df-b4c1-45f5-9021-a70a13b47b37" (UID: "fc5454df-b4c1-45f5-9021-a70a13b47b37"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:45:30 crc kubenswrapper[4820]: I0203 12:45:30.171871 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc5454df-b4c1-45f5-9021-a70a13b47b37-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "fc5454df-b4c1-45f5-9021-a70a13b47b37" (UID: "fc5454df-b4c1-45f5-9021-a70a13b47b37"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:45:30 crc kubenswrapper[4820]: I0203 12:45:30.213416 4820 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fc5454df-b4c1-45f5-9021-a70a13b47b37-inventory\") on node \"crc\" DevicePath \"\"" Feb 03 12:45:30 crc kubenswrapper[4820]: I0203 12:45:30.213453 4820 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fc5454df-b4c1-45f5-9021-a70a13b47b37-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 03 12:45:30 crc kubenswrapper[4820]: I0203 12:45:30.213469 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dczvq\" (UniqueName: \"kubernetes.io/projected/fc5454df-b4c1-45f5-9021-a70a13b47b37-kube-api-access-dczvq\") on node \"crc\" DevicePath \"\"" Feb 03 12:45:30 crc kubenswrapper[4820]: I0203 12:45:30.221059 4820 scope.go:117] "RemoveContainer" containerID="5ac08a8a154c895b89f3ef82fe7d81c2f3220d2db7f95ad75058c6645d9c383f" Feb 03 12:45:30 crc kubenswrapper[4820]: I0203 12:45:30.282481 4820 scope.go:117] "RemoveContainer" containerID="bdf170be16612ff8006e51412c9af2c34cf09e6db469635780f6dc5a2ea76f20" Feb 03 12:45:30 crc kubenswrapper[4820]: I0203 12:45:30.348764 4820 scope.go:117] "RemoveContainer" containerID="cab93bf98dd5daffd2433ee582a8708b834dbb042d829efa03eac43dcfc4f65e" Feb 03 12:45:30 crc kubenswrapper[4820]: I0203 12:45:30.463698 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7c9gw" event={"ID":"fc5454df-b4c1-45f5-9021-a70a13b47b37","Type":"ContainerDied","Data":"e73368956e9d782635e7818d624ef1c83bdabb683241584226f4b71bb28a4434"} Feb 03 12:45:30 crc kubenswrapper[4820]: I0203 12:45:30.463790 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e73368956e9d782635e7818d624ef1c83bdabb683241584226f4b71bb28a4434" Feb 03 12:45:30 crc kubenswrapper[4820]: I0203 12:45:30.463953 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7c9gw" Feb 03 12:45:30 crc kubenswrapper[4820]: I0203 12:45:30.555630 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-nqb5x"] Feb 03 12:45:30 crc kubenswrapper[4820]: E0203 12:45:30.556315 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08459177-65bc-4cf2-850b-3d8db214d191" containerName="collect-profiles" Feb 03 12:45:30 crc kubenswrapper[4820]: I0203 12:45:30.556345 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="08459177-65bc-4cf2-850b-3d8db214d191" containerName="collect-profiles" Feb 03 12:45:30 crc kubenswrapper[4820]: E0203 12:45:30.556367 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc5454df-b4c1-45f5-9021-a70a13b47b37" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Feb 03 12:45:30 crc kubenswrapper[4820]: I0203 12:45:30.556379 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc5454df-b4c1-45f5-9021-a70a13b47b37" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Feb 03 12:45:30 crc kubenswrapper[4820]: I0203 12:45:30.556647 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="08459177-65bc-4cf2-850b-3d8db214d191" containerName="collect-profiles" Feb 03 12:45:30 crc kubenswrapper[4820]: I0203 12:45:30.556683 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc5454df-b4c1-45f5-9021-a70a13b47b37" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Feb 03 12:45:30 crc kubenswrapper[4820]: I0203 12:45:30.557811 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-nqb5x" Feb 03 12:45:30 crc kubenswrapper[4820]: I0203 12:45:30.561344 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 03 12:45:30 crc kubenswrapper[4820]: I0203 12:45:30.562819 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 03 12:45:30 crc kubenswrapper[4820]: I0203 12:45:30.563109 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-q6jrk" Feb 03 12:45:30 crc kubenswrapper[4820]: I0203 12:45:30.567640 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 03 12:45:30 crc kubenswrapper[4820]: I0203 12:45:30.588057 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-nqb5x"] Feb 03 12:45:30 crc kubenswrapper[4820]: E0203 12:45:30.654693 4820 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfc5454df_b4c1_45f5_9021_a70a13b47b37.slice/crio-e73368956e9d782635e7818d624ef1c83bdabb683241584226f4b71bb28a4434\": RecentStats: unable to find data in memory cache]" Feb 03 12:45:30 crc kubenswrapper[4820]: I0203 12:45:30.724786 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q7gff\" (UniqueName: \"kubernetes.io/projected/ee96f9e1-369f-4e88-9766-419a9a05abe5-kube-api-access-q7gff\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-nqb5x\" (UID: \"ee96f9e1-369f-4e88-9766-419a9a05abe5\") " 
pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-nqb5x" Feb 03 12:45:30 crc kubenswrapper[4820]: I0203 12:45:30.725207 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ee96f9e1-369f-4e88-9766-419a9a05abe5-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-nqb5x\" (UID: \"ee96f9e1-369f-4e88-9766-419a9a05abe5\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-nqb5x" Feb 03 12:45:30 crc kubenswrapper[4820]: I0203 12:45:30.725512 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ee96f9e1-369f-4e88-9766-419a9a05abe5-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-nqb5x\" (UID: \"ee96f9e1-369f-4e88-9766-419a9a05abe5\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-nqb5x" Feb 03 12:45:30 crc kubenswrapper[4820]: I0203 12:45:30.827389 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q7gff\" (UniqueName: \"kubernetes.io/projected/ee96f9e1-369f-4e88-9766-419a9a05abe5-kube-api-access-q7gff\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-nqb5x\" (UID: \"ee96f9e1-369f-4e88-9766-419a9a05abe5\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-nqb5x" Feb 03 12:45:30 crc kubenswrapper[4820]: I0203 12:45:30.827707 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ee96f9e1-369f-4e88-9766-419a9a05abe5-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-nqb5x\" (UID: \"ee96f9e1-369f-4e88-9766-419a9a05abe5\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-nqb5x" Feb 03 12:45:30 crc kubenswrapper[4820]: I0203 12:45:30.827962 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ee96f9e1-369f-4e88-9766-419a9a05abe5-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-nqb5x\" (UID: \"ee96f9e1-369f-4e88-9766-419a9a05abe5\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-nqb5x" Feb 03 12:45:30 crc kubenswrapper[4820]: I0203 12:45:30.833755 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ee96f9e1-369f-4e88-9766-419a9a05abe5-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-nqb5x\" (UID: \"ee96f9e1-369f-4e88-9766-419a9a05abe5\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-nqb5x" Feb 03 12:45:30 crc kubenswrapper[4820]: I0203 12:45:30.840947 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ee96f9e1-369f-4e88-9766-419a9a05abe5-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-nqb5x\" (UID: \"ee96f9e1-369f-4e88-9766-419a9a05abe5\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-nqb5x" Feb 03 12:45:30 crc kubenswrapper[4820]: I0203 12:45:30.848040 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q7gff\" (UniqueName: \"kubernetes.io/projected/ee96f9e1-369f-4e88-9766-419a9a05abe5-kube-api-access-q7gff\") pod 
\"validate-network-edpm-deployment-openstack-edpm-ipam-nqb5x\" (UID: \"ee96f9e1-369f-4e88-9766-419a9a05abe5\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-nqb5x" Feb 03 12:45:30 crc kubenswrapper[4820]: I0203 12:45:30.888787 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-nqb5x" Feb 03 12:45:31 crc kubenswrapper[4820]: I0203 12:45:31.453236 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-nqb5x"] Feb 03 12:45:31 crc kubenswrapper[4820]: I0203 12:45:31.481692 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-nqb5x" event={"ID":"ee96f9e1-369f-4e88-9766-419a9a05abe5","Type":"ContainerStarted","Data":"16581b381f5e71f8206d27c0c8fafc6bad17db9536de49af52fd273417704566"} Feb 03 12:45:32 crc kubenswrapper[4820]: I0203 12:45:32.033489 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-gcx8s"] Feb 03 12:45:32 crc kubenswrapper[4820]: I0203 12:45:32.043183 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-gcx8s"] Feb 03 12:45:32 crc kubenswrapper[4820]: I0203 12:45:32.512640 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-nqb5x" event={"ID":"ee96f9e1-369f-4e88-9766-419a9a05abe5","Type":"ContainerStarted","Data":"c6f5fde942bb3fdbe6719593572711653cd65b1896311f300bba3cc90b1e8815"} Feb 03 12:45:32 crc kubenswrapper[4820]: I0203 12:45:32.534434 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-nqb5x" podStartSLOduration=2.100518641 podStartE2EDuration="2.534395553s" podCreationTimestamp="2026-02-03 12:45:30 +0000 UTC" firstStartedPulling="2026-02-03 12:45:31.461337182 +0000 UTC m=+2448.984413046" lastFinishedPulling="2026-02-03 12:45:31.895214094 +0000 UTC m=+2449.418289958" observedRunningTime="2026-02-03 12:45:32.531324941 +0000 UTC m=+2450.054400805" watchObservedRunningTime="2026-02-03 12:45:32.534395553 +0000 UTC m=+2450.057471417" Feb 03 12:45:33 crc kubenswrapper[4820]: I0203 12:45:33.160033 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="03344b7f-772a-4f59-9955-99a923bd9fee" path="/var/lib/kubelet/pods/03344b7f-772a-4f59-9955-99a923bd9fee/volumes" Feb 03 12:45:37 crc kubenswrapper[4820]: I0203 12:45:37.561369 4820 generic.go:334] "Generic (PLEG): container finished" podID="ee96f9e1-369f-4e88-9766-419a9a05abe5" containerID="c6f5fde942bb3fdbe6719593572711653cd65b1896311f300bba3cc90b1e8815" exitCode=0 Feb 03 12:45:37 crc kubenswrapper[4820]: I0203 12:45:37.561482 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-nqb5x" event={"ID":"ee96f9e1-369f-4e88-9766-419a9a05abe5","Type":"ContainerDied","Data":"c6f5fde942bb3fdbe6719593572711653cd65b1896311f300bba3cc90b1e8815"} Feb 03 12:45:39 crc kubenswrapper[4820]: I0203 12:45:39.010667 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-nqb5x" Feb 03 12:45:39 crc kubenswrapper[4820]: I0203 12:45:39.096050 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ee96f9e1-369f-4e88-9766-419a9a05abe5-inventory\") pod \"ee96f9e1-369f-4e88-9766-419a9a05abe5\" (UID: \"ee96f9e1-369f-4e88-9766-419a9a05abe5\") " Feb 03 12:45:39 crc kubenswrapper[4820]: I0203 12:45:39.096222 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ee96f9e1-369f-4e88-9766-419a9a05abe5-ssh-key-openstack-edpm-ipam\") pod \"ee96f9e1-369f-4e88-9766-419a9a05abe5\" (UID: \"ee96f9e1-369f-4e88-9766-419a9a05abe5\") " Feb 03 12:45:39 crc kubenswrapper[4820]: I0203 12:45:39.096308 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q7gff\" (UniqueName: \"kubernetes.io/projected/ee96f9e1-369f-4e88-9766-419a9a05abe5-kube-api-access-q7gff\") pod \"ee96f9e1-369f-4e88-9766-419a9a05abe5\" (UID: \"ee96f9e1-369f-4e88-9766-419a9a05abe5\") " Feb 03 12:45:39 crc kubenswrapper[4820]: I0203 12:45:39.101706 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee96f9e1-369f-4e88-9766-419a9a05abe5-kube-api-access-q7gff" (OuterVolumeSpecName: "kube-api-access-q7gff") pod "ee96f9e1-369f-4e88-9766-419a9a05abe5" (UID: "ee96f9e1-369f-4e88-9766-419a9a05abe5"). InnerVolumeSpecName "kube-api-access-q7gff". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:45:39 crc kubenswrapper[4820]: I0203 12:45:39.125814 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee96f9e1-369f-4e88-9766-419a9a05abe5-inventory" (OuterVolumeSpecName: "inventory") pod "ee96f9e1-369f-4e88-9766-419a9a05abe5" (UID: "ee96f9e1-369f-4e88-9766-419a9a05abe5"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:45:39 crc kubenswrapper[4820]: I0203 12:45:39.144055 4820 scope.go:117] "RemoveContainer" containerID="245331ecea7ae2a2547766477b0478b53c635e1745e222cb8a06ff036be8bc77" Feb 03 12:45:39 crc kubenswrapper[4820]: E0203 12:45:39.144424 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qj7xr_openshift-machine-config-operator(2c02def6-29f2-448e-80ec-0c8ee290f053)\"" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" Feb 03 12:45:39 crc kubenswrapper[4820]: I0203 12:45:39.145530 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee96f9e1-369f-4e88-9766-419a9a05abe5-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "ee96f9e1-369f-4e88-9766-419a9a05abe5" (UID: "ee96f9e1-369f-4e88-9766-419a9a05abe5"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:45:39 crc kubenswrapper[4820]: I0203 12:45:39.199304 4820 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ee96f9e1-369f-4e88-9766-419a9a05abe5-inventory\") on node \"crc\" DevicePath \"\"" Feb 03 12:45:39 crc kubenswrapper[4820]: I0203 12:45:39.199672 4820 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ee96f9e1-369f-4e88-9766-419a9a05abe5-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 03 12:45:39 crc kubenswrapper[4820]: I0203 12:45:39.199701 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q7gff\" (UniqueName: \"kubernetes.io/projected/ee96f9e1-369f-4e88-9766-419a9a05abe5-kube-api-access-q7gff\") on node \"crc\" DevicePath \"\"" Feb 03 12:45:39 crc kubenswrapper[4820]: I0203 12:45:39.582372 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-nqb5x" Feb 03 12:45:39 crc kubenswrapper[4820]: I0203 12:45:39.582337 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-nqb5x" event={"ID":"ee96f9e1-369f-4e88-9766-419a9a05abe5","Type":"ContainerDied","Data":"16581b381f5e71f8206d27c0c8fafc6bad17db9536de49af52fd273417704566"} Feb 03 12:45:39 crc kubenswrapper[4820]: I0203 12:45:39.583102 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="16581b381f5e71f8206d27c0c8fafc6bad17db9536de49af52fd273417704566" Feb 03 12:45:39 crc kubenswrapper[4820]: I0203 12:45:39.668228 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-7qz9r"] Feb 03 12:45:39 crc kubenswrapper[4820]: E0203 12:45:39.668789 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee96f9e1-369f-4e88-9766-419a9a05abe5" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Feb 03 12:45:39 crc kubenswrapper[4820]: I0203 12:45:39.668807 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee96f9e1-369f-4e88-9766-419a9a05abe5" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Feb 03 12:45:39 crc kubenswrapper[4820]: I0203 12:45:39.669087 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee96f9e1-369f-4e88-9766-419a9a05abe5" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Feb 03 12:45:39 crc kubenswrapper[4820]: I0203 12:45:39.669920 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-7qz9r" Feb 03 12:45:39 crc kubenswrapper[4820]: I0203 12:45:39.672493 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 03 12:45:39 crc kubenswrapper[4820]: I0203 12:45:39.672637 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-q6jrk" Feb 03 12:45:39 crc kubenswrapper[4820]: I0203 12:45:39.673751 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 03 12:45:39 crc kubenswrapper[4820]: I0203 12:45:39.675903 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 03 12:45:39 crc kubenswrapper[4820]: I0203 12:45:39.683088 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-7qz9r"] Feb 03 12:45:39 crc kubenswrapper[4820]: I0203 12:45:39.712249 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pklgf\" (UniqueName: \"kubernetes.io/projected/9311424c-1f4a-434d-8e8c-e5383453074c-kube-api-access-pklgf\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-7qz9r\" (UID: \"9311424c-1f4a-434d-8e8c-e5383453074c\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-7qz9r" Feb 03 12:45:39 crc kubenswrapper[4820]: I0203 12:45:39.712437 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9311424c-1f4a-434d-8e8c-e5383453074c-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-7qz9r\" (UID: \"9311424c-1f4a-434d-8e8c-e5383453074c\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-7qz9r" Feb 03 12:45:39 crc kubenswrapper[4820]: I0203 12:45:39.712477 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9311424c-1f4a-434d-8e8c-e5383453074c-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-7qz9r\" (UID: \"9311424c-1f4a-434d-8e8c-e5383453074c\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-7qz9r" Feb 03 12:45:39 crc kubenswrapper[4820]: I0203 12:45:39.814586 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pklgf\" (UniqueName: \"kubernetes.io/projected/9311424c-1f4a-434d-8e8c-e5383453074c-kube-api-access-pklgf\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-7qz9r\" (UID: \"9311424c-1f4a-434d-8e8c-e5383453074c\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-7qz9r" Feb 03 12:45:39 crc kubenswrapper[4820]: I0203 12:45:39.814686 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9311424c-1f4a-434d-8e8c-e5383453074c-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-7qz9r\" (UID: \"9311424c-1f4a-434d-8e8c-e5383453074c\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-7qz9r" Feb 03 12:45:39 crc kubenswrapper[4820]: I0203 12:45:39.814724 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9311424c-1f4a-434d-8e8c-e5383453074c-inventory\") pod 
\"install-os-edpm-deployment-openstack-edpm-ipam-7qz9r\" (UID: \"9311424c-1f4a-434d-8e8c-e5383453074c\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-7qz9r" Feb 03 12:45:39 crc kubenswrapper[4820]: I0203 12:45:39.820568 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9311424c-1f4a-434d-8e8c-e5383453074c-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-7qz9r\" (UID: \"9311424c-1f4a-434d-8e8c-e5383453074c\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-7qz9r" Feb 03 12:45:39 crc kubenswrapper[4820]: I0203 12:45:39.822997 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9311424c-1f4a-434d-8e8c-e5383453074c-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-7qz9r\" (UID: \"9311424c-1f4a-434d-8e8c-e5383453074c\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-7qz9r" Feb 03 12:45:39 crc kubenswrapper[4820]: I0203 12:45:39.833809 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pklgf\" (UniqueName: \"kubernetes.io/projected/9311424c-1f4a-434d-8e8c-e5383453074c-kube-api-access-pklgf\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-7qz9r\" (UID: \"9311424c-1f4a-434d-8e8c-e5383453074c\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-7qz9r" Feb 03 12:45:40 crc kubenswrapper[4820]: I0203 12:45:40.000621 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-7qz9r" Feb 03 12:45:40 crc kubenswrapper[4820]: I0203 12:45:40.530402 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-7qz9r"] Feb 03 12:45:40 crc kubenswrapper[4820]: I0203 12:45:40.660683 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-7qz9r" event={"ID":"9311424c-1f4a-434d-8e8c-e5383453074c","Type":"ContainerStarted","Data":"d46799ee08c967ed06927e7fd361a56bba33dbf7b8fbd876cfbf5f248afbc91c"} Feb 03 12:45:41 crc kubenswrapper[4820]: I0203 12:45:41.672664 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-7qz9r" event={"ID":"9311424c-1f4a-434d-8e8c-e5383453074c","Type":"ContainerStarted","Data":"a572204978b4f40980b5609da0681335d12df1fb6f179b908de6d8db62346d4e"} Feb 03 12:45:41 crc kubenswrapper[4820]: I0203 12:45:41.811515 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-7qz9r" podStartSLOduration=2.337141263 podStartE2EDuration="2.811456264s" podCreationTimestamp="2026-02-03 12:45:39 +0000 UTC" firstStartedPulling="2026-02-03 12:45:40.532504518 +0000 UTC m=+2458.055580392" lastFinishedPulling="2026-02-03 12:45:41.006819529 +0000 UTC m=+2458.529895393" observedRunningTime="2026-02-03 12:45:41.689923951 +0000 UTC m=+2459.212999835" watchObservedRunningTime="2026-02-03 12:45:41.811456264 +0000 UTC m=+2459.334532148" Feb 03 12:45:50 crc kubenswrapper[4820]: I0203 12:45:50.143186 4820 scope.go:117] "RemoveContainer" containerID="245331ecea7ae2a2547766477b0478b53c635e1745e222cb8a06ff036be8bc77" Feb 03 12:45:50 crc kubenswrapper[4820]: E0203 12:45:50.144024 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qj7xr_openshift-machine-config-operator(2c02def6-29f2-448e-80ec-0c8ee290f053)\"" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" Feb 03 12:45:54 crc kubenswrapper[4820]: I0203 12:45:54.065729 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-pl7pt"] Feb 03 12:45:54 crc kubenswrapper[4820]: I0203 12:45:54.077300 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-pl7pt"] Feb 03 12:45:55 crc kubenswrapper[4820]: I0203 12:45:55.162569 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ff7cd9cd-238d-4f55-87f8-6d4f78e93e34" path="/var/lib/kubelet/pods/ff7cd9cd-238d-4f55-87f8-6d4f78e93e34/volumes" Feb 03 12:46:05 crc kubenswrapper[4820]: I0203 12:46:05.143576 4820 scope.go:117] "RemoveContainer" containerID="245331ecea7ae2a2547766477b0478b53c635e1745e222cb8a06ff036be8bc77" Feb 03 12:46:05 crc kubenswrapper[4820]: E0203 12:46:05.144376 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qj7xr_openshift-machine-config-operator(2c02def6-29f2-448e-80ec-0c8ee290f053)\"" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" Feb 03 12:46:17 crc kubenswrapper[4820]: I0203 12:46:17.326677 4820 generic.go:334] "Generic (PLEG): container finished" podID="9311424c-1f4a-434d-8e8c-e5383453074c" containerID="a572204978b4f40980b5609da0681335d12df1fb6f179b908de6d8db62346d4e" exitCode=0 Feb 03 12:46:17 crc kubenswrapper[4820]: I0203 12:46:17.326758 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-7qz9r" event={"ID":"9311424c-1f4a-434d-8e8c-e5383453074c","Type":"ContainerDied","Data":"a572204978b4f40980b5609da0681335d12df1fb6f179b908de6d8db62346d4e"} Feb 03 12:46:18 crc kubenswrapper[4820]: I0203 12:46:18.768314 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-7qz9r" Feb 03 12:46:18 crc kubenswrapper[4820]: I0203 12:46:18.812816 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9311424c-1f4a-434d-8e8c-e5383453074c-inventory\") pod \"9311424c-1f4a-434d-8e8c-e5383453074c\" (UID: \"9311424c-1f4a-434d-8e8c-e5383453074c\") " Feb 03 12:46:18 crc kubenswrapper[4820]: I0203 12:46:18.812991 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pklgf\" (UniqueName: \"kubernetes.io/projected/9311424c-1f4a-434d-8e8c-e5383453074c-kube-api-access-pklgf\") pod \"9311424c-1f4a-434d-8e8c-e5383453074c\" (UID: \"9311424c-1f4a-434d-8e8c-e5383453074c\") " Feb 03 12:46:18 crc kubenswrapper[4820]: I0203 12:46:18.813100 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9311424c-1f4a-434d-8e8c-e5383453074c-ssh-key-openstack-edpm-ipam\") pod \"9311424c-1f4a-434d-8e8c-e5383453074c\" (UID: \"9311424c-1f4a-434d-8e8c-e5383453074c\") " Feb 03 12:46:18 crc kubenswrapper[4820]: I0203 12:46:18.820492 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9311424c-1f4a-434d-8e8c-e5383453074c-kube-api-access-pklgf" (OuterVolumeSpecName: "kube-api-access-pklgf") pod "9311424c-1f4a-434d-8e8c-e5383453074c" (UID: "9311424c-1f4a-434d-8e8c-e5383453074c"). InnerVolumeSpecName "kube-api-access-pklgf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:46:18 crc kubenswrapper[4820]: I0203 12:46:18.847998 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9311424c-1f4a-434d-8e8c-e5383453074c-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "9311424c-1f4a-434d-8e8c-e5383453074c" (UID: "9311424c-1f4a-434d-8e8c-e5383453074c"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:46:18 crc kubenswrapper[4820]: I0203 12:46:18.855272 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9311424c-1f4a-434d-8e8c-e5383453074c-inventory" (OuterVolumeSpecName: "inventory") pod "9311424c-1f4a-434d-8e8c-e5383453074c" (UID: "9311424c-1f4a-434d-8e8c-e5383453074c"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:46:18 crc kubenswrapper[4820]: I0203 12:46:18.914661 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pklgf\" (UniqueName: \"kubernetes.io/projected/9311424c-1f4a-434d-8e8c-e5383453074c-kube-api-access-pklgf\") on node \"crc\" DevicePath \"\"" Feb 03 12:46:18 crc kubenswrapper[4820]: I0203 12:46:18.914708 4820 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9311424c-1f4a-434d-8e8c-e5383453074c-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 03 12:46:18 crc kubenswrapper[4820]: I0203 12:46:18.914722 4820 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9311424c-1f4a-434d-8e8c-e5383453074c-inventory\") on node \"crc\" DevicePath \"\"" Feb 03 12:46:19 crc kubenswrapper[4820]: I0203 12:46:19.143975 4820 scope.go:117] "RemoveContainer" containerID="245331ecea7ae2a2547766477b0478b53c635e1745e222cb8a06ff036be8bc77" Feb 03 12:46:19 crc kubenswrapper[4820]: E0203 12:46:19.144449 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qj7xr_openshift-machine-config-operator(2c02def6-29f2-448e-80ec-0c8ee290f053)\"" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" Feb 03 12:46:19 crc kubenswrapper[4820]: I0203 12:46:19.350576 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-7qz9r" event={"ID":"9311424c-1f4a-434d-8e8c-e5383453074c","Type":"ContainerDied","Data":"d46799ee08c967ed06927e7fd361a56bba33dbf7b8fbd876cfbf5f248afbc91c"} Feb 03 12:46:19 crc kubenswrapper[4820]: I0203 12:46:19.351203 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d46799ee08c967ed06927e7fd361a56bba33dbf7b8fbd876cfbf5f248afbc91c" Feb 03 12:46:19 crc kubenswrapper[4820]: I0203 12:46:19.350694 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-7qz9r" Feb 03 12:46:19 crc kubenswrapper[4820]: I0203 12:46:19.467561 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-rl9z7"] Feb 03 12:46:19 crc kubenswrapper[4820]: E0203 12:46:19.468367 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9311424c-1f4a-434d-8e8c-e5383453074c" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Feb 03 12:46:19 crc kubenswrapper[4820]: I0203 12:46:19.468465 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="9311424c-1f4a-434d-8e8c-e5383453074c" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Feb 03 12:46:19 crc kubenswrapper[4820]: I0203 12:46:19.468746 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="9311424c-1f4a-434d-8e8c-e5383453074c" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Feb 03 12:46:19 crc kubenswrapper[4820]: I0203 12:46:19.470040 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-rl9z7" Feb 03 12:46:19 crc kubenswrapper[4820]: I0203 12:46:19.472427 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 03 12:46:19 crc kubenswrapper[4820]: I0203 12:46:19.472792 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 03 12:46:19 crc kubenswrapper[4820]: I0203 12:46:19.473004 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-q6jrk" Feb 03 12:46:19 crc kubenswrapper[4820]: I0203 12:46:19.473154 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 03 12:46:19 crc kubenswrapper[4820]: I0203 12:46:19.479195 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-rl9z7"] Feb 03 12:46:19 crc kubenswrapper[4820]: I0203 12:46:19.628621 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/126074cf-7213-48ec-8909-5a8286bb11b6-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-rl9z7\" (UID: \"126074cf-7213-48ec-8909-5a8286bb11b6\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-rl9z7" Feb 03 12:46:19 crc kubenswrapper[4820]: I0203 12:46:19.628706 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lrvkn\" (UniqueName: \"kubernetes.io/projected/126074cf-7213-48ec-8909-5a8286bb11b6-kube-api-access-lrvkn\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-rl9z7\" (UID: \"126074cf-7213-48ec-8909-5a8286bb11b6\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-rl9z7" Feb 03 12:46:19 crc kubenswrapper[4820]: I0203 12:46:19.629546 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/126074cf-7213-48ec-8909-5a8286bb11b6-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-rl9z7\" (UID: \"126074cf-7213-48ec-8909-5a8286bb11b6\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-rl9z7" Feb 03 12:46:19 crc kubenswrapper[4820]: I0203 12:46:19.731533 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lrvkn\" (UniqueName: \"kubernetes.io/projected/126074cf-7213-48ec-8909-5a8286bb11b6-kube-api-access-lrvkn\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-rl9z7\" (UID: \"126074cf-7213-48ec-8909-5a8286bb11b6\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-rl9z7" Feb 03 12:46:19 crc kubenswrapper[4820]: I0203 12:46:19.731606 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/126074cf-7213-48ec-8909-5a8286bb11b6-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-rl9z7\" (UID: \"126074cf-7213-48ec-8909-5a8286bb11b6\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-rl9z7" Feb 03 12:46:19 crc kubenswrapper[4820]: I0203 12:46:19.731822 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/126074cf-7213-48ec-8909-5a8286bb11b6-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-rl9z7\" (UID: \"126074cf-7213-48ec-8909-5a8286bb11b6\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-rl9z7" Feb 03 12:46:19 crc kubenswrapper[4820]: I0203 12:46:19.735563 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/126074cf-7213-48ec-8909-5a8286bb11b6-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-rl9z7\" (UID: \"126074cf-7213-48ec-8909-5a8286bb11b6\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-rl9z7" Feb 03 12:46:19 crc kubenswrapper[4820]: I0203 12:46:19.735995 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/126074cf-7213-48ec-8909-5a8286bb11b6-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-rl9z7\" (UID: \"126074cf-7213-48ec-8909-5a8286bb11b6\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-rl9z7" Feb 03 12:46:19 crc kubenswrapper[4820]: I0203 12:46:19.752540 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lrvkn\" (UniqueName: \"kubernetes.io/projected/126074cf-7213-48ec-8909-5a8286bb11b6-kube-api-access-lrvkn\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-rl9z7\" (UID: \"126074cf-7213-48ec-8909-5a8286bb11b6\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-rl9z7" Feb 03 12:46:19 crc kubenswrapper[4820]: I0203 12:46:19.795608 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-rl9z7" Feb 03 12:46:20 crc kubenswrapper[4820]: I0203 12:46:20.334150 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-rl9z7"] Feb 03 12:46:20 crc kubenswrapper[4820]: I0203 12:46:20.361736 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-rl9z7" event={"ID":"126074cf-7213-48ec-8909-5a8286bb11b6","Type":"ContainerStarted","Data":"036c2b73fcdd4e2186a93b69f6881aa97f7f8244d9fb0a9881a40aec32af8674"} Feb 03 12:46:21 crc kubenswrapper[4820]: I0203 12:46:21.373558 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-rl9z7" event={"ID":"126074cf-7213-48ec-8909-5a8286bb11b6","Type":"ContainerStarted","Data":"2018052ff21cb34c0b0f085c00a033cdb14d30ebb950ef6498d9d94028a84b7e"} Feb 03 12:46:21 crc kubenswrapper[4820]: I0203 12:46:21.401400 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-rl9z7" podStartSLOduration=1.999370455 podStartE2EDuration="2.401380668s" podCreationTimestamp="2026-02-03 12:46:19 +0000 UTC" firstStartedPulling="2026-02-03 12:46:20.341725876 +0000 UTC m=+2497.864801740" lastFinishedPulling="2026-02-03 12:46:20.743736079 +0000 UTC m=+2498.266811953" observedRunningTime="2026-02-03 12:46:21.397135731 +0000 UTC m=+2498.920211595" watchObservedRunningTime="2026-02-03 12:46:21.401380668 +0000 UTC m=+2498.924456522" Feb 03 12:46:30 crc kubenswrapper[4820]: I0203 12:46:30.143532 4820 scope.go:117] "RemoveContainer" containerID="245331ecea7ae2a2547766477b0478b53c635e1745e222cb8a06ff036be8bc77" Feb 03 12:46:30 crc kubenswrapper[4820]: E0203 
12:46:30.144545 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qj7xr_openshift-machine-config-operator(2c02def6-29f2-448e-80ec-0c8ee290f053)\"" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" Feb 03 12:46:30 crc kubenswrapper[4820]: I0203 12:46:30.597677 4820 scope.go:117] "RemoveContainer" containerID="17517eaf1b7daae15a9f186aa6d51c7fa4ac86a2ede0b331062db143c586a3f3" Feb 03 12:46:30 crc kubenswrapper[4820]: I0203 12:46:30.645561 4820 scope.go:117] "RemoveContainer" containerID="b265cd32d381c2044cc3e1ec2d613885bb76a81ac4e65500e57126961cc3884f" Feb 03 12:46:41 crc kubenswrapper[4820]: I0203 12:46:41.143244 4820 scope.go:117] "RemoveContainer" containerID="245331ecea7ae2a2547766477b0478b53c635e1745e222cb8a06ff036be8bc77" Feb 03 12:46:41 crc kubenswrapper[4820]: E0203 12:46:41.144092 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qj7xr_openshift-machine-config-operator(2c02def6-29f2-448e-80ec-0c8ee290f053)\"" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" Feb 03 12:46:55 crc kubenswrapper[4820]: I0203 12:46:55.142663 4820 scope.go:117] "RemoveContainer" containerID="245331ecea7ae2a2547766477b0478b53c635e1745e222cb8a06ff036be8bc77" Feb 03 12:46:55 crc kubenswrapper[4820]: E0203 12:46:55.143396 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qj7xr_openshift-machine-config-operator(2c02def6-29f2-448e-80ec-0c8ee290f053)\"" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" Feb 03 12:47:05 crc kubenswrapper[4820]: I0203 12:47:05.178048 4820 generic.go:334] "Generic (PLEG): container finished" podID="126074cf-7213-48ec-8909-5a8286bb11b6" containerID="2018052ff21cb34c0b0f085c00a033cdb14d30ebb950ef6498d9d94028a84b7e" exitCode=0 Feb 03 12:47:05 crc kubenswrapper[4820]: I0203 12:47:05.178368 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-rl9z7" event={"ID":"126074cf-7213-48ec-8909-5a8286bb11b6","Type":"ContainerDied","Data":"2018052ff21cb34c0b0f085c00a033cdb14d30ebb950ef6498d9d94028a84b7e"} Feb 03 12:47:06 crc kubenswrapper[4820]: I0203 12:47:06.596958 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-rl9z7" Feb 03 12:47:06 crc kubenswrapper[4820]: I0203 12:47:06.772055 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/126074cf-7213-48ec-8909-5a8286bb11b6-ssh-key-openstack-edpm-ipam\") pod \"126074cf-7213-48ec-8909-5a8286bb11b6\" (UID: \"126074cf-7213-48ec-8909-5a8286bb11b6\") " Feb 03 12:47:06 crc kubenswrapper[4820]: I0203 12:47:06.772313 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/126074cf-7213-48ec-8909-5a8286bb11b6-inventory\") pod \"126074cf-7213-48ec-8909-5a8286bb11b6\" (UID: \"126074cf-7213-48ec-8909-5a8286bb11b6\") " Feb 03 12:47:06 crc kubenswrapper[4820]: I0203 12:47:06.772435 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lrvkn\" (UniqueName: \"kubernetes.io/projected/126074cf-7213-48ec-8909-5a8286bb11b6-kube-api-access-lrvkn\") pod \"126074cf-7213-48ec-8909-5a8286bb11b6\" (UID: \"126074cf-7213-48ec-8909-5a8286bb11b6\") " Feb 03 12:47:06 crc kubenswrapper[4820]: I0203 12:47:06.779856 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/126074cf-7213-48ec-8909-5a8286bb11b6-kube-api-access-lrvkn" (OuterVolumeSpecName: "kube-api-access-lrvkn") pod "126074cf-7213-48ec-8909-5a8286bb11b6" (UID: "126074cf-7213-48ec-8909-5a8286bb11b6"). InnerVolumeSpecName "kube-api-access-lrvkn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:47:06 crc kubenswrapper[4820]: I0203 12:47:06.806065 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/126074cf-7213-48ec-8909-5a8286bb11b6-inventory" (OuterVolumeSpecName: "inventory") pod "126074cf-7213-48ec-8909-5a8286bb11b6" (UID: "126074cf-7213-48ec-8909-5a8286bb11b6"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:47:06 crc kubenswrapper[4820]: I0203 12:47:06.814852 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/126074cf-7213-48ec-8909-5a8286bb11b6-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "126074cf-7213-48ec-8909-5a8286bb11b6" (UID: "126074cf-7213-48ec-8909-5a8286bb11b6"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:47:06 crc kubenswrapper[4820]: I0203 12:47:06.874241 4820 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/126074cf-7213-48ec-8909-5a8286bb11b6-inventory\") on node \"crc\" DevicePath \"\"" Feb 03 12:47:06 crc kubenswrapper[4820]: I0203 12:47:06.874278 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lrvkn\" (UniqueName: \"kubernetes.io/projected/126074cf-7213-48ec-8909-5a8286bb11b6-kube-api-access-lrvkn\") on node \"crc\" DevicePath \"\"" Feb 03 12:47:06 crc kubenswrapper[4820]: I0203 12:47:06.874289 4820 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/126074cf-7213-48ec-8909-5a8286bb11b6-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 03 12:47:07 crc kubenswrapper[4820]: I0203 12:47:07.211130 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-rl9z7" event={"ID":"126074cf-7213-48ec-8909-5a8286bb11b6","Type":"ContainerDied","Data":"036c2b73fcdd4e2186a93b69f6881aa97f7f8244d9fb0a9881a40aec32af8674"} Feb 03 12:47:07 crc kubenswrapper[4820]: I0203 12:47:07.211181 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="036c2b73fcdd4e2186a93b69f6881aa97f7f8244d9fb0a9881a40aec32af8674" Feb 03 12:47:07 crc kubenswrapper[4820]: I0203 12:47:07.211201 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-rl9z7" Feb 03 12:47:07 crc kubenswrapper[4820]: I0203 12:47:07.468416 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-g9h74"] Feb 03 12:47:07 crc kubenswrapper[4820]: E0203 12:47:07.469003 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="126074cf-7213-48ec-8909-5a8286bb11b6" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Feb 03 12:47:07 crc kubenswrapper[4820]: I0203 12:47:07.469028 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="126074cf-7213-48ec-8909-5a8286bb11b6" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Feb 03 12:47:07 crc kubenswrapper[4820]: I0203 12:47:07.469278 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="126074cf-7213-48ec-8909-5a8286bb11b6" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Feb 03 12:47:07 crc kubenswrapper[4820]: I0203 12:47:07.470231 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-g9h74" Feb 03 12:47:07 crc kubenswrapper[4820]: I0203 12:47:07.473465 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 03 12:47:07 crc kubenswrapper[4820]: I0203 12:47:07.473531 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-q6jrk" Feb 03 12:47:07 crc kubenswrapper[4820]: I0203 12:47:07.473636 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 03 12:47:07 crc kubenswrapper[4820]: I0203 12:47:07.473977 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 03 12:47:07 crc kubenswrapper[4820]: I0203 12:47:07.480510 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-g9h74"] Feb 03 12:47:07 crc kubenswrapper[4820]: I0203 12:47:07.649714 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7xkr9\" (UniqueName: \"kubernetes.io/projected/dc7a208f-6c45-4374-ace1-70b2e16c499c-kube-api-access-7xkr9\") pod \"ssh-known-hosts-edpm-deployment-g9h74\" (UID: \"dc7a208f-6c45-4374-ace1-70b2e16c499c\") " pod="openstack/ssh-known-hosts-edpm-deployment-g9h74" Feb 03 12:47:07 crc kubenswrapper[4820]: I0203 12:47:07.651165 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/dc7a208f-6c45-4374-ace1-70b2e16c499c-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-g9h74\" (UID: \"dc7a208f-6c45-4374-ace1-70b2e16c499c\") " pod="openstack/ssh-known-hosts-edpm-deployment-g9h74" Feb 03 12:47:07 crc kubenswrapper[4820]: I0203 12:47:07.651829 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dc7a208f-6c45-4374-ace1-70b2e16c499c-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-g9h74\" (UID: \"dc7a208f-6c45-4374-ace1-70b2e16c499c\") " pod="openstack/ssh-known-hosts-edpm-deployment-g9h74" Feb 03 12:47:07 crc kubenswrapper[4820]: I0203 12:47:07.752830 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dc7a208f-6c45-4374-ace1-70b2e16c499c-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-g9h74\" (UID: \"dc7a208f-6c45-4374-ace1-70b2e16c499c\") " pod="openstack/ssh-known-hosts-edpm-deployment-g9h74" Feb 03 12:47:07 crc kubenswrapper[4820]: I0203 12:47:07.752912 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7xkr9\" (UniqueName: \"kubernetes.io/projected/dc7a208f-6c45-4374-ace1-70b2e16c499c-kube-api-access-7xkr9\") pod \"ssh-known-hosts-edpm-deployment-g9h74\" (UID: \"dc7a208f-6c45-4374-ace1-70b2e16c499c\") " pod="openstack/ssh-known-hosts-edpm-deployment-g9h74" Feb 03 12:47:07 crc kubenswrapper[4820]: I0203 12:47:07.752939 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/dc7a208f-6c45-4374-ace1-70b2e16c499c-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-g9h74\" (UID: \"dc7a208f-6c45-4374-ace1-70b2e16c499c\") " pod="openstack/ssh-known-hosts-edpm-deployment-g9h74" Feb 03 12:47:07 crc 
kubenswrapper[4820]: I0203 12:47:07.758379 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/dc7a208f-6c45-4374-ace1-70b2e16c499c-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-g9h74\" (UID: \"dc7a208f-6c45-4374-ace1-70b2e16c499c\") " pod="openstack/ssh-known-hosts-edpm-deployment-g9h74" Feb 03 12:47:07 crc kubenswrapper[4820]: I0203 12:47:07.767732 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dc7a208f-6c45-4374-ace1-70b2e16c499c-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-g9h74\" (UID: \"dc7a208f-6c45-4374-ace1-70b2e16c499c\") " pod="openstack/ssh-known-hosts-edpm-deployment-g9h74" Feb 03 12:47:07 crc kubenswrapper[4820]: I0203 12:47:07.782128 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7xkr9\" (UniqueName: \"kubernetes.io/projected/dc7a208f-6c45-4374-ace1-70b2e16c499c-kube-api-access-7xkr9\") pod \"ssh-known-hosts-edpm-deployment-g9h74\" (UID: \"dc7a208f-6c45-4374-ace1-70b2e16c499c\") " pod="openstack/ssh-known-hosts-edpm-deployment-g9h74" Feb 03 12:47:07 crc kubenswrapper[4820]: I0203 12:47:07.802985 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-g9h74" Feb 03 12:47:08 crc kubenswrapper[4820]: I0203 12:47:08.391511 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-g9h74"] Feb 03 12:47:08 crc kubenswrapper[4820]: W0203 12:47:08.395172 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddc7a208f_6c45_4374_ace1_70b2e16c499c.slice/crio-48539be7222bdd9d6a41f68f278b315b1c47bb810865e690ec375f591a3a8db0 WatchSource:0}: Error finding container 48539be7222bdd9d6a41f68f278b315b1c47bb810865e690ec375f591a3a8db0: Status 404 returned error can't find the container with id 48539be7222bdd9d6a41f68f278b315b1c47bb810865e690ec375f591a3a8db0 Feb 03 12:47:09 crc kubenswrapper[4820]: I0203 12:47:09.234421 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-g9h74" event={"ID":"dc7a208f-6c45-4374-ace1-70b2e16c499c","Type":"ContainerStarted","Data":"48539be7222bdd9d6a41f68f278b315b1c47bb810865e690ec375f591a3a8db0"} Feb 03 12:47:10 crc kubenswrapper[4820]: I0203 12:47:10.142540 4820 scope.go:117] "RemoveContainer" containerID="245331ecea7ae2a2547766477b0478b53c635e1745e222cb8a06ff036be8bc77" Feb 03 12:47:10 crc kubenswrapper[4820]: E0203 12:47:10.142938 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qj7xr_openshift-machine-config-operator(2c02def6-29f2-448e-80ec-0c8ee290f053)\"" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" Feb 03 12:47:10 crc kubenswrapper[4820]: I0203 12:47:10.447510 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-g9h74" event={"ID":"dc7a208f-6c45-4374-ace1-70b2e16c499c","Type":"ContainerStarted","Data":"e934b3ce1ad482920262d9da0264219a3cc7b6e4bace2dd238f7db978116123e"} Feb 03 12:47:10 crc kubenswrapper[4820]: I0203 12:47:10.470528 4820 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-g9h74" podStartSLOduration=3.039231309 podStartE2EDuration="3.470505061s" podCreationTimestamp="2026-02-03 12:47:07 +0000 UTC" firstStartedPulling="2026-02-03 12:47:08.39684565 +0000 UTC m=+2545.919921514" lastFinishedPulling="2026-02-03 12:47:08.828119382 +0000 UTC m=+2546.351195266" observedRunningTime="2026-02-03 12:47:10.463484824 +0000 UTC m=+2547.986560708" watchObservedRunningTime="2026-02-03 12:47:10.470505061 +0000 UTC m=+2547.993580925" Feb 03 12:47:16 crc kubenswrapper[4820]: I0203 12:47:16.848972 4820 generic.go:334] "Generic (PLEG): container finished" podID="dc7a208f-6c45-4374-ace1-70b2e16c499c" containerID="e934b3ce1ad482920262d9da0264219a3cc7b6e4bace2dd238f7db978116123e" exitCode=0 Feb 03 12:47:16 crc kubenswrapper[4820]: I0203 12:47:16.849076 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-g9h74" event={"ID":"dc7a208f-6c45-4374-ace1-70b2e16c499c","Type":"ContainerDied","Data":"e934b3ce1ad482920262d9da0264219a3cc7b6e4bace2dd238f7db978116123e"} Feb 03 12:47:18 crc kubenswrapper[4820]: I0203 12:47:18.272752 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-g9h74" Feb 03 12:47:18 crc kubenswrapper[4820]: I0203 12:47:18.313478 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dc7a208f-6c45-4374-ace1-70b2e16c499c-ssh-key-openstack-edpm-ipam\") pod \"dc7a208f-6c45-4374-ace1-70b2e16c499c\" (UID: \"dc7a208f-6c45-4374-ace1-70b2e16c499c\") " Feb 03 12:47:18 crc kubenswrapper[4820]: I0203 12:47:18.313705 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/dc7a208f-6c45-4374-ace1-70b2e16c499c-inventory-0\") pod \"dc7a208f-6c45-4374-ace1-70b2e16c499c\" (UID: \"dc7a208f-6c45-4374-ace1-70b2e16c499c\") " Feb 03 12:47:18 crc kubenswrapper[4820]: I0203 12:47:18.313755 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7xkr9\" (UniqueName: \"kubernetes.io/projected/dc7a208f-6c45-4374-ace1-70b2e16c499c-kube-api-access-7xkr9\") pod \"dc7a208f-6c45-4374-ace1-70b2e16c499c\" (UID: \"dc7a208f-6c45-4374-ace1-70b2e16c499c\") " Feb 03 12:47:18 crc kubenswrapper[4820]: I0203 12:47:18.324374 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc7a208f-6c45-4374-ace1-70b2e16c499c-kube-api-access-7xkr9" (OuterVolumeSpecName: "kube-api-access-7xkr9") pod "dc7a208f-6c45-4374-ace1-70b2e16c499c" (UID: "dc7a208f-6c45-4374-ace1-70b2e16c499c"). InnerVolumeSpecName "kube-api-access-7xkr9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:47:18 crc kubenswrapper[4820]: I0203 12:47:18.342435 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc7a208f-6c45-4374-ace1-70b2e16c499c-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "dc7a208f-6c45-4374-ace1-70b2e16c499c" (UID: "dc7a208f-6c45-4374-ace1-70b2e16c499c"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:47:18 crc kubenswrapper[4820]: I0203 12:47:18.347704 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc7a208f-6c45-4374-ace1-70b2e16c499c-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "dc7a208f-6c45-4374-ace1-70b2e16c499c" (UID: "dc7a208f-6c45-4374-ace1-70b2e16c499c"). InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:47:18 crc kubenswrapper[4820]: I0203 12:47:18.416588 4820 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dc7a208f-6c45-4374-ace1-70b2e16c499c-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 03 12:47:18 crc kubenswrapper[4820]: I0203 12:47:18.416629 4820 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/dc7a208f-6c45-4374-ace1-70b2e16c499c-inventory-0\") on node \"crc\" DevicePath \"\"" Feb 03 12:47:18 crc kubenswrapper[4820]: I0203 12:47:18.416644 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7xkr9\" (UniqueName: \"kubernetes.io/projected/dc7a208f-6c45-4374-ace1-70b2e16c499c-kube-api-access-7xkr9\") on node \"crc\" DevicePath \"\"" Feb 03 12:47:18 crc kubenswrapper[4820]: I0203 12:47:18.743587 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-b6lbs"] Feb 03 12:47:18 crc kubenswrapper[4820]: E0203 12:47:18.744006 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc7a208f-6c45-4374-ace1-70b2e16c499c" containerName="ssh-known-hosts-edpm-deployment" Feb 03 12:47:18 crc kubenswrapper[4820]: I0203 12:47:18.744029 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc7a208f-6c45-4374-ace1-70b2e16c499c" containerName="ssh-known-hosts-edpm-deployment" Feb 03 12:47:18 crc kubenswrapper[4820]: I0203 12:47:18.744228 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc7a208f-6c45-4374-ace1-70b2e16c499c" containerName="ssh-known-hosts-edpm-deployment" Feb 03 12:47:18 crc kubenswrapper[4820]: I0203 12:47:18.745686 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-b6lbs" Feb 03 12:47:18 crc kubenswrapper[4820]: I0203 12:47:18.754648 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-b6lbs"] Feb 03 12:47:18 crc kubenswrapper[4820]: I0203 12:47:18.791274 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hwdq8\" (UniqueName: \"kubernetes.io/projected/94a946d7-59cf-49d2-872a-6ec409731e85-kube-api-access-hwdq8\") pod \"redhat-marketplace-b6lbs\" (UID: \"94a946d7-59cf-49d2-872a-6ec409731e85\") " pod="openshift-marketplace/redhat-marketplace-b6lbs" Feb 03 12:47:18 crc kubenswrapper[4820]: I0203 12:47:18.791334 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94a946d7-59cf-49d2-872a-6ec409731e85-utilities\") pod \"redhat-marketplace-b6lbs\" (UID: \"94a946d7-59cf-49d2-872a-6ec409731e85\") " pod="openshift-marketplace/redhat-marketplace-b6lbs" Feb 03 12:47:18 crc kubenswrapper[4820]: I0203 12:47:18.791460 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94a946d7-59cf-49d2-872a-6ec409731e85-catalog-content\") pod \"redhat-marketplace-b6lbs\" (UID: \"94a946d7-59cf-49d2-872a-6ec409731e85\") " pod="openshift-marketplace/redhat-marketplace-b6lbs" Feb 03 12:47:18 crc kubenswrapper[4820]: I0203 12:47:18.868026 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-g9h74" event={"ID":"dc7a208f-6c45-4374-ace1-70b2e16c499c","Type":"ContainerDied","Data":"48539be7222bdd9d6a41f68f278b315b1c47bb810865e690ec375f591a3a8db0"} Feb 03 12:47:18 crc kubenswrapper[4820]: I0203 12:47:18.868067 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="48539be7222bdd9d6a41f68f278b315b1c47bb810865e690ec375f591a3a8db0" Feb 03 12:47:18 crc kubenswrapper[4820]: I0203 12:47:18.868116 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-g9h74" Feb 03 12:47:18 crc kubenswrapper[4820]: I0203 12:47:18.893662 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94a946d7-59cf-49d2-872a-6ec409731e85-catalog-content\") pod \"redhat-marketplace-b6lbs\" (UID: \"94a946d7-59cf-49d2-872a-6ec409731e85\") " pod="openshift-marketplace/redhat-marketplace-b6lbs" Feb 03 12:47:18 crc kubenswrapper[4820]: I0203 12:47:18.893786 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hwdq8\" (UniqueName: \"kubernetes.io/projected/94a946d7-59cf-49d2-872a-6ec409731e85-kube-api-access-hwdq8\") pod \"redhat-marketplace-b6lbs\" (UID: \"94a946d7-59cf-49d2-872a-6ec409731e85\") " pod="openshift-marketplace/redhat-marketplace-b6lbs" Feb 03 12:47:18 crc kubenswrapper[4820]: I0203 12:47:18.893829 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94a946d7-59cf-49d2-872a-6ec409731e85-utilities\") pod \"redhat-marketplace-b6lbs\" (UID: \"94a946d7-59cf-49d2-872a-6ec409731e85\") " pod="openshift-marketplace/redhat-marketplace-b6lbs" Feb 03 12:47:18 crc kubenswrapper[4820]: I0203 12:47:18.894534 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94a946d7-59cf-49d2-872a-6ec409731e85-utilities\") pod \"redhat-marketplace-b6lbs\" (UID: \"94a946d7-59cf-49d2-872a-6ec409731e85\") " pod="openshift-marketplace/redhat-marketplace-b6lbs" Feb 03 12:47:18 crc kubenswrapper[4820]: I0203 12:47:18.894654 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94a946d7-59cf-49d2-872a-6ec409731e85-catalog-content\") pod \"redhat-marketplace-b6lbs\" (UID: \"94a946d7-59cf-49d2-872a-6ec409731e85\") " pod="openshift-marketplace/redhat-marketplace-b6lbs" Feb 03 12:47:18 crc kubenswrapper[4820]: I0203 12:47:18.922246 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hwdq8\" (UniqueName: \"kubernetes.io/projected/94a946d7-59cf-49d2-872a-6ec409731e85-kube-api-access-hwdq8\") pod \"redhat-marketplace-b6lbs\" (UID: \"94a946d7-59cf-49d2-872a-6ec409731e85\") " pod="openshift-marketplace/redhat-marketplace-b6lbs" Feb 03 12:47:18 crc kubenswrapper[4820]: I0203 12:47:18.974235 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-dzskl"] Feb 03 12:47:18 crc kubenswrapper[4820]: I0203 12:47:18.975778 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-dzskl" Feb 03 12:47:18 crc kubenswrapper[4820]: I0203 12:47:18.978250 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-q6jrk" Feb 03 12:47:18 crc kubenswrapper[4820]: I0203 12:47:18.978600 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 03 12:47:18 crc kubenswrapper[4820]: I0203 12:47:18.979082 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 03 12:47:18 crc kubenswrapper[4820]: I0203 12:47:18.979642 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 03 12:47:18 crc kubenswrapper[4820]: I0203 12:47:18.995535 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fe0dcc37-428f-4efa-a725-e4361affcacd-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-dzskl\" (UID: \"fe0dcc37-428f-4efa-a725-e4361affcacd\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-dzskl" Feb 03 12:47:18 crc kubenswrapper[4820]: I0203 12:47:18.995607 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4hjzd\" (UniqueName: \"kubernetes.io/projected/fe0dcc37-428f-4efa-a725-e4361affcacd-kube-api-access-4hjzd\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-dzskl\" (UID: \"fe0dcc37-428f-4efa-a725-e4361affcacd\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-dzskl" Feb 03 12:47:18 crc kubenswrapper[4820]: I0203 12:47:18.995689 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fe0dcc37-428f-4efa-a725-e4361affcacd-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-dzskl\" (UID: \"fe0dcc37-428f-4efa-a725-e4361affcacd\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-dzskl" Feb 03 12:47:19 crc kubenswrapper[4820]: I0203 12:47:19.014097 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-dzskl"] Feb 03 12:47:19 crc kubenswrapper[4820]: I0203 12:47:19.079935 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-b6lbs" Feb 03 12:47:19 crc kubenswrapper[4820]: I0203 12:47:19.097817 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fe0dcc37-428f-4efa-a725-e4361affcacd-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-dzskl\" (UID: \"fe0dcc37-428f-4efa-a725-e4361affcacd\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-dzskl" Feb 03 12:47:19 crc kubenswrapper[4820]: I0203 12:47:19.097904 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4hjzd\" (UniqueName: \"kubernetes.io/projected/fe0dcc37-428f-4efa-a725-e4361affcacd-kube-api-access-4hjzd\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-dzskl\" (UID: \"fe0dcc37-428f-4efa-a725-e4361affcacd\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-dzskl" Feb 03 12:47:19 crc kubenswrapper[4820]: I0203 12:47:19.097985 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fe0dcc37-428f-4efa-a725-e4361affcacd-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-dzskl\" (UID: \"fe0dcc37-428f-4efa-a725-e4361affcacd\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-dzskl" Feb 03 12:47:19 crc kubenswrapper[4820]: I0203 12:47:19.102322 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fe0dcc37-428f-4efa-a725-e4361affcacd-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-dzskl\" (UID: \"fe0dcc37-428f-4efa-a725-e4361affcacd\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-dzskl" Feb 03 12:47:19 crc kubenswrapper[4820]: I0203 12:47:19.103375 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fe0dcc37-428f-4efa-a725-e4361affcacd-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-dzskl\" (UID: \"fe0dcc37-428f-4efa-a725-e4361affcacd\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-dzskl" Feb 03 12:47:19 crc kubenswrapper[4820]: I0203 12:47:19.121695 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4hjzd\" (UniqueName: \"kubernetes.io/projected/fe0dcc37-428f-4efa-a725-e4361affcacd-kube-api-access-4hjzd\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-dzskl\" (UID: \"fe0dcc37-428f-4efa-a725-e4361affcacd\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-dzskl" Feb 03 12:47:19 crc kubenswrapper[4820]: I0203 12:47:19.295195 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-dzskl" Feb 03 12:47:19 crc kubenswrapper[4820]: I0203 12:47:19.556634 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-b6lbs"] Feb 03 12:47:19 crc kubenswrapper[4820]: W0203 12:47:19.872300 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfe0dcc37_428f_4efa_a725_e4361affcacd.slice/crio-9da082c1f95b5aa6db9d2003def17e5c50acf1e95deb9fc960dc1fedb0f2c232 WatchSource:0}: Error finding container 9da082c1f95b5aa6db9d2003def17e5c50acf1e95deb9fc960dc1fedb0f2c232: Status 404 returned error can't find the container with id 9da082c1f95b5aa6db9d2003def17e5c50acf1e95deb9fc960dc1fedb0f2c232 Feb 03 12:47:19 crc kubenswrapper[4820]: I0203 12:47:19.874707 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-dzskl"] Feb 03 12:47:19 crc kubenswrapper[4820]: I0203 12:47:19.879903 4820 generic.go:334] "Generic (PLEG): container finished" podID="94a946d7-59cf-49d2-872a-6ec409731e85" containerID="f7cfe55f3a7c4320bc855404c24fb9f4f7c479100234901ec778cd27ec9fe65d" exitCode=0 Feb 03 12:47:19 crc kubenswrapper[4820]: I0203 12:47:19.879950 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b6lbs" event={"ID":"94a946d7-59cf-49d2-872a-6ec409731e85","Type":"ContainerDied","Data":"f7cfe55f3a7c4320bc855404c24fb9f4f7c479100234901ec778cd27ec9fe65d"} Feb 03 12:47:19 crc kubenswrapper[4820]: I0203 12:47:19.879980 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b6lbs" event={"ID":"94a946d7-59cf-49d2-872a-6ec409731e85","Type":"ContainerStarted","Data":"34500c9d46868c4a4a7164f9d400dd3f3139df47f0d0d839fae510e5fdef133b"} Feb 03 12:47:20 crc kubenswrapper[4820]: I0203 12:47:20.893694 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-dzskl" event={"ID":"fe0dcc37-428f-4efa-a725-e4361affcacd","Type":"ContainerStarted","Data":"c3f12b54c095e0373c56cf8f2aa9d0af966ee98866e24e9d3cffc171403c96c7"} Feb 03 12:47:20 crc kubenswrapper[4820]: I0203 12:47:20.894102 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-dzskl" event={"ID":"fe0dcc37-428f-4efa-a725-e4361affcacd","Type":"ContainerStarted","Data":"9da082c1f95b5aa6db9d2003def17e5c50acf1e95deb9fc960dc1fedb0f2c232"} Feb 03 12:47:20 crc kubenswrapper[4820]: I0203 12:47:20.922209 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-dzskl" podStartSLOduration=2.483853513 podStartE2EDuration="2.922189954s" podCreationTimestamp="2026-02-03 12:47:18 +0000 UTC" firstStartedPulling="2026-02-03 12:47:19.874309119 +0000 UTC m=+2557.397384983" lastFinishedPulling="2026-02-03 12:47:20.31264555 +0000 UTC m=+2557.835721424" observedRunningTime="2026-02-03 12:47:20.916805727 +0000 UTC m=+2558.439881601" watchObservedRunningTime="2026-02-03 12:47:20.922189954 +0000 UTC m=+2558.445265818" Feb 03 12:47:21 crc kubenswrapper[4820]: I0203 12:47:21.906550 4820 generic.go:334] "Generic (PLEG): container finished" podID="94a946d7-59cf-49d2-872a-6ec409731e85" containerID="0c51567a12bbcfd06796ddbec154fdf6c097c7140405948cd3fadcef0a4f0251" exitCode=0 Feb 03 12:47:21 crc kubenswrapper[4820]: I0203 12:47:21.906655 4820 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b6lbs" event={"ID":"94a946d7-59cf-49d2-872a-6ec409731e85","Type":"ContainerDied","Data":"0c51567a12bbcfd06796ddbec154fdf6c097c7140405948cd3fadcef0a4f0251"} Feb 03 12:47:22 crc kubenswrapper[4820]: I0203 12:47:22.143231 4820 scope.go:117] "RemoveContainer" containerID="245331ecea7ae2a2547766477b0478b53c635e1745e222cb8a06ff036be8bc77" Feb 03 12:47:22 crc kubenswrapper[4820]: E0203 12:47:22.143583 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qj7xr_openshift-machine-config-operator(2c02def6-29f2-448e-80ec-0c8ee290f053)\"" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" Feb 03 12:47:22 crc kubenswrapper[4820]: I0203 12:47:22.917790 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b6lbs" event={"ID":"94a946d7-59cf-49d2-872a-6ec409731e85","Type":"ContainerStarted","Data":"89ef2206d982f6655d05da2d295216a92845f295c2a455c08877479305657b7e"} Feb 03 12:47:22 crc kubenswrapper[4820]: I0203 12:47:22.941188 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-b6lbs" podStartSLOduration=2.450242935 podStartE2EDuration="4.941164024s" podCreationTimestamp="2026-02-03 12:47:18 +0000 UTC" firstStartedPulling="2026-02-03 12:47:19.881656365 +0000 UTC m=+2557.404732229" lastFinishedPulling="2026-02-03 12:47:22.372577454 +0000 UTC m=+2559.895653318" observedRunningTime="2026-02-03 12:47:22.933809908 +0000 UTC m=+2560.456885802" watchObservedRunningTime="2026-02-03 12:47:22.941164024 +0000 UTC m=+2560.464239888" Feb 03 12:47:29 crc kubenswrapper[4820]: I0203 12:47:29.080801 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-b6lbs" Feb 03 12:47:29 crc kubenswrapper[4820]: I0203 12:47:29.081400 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-b6lbs" Feb 03 12:47:29 crc kubenswrapper[4820]: I0203 12:47:29.125067 4820 generic.go:334] "Generic (PLEG): container finished" podID="fe0dcc37-428f-4efa-a725-e4361affcacd" containerID="c3f12b54c095e0373c56cf8f2aa9d0af966ee98866e24e9d3cffc171403c96c7" exitCode=0 Feb 03 12:47:29 crc kubenswrapper[4820]: I0203 12:47:29.125243 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-dzskl" event={"ID":"fe0dcc37-428f-4efa-a725-e4361affcacd","Type":"ContainerDied","Data":"c3f12b54c095e0373c56cf8f2aa9d0af966ee98866e24e9d3cffc171403c96c7"} Feb 03 12:47:29 crc kubenswrapper[4820]: I0203 12:47:29.138597 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-b6lbs" Feb 03 12:47:29 crc kubenswrapper[4820]: I0203 12:47:29.208157 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-b6lbs" Feb 03 12:47:29 crc kubenswrapper[4820]: I0203 12:47:29.388158 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-b6lbs"] Feb 03 12:47:30 crc kubenswrapper[4820]: I0203 12:47:30.964219 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-dzskl" Feb 03 12:47:31 crc kubenswrapper[4820]: I0203 12:47:31.139812 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fe0dcc37-428f-4efa-a725-e4361affcacd-inventory\") pod \"fe0dcc37-428f-4efa-a725-e4361affcacd\" (UID: \"fe0dcc37-428f-4efa-a725-e4361affcacd\") " Feb 03 12:47:31 crc kubenswrapper[4820]: I0203 12:47:31.139945 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fe0dcc37-428f-4efa-a725-e4361affcacd-ssh-key-openstack-edpm-ipam\") pod \"fe0dcc37-428f-4efa-a725-e4361affcacd\" (UID: \"fe0dcc37-428f-4efa-a725-e4361affcacd\") " Feb 03 12:47:31 crc kubenswrapper[4820]: I0203 12:47:31.140105 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4hjzd\" (UniqueName: \"kubernetes.io/projected/fe0dcc37-428f-4efa-a725-e4361affcacd-kube-api-access-4hjzd\") pod \"fe0dcc37-428f-4efa-a725-e4361affcacd\" (UID: \"fe0dcc37-428f-4efa-a725-e4361affcacd\") " Feb 03 12:47:31 crc kubenswrapper[4820]: I0203 12:47:31.150294 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe0dcc37-428f-4efa-a725-e4361affcacd-kube-api-access-4hjzd" (OuterVolumeSpecName: "kube-api-access-4hjzd") pod "fe0dcc37-428f-4efa-a725-e4361affcacd" (UID: "fe0dcc37-428f-4efa-a725-e4361affcacd"). InnerVolumeSpecName "kube-api-access-4hjzd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:47:31 crc kubenswrapper[4820]: I0203 12:47:31.172553 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe0dcc37-428f-4efa-a725-e4361affcacd-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "fe0dcc37-428f-4efa-a725-e4361affcacd" (UID: "fe0dcc37-428f-4efa-a725-e4361affcacd"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:47:31 crc kubenswrapper[4820]: I0203 12:47:31.190019 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe0dcc37-428f-4efa-a725-e4361affcacd-inventory" (OuterVolumeSpecName: "inventory") pod "fe0dcc37-428f-4efa-a725-e4361affcacd" (UID: "fe0dcc37-428f-4efa-a725-e4361affcacd"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:47:31 crc kubenswrapper[4820]: I0203 12:47:31.335465 4820 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fe0dcc37-428f-4efa-a725-e4361affcacd-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 03 12:47:31 crc kubenswrapper[4820]: I0203 12:47:31.335710 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4hjzd\" (UniqueName: \"kubernetes.io/projected/fe0dcc37-428f-4efa-a725-e4361affcacd-kube-api-access-4hjzd\") on node \"crc\" DevicePath \"\"" Feb 03 12:47:31 crc kubenswrapper[4820]: I0203 12:47:31.335721 4820 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fe0dcc37-428f-4efa-a725-e4361affcacd-inventory\") on node \"crc\" DevicePath \"\"" Feb 03 12:47:31 crc kubenswrapper[4820]: I0203 12:47:31.359662 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-b6lbs" podUID="94a946d7-59cf-49d2-872a-6ec409731e85" containerName="registry-server" containerID="cri-o://89ef2206d982f6655d05da2d295216a92845f295c2a455c08877479305657b7e" gracePeriod=2 Feb 03 12:47:31 crc kubenswrapper[4820]: I0203 12:47:31.360106 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-dzskl" Feb 03 12:47:31 crc kubenswrapper[4820]: I0203 12:47:31.383495 4820 kubelet_pods.go:2476] "Failed to reduce cpu time for pod pending volume cleanup" podUID="fe0dcc37-428f-4efa-a725-e4361affcacd" err="openat2 /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfe0dcc37_428f_4efa_a725_e4361affcacd.slice/cgroup.controllers: no such file or directory" Feb 03 12:47:31 crc kubenswrapper[4820]: I0203 12:47:31.383597 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-dzskl" event={"ID":"fe0dcc37-428f-4efa-a725-e4361affcacd","Type":"ContainerDied","Data":"9da082c1f95b5aa6db9d2003def17e5c50acf1e95deb9fc960dc1fedb0f2c232"} Feb 03 12:47:31 crc kubenswrapper[4820]: I0203 12:47:31.383624 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9da082c1f95b5aa6db9d2003def17e5c50acf1e95deb9fc960dc1fedb0f2c232" Feb 03 12:47:31 crc kubenswrapper[4820]: I0203 12:47:31.513694 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2tqvr"] Feb 03 12:47:31 crc kubenswrapper[4820]: E0203 12:47:31.514409 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe0dcc37-428f-4efa-a725-e4361affcacd" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Feb 03 12:47:31 crc kubenswrapper[4820]: I0203 12:47:31.514430 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe0dcc37-428f-4efa-a725-e4361affcacd" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Feb 03 12:47:31 crc kubenswrapper[4820]: I0203 12:47:31.514631 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe0dcc37-428f-4efa-a725-e4361affcacd" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Feb 03 12:47:31 crc kubenswrapper[4820]: I0203 12:47:31.515670 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2tqvr" Feb 03 12:47:31 crc kubenswrapper[4820]: I0203 12:47:31.519949 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 03 12:47:31 crc kubenswrapper[4820]: I0203 12:47:31.520155 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 03 12:47:31 crc kubenswrapper[4820]: I0203 12:47:31.520213 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 03 12:47:31 crc kubenswrapper[4820]: I0203 12:47:31.520400 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-q6jrk" Feb 03 12:47:31 crc kubenswrapper[4820]: I0203 12:47:31.525533 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2tqvr"] Feb 03 12:47:31 crc kubenswrapper[4820]: I0203 12:47:31.643788 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/02202494-64ad-452c-ad31-b76746e7e746-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-2tqvr\" (UID: \"02202494-64ad-452c-ad31-b76746e7e746\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2tqvr" Feb 03 12:47:31 crc kubenswrapper[4820]: I0203 12:47:31.643877 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/02202494-64ad-452c-ad31-b76746e7e746-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-2tqvr\" (UID: \"02202494-64ad-452c-ad31-b76746e7e746\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2tqvr" Feb 03 12:47:31 crc kubenswrapper[4820]: I0203 12:47:31.644022 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j8xbm\" (UniqueName: \"kubernetes.io/projected/02202494-64ad-452c-ad31-b76746e7e746-kube-api-access-j8xbm\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-2tqvr\" (UID: \"02202494-64ad-452c-ad31-b76746e7e746\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2tqvr" Feb 03 12:47:31 crc kubenswrapper[4820]: I0203 12:47:31.750615 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j8xbm\" (UniqueName: \"kubernetes.io/projected/02202494-64ad-452c-ad31-b76746e7e746-kube-api-access-j8xbm\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-2tqvr\" (UID: \"02202494-64ad-452c-ad31-b76746e7e746\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2tqvr" Feb 03 12:47:31 crc kubenswrapper[4820]: I0203 12:47:31.751005 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/02202494-64ad-452c-ad31-b76746e7e746-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-2tqvr\" (UID: \"02202494-64ad-452c-ad31-b76746e7e746\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2tqvr" Feb 03 12:47:31 crc kubenswrapper[4820]: I0203 12:47:31.751088 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/02202494-64ad-452c-ad31-b76746e7e746-inventory\") pod 
\"reboot-os-edpm-deployment-openstack-edpm-ipam-2tqvr\" (UID: \"02202494-64ad-452c-ad31-b76746e7e746\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2tqvr" Feb 03 12:47:31 crc kubenswrapper[4820]: I0203 12:47:31.755653 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/02202494-64ad-452c-ad31-b76746e7e746-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-2tqvr\" (UID: \"02202494-64ad-452c-ad31-b76746e7e746\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2tqvr" Feb 03 12:47:31 crc kubenswrapper[4820]: I0203 12:47:31.755802 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/02202494-64ad-452c-ad31-b76746e7e746-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-2tqvr\" (UID: \"02202494-64ad-452c-ad31-b76746e7e746\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2tqvr" Feb 03 12:47:31 crc kubenswrapper[4820]: I0203 12:47:31.760367 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-b6lbs" Feb 03 12:47:31 crc kubenswrapper[4820]: I0203 12:47:31.771096 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j8xbm\" (UniqueName: \"kubernetes.io/projected/02202494-64ad-452c-ad31-b76746e7e746-kube-api-access-j8xbm\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-2tqvr\" (UID: \"02202494-64ad-452c-ad31-b76746e7e746\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2tqvr" Feb 03 12:47:31 crc kubenswrapper[4820]: I0203 12:47:31.853154 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94a946d7-59cf-49d2-872a-6ec409731e85-utilities\") pod \"94a946d7-59cf-49d2-872a-6ec409731e85\" (UID: \"94a946d7-59cf-49d2-872a-6ec409731e85\") " Feb 03 12:47:31 crc kubenswrapper[4820]: I0203 12:47:31.853270 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hwdq8\" (UniqueName: \"kubernetes.io/projected/94a946d7-59cf-49d2-872a-6ec409731e85-kube-api-access-hwdq8\") pod \"94a946d7-59cf-49d2-872a-6ec409731e85\" (UID: \"94a946d7-59cf-49d2-872a-6ec409731e85\") " Feb 03 12:47:31 crc kubenswrapper[4820]: I0203 12:47:31.854510 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/94a946d7-59cf-49d2-872a-6ec409731e85-utilities" (OuterVolumeSpecName: "utilities") pod "94a946d7-59cf-49d2-872a-6ec409731e85" (UID: "94a946d7-59cf-49d2-872a-6ec409731e85"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 12:47:31 crc kubenswrapper[4820]: I0203 12:47:31.857454 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94a946d7-59cf-49d2-872a-6ec409731e85-kube-api-access-hwdq8" (OuterVolumeSpecName: "kube-api-access-hwdq8") pod "94a946d7-59cf-49d2-872a-6ec409731e85" (UID: "94a946d7-59cf-49d2-872a-6ec409731e85"). InnerVolumeSpecName "kube-api-access-hwdq8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:47:31 crc kubenswrapper[4820]: I0203 12:47:31.857712 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2tqvr" Feb 03 12:47:31 crc kubenswrapper[4820]: I0203 12:47:31.958093 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94a946d7-59cf-49d2-872a-6ec409731e85-catalog-content\") pod \"94a946d7-59cf-49d2-872a-6ec409731e85\" (UID: \"94a946d7-59cf-49d2-872a-6ec409731e85\") " Feb 03 12:47:31 crc kubenswrapper[4820]: I0203 12:47:31.958975 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hwdq8\" (UniqueName: \"kubernetes.io/projected/94a946d7-59cf-49d2-872a-6ec409731e85-kube-api-access-hwdq8\") on node \"crc\" DevicePath \"\"" Feb 03 12:47:31 crc kubenswrapper[4820]: I0203 12:47:31.959023 4820 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94a946d7-59cf-49d2-872a-6ec409731e85-utilities\") on node \"crc\" DevicePath \"\"" Feb 03 12:47:31 crc kubenswrapper[4820]: I0203 12:47:31.984968 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/94a946d7-59cf-49d2-872a-6ec409731e85-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "94a946d7-59cf-49d2-872a-6ec409731e85" (UID: "94a946d7-59cf-49d2-872a-6ec409731e85"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 12:47:32 crc kubenswrapper[4820]: I0203 12:47:32.060791 4820 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94a946d7-59cf-49d2-872a-6ec409731e85-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 03 12:47:32 crc kubenswrapper[4820]: I0203 12:47:32.370648 4820 generic.go:334] "Generic (PLEG): container finished" podID="94a946d7-59cf-49d2-872a-6ec409731e85" containerID="89ef2206d982f6655d05da2d295216a92845f295c2a455c08877479305657b7e" exitCode=0 Feb 03 12:47:32 crc kubenswrapper[4820]: I0203 12:47:32.370695 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b6lbs" event={"ID":"94a946d7-59cf-49d2-872a-6ec409731e85","Type":"ContainerDied","Data":"89ef2206d982f6655d05da2d295216a92845f295c2a455c08877479305657b7e"} Feb 03 12:47:32 crc kubenswrapper[4820]: I0203 12:47:32.370725 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b6lbs" event={"ID":"94a946d7-59cf-49d2-872a-6ec409731e85","Type":"ContainerDied","Data":"34500c9d46868c4a4a7164f9d400dd3f3139df47f0d0d839fae510e5fdef133b"} Feb 03 12:47:32 crc kubenswrapper[4820]: I0203 12:47:32.370732 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-b6lbs" Feb 03 12:47:32 crc kubenswrapper[4820]: I0203 12:47:32.370746 4820 scope.go:117] "RemoveContainer" containerID="89ef2206d982f6655d05da2d295216a92845f295c2a455c08877479305657b7e" Feb 03 12:47:32 crc kubenswrapper[4820]: I0203 12:47:32.396777 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2tqvr"] Feb 03 12:47:32 crc kubenswrapper[4820]: I0203 12:47:32.403327 4820 scope.go:117] "RemoveContainer" containerID="0c51567a12bbcfd06796ddbec154fdf6c097c7140405948cd3fadcef0a4f0251" Feb 03 12:47:32 crc kubenswrapper[4820]: I0203 12:47:32.420474 4820 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 03 12:47:32 crc kubenswrapper[4820]: I0203 12:47:32.425435 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-b6lbs"] Feb 03 12:47:32 crc kubenswrapper[4820]: I0203 12:47:32.734642 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-b6lbs"] Feb 03 12:47:32 crc kubenswrapper[4820]: I0203 12:47:32.773245 4820 scope.go:117] "RemoveContainer" containerID="f7cfe55f3a7c4320bc855404c24fb9f4f7c479100234901ec778cd27ec9fe65d" Feb 03 12:47:32 crc kubenswrapper[4820]: I0203 12:47:32.794047 4820 scope.go:117] "RemoveContainer" containerID="89ef2206d982f6655d05da2d295216a92845f295c2a455c08877479305657b7e" Feb 03 12:47:32 crc kubenswrapper[4820]: E0203 12:47:32.802423 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"89ef2206d982f6655d05da2d295216a92845f295c2a455c08877479305657b7e\": container with ID starting with 89ef2206d982f6655d05da2d295216a92845f295c2a455c08877479305657b7e not found: ID does not exist" containerID="89ef2206d982f6655d05da2d295216a92845f295c2a455c08877479305657b7e" Feb 03 12:47:32 crc kubenswrapper[4820]: I0203 12:47:32.802549 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"89ef2206d982f6655d05da2d295216a92845f295c2a455c08877479305657b7e"} err="failed to get container status \"89ef2206d982f6655d05da2d295216a92845f295c2a455c08877479305657b7e\": rpc error: code = NotFound desc = could not find container \"89ef2206d982f6655d05da2d295216a92845f295c2a455c08877479305657b7e\": container with ID starting with 89ef2206d982f6655d05da2d295216a92845f295c2a455c08877479305657b7e not found: ID does not exist" Feb 03 12:47:32 crc kubenswrapper[4820]: I0203 12:47:32.802680 4820 scope.go:117] "RemoveContainer" containerID="0c51567a12bbcfd06796ddbec154fdf6c097c7140405948cd3fadcef0a4f0251" Feb 03 12:47:32 crc kubenswrapper[4820]: E0203 12:47:32.805054 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0c51567a12bbcfd06796ddbec154fdf6c097c7140405948cd3fadcef0a4f0251\": container with ID starting with 0c51567a12bbcfd06796ddbec154fdf6c097c7140405948cd3fadcef0a4f0251 not found: ID does not exist" containerID="0c51567a12bbcfd06796ddbec154fdf6c097c7140405948cd3fadcef0a4f0251" Feb 03 12:47:32 crc kubenswrapper[4820]: I0203 12:47:32.805100 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0c51567a12bbcfd06796ddbec154fdf6c097c7140405948cd3fadcef0a4f0251"} err="failed to get container status \"0c51567a12bbcfd06796ddbec154fdf6c097c7140405948cd3fadcef0a4f0251\": rpc error: code = NotFound 
desc = could not find container \"0c51567a12bbcfd06796ddbec154fdf6c097c7140405948cd3fadcef0a4f0251\": container with ID starting with 0c51567a12bbcfd06796ddbec154fdf6c097c7140405948cd3fadcef0a4f0251 not found: ID does not exist" Feb 03 12:47:32 crc kubenswrapper[4820]: I0203 12:47:32.805134 4820 scope.go:117] "RemoveContainer" containerID="f7cfe55f3a7c4320bc855404c24fb9f4f7c479100234901ec778cd27ec9fe65d" Feb 03 12:47:32 crc kubenswrapper[4820]: E0203 12:47:32.805587 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f7cfe55f3a7c4320bc855404c24fb9f4f7c479100234901ec778cd27ec9fe65d\": container with ID starting with f7cfe55f3a7c4320bc855404c24fb9f4f7c479100234901ec778cd27ec9fe65d not found: ID does not exist" containerID="f7cfe55f3a7c4320bc855404c24fb9f4f7c479100234901ec778cd27ec9fe65d" Feb 03 12:47:32 crc kubenswrapper[4820]: I0203 12:47:32.805630 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f7cfe55f3a7c4320bc855404c24fb9f4f7c479100234901ec778cd27ec9fe65d"} err="failed to get container status \"f7cfe55f3a7c4320bc855404c24fb9f4f7c479100234901ec778cd27ec9fe65d\": rpc error: code = NotFound desc = could not find container \"f7cfe55f3a7c4320bc855404c24fb9f4f7c479100234901ec778cd27ec9fe65d\": container with ID starting with f7cfe55f3a7c4320bc855404c24fb9f4f7c479100234901ec778cd27ec9fe65d not found: ID does not exist" Feb 03 12:47:33 crc kubenswrapper[4820]: I0203 12:47:33.200994 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94a946d7-59cf-49d2-872a-6ec409731e85" path="/var/lib/kubelet/pods/94a946d7-59cf-49d2-872a-6ec409731e85/volumes" Feb 03 12:47:33 crc kubenswrapper[4820]: I0203 12:47:33.386549 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2tqvr" event={"ID":"02202494-64ad-452c-ad31-b76746e7e746","Type":"ContainerStarted","Data":"b4e828a625d58c9d2c2216482a5c5e49c8c81b7087419a0e4d158ab5fef762f3"} Feb 03 12:47:33 crc kubenswrapper[4820]: I0203 12:47:33.412045 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2tqvr" podStartSLOduration=1.718816365 podStartE2EDuration="2.412018622s" podCreationTimestamp="2026-02-03 12:47:31 +0000 UTC" firstStartedPulling="2026-02-03 12:47:32.420184142 +0000 UTC m=+2569.943260006" lastFinishedPulling="2026-02-03 12:47:33.113386389 +0000 UTC m=+2570.636462263" observedRunningTime="2026-02-03 12:47:33.403708832 +0000 UTC m=+2570.926784696" watchObservedRunningTime="2026-02-03 12:47:33.412018622 +0000 UTC m=+2570.935094506" Feb 03 12:47:34 crc kubenswrapper[4820]: I0203 12:47:34.397776 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2tqvr" event={"ID":"02202494-64ad-452c-ad31-b76746e7e746","Type":"ContainerStarted","Data":"09222ae061066cc5b723eb4749904eef6b9b707330fcb03859943c009456ab33"} Feb 03 12:47:36 crc kubenswrapper[4820]: I0203 12:47:36.143214 4820 scope.go:117] "RemoveContainer" containerID="245331ecea7ae2a2547766477b0478b53c635e1745e222cb8a06ff036be8bc77" Feb 03 12:47:36 crc kubenswrapper[4820]: E0203 12:47:36.143863 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-qj7xr_openshift-machine-config-operator(2c02def6-29f2-448e-80ec-0c8ee290f053)\"" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" Feb 03 12:47:43 crc kubenswrapper[4820]: I0203 12:47:43.491985 4820 generic.go:334] "Generic (PLEG): container finished" podID="02202494-64ad-452c-ad31-b76746e7e746" containerID="09222ae061066cc5b723eb4749904eef6b9b707330fcb03859943c009456ab33" exitCode=0 Feb 03 12:47:43 crc kubenswrapper[4820]: I0203 12:47:43.492068 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2tqvr" event={"ID":"02202494-64ad-452c-ad31-b76746e7e746","Type":"ContainerDied","Data":"09222ae061066cc5b723eb4749904eef6b9b707330fcb03859943c009456ab33"} Feb 03 12:47:45 crc kubenswrapper[4820]: I0203 12:47:45.160516 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2tqvr" Feb 03 12:47:45 crc kubenswrapper[4820]: I0203 12:47:45.261178 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/02202494-64ad-452c-ad31-b76746e7e746-ssh-key-openstack-edpm-ipam\") pod \"02202494-64ad-452c-ad31-b76746e7e746\" (UID: \"02202494-64ad-452c-ad31-b76746e7e746\") " Feb 03 12:47:45 crc kubenswrapper[4820]: I0203 12:47:45.261373 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/02202494-64ad-452c-ad31-b76746e7e746-inventory\") pod \"02202494-64ad-452c-ad31-b76746e7e746\" (UID: \"02202494-64ad-452c-ad31-b76746e7e746\") " Feb 03 12:47:45 crc kubenswrapper[4820]: I0203 12:47:45.261442 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j8xbm\" (UniqueName: \"kubernetes.io/projected/02202494-64ad-452c-ad31-b76746e7e746-kube-api-access-j8xbm\") pod \"02202494-64ad-452c-ad31-b76746e7e746\" (UID: \"02202494-64ad-452c-ad31-b76746e7e746\") " Feb 03 12:47:45 crc kubenswrapper[4820]: I0203 12:47:45.267977 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02202494-64ad-452c-ad31-b76746e7e746-kube-api-access-j8xbm" (OuterVolumeSpecName: "kube-api-access-j8xbm") pod "02202494-64ad-452c-ad31-b76746e7e746" (UID: "02202494-64ad-452c-ad31-b76746e7e746"). InnerVolumeSpecName "kube-api-access-j8xbm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:47:45 crc kubenswrapper[4820]: I0203 12:47:45.293427 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02202494-64ad-452c-ad31-b76746e7e746-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "02202494-64ad-452c-ad31-b76746e7e746" (UID: "02202494-64ad-452c-ad31-b76746e7e746"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:47:45 crc kubenswrapper[4820]: I0203 12:47:45.296157 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02202494-64ad-452c-ad31-b76746e7e746-inventory" (OuterVolumeSpecName: "inventory") pod "02202494-64ad-452c-ad31-b76746e7e746" (UID: "02202494-64ad-452c-ad31-b76746e7e746"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:47:45 crc kubenswrapper[4820]: I0203 12:47:45.364657 4820 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/02202494-64ad-452c-ad31-b76746e7e746-inventory\") on node \"crc\" DevicePath \"\"" Feb 03 12:47:45 crc kubenswrapper[4820]: I0203 12:47:45.364705 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j8xbm\" (UniqueName: \"kubernetes.io/projected/02202494-64ad-452c-ad31-b76746e7e746-kube-api-access-j8xbm\") on node \"crc\" DevicePath \"\"" Feb 03 12:47:45 crc kubenswrapper[4820]: I0203 12:47:45.364723 4820 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/02202494-64ad-452c-ad31-b76746e7e746-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 03 12:47:45 crc kubenswrapper[4820]: I0203 12:47:45.531288 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2tqvr" event={"ID":"02202494-64ad-452c-ad31-b76746e7e746","Type":"ContainerDied","Data":"b4e828a625d58c9d2c2216482a5c5e49c8c81b7087419a0e4d158ab5fef762f3"} Feb 03 12:47:45 crc kubenswrapper[4820]: I0203 12:47:45.531335 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b4e828a625d58c9d2c2216482a5c5e49c8c81b7087419a0e4d158ab5fef762f3" Feb 03 12:47:45 crc kubenswrapper[4820]: I0203 12:47:45.531405 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2tqvr" Feb 03 12:47:45 crc kubenswrapper[4820]: I0203 12:47:45.689237 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kgbdv"] Feb 03 12:47:45 crc kubenswrapper[4820]: E0203 12:47:45.689635 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94a946d7-59cf-49d2-872a-6ec409731e85" containerName="extract-content" Feb 03 12:47:45 crc kubenswrapper[4820]: I0203 12:47:45.689653 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="94a946d7-59cf-49d2-872a-6ec409731e85" containerName="extract-content" Feb 03 12:47:45 crc kubenswrapper[4820]: E0203 12:47:45.689667 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94a946d7-59cf-49d2-872a-6ec409731e85" containerName="registry-server" Feb 03 12:47:45 crc kubenswrapper[4820]: I0203 12:47:45.689675 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="94a946d7-59cf-49d2-872a-6ec409731e85" containerName="registry-server" Feb 03 12:47:45 crc kubenswrapper[4820]: E0203 12:47:45.689714 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94a946d7-59cf-49d2-872a-6ec409731e85" containerName="extract-utilities" Feb 03 12:47:45 crc kubenswrapper[4820]: I0203 12:47:45.689721 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="94a946d7-59cf-49d2-872a-6ec409731e85" containerName="extract-utilities" Feb 03 12:47:45 crc kubenswrapper[4820]: E0203 12:47:45.689738 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02202494-64ad-452c-ad31-b76746e7e746" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Feb 03 12:47:45 crc kubenswrapper[4820]: I0203 12:47:45.689745 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="02202494-64ad-452c-ad31-b76746e7e746" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Feb 03 12:47:45 crc kubenswrapper[4820]: I0203 12:47:45.689992 4820 
memory_manager.go:354] "RemoveStaleState removing state" podUID="94a946d7-59cf-49d2-872a-6ec409731e85" containerName="registry-server" Feb 03 12:47:45 crc kubenswrapper[4820]: I0203 12:47:45.690014 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="02202494-64ad-452c-ad31-b76746e7e746" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Feb 03 12:47:45 crc kubenswrapper[4820]: I0203 12:47:45.690948 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kgbdv" Feb 03 12:47:45 crc kubenswrapper[4820]: I0203 12:47:45.694772 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0" Feb 03 12:47:45 crc kubenswrapper[4820]: I0203 12:47:45.695544 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Feb 03 12:47:45 crc kubenswrapper[4820]: I0203 12:47:45.695764 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-q6jrk" Feb 03 12:47:45 crc kubenswrapper[4820]: I0203 12:47:45.697830 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 03 12:47:45 crc kubenswrapper[4820]: I0203 12:47:45.698151 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Feb 03 12:47:45 crc kubenswrapper[4820]: I0203 12:47:45.698416 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Feb 03 12:47:45 crc kubenswrapper[4820]: I0203 12:47:45.698562 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 03 12:47:45 crc kubenswrapper[4820]: I0203 12:47:45.699024 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 03 12:47:45 crc kubenswrapper[4820]: I0203 12:47:45.710076 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kgbdv"] Feb 03 12:47:45 crc kubenswrapper[4820]: I0203 12:47:45.974842 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/27a58bb7-ce09-4c16-b190-071c1c506a14-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kgbdv\" (UID: \"27a58bb7-ce09-4c16-b190-071c1c506a14\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kgbdv" Feb 03 12:47:45 crc kubenswrapper[4820]: I0203 12:47:45.974935 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27a58bb7-ce09-4c16-b190-071c1c506a14-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kgbdv\" (UID: \"27a58bb7-ce09-4c16-b190-071c1c506a14\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kgbdv" Feb 03 12:47:45 crc kubenswrapper[4820]: I0203 12:47:45.975017 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/27a58bb7-ce09-4c16-b190-071c1c506a14-inventory\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-kgbdv\" (UID: \"27a58bb7-ce09-4c16-b190-071c1c506a14\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kgbdv" Feb 03 12:47:45 crc kubenswrapper[4820]: I0203 12:47:45.975091 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/27a58bb7-ce09-4c16-b190-071c1c506a14-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kgbdv\" (UID: \"27a58bb7-ce09-4c16-b190-071c1c506a14\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kgbdv" Feb 03 12:47:45 crc kubenswrapper[4820]: I0203 12:47:45.975164 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27a58bb7-ce09-4c16-b190-071c1c506a14-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kgbdv\" (UID: \"27a58bb7-ce09-4c16-b190-071c1c506a14\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kgbdv" Feb 03 12:47:45 crc kubenswrapper[4820]: I0203 12:47:45.975341 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27a58bb7-ce09-4c16-b190-071c1c506a14-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kgbdv\" (UID: \"27a58bb7-ce09-4c16-b190-071c1c506a14\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kgbdv" Feb 03 12:47:45 crc kubenswrapper[4820]: I0203 12:47:45.975381 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/27a58bb7-ce09-4c16-b190-071c1c506a14-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kgbdv\" (UID: \"27a58bb7-ce09-4c16-b190-071c1c506a14\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kgbdv" Feb 03 12:47:45 crc kubenswrapper[4820]: I0203 12:47:45.975427 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27a58bb7-ce09-4c16-b190-071c1c506a14-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kgbdv\" (UID: \"27a58bb7-ce09-4c16-b190-071c1c506a14\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kgbdv" Feb 03 12:47:45 crc kubenswrapper[4820]: I0203 12:47:45.975515 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27a58bb7-ce09-4c16-b190-071c1c506a14-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kgbdv\" (UID: \"27a58bb7-ce09-4c16-b190-071c1c506a14\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kgbdv" Feb 03 12:47:45 crc kubenswrapper[4820]: I0203 12:47:45.975536 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zmj55\" (UniqueName: \"kubernetes.io/projected/27a58bb7-ce09-4c16-b190-071c1c506a14-kube-api-access-zmj55\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kgbdv\" (UID: \"27a58bb7-ce09-4c16-b190-071c1c506a14\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kgbdv" Feb 03 12:47:45 crc kubenswrapper[4820]: I0203 12:47:45.975614 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/27a58bb7-ce09-4c16-b190-071c1c506a14-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kgbdv\" (UID: \"27a58bb7-ce09-4c16-b190-071c1c506a14\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kgbdv" Feb 03 12:47:45 crc kubenswrapper[4820]: I0203 12:47:45.975763 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27a58bb7-ce09-4c16-b190-071c1c506a14-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kgbdv\" (UID: \"27a58bb7-ce09-4c16-b190-071c1c506a14\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kgbdv" Feb 03 12:47:45 crc kubenswrapper[4820]: I0203 12:47:45.975837 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27a58bb7-ce09-4c16-b190-071c1c506a14-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kgbdv\" (UID: \"27a58bb7-ce09-4c16-b190-071c1c506a14\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kgbdv" Feb 03 12:47:45 crc kubenswrapper[4820]: I0203 12:47:45.975875 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/27a58bb7-ce09-4c16-b190-071c1c506a14-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kgbdv\" (UID: \"27a58bb7-ce09-4c16-b190-071c1c506a14\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kgbdv" Feb 03 12:47:46 crc kubenswrapper[4820]: I0203 12:47:46.078504 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/27a58bb7-ce09-4c16-b190-071c1c506a14-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kgbdv\" (UID: \"27a58bb7-ce09-4c16-b190-071c1c506a14\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kgbdv" Feb 03 12:47:46 crc kubenswrapper[4820]: I0203 12:47:46.078612 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27a58bb7-ce09-4c16-b190-071c1c506a14-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kgbdv\" (UID: \"27a58bb7-ce09-4c16-b190-071c1c506a14\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kgbdv" Feb 03 12:47:46 crc kubenswrapper[4820]: I0203 12:47:46.078647 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27a58bb7-ce09-4c16-b190-071c1c506a14-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kgbdv\" (UID: \"27a58bb7-ce09-4c16-b190-071c1c506a14\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kgbdv" Feb 03 
12:47:46 crc kubenswrapper[4820]: I0203 12:47:46.078682 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/27a58bb7-ce09-4c16-b190-071c1c506a14-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kgbdv\" (UID: \"27a58bb7-ce09-4c16-b190-071c1c506a14\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kgbdv" Feb 03 12:47:46 crc kubenswrapper[4820]: I0203 12:47:46.078729 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/27a58bb7-ce09-4c16-b190-071c1c506a14-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kgbdv\" (UID: \"27a58bb7-ce09-4c16-b190-071c1c506a14\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kgbdv" Feb 03 12:47:46 crc kubenswrapper[4820]: I0203 12:47:46.078748 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27a58bb7-ce09-4c16-b190-071c1c506a14-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kgbdv\" (UID: \"27a58bb7-ce09-4c16-b190-071c1c506a14\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kgbdv" Feb 03 12:47:46 crc kubenswrapper[4820]: I0203 12:47:46.078776 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/27a58bb7-ce09-4c16-b190-071c1c506a14-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kgbdv\" (UID: \"27a58bb7-ce09-4c16-b190-071c1c506a14\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kgbdv" Feb 03 12:47:46 crc kubenswrapper[4820]: I0203 12:47:46.078804 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/27a58bb7-ce09-4c16-b190-071c1c506a14-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kgbdv\" (UID: \"27a58bb7-ce09-4c16-b190-071c1c506a14\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kgbdv" Feb 03 12:47:46 crc kubenswrapper[4820]: I0203 12:47:46.078831 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27a58bb7-ce09-4c16-b190-071c1c506a14-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kgbdv\" (UID: \"27a58bb7-ce09-4c16-b190-071c1c506a14\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kgbdv" Feb 03 12:47:46 crc kubenswrapper[4820]: I0203 12:47:46.078868 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27a58bb7-ce09-4c16-b190-071c1c506a14-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kgbdv\" (UID: \"27a58bb7-ce09-4c16-b190-071c1c506a14\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kgbdv" Feb 03 12:47:46 crc kubenswrapper[4820]: I0203 12:47:46.078907 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/27a58bb7-ce09-4c16-b190-071c1c506a14-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kgbdv\" (UID: \"27a58bb7-ce09-4c16-b190-071c1c506a14\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kgbdv" Feb 03 12:47:46 crc kubenswrapper[4820]: I0203 12:47:46.078930 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27a58bb7-ce09-4c16-b190-071c1c506a14-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kgbdv\" (UID: \"27a58bb7-ce09-4c16-b190-071c1c506a14\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kgbdv" Feb 03 12:47:46 crc kubenswrapper[4820]: I0203 12:47:46.078971 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27a58bb7-ce09-4c16-b190-071c1c506a14-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kgbdv\" (UID: \"27a58bb7-ce09-4c16-b190-071c1c506a14\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kgbdv" Feb 03 12:47:46 crc kubenswrapper[4820]: I0203 12:47:46.078988 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zmj55\" (UniqueName: \"kubernetes.io/projected/27a58bb7-ce09-4c16-b190-071c1c506a14-kube-api-access-zmj55\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kgbdv\" (UID: \"27a58bb7-ce09-4c16-b190-071c1c506a14\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kgbdv" Feb 03 12:47:46 crc kubenswrapper[4820]: I0203 12:47:46.084761 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/27a58bb7-ce09-4c16-b190-071c1c506a14-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kgbdv\" (UID: \"27a58bb7-ce09-4c16-b190-071c1c506a14\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kgbdv" Feb 03 12:47:46 crc kubenswrapper[4820]: I0203 12:47:46.085213 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27a58bb7-ce09-4c16-b190-071c1c506a14-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kgbdv\" (UID: \"27a58bb7-ce09-4c16-b190-071c1c506a14\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kgbdv" Feb 03 12:47:46 crc kubenswrapper[4820]: I0203 12:47:46.085254 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27a58bb7-ce09-4c16-b190-071c1c506a14-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kgbdv\" (UID: \"27a58bb7-ce09-4c16-b190-071c1c506a14\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kgbdv" Feb 03 12:47:46 crc kubenswrapper[4820]: I0203 12:47:46.085648 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/27a58bb7-ce09-4c16-b190-071c1c506a14-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kgbdv\" (UID: \"27a58bb7-ce09-4c16-b190-071c1c506a14\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kgbdv" Feb 03 12:47:46 crc kubenswrapper[4820]: I0203 12:47:46.086364 4820 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/27a58bb7-ce09-4c16-b190-071c1c506a14-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kgbdv\" (UID: \"27a58bb7-ce09-4c16-b190-071c1c506a14\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kgbdv" Feb 03 12:47:46 crc kubenswrapper[4820]: I0203 12:47:46.086565 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27a58bb7-ce09-4c16-b190-071c1c506a14-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kgbdv\" (UID: \"27a58bb7-ce09-4c16-b190-071c1c506a14\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kgbdv" Feb 03 12:47:46 crc kubenswrapper[4820]: I0203 12:47:46.086711 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27a58bb7-ce09-4c16-b190-071c1c506a14-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kgbdv\" (UID: \"27a58bb7-ce09-4c16-b190-071c1c506a14\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kgbdv" Feb 03 12:47:46 crc kubenswrapper[4820]: I0203 12:47:46.086827 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/27a58bb7-ce09-4c16-b190-071c1c506a14-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kgbdv\" (UID: \"27a58bb7-ce09-4c16-b190-071c1c506a14\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kgbdv" Feb 03 12:47:46 crc kubenswrapper[4820]: I0203 12:47:46.086983 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/27a58bb7-ce09-4c16-b190-071c1c506a14-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kgbdv\" (UID: \"27a58bb7-ce09-4c16-b190-071c1c506a14\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kgbdv" Feb 03 12:47:46 crc kubenswrapper[4820]: I0203 12:47:46.088383 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27a58bb7-ce09-4c16-b190-071c1c506a14-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kgbdv\" (UID: \"27a58bb7-ce09-4c16-b190-071c1c506a14\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kgbdv" Feb 03 12:47:46 crc kubenswrapper[4820]: I0203 12:47:46.088400 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/27a58bb7-ce09-4c16-b190-071c1c506a14-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kgbdv\" (UID: \"27a58bb7-ce09-4c16-b190-071c1c506a14\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kgbdv" Feb 03 12:47:46 crc kubenswrapper[4820]: I0203 12:47:46.089198 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27a58bb7-ce09-4c16-b190-071c1c506a14-ovn-combined-ca-bundle\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-kgbdv\" (UID: \"27a58bb7-ce09-4c16-b190-071c1c506a14\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kgbdv" Feb 03 12:47:46 crc kubenswrapper[4820]: I0203 12:47:46.091087 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27a58bb7-ce09-4c16-b190-071c1c506a14-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kgbdv\" (UID: \"27a58bb7-ce09-4c16-b190-071c1c506a14\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kgbdv" Feb 03 12:47:46 crc kubenswrapper[4820]: I0203 12:47:46.098762 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zmj55\" (UniqueName: \"kubernetes.io/projected/27a58bb7-ce09-4c16-b190-071c1c506a14-kube-api-access-zmj55\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kgbdv\" (UID: \"27a58bb7-ce09-4c16-b190-071c1c506a14\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kgbdv" Feb 03 12:47:46 crc kubenswrapper[4820]: I0203 12:47:46.328944 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kgbdv" Feb 03 12:47:47 crc kubenswrapper[4820]: I0203 12:47:47.098624 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kgbdv"] Feb 03 12:47:47 crc kubenswrapper[4820]: I0203 12:47:47.142440 4820 scope.go:117] "RemoveContainer" containerID="245331ecea7ae2a2547766477b0478b53c635e1745e222cb8a06ff036be8bc77" Feb 03 12:47:47 crc kubenswrapper[4820]: E0203 12:47:47.142756 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qj7xr_openshift-machine-config-operator(2c02def6-29f2-448e-80ec-0c8ee290f053)\"" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" Feb 03 12:47:47 crc kubenswrapper[4820]: I0203 12:47:47.553243 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kgbdv" event={"ID":"27a58bb7-ce09-4c16-b190-071c1c506a14","Type":"ContainerStarted","Data":"2c5f1f021a2bfaf68d38dc0953dde1828a355397d415b1f1754da9b6d6d71974"} Feb 03 12:47:48 crc kubenswrapper[4820]: I0203 12:47:48.570946 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kgbdv" event={"ID":"27a58bb7-ce09-4c16-b190-071c1c506a14","Type":"ContainerStarted","Data":"46b7c324f650c3ea74e17c52e40f504ec20b9bcc27a1985b0676d2df14cf3a8b"} Feb 03 12:47:48 crc kubenswrapper[4820]: I0203 12:47:48.599381 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kgbdv" podStartSLOduration=3.129840219 podStartE2EDuration="3.599346016s" podCreationTimestamp="2026-02-03 12:47:45 +0000 UTC" firstStartedPulling="2026-02-03 12:47:47.100375248 +0000 UTC m=+2584.623451112" lastFinishedPulling="2026-02-03 12:47:47.569881045 +0000 UTC m=+2585.092956909" observedRunningTime="2026-02-03 12:47:48.598856603 +0000 UTC m=+2586.121932527" watchObservedRunningTime="2026-02-03 12:47:48.599346016 +0000 UTC m=+2586.122421960" Feb 03 12:48:01 crc kubenswrapper[4820]: I0203 
12:48:01.143275 4820 scope.go:117] "RemoveContainer" containerID="245331ecea7ae2a2547766477b0478b53c635e1745e222cb8a06ff036be8bc77" Feb 03 12:48:01 crc kubenswrapper[4820]: E0203 12:48:01.144141 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qj7xr_openshift-machine-config-operator(2c02def6-29f2-448e-80ec-0c8ee290f053)\"" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" Feb 03 12:48:16 crc kubenswrapper[4820]: I0203 12:48:16.143542 4820 scope.go:117] "RemoveContainer" containerID="245331ecea7ae2a2547766477b0478b53c635e1745e222cb8a06ff036be8bc77" Feb 03 12:48:16 crc kubenswrapper[4820]: I0203 12:48:16.988505 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" event={"ID":"2c02def6-29f2-448e-80ec-0c8ee290f053","Type":"ContainerStarted","Data":"f3746dd5b1ad5044813133335761f15989349419ea5f7ec425a2bd58c8519583"} Feb 03 12:48:25 crc kubenswrapper[4820]: I0203 12:48:25.090527 4820 generic.go:334] "Generic (PLEG): container finished" podID="27a58bb7-ce09-4c16-b190-071c1c506a14" containerID="46b7c324f650c3ea74e17c52e40f504ec20b9bcc27a1985b0676d2df14cf3a8b" exitCode=0 Feb 03 12:48:25 crc kubenswrapper[4820]: I0203 12:48:25.090737 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kgbdv" event={"ID":"27a58bb7-ce09-4c16-b190-071c1c506a14","Type":"ContainerDied","Data":"46b7c324f650c3ea74e17c52e40f504ec20b9bcc27a1985b0676d2df14cf3a8b"} Feb 03 12:48:26 crc kubenswrapper[4820]: I0203 12:48:26.587200 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kgbdv" Feb 03 12:48:26 crc kubenswrapper[4820]: I0203 12:48:26.606226 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27a58bb7-ce09-4c16-b190-071c1c506a14-nova-combined-ca-bundle\") pod \"27a58bb7-ce09-4c16-b190-071c1c506a14\" (UID: \"27a58bb7-ce09-4c16-b190-071c1c506a14\") " Feb 03 12:48:26 crc kubenswrapper[4820]: I0203 12:48:26.606309 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27a58bb7-ce09-4c16-b190-071c1c506a14-bootstrap-combined-ca-bundle\") pod \"27a58bb7-ce09-4c16-b190-071c1c506a14\" (UID: \"27a58bb7-ce09-4c16-b190-071c1c506a14\") " Feb 03 12:48:26 crc kubenswrapper[4820]: I0203 12:48:26.606332 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27a58bb7-ce09-4c16-b190-071c1c506a14-telemetry-combined-ca-bundle\") pod \"27a58bb7-ce09-4c16-b190-071c1c506a14\" (UID: \"27a58bb7-ce09-4c16-b190-071c1c506a14\") " Feb 03 12:48:26 crc kubenswrapper[4820]: I0203 12:48:26.606512 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/27a58bb7-ce09-4c16-b190-071c1c506a14-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"27a58bb7-ce09-4c16-b190-071c1c506a14\" (UID: \"27a58bb7-ce09-4c16-b190-071c1c506a14\") " Feb 03 12:48:26 crc kubenswrapper[4820]: I0203 12:48:26.606577 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/27a58bb7-ce09-4c16-b190-071c1c506a14-ssh-key-openstack-edpm-ipam\") pod \"27a58bb7-ce09-4c16-b190-071c1c506a14\" (UID: \"27a58bb7-ce09-4c16-b190-071c1c506a14\") " Feb 03 12:48:26 crc kubenswrapper[4820]: I0203 12:48:26.606672 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/27a58bb7-ce09-4c16-b190-071c1c506a14-openstack-edpm-ipam-ovn-default-certs-0\") pod \"27a58bb7-ce09-4c16-b190-071c1c506a14\" (UID: \"27a58bb7-ce09-4c16-b190-071c1c506a14\") " Feb 03 12:48:26 crc kubenswrapper[4820]: I0203 12:48:26.606696 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/27a58bb7-ce09-4c16-b190-071c1c506a14-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"27a58bb7-ce09-4c16-b190-071c1c506a14\" (UID: \"27a58bb7-ce09-4c16-b190-071c1c506a14\") " Feb 03 12:48:26 crc kubenswrapper[4820]: I0203 12:48:26.606732 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/27a58bb7-ce09-4c16-b190-071c1c506a14-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"27a58bb7-ce09-4c16-b190-071c1c506a14\" (UID: \"27a58bb7-ce09-4c16-b190-071c1c506a14\") " Feb 03 12:48:26 crc kubenswrapper[4820]: I0203 12:48:26.606771 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/27a58bb7-ce09-4c16-b190-071c1c506a14-inventory\") pod \"27a58bb7-ce09-4c16-b190-071c1c506a14\" 
(UID: \"27a58bb7-ce09-4c16-b190-071c1c506a14\") " Feb 03 12:48:26 crc kubenswrapper[4820]: I0203 12:48:26.606800 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27a58bb7-ce09-4c16-b190-071c1c506a14-libvirt-combined-ca-bundle\") pod \"27a58bb7-ce09-4c16-b190-071c1c506a14\" (UID: \"27a58bb7-ce09-4c16-b190-071c1c506a14\") " Feb 03 12:48:26 crc kubenswrapper[4820]: I0203 12:48:26.606851 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27a58bb7-ce09-4c16-b190-071c1c506a14-repo-setup-combined-ca-bundle\") pod \"27a58bb7-ce09-4c16-b190-071c1c506a14\" (UID: \"27a58bb7-ce09-4c16-b190-071c1c506a14\") " Feb 03 12:48:26 crc kubenswrapper[4820]: I0203 12:48:26.606947 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27a58bb7-ce09-4c16-b190-071c1c506a14-neutron-metadata-combined-ca-bundle\") pod \"27a58bb7-ce09-4c16-b190-071c1c506a14\" (UID: \"27a58bb7-ce09-4c16-b190-071c1c506a14\") " Feb 03 12:48:26 crc kubenswrapper[4820]: I0203 12:48:26.606992 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zmj55\" (UniqueName: \"kubernetes.io/projected/27a58bb7-ce09-4c16-b190-071c1c506a14-kube-api-access-zmj55\") pod \"27a58bb7-ce09-4c16-b190-071c1c506a14\" (UID: \"27a58bb7-ce09-4c16-b190-071c1c506a14\") " Feb 03 12:48:26 crc kubenswrapper[4820]: I0203 12:48:26.625326 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/27a58bb7-ce09-4c16-b190-071c1c506a14-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "27a58bb7-ce09-4c16-b190-071c1c506a14" (UID: "27a58bb7-ce09-4c16-b190-071c1c506a14"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:48:26 crc kubenswrapper[4820]: I0203 12:48:26.630152 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27a58bb7-ce09-4c16-b190-071c1c506a14-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "27a58bb7-ce09-4c16-b190-071c1c506a14" (UID: "27a58bb7-ce09-4c16-b190-071c1c506a14"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:48:26 crc kubenswrapper[4820]: I0203 12:48:26.630208 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/27a58bb7-ce09-4c16-b190-071c1c506a14-kube-api-access-zmj55" (OuterVolumeSpecName: "kube-api-access-zmj55") pod "27a58bb7-ce09-4c16-b190-071c1c506a14" (UID: "27a58bb7-ce09-4c16-b190-071c1c506a14"). InnerVolumeSpecName "kube-api-access-zmj55". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:48:26 crc kubenswrapper[4820]: I0203 12:48:26.635106 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27a58bb7-ce09-4c16-b190-071c1c506a14-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "27a58bb7-ce09-4c16-b190-071c1c506a14" (UID: "27a58bb7-ce09-4c16-b190-071c1c506a14"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:48:26 crc kubenswrapper[4820]: I0203 12:48:26.635277 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/27a58bb7-ce09-4c16-b190-071c1c506a14-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "27a58bb7-ce09-4c16-b190-071c1c506a14" (UID: "27a58bb7-ce09-4c16-b190-071c1c506a14"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:48:26 crc kubenswrapper[4820]: I0203 12:48:26.645392 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/27a58bb7-ce09-4c16-b190-071c1c506a14-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "27a58bb7-ce09-4c16-b190-071c1c506a14" (UID: "27a58bb7-ce09-4c16-b190-071c1c506a14"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:48:26 crc kubenswrapper[4820]: I0203 12:48:26.660153 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/27a58bb7-ce09-4c16-b190-071c1c506a14-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "27a58bb7-ce09-4c16-b190-071c1c506a14" (UID: "27a58bb7-ce09-4c16-b190-071c1c506a14"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:48:26 crc kubenswrapper[4820]: I0203 12:48:26.660332 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27a58bb7-ce09-4c16-b190-071c1c506a14-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "27a58bb7-ce09-4c16-b190-071c1c506a14" (UID: "27a58bb7-ce09-4c16-b190-071c1c506a14"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:48:26 crc kubenswrapper[4820]: I0203 12:48:26.660404 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27a58bb7-ce09-4c16-b190-071c1c506a14-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "27a58bb7-ce09-4c16-b190-071c1c506a14" (UID: "27a58bb7-ce09-4c16-b190-071c1c506a14"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:48:26 crc kubenswrapper[4820]: I0203 12:48:26.660471 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27a58bb7-ce09-4c16-b190-071c1c506a14-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "27a58bb7-ce09-4c16-b190-071c1c506a14" (UID: "27a58bb7-ce09-4c16-b190-071c1c506a14"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:48:26 crc kubenswrapper[4820]: I0203 12:48:26.660172 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27a58bb7-ce09-4c16-b190-071c1c506a14-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "27a58bb7-ce09-4c16-b190-071c1c506a14" (UID: "27a58bb7-ce09-4c16-b190-071c1c506a14"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:48:26 crc kubenswrapper[4820]: I0203 12:48:26.691092 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27a58bb7-ce09-4c16-b190-071c1c506a14-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "27a58bb7-ce09-4c16-b190-071c1c506a14" (UID: "27a58bb7-ce09-4c16-b190-071c1c506a14"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:48:26 crc kubenswrapper[4820]: I0203 12:48:26.706067 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27a58bb7-ce09-4c16-b190-071c1c506a14-inventory" (OuterVolumeSpecName: "inventory") pod "27a58bb7-ce09-4c16-b190-071c1c506a14" (UID: "27a58bb7-ce09-4c16-b190-071c1c506a14"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:48:26 crc kubenswrapper[4820]: I0203 12:48:26.708750 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27a58bb7-ce09-4c16-b190-071c1c506a14-ovn-combined-ca-bundle\") pod \"27a58bb7-ce09-4c16-b190-071c1c506a14\" (UID: \"27a58bb7-ce09-4c16-b190-071c1c506a14\") " Feb 03 12:48:26 crc kubenswrapper[4820]: I0203 12:48:26.712774 4820 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27a58bb7-ce09-4c16-b190-071c1c506a14-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 12:48:26 crc kubenswrapper[4820]: I0203 12:48:26.712818 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zmj55\" (UniqueName: \"kubernetes.io/projected/27a58bb7-ce09-4c16-b190-071c1c506a14-kube-api-access-zmj55\") on node \"crc\" DevicePath \"\"" Feb 03 12:48:26 crc kubenswrapper[4820]: I0203 12:48:26.712831 4820 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27a58bb7-ce09-4c16-b190-071c1c506a14-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 12:48:26 crc kubenswrapper[4820]: I0203 12:48:26.712845 4820 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27a58bb7-ce09-4c16-b190-071c1c506a14-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 12:48:26 crc kubenswrapper[4820]: I0203 12:48:26.712853 4820 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27a58bb7-ce09-4c16-b190-071c1c506a14-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 12:48:26 crc kubenswrapper[4820]: I0203 12:48:26.712865 4820 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/27a58bb7-ce09-4c16-b190-071c1c506a14-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 03 12:48:26 crc kubenswrapper[4820]: I0203 12:48:26.712875 4820 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/27a58bb7-ce09-4c16-b190-071c1c506a14-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 03 12:48:26 crc kubenswrapper[4820]: I0203 12:48:26.712898 4820 reconciler_common.go:293] "Volume detached for volume 
\"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/27a58bb7-ce09-4c16-b190-071c1c506a14-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 03 12:48:26 crc kubenswrapper[4820]: I0203 12:48:26.712906 4820 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/27a58bb7-ce09-4c16-b190-071c1c506a14-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 03 12:48:26 crc kubenswrapper[4820]: I0203 12:48:26.712918 4820 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/27a58bb7-ce09-4c16-b190-071c1c506a14-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 03 12:48:26 crc kubenswrapper[4820]: I0203 12:48:26.712934 4820 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/27a58bb7-ce09-4c16-b190-071c1c506a14-inventory\") on node \"crc\" DevicePath \"\"" Feb 03 12:48:26 crc kubenswrapper[4820]: I0203 12:48:26.712945 4820 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27a58bb7-ce09-4c16-b190-071c1c506a14-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 12:48:26 crc kubenswrapper[4820]: I0203 12:48:26.712956 4820 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27a58bb7-ce09-4c16-b190-071c1c506a14-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 12:48:26 crc kubenswrapper[4820]: I0203 12:48:26.725624 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27a58bb7-ce09-4c16-b190-071c1c506a14-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "27a58bb7-ce09-4c16-b190-071c1c506a14" (UID: "27a58bb7-ce09-4c16-b190-071c1c506a14"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:48:26 crc kubenswrapper[4820]: I0203 12:48:26.816372 4820 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27a58bb7-ce09-4c16-b190-071c1c506a14-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 12:48:27 crc kubenswrapper[4820]: I0203 12:48:27.111216 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kgbdv" event={"ID":"27a58bb7-ce09-4c16-b190-071c1c506a14","Type":"ContainerDied","Data":"2c5f1f021a2bfaf68d38dc0953dde1828a355397d415b1f1754da9b6d6d71974"} Feb 03 12:48:27 crc kubenswrapper[4820]: I0203 12:48:27.111273 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2c5f1f021a2bfaf68d38dc0953dde1828a355397d415b1f1754da9b6d6d71974" Feb 03 12:48:27 crc kubenswrapper[4820]: I0203 12:48:27.111327 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kgbdv" Feb 03 12:48:27 crc kubenswrapper[4820]: I0203 12:48:27.610590 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-hx7dn"] Feb 03 12:48:27 crc kubenswrapper[4820]: E0203 12:48:27.613926 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27a58bb7-ce09-4c16-b190-071c1c506a14" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Feb 03 12:48:27 crc kubenswrapper[4820]: I0203 12:48:27.613956 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="27a58bb7-ce09-4c16-b190-071c1c506a14" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Feb 03 12:48:27 crc kubenswrapper[4820]: I0203 12:48:27.614263 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="27a58bb7-ce09-4c16-b190-071c1c506a14" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Feb 03 12:48:27 crc kubenswrapper[4820]: I0203 12:48:27.615640 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-hx7dn" Feb 03 12:48:27 crc kubenswrapper[4820]: I0203 12:48:27.618684 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Feb 03 12:48:27 crc kubenswrapper[4820]: I0203 12:48:27.618723 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 03 12:48:27 crc kubenswrapper[4820]: I0203 12:48:27.618859 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 03 12:48:27 crc kubenswrapper[4820]: I0203 12:48:27.618974 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-q6jrk" Feb 03 12:48:27 crc kubenswrapper[4820]: I0203 12:48:27.619179 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 03 12:48:27 crc kubenswrapper[4820]: I0203 12:48:27.622111 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-hx7dn"] Feb 03 12:48:27 crc kubenswrapper[4820]: I0203 12:48:27.790940 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ffae89cd-1189-4722-8b80-6bf2a67f5dde-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-hx7dn\" (UID: \"ffae89cd-1189-4722-8b80-6bf2a67f5dde\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-hx7dn" Feb 03 12:48:27 crc kubenswrapper[4820]: I0203 12:48:27.791097 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/ffae89cd-1189-4722-8b80-6bf2a67f5dde-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-hx7dn\" (UID: \"ffae89cd-1189-4722-8b80-6bf2a67f5dde\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-hx7dn" Feb 03 12:48:27 crc kubenswrapper[4820]: I0203 12:48:27.791131 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ffae89cd-1189-4722-8b80-6bf2a67f5dde-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-hx7dn\" (UID: \"ffae89cd-1189-4722-8b80-6bf2a67f5dde\") " 
pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-hx7dn" Feb 03 12:48:27 crc kubenswrapper[4820]: I0203 12:48:27.791529 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffae89cd-1189-4722-8b80-6bf2a67f5dde-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-hx7dn\" (UID: \"ffae89cd-1189-4722-8b80-6bf2a67f5dde\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-hx7dn" Feb 03 12:48:27 crc kubenswrapper[4820]: I0203 12:48:27.791684 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sx9s2\" (UniqueName: \"kubernetes.io/projected/ffae89cd-1189-4722-8b80-6bf2a67f5dde-kube-api-access-sx9s2\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-hx7dn\" (UID: \"ffae89cd-1189-4722-8b80-6bf2a67f5dde\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-hx7dn" Feb 03 12:48:27 crc kubenswrapper[4820]: I0203 12:48:27.893388 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/ffae89cd-1189-4722-8b80-6bf2a67f5dde-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-hx7dn\" (UID: \"ffae89cd-1189-4722-8b80-6bf2a67f5dde\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-hx7dn" Feb 03 12:48:27 crc kubenswrapper[4820]: I0203 12:48:27.893446 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ffae89cd-1189-4722-8b80-6bf2a67f5dde-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-hx7dn\" (UID: \"ffae89cd-1189-4722-8b80-6bf2a67f5dde\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-hx7dn" Feb 03 12:48:27 crc kubenswrapper[4820]: I0203 12:48:27.893558 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffae89cd-1189-4722-8b80-6bf2a67f5dde-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-hx7dn\" (UID: \"ffae89cd-1189-4722-8b80-6bf2a67f5dde\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-hx7dn" Feb 03 12:48:27 crc kubenswrapper[4820]: I0203 12:48:27.893632 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sx9s2\" (UniqueName: \"kubernetes.io/projected/ffae89cd-1189-4722-8b80-6bf2a67f5dde-kube-api-access-sx9s2\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-hx7dn\" (UID: \"ffae89cd-1189-4722-8b80-6bf2a67f5dde\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-hx7dn" Feb 03 12:48:27 crc kubenswrapper[4820]: I0203 12:48:27.893746 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ffae89cd-1189-4722-8b80-6bf2a67f5dde-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-hx7dn\" (UID: \"ffae89cd-1189-4722-8b80-6bf2a67f5dde\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-hx7dn" Feb 03 12:48:27 crc kubenswrapper[4820]: I0203 12:48:27.894374 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/ffae89cd-1189-4722-8b80-6bf2a67f5dde-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-hx7dn\" (UID: \"ffae89cd-1189-4722-8b80-6bf2a67f5dde\") " 
pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-hx7dn" Feb 03 12:48:27 crc kubenswrapper[4820]: I0203 12:48:27.899705 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffae89cd-1189-4722-8b80-6bf2a67f5dde-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-hx7dn\" (UID: \"ffae89cd-1189-4722-8b80-6bf2a67f5dde\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-hx7dn" Feb 03 12:48:27 crc kubenswrapper[4820]: I0203 12:48:27.900093 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ffae89cd-1189-4722-8b80-6bf2a67f5dde-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-hx7dn\" (UID: \"ffae89cd-1189-4722-8b80-6bf2a67f5dde\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-hx7dn" Feb 03 12:48:27 crc kubenswrapper[4820]: I0203 12:48:27.905548 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ffae89cd-1189-4722-8b80-6bf2a67f5dde-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-hx7dn\" (UID: \"ffae89cd-1189-4722-8b80-6bf2a67f5dde\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-hx7dn" Feb 03 12:48:27 crc kubenswrapper[4820]: I0203 12:48:27.917250 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sx9s2\" (UniqueName: \"kubernetes.io/projected/ffae89cd-1189-4722-8b80-6bf2a67f5dde-kube-api-access-sx9s2\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-hx7dn\" (UID: \"ffae89cd-1189-4722-8b80-6bf2a67f5dde\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-hx7dn" Feb 03 12:48:27 crc kubenswrapper[4820]: I0203 12:48:27.946574 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-hx7dn" Feb 03 12:48:28 crc kubenswrapper[4820]: I0203 12:48:28.637091 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-hx7dn"] Feb 03 12:48:28 crc kubenswrapper[4820]: W0203 12:48:28.643333 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podffae89cd_1189_4722_8b80_6bf2a67f5dde.slice/crio-dc98ccac619322ed2a7a1ebc8e58fc1b9d9fb22890fce7b8855b8b2d464623b9 WatchSource:0}: Error finding container dc98ccac619322ed2a7a1ebc8e58fc1b9d9fb22890fce7b8855b8b2d464623b9: Status 404 returned error can't find the container with id dc98ccac619322ed2a7a1ebc8e58fc1b9d9fb22890fce7b8855b8b2d464623b9 Feb 03 12:48:29 crc kubenswrapper[4820]: I0203 12:48:29.134366 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-hx7dn" event={"ID":"ffae89cd-1189-4722-8b80-6bf2a67f5dde","Type":"ContainerStarted","Data":"dc98ccac619322ed2a7a1ebc8e58fc1b9d9fb22890fce7b8855b8b2d464623b9"} Feb 03 12:48:30 crc kubenswrapper[4820]: I0203 12:48:30.146490 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-hx7dn" event={"ID":"ffae89cd-1189-4722-8b80-6bf2a67f5dde","Type":"ContainerStarted","Data":"a6fb56741bddaee2539764da29d3e784e678dceefd978b9e3ddff51f8a71ee38"} Feb 03 12:48:30 crc kubenswrapper[4820]: I0203 12:48:30.181395 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-hx7dn" podStartSLOduration=2.525640891 podStartE2EDuration="3.181369451s" podCreationTimestamp="2026-02-03 12:48:27 +0000 UTC" firstStartedPulling="2026-02-03 12:48:28.646649061 +0000 UTC m=+2626.169724935" lastFinishedPulling="2026-02-03 12:48:29.302377631 +0000 UTC m=+2626.825453495" observedRunningTime="2026-02-03 12:48:30.168211929 +0000 UTC m=+2627.691287793" watchObservedRunningTime="2026-02-03 12:48:30.181369451 +0000 UTC m=+2627.704445315" Feb 03 12:49:31 crc kubenswrapper[4820]: I0203 12:49:31.062498 4820 generic.go:334] "Generic (PLEG): container finished" podID="ffae89cd-1189-4722-8b80-6bf2a67f5dde" containerID="a6fb56741bddaee2539764da29d3e784e678dceefd978b9e3ddff51f8a71ee38" exitCode=0 Feb 03 12:49:31 crc kubenswrapper[4820]: I0203 12:49:31.062658 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-hx7dn" event={"ID":"ffae89cd-1189-4722-8b80-6bf2a67f5dde","Type":"ContainerDied","Data":"a6fb56741bddaee2539764da29d3e784e678dceefd978b9e3ddff51f8a71ee38"} Feb 03 12:49:32 crc kubenswrapper[4820]: I0203 12:49:32.516316 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-hx7dn" Feb 03 12:49:32 crc kubenswrapper[4820]: I0203 12:49:32.621760 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/ffae89cd-1189-4722-8b80-6bf2a67f5dde-ovncontroller-config-0\") pod \"ffae89cd-1189-4722-8b80-6bf2a67f5dde\" (UID: \"ffae89cd-1189-4722-8b80-6bf2a67f5dde\") " Feb 03 12:49:32 crc kubenswrapper[4820]: I0203 12:49:32.621822 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ffae89cd-1189-4722-8b80-6bf2a67f5dde-inventory\") pod \"ffae89cd-1189-4722-8b80-6bf2a67f5dde\" (UID: \"ffae89cd-1189-4722-8b80-6bf2a67f5dde\") " Feb 03 12:49:32 crc kubenswrapper[4820]: I0203 12:49:32.621903 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sx9s2\" (UniqueName: \"kubernetes.io/projected/ffae89cd-1189-4722-8b80-6bf2a67f5dde-kube-api-access-sx9s2\") pod \"ffae89cd-1189-4722-8b80-6bf2a67f5dde\" (UID: \"ffae89cd-1189-4722-8b80-6bf2a67f5dde\") " Feb 03 12:49:32 crc kubenswrapper[4820]: I0203 12:49:32.621967 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ffae89cd-1189-4722-8b80-6bf2a67f5dde-ssh-key-openstack-edpm-ipam\") pod \"ffae89cd-1189-4722-8b80-6bf2a67f5dde\" (UID: \"ffae89cd-1189-4722-8b80-6bf2a67f5dde\") " Feb 03 12:49:32 crc kubenswrapper[4820]: I0203 12:49:32.622109 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffae89cd-1189-4722-8b80-6bf2a67f5dde-ovn-combined-ca-bundle\") pod \"ffae89cd-1189-4722-8b80-6bf2a67f5dde\" (UID: \"ffae89cd-1189-4722-8b80-6bf2a67f5dde\") " Feb 03 12:49:32 crc kubenswrapper[4820]: I0203 12:49:32.629578 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ffae89cd-1189-4722-8b80-6bf2a67f5dde-kube-api-access-sx9s2" (OuterVolumeSpecName: "kube-api-access-sx9s2") pod "ffae89cd-1189-4722-8b80-6bf2a67f5dde" (UID: "ffae89cd-1189-4722-8b80-6bf2a67f5dde"). InnerVolumeSpecName "kube-api-access-sx9s2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:49:32 crc kubenswrapper[4820]: I0203 12:49:32.632773 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ffae89cd-1189-4722-8b80-6bf2a67f5dde-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "ffae89cd-1189-4722-8b80-6bf2a67f5dde" (UID: "ffae89cd-1189-4722-8b80-6bf2a67f5dde"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:49:32 crc kubenswrapper[4820]: I0203 12:49:32.653744 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ffae89cd-1189-4722-8b80-6bf2a67f5dde-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "ffae89cd-1189-4722-8b80-6bf2a67f5dde" (UID: "ffae89cd-1189-4722-8b80-6bf2a67f5dde"). InnerVolumeSpecName "ovncontroller-config-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:49:32 crc kubenswrapper[4820]: I0203 12:49:32.657548 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ffae89cd-1189-4722-8b80-6bf2a67f5dde-inventory" (OuterVolumeSpecName: "inventory") pod "ffae89cd-1189-4722-8b80-6bf2a67f5dde" (UID: "ffae89cd-1189-4722-8b80-6bf2a67f5dde"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:49:32 crc kubenswrapper[4820]: I0203 12:49:32.662555 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ffae89cd-1189-4722-8b80-6bf2a67f5dde-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "ffae89cd-1189-4722-8b80-6bf2a67f5dde" (UID: "ffae89cd-1189-4722-8b80-6bf2a67f5dde"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:49:32 crc kubenswrapper[4820]: I0203 12:49:32.725091 4820 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffae89cd-1189-4722-8b80-6bf2a67f5dde-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 12:49:32 crc kubenswrapper[4820]: I0203 12:49:32.725163 4820 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/ffae89cd-1189-4722-8b80-6bf2a67f5dde-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Feb 03 12:49:32 crc kubenswrapper[4820]: I0203 12:49:32.725187 4820 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ffae89cd-1189-4722-8b80-6bf2a67f5dde-inventory\") on node \"crc\" DevicePath \"\"" Feb 03 12:49:32 crc kubenswrapper[4820]: I0203 12:49:32.725203 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sx9s2\" (UniqueName: \"kubernetes.io/projected/ffae89cd-1189-4722-8b80-6bf2a67f5dde-kube-api-access-sx9s2\") on node \"crc\" DevicePath \"\"" Feb 03 12:49:32 crc kubenswrapper[4820]: I0203 12:49:32.725223 4820 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ffae89cd-1189-4722-8b80-6bf2a67f5dde-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 03 12:49:33 crc kubenswrapper[4820]: I0203 12:49:33.092110 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-hx7dn" event={"ID":"ffae89cd-1189-4722-8b80-6bf2a67f5dde","Type":"ContainerDied","Data":"dc98ccac619322ed2a7a1ebc8e58fc1b9d9fb22890fce7b8855b8b2d464623b9"} Feb 03 12:49:33 crc kubenswrapper[4820]: I0203 12:49:33.092740 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dc98ccac619322ed2a7a1ebc8e58fc1b9d9fb22890fce7b8855b8b2d464623b9" Feb 03 12:49:33 crc kubenswrapper[4820]: I0203 12:49:33.092175 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-hx7dn" Feb 03 12:49:33 crc kubenswrapper[4820]: I0203 12:49:33.192190 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fw8jc"] Feb 03 12:49:33 crc kubenswrapper[4820]: E0203 12:49:33.192801 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ffae89cd-1189-4722-8b80-6bf2a67f5dde" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Feb 03 12:49:33 crc kubenswrapper[4820]: I0203 12:49:33.192829 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="ffae89cd-1189-4722-8b80-6bf2a67f5dde" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Feb 03 12:49:33 crc kubenswrapper[4820]: I0203 12:49:33.193118 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="ffae89cd-1189-4722-8b80-6bf2a67f5dde" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Feb 03 12:49:33 crc kubenswrapper[4820]: I0203 12:49:33.194140 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fw8jc" Feb 03 12:49:33 crc kubenswrapper[4820]: I0203 12:49:33.197463 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Feb 03 12:49:33 crc kubenswrapper[4820]: I0203 12:49:33.197762 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 03 12:49:33 crc kubenswrapper[4820]: I0203 12:49:33.198035 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 03 12:49:33 crc kubenswrapper[4820]: I0203 12:49:33.198198 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 03 12:49:33 crc kubenswrapper[4820]: I0203 12:49:33.198478 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-q6jrk" Feb 03 12:49:33 crc kubenswrapper[4820]: I0203 12:49:33.199250 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Feb 03 12:49:33 crc kubenswrapper[4820]: I0203 12:49:33.206859 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fw8jc"] Feb 03 12:49:33 crc kubenswrapper[4820]: I0203 12:49:33.338490 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/0c9770c6-0c7f-4195-99d7-a9f7074e0236-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-fw8jc\" (UID: \"0c9770c6-0c7f-4195-99d7-a9f7074e0236\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fw8jc" Feb 03 12:49:33 crc kubenswrapper[4820]: I0203 12:49:33.339137 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nvdnl\" (UniqueName: \"kubernetes.io/projected/0c9770c6-0c7f-4195-99d7-a9f7074e0236-kube-api-access-nvdnl\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-fw8jc\" (UID: \"0c9770c6-0c7f-4195-99d7-a9f7074e0236\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fw8jc" Feb 03 12:49:33 crc kubenswrapper[4820]: I0203 12:49:33.339279 4820 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0c9770c6-0c7f-4195-99d7-a9f7074e0236-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-fw8jc\" (UID: \"0c9770c6-0c7f-4195-99d7-a9f7074e0236\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fw8jc" Feb 03 12:49:33 crc kubenswrapper[4820]: I0203 12:49:33.339344 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/0c9770c6-0c7f-4195-99d7-a9f7074e0236-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-fw8jc\" (UID: \"0c9770c6-0c7f-4195-99d7-a9f7074e0236\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fw8jc" Feb 03 12:49:33 crc kubenswrapper[4820]: I0203 12:49:33.339467 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c9770c6-0c7f-4195-99d7-a9f7074e0236-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-fw8jc\" (UID: \"0c9770c6-0c7f-4195-99d7-a9f7074e0236\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fw8jc" Feb 03 12:49:33 crc kubenswrapper[4820]: I0203 12:49:33.339558 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0c9770c6-0c7f-4195-99d7-a9f7074e0236-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-fw8jc\" (UID: \"0c9770c6-0c7f-4195-99d7-a9f7074e0236\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fw8jc" Feb 03 12:49:33 crc kubenswrapper[4820]: I0203 12:49:33.441191 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0c9770c6-0c7f-4195-99d7-a9f7074e0236-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-fw8jc\" (UID: \"0c9770c6-0c7f-4195-99d7-a9f7074e0236\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fw8jc" Feb 03 12:49:33 crc kubenswrapper[4820]: I0203 12:49:33.441269 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/0c9770c6-0c7f-4195-99d7-a9f7074e0236-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-fw8jc\" (UID: \"0c9770c6-0c7f-4195-99d7-a9f7074e0236\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fw8jc" Feb 03 12:49:33 crc kubenswrapper[4820]: I0203 12:49:33.441330 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c9770c6-0c7f-4195-99d7-a9f7074e0236-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-fw8jc\" (UID: \"0c9770c6-0c7f-4195-99d7-a9f7074e0236\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fw8jc" Feb 03 12:49:33 crc kubenswrapper[4820]: I0203 12:49:33.441390 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0c9770c6-0c7f-4195-99d7-a9f7074e0236-ssh-key-openstack-edpm-ipam\") pod 
\"neutron-metadata-edpm-deployment-openstack-edpm-ipam-fw8jc\" (UID: \"0c9770c6-0c7f-4195-99d7-a9f7074e0236\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fw8jc" Feb 03 12:49:33 crc kubenswrapper[4820]: I0203 12:49:33.441440 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/0c9770c6-0c7f-4195-99d7-a9f7074e0236-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-fw8jc\" (UID: \"0c9770c6-0c7f-4195-99d7-a9f7074e0236\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fw8jc" Feb 03 12:49:33 crc kubenswrapper[4820]: I0203 12:49:33.441494 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nvdnl\" (UniqueName: \"kubernetes.io/projected/0c9770c6-0c7f-4195-99d7-a9f7074e0236-kube-api-access-nvdnl\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-fw8jc\" (UID: \"0c9770c6-0c7f-4195-99d7-a9f7074e0236\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fw8jc" Feb 03 12:49:33 crc kubenswrapper[4820]: I0203 12:49:33.446628 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/0c9770c6-0c7f-4195-99d7-a9f7074e0236-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-fw8jc\" (UID: \"0c9770c6-0c7f-4195-99d7-a9f7074e0236\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fw8jc" Feb 03 12:49:33 crc kubenswrapper[4820]: I0203 12:49:33.447187 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0c9770c6-0c7f-4195-99d7-a9f7074e0236-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-fw8jc\" (UID: \"0c9770c6-0c7f-4195-99d7-a9f7074e0236\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fw8jc" Feb 03 12:49:33 crc kubenswrapper[4820]: I0203 12:49:33.449014 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0c9770c6-0c7f-4195-99d7-a9f7074e0236-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-fw8jc\" (UID: \"0c9770c6-0c7f-4195-99d7-a9f7074e0236\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fw8jc" Feb 03 12:49:33 crc kubenswrapper[4820]: I0203 12:49:33.449163 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c9770c6-0c7f-4195-99d7-a9f7074e0236-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-fw8jc\" (UID: \"0c9770c6-0c7f-4195-99d7-a9f7074e0236\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fw8jc" Feb 03 12:49:33 crc kubenswrapper[4820]: I0203 12:49:33.454526 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/0c9770c6-0c7f-4195-99d7-a9f7074e0236-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-fw8jc\" (UID: \"0c9770c6-0c7f-4195-99d7-a9f7074e0236\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fw8jc" Feb 03 12:49:33 crc kubenswrapper[4820]: I0203 
12:49:33.469175 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nvdnl\" (UniqueName: \"kubernetes.io/projected/0c9770c6-0c7f-4195-99d7-a9f7074e0236-kube-api-access-nvdnl\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-fw8jc\" (UID: \"0c9770c6-0c7f-4195-99d7-a9f7074e0236\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fw8jc" Feb 03 12:49:33 crc kubenswrapper[4820]: I0203 12:49:33.518537 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fw8jc" Feb 03 12:49:34 crc kubenswrapper[4820]: I0203 12:49:34.061315 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fw8jc"] Feb 03 12:49:34 crc kubenswrapper[4820]: I0203 12:49:34.101881 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fw8jc" event={"ID":"0c9770c6-0c7f-4195-99d7-a9f7074e0236","Type":"ContainerStarted","Data":"d87aec2ffbdae5d35601c398d825886d312833d3eba809f2e105d3c46df73d80"} Feb 03 12:49:35 crc kubenswrapper[4820]: I0203 12:49:35.112574 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fw8jc" event={"ID":"0c9770c6-0c7f-4195-99d7-a9f7074e0236","Type":"ContainerStarted","Data":"a0806d5561e71c54c973577730230b6104f7bacee1bfce9bf8d62c8924345413"} Feb 03 12:49:35 crc kubenswrapper[4820]: I0203 12:49:35.135733 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fw8jc" podStartSLOduration=1.6520305130000001 podStartE2EDuration="2.135711656s" podCreationTimestamp="2026-02-03 12:49:33 +0000 UTC" firstStartedPulling="2026-02-03 12:49:34.064886576 +0000 UTC m=+2691.587962440" lastFinishedPulling="2026-02-03 12:49:34.548567699 +0000 UTC m=+2692.071643583" observedRunningTime="2026-02-03 12:49:35.12951741 +0000 UTC m=+2692.652593274" watchObservedRunningTime="2026-02-03 12:49:35.135711656 +0000 UTC m=+2692.658787530" Feb 03 12:50:26 crc kubenswrapper[4820]: I0203 12:50:26.080394 4820 generic.go:334] "Generic (PLEG): container finished" podID="0c9770c6-0c7f-4195-99d7-a9f7074e0236" containerID="a0806d5561e71c54c973577730230b6104f7bacee1bfce9bf8d62c8924345413" exitCode=0 Feb 03 12:50:26 crc kubenswrapper[4820]: I0203 12:50:26.081119 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fw8jc" event={"ID":"0c9770c6-0c7f-4195-99d7-a9f7074e0236","Type":"ContainerDied","Data":"a0806d5561e71c54c973577730230b6104f7bacee1bfce9bf8d62c8924345413"} Feb 03 12:50:27 crc kubenswrapper[4820]: I0203 12:50:27.596293 4820 util.go:48] "No ready sandbox for pod can be found. 
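
The "Observed pod startup duration" record above is internally consistent: podStartSLOduration appears to be podStartE2EDuration minus the image-pull window, with both taken from the monotonic ("m=+...") clock readings rather than the wall-clock timestamps. A minimal Go sketch (an illustration checking the arithmetic, not kubelet code) using the values logged for pod ...fw8jc:

package main

import "fmt"

func main() {
	// Monotonic readings copied from the log record above.
	firstStartedPulling := 2691.587962440 // m=+ when the first image pull began
	lastFinishedPulling := 2692.071643583 // m=+ when the last image pull ended
	podStartE2E := 2.135711656            // podStartE2EDuration, in seconds

	pull := lastFinishedPulling - firstStartedPulling
	fmt.Printf("image-pull window: %.9fs\n", pull)             // 0.483681143s
	fmt.Printf("E2E minus pulls:   %.9fs\n", podStartE2E-pull) // 1.652030513s, the logged podStartSLOduration
}

The same relation holds for the libvirt and marketplace pods later in this log, so the SLO figure seems to deliberately exclude image-pull time from the end-to-end startup duration.
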
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fw8jc" Feb 03 12:50:27 crc kubenswrapper[4820]: I0203 12:50:27.729526 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/0c9770c6-0c7f-4195-99d7-a9f7074e0236-neutron-ovn-metadata-agent-neutron-config-0\") pod \"0c9770c6-0c7f-4195-99d7-a9f7074e0236\" (UID: \"0c9770c6-0c7f-4195-99d7-a9f7074e0236\") " Feb 03 12:50:27 crc kubenswrapper[4820]: I0203 12:50:27.729607 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0c9770c6-0c7f-4195-99d7-a9f7074e0236-inventory\") pod \"0c9770c6-0c7f-4195-99d7-a9f7074e0236\" (UID: \"0c9770c6-0c7f-4195-99d7-a9f7074e0236\") " Feb 03 12:50:27 crc kubenswrapper[4820]: I0203 12:50:27.729693 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c9770c6-0c7f-4195-99d7-a9f7074e0236-neutron-metadata-combined-ca-bundle\") pod \"0c9770c6-0c7f-4195-99d7-a9f7074e0236\" (UID: \"0c9770c6-0c7f-4195-99d7-a9f7074e0236\") " Feb 03 12:50:27 crc kubenswrapper[4820]: I0203 12:50:27.729727 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/0c9770c6-0c7f-4195-99d7-a9f7074e0236-nova-metadata-neutron-config-0\") pod \"0c9770c6-0c7f-4195-99d7-a9f7074e0236\" (UID: \"0c9770c6-0c7f-4195-99d7-a9f7074e0236\") " Feb 03 12:50:27 crc kubenswrapper[4820]: I0203 12:50:27.730037 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nvdnl\" (UniqueName: \"kubernetes.io/projected/0c9770c6-0c7f-4195-99d7-a9f7074e0236-kube-api-access-nvdnl\") pod \"0c9770c6-0c7f-4195-99d7-a9f7074e0236\" (UID: \"0c9770c6-0c7f-4195-99d7-a9f7074e0236\") " Feb 03 12:50:27 crc kubenswrapper[4820]: I0203 12:50:27.730131 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0c9770c6-0c7f-4195-99d7-a9f7074e0236-ssh-key-openstack-edpm-ipam\") pod \"0c9770c6-0c7f-4195-99d7-a9f7074e0236\" (UID: \"0c9770c6-0c7f-4195-99d7-a9f7074e0236\") " Feb 03 12:50:27 crc kubenswrapper[4820]: I0203 12:50:27.737529 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c9770c6-0c7f-4195-99d7-a9f7074e0236-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "0c9770c6-0c7f-4195-99d7-a9f7074e0236" (UID: "0c9770c6-0c7f-4195-99d7-a9f7074e0236"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:50:27 crc kubenswrapper[4820]: I0203 12:50:27.737902 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c9770c6-0c7f-4195-99d7-a9f7074e0236-kube-api-access-nvdnl" (OuterVolumeSpecName: "kube-api-access-nvdnl") pod "0c9770c6-0c7f-4195-99d7-a9f7074e0236" (UID: "0c9770c6-0c7f-4195-99d7-a9f7074e0236"). InnerVolumeSpecName "kube-api-access-nvdnl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:50:27 crc kubenswrapper[4820]: I0203 12:50:27.763874 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c9770c6-0c7f-4195-99d7-a9f7074e0236-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "0c9770c6-0c7f-4195-99d7-a9f7074e0236" (UID: "0c9770c6-0c7f-4195-99d7-a9f7074e0236"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:50:27 crc kubenswrapper[4820]: I0203 12:50:27.768090 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c9770c6-0c7f-4195-99d7-a9f7074e0236-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "0c9770c6-0c7f-4195-99d7-a9f7074e0236" (UID: "0c9770c6-0c7f-4195-99d7-a9f7074e0236"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:50:27 crc kubenswrapper[4820]: I0203 12:50:27.770524 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c9770c6-0c7f-4195-99d7-a9f7074e0236-inventory" (OuterVolumeSpecName: "inventory") pod "0c9770c6-0c7f-4195-99d7-a9f7074e0236" (UID: "0c9770c6-0c7f-4195-99d7-a9f7074e0236"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:50:27 crc kubenswrapper[4820]: I0203 12:50:27.771116 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c9770c6-0c7f-4195-99d7-a9f7074e0236-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "0c9770c6-0c7f-4195-99d7-a9f7074e0236" (UID: "0c9770c6-0c7f-4195-99d7-a9f7074e0236"). InnerVolumeSpecName "nova-metadata-neutron-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:50:27 crc kubenswrapper[4820]: I0203 12:50:27.837972 4820 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0c9770c6-0c7f-4195-99d7-a9f7074e0236-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 03 12:50:27 crc kubenswrapper[4820]: I0203 12:50:27.838010 4820 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/0c9770c6-0c7f-4195-99d7-a9f7074e0236-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Feb 03 12:50:27 crc kubenswrapper[4820]: I0203 12:50:27.838023 4820 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0c9770c6-0c7f-4195-99d7-a9f7074e0236-inventory\") on node \"crc\" DevicePath \"\"" Feb 03 12:50:27 crc kubenswrapper[4820]: I0203 12:50:27.838035 4820 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c9770c6-0c7f-4195-99d7-a9f7074e0236-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 12:50:27 crc kubenswrapper[4820]: I0203 12:50:27.838045 4820 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/0c9770c6-0c7f-4195-99d7-a9f7074e0236-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Feb 03 12:50:27 crc kubenswrapper[4820]: I0203 12:50:27.838056 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nvdnl\" (UniqueName: \"kubernetes.io/projected/0c9770c6-0c7f-4195-99d7-a9f7074e0236-kube-api-access-nvdnl\") on node \"crc\" DevicePath \"\"" Feb 03 12:50:28 crc kubenswrapper[4820]: I0203 12:50:28.101371 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fw8jc" event={"ID":"0c9770c6-0c7f-4195-99d7-a9f7074e0236","Type":"ContainerDied","Data":"d87aec2ffbdae5d35601c398d825886d312833d3eba809f2e105d3c46df73d80"} Feb 03 12:50:28 crc kubenswrapper[4820]: I0203 12:50:28.101406 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d87aec2ffbdae5d35601c398d825886d312833d3eba809f2e105d3c46df73d80" Feb 03 12:50:28 crc kubenswrapper[4820]: I0203 12:50:28.101736 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fw8jc" Feb 03 12:50:28 crc kubenswrapper[4820]: I0203 12:50:28.611545 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-mqwx5"] Feb 03 12:50:28 crc kubenswrapper[4820]: E0203 12:50:28.613105 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c9770c6-0c7f-4195-99d7-a9f7074e0236" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Feb 03 12:50:28 crc kubenswrapper[4820]: I0203 12:50:28.613162 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c9770c6-0c7f-4195-99d7-a9f7074e0236" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Feb 03 12:50:28 crc kubenswrapper[4820]: I0203 12:50:28.614118 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c9770c6-0c7f-4195-99d7-a9f7074e0236" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Feb 03 12:50:28 crc kubenswrapper[4820]: I0203 12:50:28.615498 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-mqwx5" Feb 03 12:50:28 crc kubenswrapper[4820]: I0203 12:50:28.619415 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 03 12:50:28 crc kubenswrapper[4820]: I0203 12:50:28.623213 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-q6jrk" Feb 03 12:50:28 crc kubenswrapper[4820]: I0203 12:50:28.626294 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Feb 03 12:50:28 crc kubenswrapper[4820]: I0203 12:50:28.627964 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 03 12:50:28 crc kubenswrapper[4820]: I0203 12:50:28.628004 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 03 12:50:28 crc kubenswrapper[4820]: I0203 12:50:28.638049 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-mqwx5"] Feb 03 12:50:28 crc kubenswrapper[4820]: I0203 12:50:28.786871 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67rnl\" (UniqueName: \"kubernetes.io/projected/772be0ab-717e-4a25-a481-95a4b1cd0c07-kube-api-access-67rnl\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-mqwx5\" (UID: \"772be0ab-717e-4a25-a481-95a4b1cd0c07\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-mqwx5" Feb 03 12:50:28 crc kubenswrapper[4820]: I0203 12:50:28.786993 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/772be0ab-717e-4a25-a481-95a4b1cd0c07-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-mqwx5\" (UID: \"772be0ab-717e-4a25-a481-95a4b1cd0c07\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-mqwx5" Feb 03 12:50:28 crc kubenswrapper[4820]: I0203 12:50:28.787237 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/772be0ab-717e-4a25-a481-95a4b1cd0c07-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-mqwx5\" (UID: 
\"772be0ab-717e-4a25-a481-95a4b1cd0c07\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-mqwx5" Feb 03 12:50:28 crc kubenswrapper[4820]: I0203 12:50:28.787582 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/772be0ab-717e-4a25-a481-95a4b1cd0c07-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-mqwx5\" (UID: \"772be0ab-717e-4a25-a481-95a4b1cd0c07\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-mqwx5" Feb 03 12:50:28 crc kubenswrapper[4820]: I0203 12:50:28.787660 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/772be0ab-717e-4a25-a481-95a4b1cd0c07-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-mqwx5\" (UID: \"772be0ab-717e-4a25-a481-95a4b1cd0c07\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-mqwx5" Feb 03 12:50:28 crc kubenswrapper[4820]: I0203 12:50:28.890037 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/772be0ab-717e-4a25-a481-95a4b1cd0c07-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-mqwx5\" (UID: \"772be0ab-717e-4a25-a481-95a4b1cd0c07\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-mqwx5" Feb 03 12:50:28 crc kubenswrapper[4820]: I0203 12:50:28.890185 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/772be0ab-717e-4a25-a481-95a4b1cd0c07-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-mqwx5\" (UID: \"772be0ab-717e-4a25-a481-95a4b1cd0c07\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-mqwx5" Feb 03 12:50:28 crc kubenswrapper[4820]: I0203 12:50:28.890232 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/772be0ab-717e-4a25-a481-95a4b1cd0c07-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-mqwx5\" (UID: \"772be0ab-717e-4a25-a481-95a4b1cd0c07\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-mqwx5" Feb 03 12:50:28 crc kubenswrapper[4820]: I0203 12:50:28.890346 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-67rnl\" (UniqueName: \"kubernetes.io/projected/772be0ab-717e-4a25-a481-95a4b1cd0c07-kube-api-access-67rnl\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-mqwx5\" (UID: \"772be0ab-717e-4a25-a481-95a4b1cd0c07\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-mqwx5" Feb 03 12:50:28 crc kubenswrapper[4820]: I0203 12:50:28.890376 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/772be0ab-717e-4a25-a481-95a4b1cd0c07-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-mqwx5\" (UID: \"772be0ab-717e-4a25-a481-95a4b1cd0c07\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-mqwx5" Feb 03 12:50:28 crc kubenswrapper[4820]: I0203 12:50:28.910904 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/772be0ab-717e-4a25-a481-95a4b1cd0c07-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-mqwx5\" (UID: 
\"772be0ab-717e-4a25-a481-95a4b1cd0c07\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-mqwx5" Feb 03 12:50:28 crc kubenswrapper[4820]: I0203 12:50:28.918245 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/772be0ab-717e-4a25-a481-95a4b1cd0c07-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-mqwx5\" (UID: \"772be0ab-717e-4a25-a481-95a4b1cd0c07\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-mqwx5" Feb 03 12:50:28 crc kubenswrapper[4820]: I0203 12:50:28.918331 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/772be0ab-717e-4a25-a481-95a4b1cd0c07-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-mqwx5\" (UID: \"772be0ab-717e-4a25-a481-95a4b1cd0c07\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-mqwx5" Feb 03 12:50:28 crc kubenswrapper[4820]: I0203 12:50:28.918789 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/772be0ab-717e-4a25-a481-95a4b1cd0c07-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-mqwx5\" (UID: \"772be0ab-717e-4a25-a481-95a4b1cd0c07\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-mqwx5" Feb 03 12:50:28 crc kubenswrapper[4820]: I0203 12:50:28.921380 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-67rnl\" (UniqueName: \"kubernetes.io/projected/772be0ab-717e-4a25-a481-95a4b1cd0c07-kube-api-access-67rnl\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-mqwx5\" (UID: \"772be0ab-717e-4a25-a481-95a4b1cd0c07\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-mqwx5" Feb 03 12:50:28 crc kubenswrapper[4820]: I0203 12:50:28.945597 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-mqwx5" Feb 03 12:50:29 crc kubenswrapper[4820]: I0203 12:50:29.521324 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-mqwx5"] Feb 03 12:50:30 crc kubenswrapper[4820]: I0203 12:50:30.126967 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-mqwx5" event={"ID":"772be0ab-717e-4a25-a481-95a4b1cd0c07","Type":"ContainerStarted","Data":"f4591158c246ad3eb3e7cd1c2bd1d7b8a1249491c21c7f0e1926c675443b6cdb"} Feb 03 12:50:31 crc kubenswrapper[4820]: I0203 12:50:31.234441 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-mqwx5" event={"ID":"772be0ab-717e-4a25-a481-95a4b1cd0c07","Type":"ContainerStarted","Data":"9f32ee95d3ca2e074c09fef455a9cd175f434e0abd870d026ce76e42326a2054"} Feb 03 12:50:31 crc kubenswrapper[4820]: I0203 12:50:31.258800 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-mqwx5" podStartSLOduration=2.7994424049999997 podStartE2EDuration="3.258782892s" podCreationTimestamp="2026-02-03 12:50:28 +0000 UTC" firstStartedPulling="2026-02-03 12:50:29.524227432 +0000 UTC m=+2747.047303286" lastFinishedPulling="2026-02-03 12:50:29.983567899 +0000 UTC m=+2747.506643773" observedRunningTime="2026-02-03 12:50:31.256557032 +0000 UTC m=+2748.779632896" watchObservedRunningTime="2026-02-03 12:50:31.258782892 +0000 UTC m=+2748.781858756" Feb 03 12:50:31 crc kubenswrapper[4820]: I0203 12:50:31.366176 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qj7xr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 12:50:31 crc kubenswrapper[4820]: I0203 12:50:31.366271 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 12:51:01 crc kubenswrapper[4820]: I0203 12:51:01.365678 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qj7xr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 12:51:01 crc kubenswrapper[4820]: I0203 12:51:01.366133 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 12:51:31 crc kubenswrapper[4820]: I0203 12:51:31.365646 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qj7xr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 12:51:31 crc kubenswrapper[4820]: I0203 12:51:31.366196 4820 prober.go:107] "Probe failed" 
probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 12:51:31 crc kubenswrapper[4820]: I0203 12:51:31.366261 4820 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" Feb 03 12:51:31 crc kubenswrapper[4820]: I0203 12:51:31.367341 4820 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f3746dd5b1ad5044813133335761f15989349419ea5f7ec425a2bd58c8519583"} pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 03 12:51:31 crc kubenswrapper[4820]: I0203 12:51:31.367411 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" containerName="machine-config-daemon" containerID="cri-o://f3746dd5b1ad5044813133335761f15989349419ea5f7ec425a2bd58c8519583" gracePeriod=600 Feb 03 12:51:31 crc kubenswrapper[4820]: I0203 12:51:31.714120 4820 generic.go:334] "Generic (PLEG): container finished" podID="2c02def6-29f2-448e-80ec-0c8ee290f053" containerID="f3746dd5b1ad5044813133335761f15989349419ea5f7ec425a2bd58c8519583" exitCode=0 Feb 03 12:51:31 crc kubenswrapper[4820]: I0203 12:51:31.714166 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" event={"ID":"2c02def6-29f2-448e-80ec-0c8ee290f053","Type":"ContainerDied","Data":"f3746dd5b1ad5044813133335761f15989349419ea5f7ec425a2bd58c8519583"} Feb 03 12:51:31 crc kubenswrapper[4820]: I0203 12:51:31.714466 4820 scope.go:117] "RemoveContainer" containerID="245331ecea7ae2a2547766477b0478b53c635e1745e222cb8a06ff036be8bc77" Feb 03 12:51:32 crc kubenswrapper[4820]: I0203 12:51:32.727053 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" event={"ID":"2c02def6-29f2-448e-80ec-0c8ee290f053","Type":"ContainerStarted","Data":"108356beea18d8aae66e65b6c7634daebf0864c97b64061528133a985322eb38"} Feb 03 12:52:45 crc kubenswrapper[4820]: I0203 12:52:45.953209 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-cbmkv"] Feb 03 12:52:45 crc kubenswrapper[4820]: I0203 12:52:45.956390 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-cbmkv" Feb 03 12:52:45 crc kubenswrapper[4820]: I0203 12:52:45.977284 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-cbmkv"] Feb 03 12:52:46 crc kubenswrapper[4820]: I0203 12:52:46.135354 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f69d2ce-a2c0-43df-a770-70e531c2b1e2-utilities\") pod \"redhat-operators-cbmkv\" (UID: \"1f69d2ce-a2c0-43df-a770-70e531c2b1e2\") " pod="openshift-marketplace/redhat-operators-cbmkv" Feb 03 12:52:46 crc kubenswrapper[4820]: I0203 12:52:46.135797 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f69d2ce-a2c0-43df-a770-70e531c2b1e2-catalog-content\") pod \"redhat-operators-cbmkv\" (UID: \"1f69d2ce-a2c0-43df-a770-70e531c2b1e2\") " pod="openshift-marketplace/redhat-operators-cbmkv" Feb 03 12:52:46 crc kubenswrapper[4820]: I0203 12:52:46.135836 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mjpnt\" (UniqueName: \"kubernetes.io/projected/1f69d2ce-a2c0-43df-a770-70e531c2b1e2-kube-api-access-mjpnt\") pod \"redhat-operators-cbmkv\" (UID: \"1f69d2ce-a2c0-43df-a770-70e531c2b1e2\") " pod="openshift-marketplace/redhat-operators-cbmkv" Feb 03 12:52:46 crc kubenswrapper[4820]: I0203 12:52:46.237563 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f69d2ce-a2c0-43df-a770-70e531c2b1e2-catalog-content\") pod \"redhat-operators-cbmkv\" (UID: \"1f69d2ce-a2c0-43df-a770-70e531c2b1e2\") " pod="openshift-marketplace/redhat-operators-cbmkv" Feb 03 12:52:46 crc kubenswrapper[4820]: I0203 12:52:46.237625 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mjpnt\" (UniqueName: \"kubernetes.io/projected/1f69d2ce-a2c0-43df-a770-70e531c2b1e2-kube-api-access-mjpnt\") pod \"redhat-operators-cbmkv\" (UID: \"1f69d2ce-a2c0-43df-a770-70e531c2b1e2\") " pod="openshift-marketplace/redhat-operators-cbmkv" Feb 03 12:52:46 crc kubenswrapper[4820]: I0203 12:52:46.237764 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f69d2ce-a2c0-43df-a770-70e531c2b1e2-utilities\") pod \"redhat-operators-cbmkv\" (UID: \"1f69d2ce-a2c0-43df-a770-70e531c2b1e2\") " pod="openshift-marketplace/redhat-operators-cbmkv" Feb 03 12:52:46 crc kubenswrapper[4820]: I0203 12:52:46.238424 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f69d2ce-a2c0-43df-a770-70e531c2b1e2-utilities\") pod \"redhat-operators-cbmkv\" (UID: \"1f69d2ce-a2c0-43df-a770-70e531c2b1e2\") " pod="openshift-marketplace/redhat-operators-cbmkv" Feb 03 12:52:46 crc kubenswrapper[4820]: I0203 12:52:46.239035 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f69d2ce-a2c0-43df-a770-70e531c2b1e2-catalog-content\") pod \"redhat-operators-cbmkv\" (UID: \"1f69d2ce-a2c0-43df-a770-70e531c2b1e2\") " pod="openshift-marketplace/redhat-operators-cbmkv" Feb 03 12:52:46 crc kubenswrapper[4820]: I0203 12:52:46.264804 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-mjpnt\" (UniqueName: \"kubernetes.io/projected/1f69d2ce-a2c0-43df-a770-70e531c2b1e2-kube-api-access-mjpnt\") pod \"redhat-operators-cbmkv\" (UID: \"1f69d2ce-a2c0-43df-a770-70e531c2b1e2\") " pod="openshift-marketplace/redhat-operators-cbmkv" Feb 03 12:52:46 crc kubenswrapper[4820]: I0203 12:52:46.361158 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cbmkv" Feb 03 12:52:46 crc kubenswrapper[4820]: I0203 12:52:46.887835 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-cbmkv"] Feb 03 12:52:47 crc kubenswrapper[4820]: I0203 12:52:47.052530 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cbmkv" event={"ID":"1f69d2ce-a2c0-43df-a770-70e531c2b1e2","Type":"ContainerStarted","Data":"b28b9c5181984ad11eb2ad904bf484494dacf682dea979420c7e006390739004"} Feb 03 12:52:48 crc kubenswrapper[4820]: I0203 12:52:48.065002 4820 generic.go:334] "Generic (PLEG): container finished" podID="1f69d2ce-a2c0-43df-a770-70e531c2b1e2" containerID="5bfe5555076e82544aa95439abd87c5310991c74bf88c2cbacdde9bd8ab43339" exitCode=0 Feb 03 12:52:48 crc kubenswrapper[4820]: I0203 12:52:48.065284 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cbmkv" event={"ID":"1f69d2ce-a2c0-43df-a770-70e531c2b1e2","Type":"ContainerDied","Data":"5bfe5555076e82544aa95439abd87c5310991c74bf88c2cbacdde9bd8ab43339"} Feb 03 12:52:48 crc kubenswrapper[4820]: I0203 12:52:48.067560 4820 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 03 12:52:49 crc kubenswrapper[4820]: I0203 12:52:49.077601 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cbmkv" event={"ID":"1f69d2ce-a2c0-43df-a770-70e531c2b1e2","Type":"ContainerStarted","Data":"b81d5b22c74eb9bd79200deee7843bc6eb58d22fb47c9ba1cd7490936f6bcbbe"} Feb 03 12:52:52 crc kubenswrapper[4820]: I0203 12:52:52.106118 4820 generic.go:334] "Generic (PLEG): container finished" podID="1f69d2ce-a2c0-43df-a770-70e531c2b1e2" containerID="b81d5b22c74eb9bd79200deee7843bc6eb58d22fb47c9ba1cd7490936f6bcbbe" exitCode=0 Feb 03 12:52:52 crc kubenswrapper[4820]: I0203 12:52:52.106181 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cbmkv" event={"ID":"1f69d2ce-a2c0-43df-a770-70e531c2b1e2","Type":"ContainerDied","Data":"b81d5b22c74eb9bd79200deee7843bc6eb58d22fb47c9ba1cd7490936f6bcbbe"} Feb 03 12:52:53 crc kubenswrapper[4820]: I0203 12:52:53.118774 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cbmkv" event={"ID":"1f69d2ce-a2c0-43df-a770-70e531c2b1e2","Type":"ContainerStarted","Data":"beb10cb04dcef2233d093c1108e2011268aa86eacb58ab52c76b448c7da9f72a"} Feb 03 12:52:56 crc kubenswrapper[4820]: I0203 12:52:56.361549 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-cbmkv" Feb 03 12:52:56 crc kubenswrapper[4820]: I0203 12:52:56.361908 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-cbmkv" Feb 03 12:52:57 crc kubenswrapper[4820]: I0203 12:52:57.411388 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-cbmkv" podUID="1f69d2ce-a2c0-43df-a770-70e531c2b1e2" containerName="registry-server" probeResult="failure" output=< Feb 03 
12:52:57 crc kubenswrapper[4820]: timeout: failed to connect service ":50051" within 1s Feb 03 12:52:57 crc kubenswrapper[4820]: > Feb 03 12:53:06 crc kubenswrapper[4820]: I0203 12:53:06.413085 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-cbmkv" Feb 03 12:53:06 crc kubenswrapper[4820]: I0203 12:53:06.454348 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-cbmkv" podStartSLOduration=17.002734924 podStartE2EDuration="21.454306605s" podCreationTimestamp="2026-02-03 12:52:45 +0000 UTC" firstStartedPulling="2026-02-03 12:52:48.067070407 +0000 UTC m=+2885.590146271" lastFinishedPulling="2026-02-03 12:52:52.518642088 +0000 UTC m=+2890.041717952" observedRunningTime="2026-02-03 12:52:53.159149543 +0000 UTC m=+2890.682225457" watchObservedRunningTime="2026-02-03 12:53:06.454306605 +0000 UTC m=+2903.977382469" Feb 03 12:53:06 crc kubenswrapper[4820]: I0203 12:53:06.486444 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-cbmkv" Feb 03 12:53:06 crc kubenswrapper[4820]: I0203 12:53:06.664400 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-cbmkv"] Feb 03 12:53:08 crc kubenswrapper[4820]: I0203 12:53:08.300462 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-cbmkv" podUID="1f69d2ce-a2c0-43df-a770-70e531c2b1e2" containerName="registry-server" containerID="cri-o://beb10cb04dcef2233d093c1108e2011268aa86eacb58ab52c76b448c7da9f72a" gracePeriod=2 Feb 03 12:53:08 crc kubenswrapper[4820]: I0203 12:53:08.751539 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cbmkv" Feb 03 12:53:08 crc kubenswrapper[4820]: I0203 12:53:08.926919 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f69d2ce-a2c0-43df-a770-70e531c2b1e2-catalog-content\") pod \"1f69d2ce-a2c0-43df-a770-70e531c2b1e2\" (UID: \"1f69d2ce-a2c0-43df-a770-70e531c2b1e2\") " Feb 03 12:53:08 crc kubenswrapper[4820]: I0203 12:53:08.927418 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f69d2ce-a2c0-43df-a770-70e531c2b1e2-utilities\") pod \"1f69d2ce-a2c0-43df-a770-70e531c2b1e2\" (UID: \"1f69d2ce-a2c0-43df-a770-70e531c2b1e2\") " Feb 03 12:53:08 crc kubenswrapper[4820]: I0203 12:53:08.927653 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mjpnt\" (UniqueName: \"kubernetes.io/projected/1f69d2ce-a2c0-43df-a770-70e531c2b1e2-kube-api-access-mjpnt\") pod \"1f69d2ce-a2c0-43df-a770-70e531c2b1e2\" (UID: \"1f69d2ce-a2c0-43df-a770-70e531c2b1e2\") " Feb 03 12:53:08 crc kubenswrapper[4820]: I0203 12:53:08.928171 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f69d2ce-a2c0-43df-a770-70e531c2b1e2-utilities" (OuterVolumeSpecName: "utilities") pod "1f69d2ce-a2c0-43df-a770-70e531c2b1e2" (UID: "1f69d2ce-a2c0-43df-a770-70e531c2b1e2"). InnerVolumeSpecName "utilities". 
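
The startup-probe failure above ("failed to connect service \":50051\" within 1s") means the registry-server's gRPC port was not yet accepting connections; ten seconds later the same probe reports "started". A stand-in with the same failure mode is a plain TCP dial with a one-second budget (the real check additionally speaks a health protocol once connected; host and port layout here are assumptions for illustration):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Port taken from the probe output above; the loopback host is assumed.
	conn, err := net.DialTimeout("tcp", "127.0.0.1:50051", 1*time.Second)
	if err != nil {
		fmt.Println("probe failure:", err) // e.g. timeout or connection refused
		return
	}
	conn.Close()
	fmt.Println("probe success: port is accepting connections")
}
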
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 12:53:08 crc kubenswrapper[4820]: I0203 12:53:08.928871 4820 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f69d2ce-a2c0-43df-a770-70e531c2b1e2-utilities\") on node \"crc\" DevicePath \"\"" Feb 03 12:53:08 crc kubenswrapper[4820]: I0203 12:53:08.941801 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f69d2ce-a2c0-43df-a770-70e531c2b1e2-kube-api-access-mjpnt" (OuterVolumeSpecName: "kube-api-access-mjpnt") pod "1f69d2ce-a2c0-43df-a770-70e531c2b1e2" (UID: "1f69d2ce-a2c0-43df-a770-70e531c2b1e2"). InnerVolumeSpecName "kube-api-access-mjpnt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:53:09 crc kubenswrapper[4820]: I0203 12:53:09.031053 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mjpnt\" (UniqueName: \"kubernetes.io/projected/1f69d2ce-a2c0-43df-a770-70e531c2b1e2-kube-api-access-mjpnt\") on node \"crc\" DevicePath \"\"" Feb 03 12:53:09 crc kubenswrapper[4820]: I0203 12:53:09.058847 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f69d2ce-a2c0-43df-a770-70e531c2b1e2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1f69d2ce-a2c0-43df-a770-70e531c2b1e2" (UID: "1f69d2ce-a2c0-43df-a770-70e531c2b1e2"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 12:53:09 crc kubenswrapper[4820]: I0203 12:53:09.133299 4820 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f69d2ce-a2c0-43df-a770-70e531c2b1e2-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 03 12:53:09 crc kubenswrapper[4820]: I0203 12:53:09.316934 4820 generic.go:334] "Generic (PLEG): container finished" podID="1f69d2ce-a2c0-43df-a770-70e531c2b1e2" containerID="beb10cb04dcef2233d093c1108e2011268aa86eacb58ab52c76b448c7da9f72a" exitCode=0 Feb 03 12:53:09 crc kubenswrapper[4820]: I0203 12:53:09.316990 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cbmkv" event={"ID":"1f69d2ce-a2c0-43df-a770-70e531c2b1e2","Type":"ContainerDied","Data":"beb10cb04dcef2233d093c1108e2011268aa86eacb58ab52c76b448c7da9f72a"} Feb 03 12:53:09 crc kubenswrapper[4820]: I0203 12:53:09.317056 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cbmkv" event={"ID":"1f69d2ce-a2c0-43df-a770-70e531c2b1e2","Type":"ContainerDied","Data":"b28b9c5181984ad11eb2ad904bf484494dacf682dea979420c7e006390739004"} Feb 03 12:53:09 crc kubenswrapper[4820]: I0203 12:53:09.317082 4820 scope.go:117] "RemoveContainer" containerID="beb10cb04dcef2233d093c1108e2011268aa86eacb58ab52c76b448c7da9f72a" Feb 03 12:53:09 crc kubenswrapper[4820]: I0203 12:53:09.317053 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-cbmkv" Feb 03 12:53:09 crc kubenswrapper[4820]: I0203 12:53:09.350844 4820 scope.go:117] "RemoveContainer" containerID="b81d5b22c74eb9bd79200deee7843bc6eb58d22fb47c9ba1cd7490936f6bcbbe" Feb 03 12:53:09 crc kubenswrapper[4820]: I0203 12:53:09.352555 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-cbmkv"] Feb 03 12:53:09 crc kubenswrapper[4820]: I0203 12:53:09.365658 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-cbmkv"] Feb 03 12:53:09 crc kubenswrapper[4820]: I0203 12:53:09.376320 4820 scope.go:117] "RemoveContainer" containerID="5bfe5555076e82544aa95439abd87c5310991c74bf88c2cbacdde9bd8ab43339" Feb 03 12:53:09 crc kubenswrapper[4820]: I0203 12:53:09.421744 4820 scope.go:117] "RemoveContainer" containerID="beb10cb04dcef2233d093c1108e2011268aa86eacb58ab52c76b448c7da9f72a" Feb 03 12:53:09 crc kubenswrapper[4820]: E0203 12:53:09.422319 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"beb10cb04dcef2233d093c1108e2011268aa86eacb58ab52c76b448c7da9f72a\": container with ID starting with beb10cb04dcef2233d093c1108e2011268aa86eacb58ab52c76b448c7da9f72a not found: ID does not exist" containerID="beb10cb04dcef2233d093c1108e2011268aa86eacb58ab52c76b448c7da9f72a" Feb 03 12:53:09 crc kubenswrapper[4820]: I0203 12:53:09.422382 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"beb10cb04dcef2233d093c1108e2011268aa86eacb58ab52c76b448c7da9f72a"} err="failed to get container status \"beb10cb04dcef2233d093c1108e2011268aa86eacb58ab52c76b448c7da9f72a\": rpc error: code = NotFound desc = could not find container \"beb10cb04dcef2233d093c1108e2011268aa86eacb58ab52c76b448c7da9f72a\": container with ID starting with beb10cb04dcef2233d093c1108e2011268aa86eacb58ab52c76b448c7da9f72a not found: ID does not exist" Feb 03 12:53:09 crc kubenswrapper[4820]: I0203 12:53:09.422419 4820 scope.go:117] "RemoveContainer" containerID="b81d5b22c74eb9bd79200deee7843bc6eb58d22fb47c9ba1cd7490936f6bcbbe" Feb 03 12:53:09 crc kubenswrapper[4820]: E0203 12:53:09.422716 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b81d5b22c74eb9bd79200deee7843bc6eb58d22fb47c9ba1cd7490936f6bcbbe\": container with ID starting with b81d5b22c74eb9bd79200deee7843bc6eb58d22fb47c9ba1cd7490936f6bcbbe not found: ID does not exist" containerID="b81d5b22c74eb9bd79200deee7843bc6eb58d22fb47c9ba1cd7490936f6bcbbe" Feb 03 12:53:09 crc kubenswrapper[4820]: I0203 12:53:09.422742 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b81d5b22c74eb9bd79200deee7843bc6eb58d22fb47c9ba1cd7490936f6bcbbe"} err="failed to get container status \"b81d5b22c74eb9bd79200deee7843bc6eb58d22fb47c9ba1cd7490936f6bcbbe\": rpc error: code = NotFound desc = could not find container \"b81d5b22c74eb9bd79200deee7843bc6eb58d22fb47c9ba1cd7490936f6bcbbe\": container with ID starting with b81d5b22c74eb9bd79200deee7843bc6eb58d22fb47c9ba1cd7490936f6bcbbe not found: ID does not exist" Feb 03 12:53:09 crc kubenswrapper[4820]: I0203 12:53:09.422755 4820 scope.go:117] "RemoveContainer" containerID="5bfe5555076e82544aa95439abd87c5310991c74bf88c2cbacdde9bd8ab43339" Feb 03 12:53:09 crc kubenswrapper[4820]: E0203 12:53:09.423495 4820 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"5bfe5555076e82544aa95439abd87c5310991c74bf88c2cbacdde9bd8ab43339\": container with ID starting with 5bfe5555076e82544aa95439abd87c5310991c74bf88c2cbacdde9bd8ab43339 not found: ID does not exist" containerID="5bfe5555076e82544aa95439abd87c5310991c74bf88c2cbacdde9bd8ab43339" Feb 03 12:53:09 crc kubenswrapper[4820]: I0203 12:53:09.423517 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5bfe5555076e82544aa95439abd87c5310991c74bf88c2cbacdde9bd8ab43339"} err="failed to get container status \"5bfe5555076e82544aa95439abd87c5310991c74bf88c2cbacdde9bd8ab43339\": rpc error: code = NotFound desc = could not find container \"5bfe5555076e82544aa95439abd87c5310991c74bf88c2cbacdde9bd8ab43339\": container with ID starting with 5bfe5555076e82544aa95439abd87c5310991c74bf88c2cbacdde9bd8ab43339 not found: ID does not exist" Feb 03 12:53:11 crc kubenswrapper[4820]: I0203 12:53:11.160558 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f69d2ce-a2c0-43df-a770-70e531c2b1e2" path="/var/lib/kubelet/pods/1f69d2ce-a2c0-43df-a770-70e531c2b1e2/volumes" Feb 03 12:53:31 crc kubenswrapper[4820]: I0203 12:53:31.365790 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qj7xr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 12:53:31 crc kubenswrapper[4820]: I0203 12:53:31.366417 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 12:54:01 crc kubenswrapper[4820]: I0203 12:54:01.366200 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qj7xr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 12:54:01 crc kubenswrapper[4820]: I0203 12:54:01.367857 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 12:54:09 crc kubenswrapper[4820]: I0203 12:54:09.536063 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-jwgdl"] Feb 03 12:54:09 crc kubenswrapper[4820]: E0203 12:54:09.537103 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f69d2ce-a2c0-43df-a770-70e531c2b1e2" containerName="extract-content" Feb 03 12:54:09 crc kubenswrapper[4820]: I0203 12:54:09.537137 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f69d2ce-a2c0-43df-a770-70e531c2b1e2" containerName="extract-content" Feb 03 12:54:09 crc kubenswrapper[4820]: E0203 12:54:09.537154 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f69d2ce-a2c0-43df-a770-70e531c2b1e2" containerName="registry-server" Feb 03 12:54:09 crc kubenswrapper[4820]: I0203 12:54:09.537161 4820 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="1f69d2ce-a2c0-43df-a770-70e531c2b1e2" containerName="registry-server" Feb 03 12:54:09 crc kubenswrapper[4820]: E0203 12:54:09.537192 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f69d2ce-a2c0-43df-a770-70e531c2b1e2" containerName="extract-utilities" Feb 03 12:54:09 crc kubenswrapper[4820]: I0203 12:54:09.537198 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f69d2ce-a2c0-43df-a770-70e531c2b1e2" containerName="extract-utilities" Feb 03 12:54:09 crc kubenswrapper[4820]: I0203 12:54:09.537449 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f69d2ce-a2c0-43df-a770-70e531c2b1e2" containerName="registry-server" Feb 03 12:54:09 crc kubenswrapper[4820]: I0203 12:54:09.539097 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jwgdl" Feb 03 12:54:09 crc kubenswrapper[4820]: I0203 12:54:09.559185 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jwgdl"] Feb 03 12:54:09 crc kubenswrapper[4820]: I0203 12:54:09.718580 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xzl4q\" (UniqueName: \"kubernetes.io/projected/1e0af94b-c293-4db2-bc26-b61adf3ac081-kube-api-access-xzl4q\") pod \"community-operators-jwgdl\" (UID: \"1e0af94b-c293-4db2-bc26-b61adf3ac081\") " pod="openshift-marketplace/community-operators-jwgdl" Feb 03 12:54:09 crc kubenswrapper[4820]: I0203 12:54:09.718713 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e0af94b-c293-4db2-bc26-b61adf3ac081-catalog-content\") pod \"community-operators-jwgdl\" (UID: \"1e0af94b-c293-4db2-bc26-b61adf3ac081\") " pod="openshift-marketplace/community-operators-jwgdl" Feb 03 12:54:09 crc kubenswrapper[4820]: I0203 12:54:09.718798 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e0af94b-c293-4db2-bc26-b61adf3ac081-utilities\") pod \"community-operators-jwgdl\" (UID: \"1e0af94b-c293-4db2-bc26-b61adf3ac081\") " pod="openshift-marketplace/community-operators-jwgdl" Feb 03 12:54:09 crc kubenswrapper[4820]: I0203 12:54:09.820338 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e0af94b-c293-4db2-bc26-b61adf3ac081-utilities\") pod \"community-operators-jwgdl\" (UID: \"1e0af94b-c293-4db2-bc26-b61adf3ac081\") " pod="openshift-marketplace/community-operators-jwgdl" Feb 03 12:54:09 crc kubenswrapper[4820]: I0203 12:54:09.820512 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xzl4q\" (UniqueName: \"kubernetes.io/projected/1e0af94b-c293-4db2-bc26-b61adf3ac081-kube-api-access-xzl4q\") pod \"community-operators-jwgdl\" (UID: \"1e0af94b-c293-4db2-bc26-b61adf3ac081\") " pod="openshift-marketplace/community-operators-jwgdl" Feb 03 12:54:09 crc kubenswrapper[4820]: I0203 12:54:09.820551 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e0af94b-c293-4db2-bc26-b61adf3ac081-catalog-content\") pod \"community-operators-jwgdl\" (UID: \"1e0af94b-c293-4db2-bc26-b61adf3ac081\") " pod="openshift-marketplace/community-operators-jwgdl" Feb 03 12:54:09 crc kubenswrapper[4820]: I0203 
12:54:09.821044 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e0af94b-c293-4db2-bc26-b61adf3ac081-utilities\") pod \"community-operators-jwgdl\" (UID: \"1e0af94b-c293-4db2-bc26-b61adf3ac081\") " pod="openshift-marketplace/community-operators-jwgdl" Feb 03 12:54:09 crc kubenswrapper[4820]: I0203 12:54:09.821300 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e0af94b-c293-4db2-bc26-b61adf3ac081-catalog-content\") pod \"community-operators-jwgdl\" (UID: \"1e0af94b-c293-4db2-bc26-b61adf3ac081\") " pod="openshift-marketplace/community-operators-jwgdl" Feb 03 12:54:09 crc kubenswrapper[4820]: I0203 12:54:09.841622 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xzl4q\" (UniqueName: \"kubernetes.io/projected/1e0af94b-c293-4db2-bc26-b61adf3ac081-kube-api-access-xzl4q\") pod \"community-operators-jwgdl\" (UID: \"1e0af94b-c293-4db2-bc26-b61adf3ac081\") " pod="openshift-marketplace/community-operators-jwgdl" Feb 03 12:54:09 crc kubenswrapper[4820]: I0203 12:54:09.866684 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jwgdl" Feb 03 12:54:10 crc kubenswrapper[4820]: I0203 12:54:10.490999 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jwgdl"] Feb 03 12:54:11 crc kubenswrapper[4820]: I0203 12:54:11.125373 4820 generic.go:334] "Generic (PLEG): container finished" podID="1e0af94b-c293-4db2-bc26-b61adf3ac081" containerID="458479a72455ef71ed4b96e9668c4f7dc070f1d80a1eb33056e8aa1533a761ba" exitCode=0 Feb 03 12:54:11 crc kubenswrapper[4820]: I0203 12:54:11.125564 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jwgdl" event={"ID":"1e0af94b-c293-4db2-bc26-b61adf3ac081","Type":"ContainerDied","Data":"458479a72455ef71ed4b96e9668c4f7dc070f1d80a1eb33056e8aa1533a761ba"} Feb 03 12:54:11 crc kubenswrapper[4820]: I0203 12:54:11.125746 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jwgdl" event={"ID":"1e0af94b-c293-4db2-bc26-b61adf3ac081","Type":"ContainerStarted","Data":"efd1f54743ed1018c69e448099bb759542b6eb3edf74cebc3959977aafc638af"} Feb 03 12:54:12 crc kubenswrapper[4820]: I0203 12:54:12.135431 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jwgdl" event={"ID":"1e0af94b-c293-4db2-bc26-b61adf3ac081","Type":"ContainerStarted","Data":"b97d2fce206447a5c177050944fa5ce3856c5fbf6401423c487495f62da5a390"} Feb 03 12:54:13 crc kubenswrapper[4820]: I0203 12:54:13.303625 4820 generic.go:334] "Generic (PLEG): container finished" podID="1e0af94b-c293-4db2-bc26-b61adf3ac081" containerID="b97d2fce206447a5c177050944fa5ce3856c5fbf6401423c487495f62da5a390" exitCode=0 Feb 03 12:54:13 crc kubenswrapper[4820]: I0203 12:54:13.303791 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jwgdl" event={"ID":"1e0af94b-c293-4db2-bc26-b61adf3ac081","Type":"ContainerDied","Data":"b97d2fce206447a5c177050944fa5ce3856c5fbf6401423c487495f62da5a390"} Feb 03 12:54:14 crc kubenswrapper[4820]: I0203 12:54:14.316409 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jwgdl" 
event={"ID":"1e0af94b-c293-4db2-bc26-b61adf3ac081","Type":"ContainerStarted","Data":"db05e6f7f436c7bb6a84a08c103e4cfb15ccd4651507d71c1ee4dedefda66712"} Feb 03 12:54:14 crc kubenswrapper[4820]: I0203 12:54:14.350873 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-jwgdl" podStartSLOduration=2.783671094 podStartE2EDuration="5.350844902s" podCreationTimestamp="2026-02-03 12:54:09 +0000 UTC" firstStartedPulling="2026-02-03 12:54:11.127689973 +0000 UTC m=+2968.650765837" lastFinishedPulling="2026-02-03 12:54:13.694863781 +0000 UTC m=+2971.217939645" observedRunningTime="2026-02-03 12:54:14.337404259 +0000 UTC m=+2971.860480133" watchObservedRunningTime="2026-02-03 12:54:14.350844902 +0000 UTC m=+2971.873920786" Feb 03 12:54:19 crc kubenswrapper[4820]: I0203 12:54:19.866982 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-jwgdl" Feb 03 12:54:19 crc kubenswrapper[4820]: I0203 12:54:19.867633 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-jwgdl" Feb 03 12:54:19 crc kubenswrapper[4820]: I0203 12:54:19.918638 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-jwgdl" Feb 03 12:54:20 crc kubenswrapper[4820]: I0203 12:54:20.625911 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-jwgdl" Feb 03 12:54:20 crc kubenswrapper[4820]: I0203 12:54:20.682724 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jwgdl"] Feb 03 12:54:22 crc kubenswrapper[4820]: I0203 12:54:22.651934 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-jwgdl" podUID="1e0af94b-c293-4db2-bc26-b61adf3ac081" containerName="registry-server" containerID="cri-o://db05e6f7f436c7bb6a84a08c103e4cfb15ccd4651507d71c1ee4dedefda66712" gracePeriod=2 Feb 03 12:54:23 crc kubenswrapper[4820]: I0203 12:54:23.619814 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jwgdl" Feb 03 12:54:23 crc kubenswrapper[4820]: I0203 12:54:23.669127 4820 generic.go:334] "Generic (PLEG): container finished" podID="1e0af94b-c293-4db2-bc26-b61adf3ac081" containerID="db05e6f7f436c7bb6a84a08c103e4cfb15ccd4651507d71c1ee4dedefda66712" exitCode=0 Feb 03 12:54:23 crc kubenswrapper[4820]: I0203 12:54:23.669180 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jwgdl" event={"ID":"1e0af94b-c293-4db2-bc26-b61adf3ac081","Type":"ContainerDied","Data":"db05e6f7f436c7bb6a84a08c103e4cfb15ccd4651507d71c1ee4dedefda66712"} Feb 03 12:54:23 crc kubenswrapper[4820]: I0203 12:54:23.669217 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jwgdl" event={"ID":"1e0af94b-c293-4db2-bc26-b61adf3ac081","Type":"ContainerDied","Data":"efd1f54743ed1018c69e448099bb759542b6eb3edf74cebc3959977aafc638af"} Feb 03 12:54:23 crc kubenswrapper[4820]: I0203 12:54:23.669240 4820 scope.go:117] "RemoveContainer" containerID="db05e6f7f436c7bb6a84a08c103e4cfb15ccd4651507d71c1ee4dedefda66712" Feb 03 12:54:23 crc kubenswrapper[4820]: I0203 12:54:23.669338 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-jwgdl" Feb 03 12:54:23 crc kubenswrapper[4820]: I0203 12:54:23.695484 4820 scope.go:117] "RemoveContainer" containerID="b97d2fce206447a5c177050944fa5ce3856c5fbf6401423c487495f62da5a390" Feb 03 12:54:23 crc kubenswrapper[4820]: I0203 12:54:23.718439 4820 scope.go:117] "RemoveContainer" containerID="458479a72455ef71ed4b96e9668c4f7dc070f1d80a1eb33056e8aa1533a761ba" Feb 03 12:54:23 crc kubenswrapper[4820]: I0203 12:54:23.766474 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xzl4q\" (UniqueName: \"kubernetes.io/projected/1e0af94b-c293-4db2-bc26-b61adf3ac081-kube-api-access-xzl4q\") pod \"1e0af94b-c293-4db2-bc26-b61adf3ac081\" (UID: \"1e0af94b-c293-4db2-bc26-b61adf3ac081\") " Feb 03 12:54:23 crc kubenswrapper[4820]: I0203 12:54:23.766548 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e0af94b-c293-4db2-bc26-b61adf3ac081-catalog-content\") pod \"1e0af94b-c293-4db2-bc26-b61adf3ac081\" (UID: \"1e0af94b-c293-4db2-bc26-b61adf3ac081\") " Feb 03 12:54:23 crc kubenswrapper[4820]: I0203 12:54:23.766836 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e0af94b-c293-4db2-bc26-b61adf3ac081-utilities\") pod \"1e0af94b-c293-4db2-bc26-b61adf3ac081\" (UID: \"1e0af94b-c293-4db2-bc26-b61adf3ac081\") " Feb 03 12:54:23 crc kubenswrapper[4820]: I0203 12:54:23.768915 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1e0af94b-c293-4db2-bc26-b61adf3ac081-utilities" (OuterVolumeSpecName: "utilities") pod "1e0af94b-c293-4db2-bc26-b61adf3ac081" (UID: "1e0af94b-c293-4db2-bc26-b61adf3ac081"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 12:54:23 crc kubenswrapper[4820]: I0203 12:54:23.930601 4820 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e0af94b-c293-4db2-bc26-b61adf3ac081-utilities\") on node \"crc\" DevicePath \"\"" Feb 03 12:54:23 crc kubenswrapper[4820]: I0203 12:54:23.931613 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e0af94b-c293-4db2-bc26-b61adf3ac081-kube-api-access-xzl4q" (OuterVolumeSpecName: "kube-api-access-xzl4q") pod "1e0af94b-c293-4db2-bc26-b61adf3ac081" (UID: "1e0af94b-c293-4db2-bc26-b61adf3ac081"). InnerVolumeSpecName "kube-api-access-xzl4q". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:54:23 crc kubenswrapper[4820]: I0203 12:54:23.984284 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1e0af94b-c293-4db2-bc26-b61adf3ac081-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1e0af94b-c293-4db2-bc26-b61adf3ac081" (UID: "1e0af94b-c293-4db2-bc26-b61adf3ac081"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 12:54:23 crc kubenswrapper[4820]: I0203 12:54:23.993230 4820 scope.go:117] "RemoveContainer" containerID="db05e6f7f436c7bb6a84a08c103e4cfb15ccd4651507d71c1ee4dedefda66712" Feb 03 12:54:23 crc kubenswrapper[4820]: E0203 12:54:23.993950 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"db05e6f7f436c7bb6a84a08c103e4cfb15ccd4651507d71c1ee4dedefda66712\": container with ID starting with db05e6f7f436c7bb6a84a08c103e4cfb15ccd4651507d71c1ee4dedefda66712 not found: ID does not exist" containerID="db05e6f7f436c7bb6a84a08c103e4cfb15ccd4651507d71c1ee4dedefda66712" Feb 03 12:54:23 crc kubenswrapper[4820]: I0203 12:54:23.994025 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db05e6f7f436c7bb6a84a08c103e4cfb15ccd4651507d71c1ee4dedefda66712"} err="failed to get container status \"db05e6f7f436c7bb6a84a08c103e4cfb15ccd4651507d71c1ee4dedefda66712\": rpc error: code = NotFound desc = could not find container \"db05e6f7f436c7bb6a84a08c103e4cfb15ccd4651507d71c1ee4dedefda66712\": container with ID starting with db05e6f7f436c7bb6a84a08c103e4cfb15ccd4651507d71c1ee4dedefda66712 not found: ID does not exist" Feb 03 12:54:23 crc kubenswrapper[4820]: I0203 12:54:23.994068 4820 scope.go:117] "RemoveContainer" containerID="b97d2fce206447a5c177050944fa5ce3856c5fbf6401423c487495f62da5a390" Feb 03 12:54:23 crc kubenswrapper[4820]: E0203 12:54:23.994636 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b97d2fce206447a5c177050944fa5ce3856c5fbf6401423c487495f62da5a390\": container with ID starting with b97d2fce206447a5c177050944fa5ce3856c5fbf6401423c487495f62da5a390 not found: ID does not exist" containerID="b97d2fce206447a5c177050944fa5ce3856c5fbf6401423c487495f62da5a390" Feb 03 12:54:23 crc kubenswrapper[4820]: I0203 12:54:23.994699 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b97d2fce206447a5c177050944fa5ce3856c5fbf6401423c487495f62da5a390"} err="failed to get container status \"b97d2fce206447a5c177050944fa5ce3856c5fbf6401423c487495f62da5a390\": rpc error: code = NotFound desc = could not find container \"b97d2fce206447a5c177050944fa5ce3856c5fbf6401423c487495f62da5a390\": container with ID starting with b97d2fce206447a5c177050944fa5ce3856c5fbf6401423c487495f62da5a390 not found: ID does not exist" Feb 03 12:54:23 crc kubenswrapper[4820]: I0203 12:54:23.994725 4820 scope.go:117] "RemoveContainer" containerID="458479a72455ef71ed4b96e9668c4f7dc070f1d80a1eb33056e8aa1533a761ba" Feb 03 12:54:23 crc kubenswrapper[4820]: E0203 12:54:23.995601 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"458479a72455ef71ed4b96e9668c4f7dc070f1d80a1eb33056e8aa1533a761ba\": container with ID starting with 458479a72455ef71ed4b96e9668c4f7dc070f1d80a1eb33056e8aa1533a761ba not found: ID does not exist" containerID="458479a72455ef71ed4b96e9668c4f7dc070f1d80a1eb33056e8aa1533a761ba" Feb 03 12:54:23 crc kubenswrapper[4820]: I0203 12:54:23.995649 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"458479a72455ef71ed4b96e9668c4f7dc070f1d80a1eb33056e8aa1533a761ba"} err="failed to get container status \"458479a72455ef71ed4b96e9668c4f7dc070f1d80a1eb33056e8aa1533a761ba\": rpc error: code = NotFound desc = could not 
find container \"458479a72455ef71ed4b96e9668c4f7dc070f1d80a1eb33056e8aa1533a761ba\": container with ID starting with 458479a72455ef71ed4b96e9668c4f7dc070f1d80a1eb33056e8aa1533a761ba not found: ID does not exist" Feb 03 12:54:24 crc kubenswrapper[4820]: I0203 12:54:24.033719 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xzl4q\" (UniqueName: \"kubernetes.io/projected/1e0af94b-c293-4db2-bc26-b61adf3ac081-kube-api-access-xzl4q\") on node \"crc\" DevicePath \"\"" Feb 03 12:54:24 crc kubenswrapper[4820]: I0203 12:54:24.033753 4820 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e0af94b-c293-4db2-bc26-b61adf3ac081-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 03 12:54:24 crc kubenswrapper[4820]: I0203 12:54:24.307469 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jwgdl"] Feb 03 12:54:24 crc kubenswrapper[4820]: I0203 12:54:24.317009 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-jwgdl"] Feb 03 12:54:25 crc kubenswrapper[4820]: I0203 12:54:25.165850 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1e0af94b-c293-4db2-bc26-b61adf3ac081" path="/var/lib/kubelet/pods/1e0af94b-c293-4db2-bc26-b61adf3ac081/volumes" Feb 03 12:54:31 crc kubenswrapper[4820]: I0203 12:54:31.366021 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qj7xr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 12:54:31 crc kubenswrapper[4820]: I0203 12:54:31.366336 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 12:54:31 crc kubenswrapper[4820]: I0203 12:54:31.366398 4820 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" Feb 03 12:54:31 crc kubenswrapper[4820]: I0203 12:54:31.367378 4820 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"108356beea18d8aae66e65b6c7634daebf0864c97b64061528133a985322eb38"} pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 03 12:54:31 crc kubenswrapper[4820]: I0203 12:54:31.367446 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" containerName="machine-config-daemon" containerID="cri-o://108356beea18d8aae66e65b6c7634daebf0864c97b64061528133a985322eb38" gracePeriod=600 Feb 03 12:54:31 crc kubenswrapper[4820]: E0203 12:54:31.494863 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qj7xr_openshift-machine-config-operator(2c02def6-29f2-448e-80ec-0c8ee290f053)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" Feb 03 12:54:31 crc kubenswrapper[4820]: I0203 12:54:31.985597 4820 generic.go:334] "Generic (PLEG): container finished" podID="2c02def6-29f2-448e-80ec-0c8ee290f053" containerID="108356beea18d8aae66e65b6c7634daebf0864c97b64061528133a985322eb38" exitCode=0 Feb 03 12:54:31 crc kubenswrapper[4820]: I0203 12:54:31.985682 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" event={"ID":"2c02def6-29f2-448e-80ec-0c8ee290f053","Type":"ContainerDied","Data":"108356beea18d8aae66e65b6c7634daebf0864c97b64061528133a985322eb38"} Feb 03 12:54:31 crc kubenswrapper[4820]: I0203 12:54:31.985751 4820 scope.go:117] "RemoveContainer" containerID="f3746dd5b1ad5044813133335761f15989349419ea5f7ec425a2bd58c8519583" Feb 03 12:54:31 crc kubenswrapper[4820]: I0203 12:54:31.986493 4820 scope.go:117] "RemoveContainer" containerID="108356beea18d8aae66e65b6c7634daebf0864c97b64061528133a985322eb38" Feb 03 12:54:31 crc kubenswrapper[4820]: E0203 12:54:31.987011 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qj7xr_openshift-machine-config-operator(2c02def6-29f2-448e-80ec-0c8ee290f053)\"" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" Feb 03 12:54:33 crc kubenswrapper[4820]: I0203 12:54:33.183592 4820 generic.go:334] "Generic (PLEG): container finished" podID="772be0ab-717e-4a25-a481-95a4b1cd0c07" containerID="9f32ee95d3ca2e074c09fef455a9cd175f434e0abd870d026ce76e42326a2054" exitCode=0 Feb 03 12:54:33 crc kubenswrapper[4820]: I0203 12:54:33.183660 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-mqwx5" event={"ID":"772be0ab-717e-4a25-a481-95a4b1cd0c07","Type":"ContainerDied","Data":"9f32ee95d3ca2e074c09fef455a9cd175f434e0abd870d026ce76e42326a2054"} Feb 03 12:54:34 crc kubenswrapper[4820]: I0203 12:54:34.863388 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-mqwx5" Feb 03 12:54:35 crc kubenswrapper[4820]: I0203 12:54:35.023935 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-67rnl\" (UniqueName: \"kubernetes.io/projected/772be0ab-717e-4a25-a481-95a4b1cd0c07-kube-api-access-67rnl\") pod \"772be0ab-717e-4a25-a481-95a4b1cd0c07\" (UID: \"772be0ab-717e-4a25-a481-95a4b1cd0c07\") " Feb 03 12:54:35 crc kubenswrapper[4820]: I0203 12:54:35.024108 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/772be0ab-717e-4a25-a481-95a4b1cd0c07-ssh-key-openstack-edpm-ipam\") pod \"772be0ab-717e-4a25-a481-95a4b1cd0c07\" (UID: \"772be0ab-717e-4a25-a481-95a4b1cd0c07\") " Feb 03 12:54:35 crc kubenswrapper[4820]: I0203 12:54:35.024210 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/772be0ab-717e-4a25-a481-95a4b1cd0c07-libvirt-secret-0\") pod \"772be0ab-717e-4a25-a481-95a4b1cd0c07\" (UID: \"772be0ab-717e-4a25-a481-95a4b1cd0c07\") " Feb 03 12:54:35 crc kubenswrapper[4820]: I0203 12:54:35.024294 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/772be0ab-717e-4a25-a481-95a4b1cd0c07-inventory\") pod \"772be0ab-717e-4a25-a481-95a4b1cd0c07\" (UID: \"772be0ab-717e-4a25-a481-95a4b1cd0c07\") " Feb 03 12:54:35 crc kubenswrapper[4820]: I0203 12:54:35.024341 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/772be0ab-717e-4a25-a481-95a4b1cd0c07-libvirt-combined-ca-bundle\") pod \"772be0ab-717e-4a25-a481-95a4b1cd0c07\" (UID: \"772be0ab-717e-4a25-a481-95a4b1cd0c07\") " Feb 03 12:54:35 crc kubenswrapper[4820]: I0203 12:54:35.031161 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/772be0ab-717e-4a25-a481-95a4b1cd0c07-kube-api-access-67rnl" (OuterVolumeSpecName: "kube-api-access-67rnl") pod "772be0ab-717e-4a25-a481-95a4b1cd0c07" (UID: "772be0ab-717e-4a25-a481-95a4b1cd0c07"). InnerVolumeSpecName "kube-api-access-67rnl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:54:35 crc kubenswrapper[4820]: I0203 12:54:35.035613 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/772be0ab-717e-4a25-a481-95a4b1cd0c07-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "772be0ab-717e-4a25-a481-95a4b1cd0c07" (UID: "772be0ab-717e-4a25-a481-95a4b1cd0c07"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:54:35 crc kubenswrapper[4820]: I0203 12:54:35.057255 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/772be0ab-717e-4a25-a481-95a4b1cd0c07-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "772be0ab-717e-4a25-a481-95a4b1cd0c07" (UID: "772be0ab-717e-4a25-a481-95a4b1cd0c07"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:54:35 crc kubenswrapper[4820]: I0203 12:54:35.057600 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/772be0ab-717e-4a25-a481-95a4b1cd0c07-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "772be0ab-717e-4a25-a481-95a4b1cd0c07" (UID: "772be0ab-717e-4a25-a481-95a4b1cd0c07"). InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:54:35 crc kubenswrapper[4820]: I0203 12:54:35.059738 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/772be0ab-717e-4a25-a481-95a4b1cd0c07-inventory" (OuterVolumeSpecName: "inventory") pod "772be0ab-717e-4a25-a481-95a4b1cd0c07" (UID: "772be0ab-717e-4a25-a481-95a4b1cd0c07"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:54:35 crc kubenswrapper[4820]: I0203 12:54:35.126792 4820 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/772be0ab-717e-4a25-a481-95a4b1cd0c07-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 03 12:54:35 crc kubenswrapper[4820]: I0203 12:54:35.126828 4820 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/772be0ab-717e-4a25-a481-95a4b1cd0c07-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Feb 03 12:54:35 crc kubenswrapper[4820]: I0203 12:54:35.126838 4820 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/772be0ab-717e-4a25-a481-95a4b1cd0c07-inventory\") on node \"crc\" DevicePath \"\"" Feb 03 12:54:35 crc kubenswrapper[4820]: I0203 12:54:35.126848 4820 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/772be0ab-717e-4a25-a481-95a4b1cd0c07-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 12:54:35 crc kubenswrapper[4820]: I0203 12:54:35.126857 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-67rnl\" (UniqueName: \"kubernetes.io/projected/772be0ab-717e-4a25-a481-95a4b1cd0c07-kube-api-access-67rnl\") on node \"crc\" DevicePath \"\"" Feb 03 12:54:35 crc kubenswrapper[4820]: I0203 12:54:35.205373 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-mqwx5" event={"ID":"772be0ab-717e-4a25-a481-95a4b1cd0c07","Type":"ContainerDied","Data":"f4591158c246ad3eb3e7cd1c2bd1d7b8a1249491c21c7f0e1926c675443b6cdb"} Feb 03 12:54:35 crc kubenswrapper[4820]: I0203 12:54:35.205422 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f4591158c246ad3eb3e7cd1c2bd1d7b8a1249491c21c7f0e1926c675443b6cdb" Feb 03 12:54:35 crc kubenswrapper[4820]: I0203 12:54:35.205435 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-mqwx5" Feb 03 12:54:35 crc kubenswrapper[4820]: I0203 12:54:35.336608 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-6pwfl"] Feb 03 12:54:35 crc kubenswrapper[4820]: E0203 12:54:35.337390 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e0af94b-c293-4db2-bc26-b61adf3ac081" containerName="extract-content" Feb 03 12:54:35 crc kubenswrapper[4820]: I0203 12:54:35.337467 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e0af94b-c293-4db2-bc26-b61adf3ac081" containerName="extract-content" Feb 03 12:54:35 crc kubenswrapper[4820]: E0203 12:54:35.337493 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e0af94b-c293-4db2-bc26-b61adf3ac081" containerName="extract-utilities" Feb 03 12:54:35 crc kubenswrapper[4820]: I0203 12:54:35.337505 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e0af94b-c293-4db2-bc26-b61adf3ac081" containerName="extract-utilities" Feb 03 12:54:35 crc kubenswrapper[4820]: E0203 12:54:35.337520 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="772be0ab-717e-4a25-a481-95a4b1cd0c07" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Feb 03 12:54:35 crc kubenswrapper[4820]: I0203 12:54:35.337531 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="772be0ab-717e-4a25-a481-95a4b1cd0c07" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Feb 03 12:54:35 crc kubenswrapper[4820]: E0203 12:54:35.337561 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e0af94b-c293-4db2-bc26-b61adf3ac081" containerName="registry-server" Feb 03 12:54:35 crc kubenswrapper[4820]: I0203 12:54:35.337568 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e0af94b-c293-4db2-bc26-b61adf3ac081" containerName="registry-server" Feb 03 12:54:35 crc kubenswrapper[4820]: I0203 12:54:35.337961 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e0af94b-c293-4db2-bc26-b61adf3ac081" containerName="registry-server" Feb 03 12:54:35 crc kubenswrapper[4820]: I0203 12:54:35.338001 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="772be0ab-717e-4a25-a481-95a4b1cd0c07" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Feb 03 12:54:35 crc kubenswrapper[4820]: I0203 12:54:35.339130 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6pwfl" Feb 03 12:54:35 crc kubenswrapper[4820]: I0203 12:54:35.346228 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 03 12:54:35 crc kubenswrapper[4820]: I0203 12:54:35.346468 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Feb 03 12:54:35 crc kubenswrapper[4820]: I0203 12:54:35.346946 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 03 12:54:35 crc kubenswrapper[4820]: I0203 12:54:35.347748 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-q6jrk" Feb 03 12:54:35 crc kubenswrapper[4820]: I0203 12:54:35.347908 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Feb 03 12:54:35 crc kubenswrapper[4820]: I0203 12:54:35.348063 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 03 12:54:35 crc kubenswrapper[4820]: I0203 12:54:35.349292 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Feb 03 12:54:35 crc kubenswrapper[4820]: I0203 12:54:35.357005 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-6pwfl"] Feb 03 12:54:35 crc kubenswrapper[4820]: I0203 12:54:35.534843 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ddw6m\" (UniqueName: \"kubernetes.io/projected/b390260e-6a1b-4020-95d5-c4275e4a6c4e-kube-api-access-ddw6m\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6pwfl\" (UID: \"b390260e-6a1b-4020-95d5-c4275e4a6c4e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6pwfl" Feb 03 12:54:35 crc kubenswrapper[4820]: I0203 12:54:35.534927 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/b390260e-6a1b-4020-95d5-c4275e4a6c4e-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6pwfl\" (UID: \"b390260e-6a1b-4020-95d5-c4275e4a6c4e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6pwfl" Feb 03 12:54:35 crc kubenswrapper[4820]: I0203 12:54:35.534952 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/b390260e-6a1b-4020-95d5-c4275e4a6c4e-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6pwfl\" (UID: \"b390260e-6a1b-4020-95d5-c4275e4a6c4e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6pwfl" Feb 03 12:54:35 crc kubenswrapper[4820]: I0203 12:54:35.534985 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/b390260e-6a1b-4020-95d5-c4275e4a6c4e-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6pwfl\" (UID: \"b390260e-6a1b-4020-95d5-c4275e4a6c4e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6pwfl" Feb 03 12:54:35 crc kubenswrapper[4820]: I0203 12:54:35.535032 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: 
\"kubernetes.io/secret/b390260e-6a1b-4020-95d5-c4275e4a6c4e-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6pwfl\" (UID: \"b390260e-6a1b-4020-95d5-c4275e4a6c4e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6pwfl" Feb 03 12:54:35 crc kubenswrapper[4820]: I0203 12:54:35.535059 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/b390260e-6a1b-4020-95d5-c4275e4a6c4e-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6pwfl\" (UID: \"b390260e-6a1b-4020-95d5-c4275e4a6c4e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6pwfl" Feb 03 12:54:35 crc kubenswrapper[4820]: I0203 12:54:35.535241 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b390260e-6a1b-4020-95d5-c4275e4a6c4e-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6pwfl\" (UID: \"b390260e-6a1b-4020-95d5-c4275e4a6c4e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6pwfl" Feb 03 12:54:35 crc kubenswrapper[4820]: I0203 12:54:35.535372 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b390260e-6a1b-4020-95d5-c4275e4a6c4e-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6pwfl\" (UID: \"b390260e-6a1b-4020-95d5-c4275e4a6c4e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6pwfl" Feb 03 12:54:35 crc kubenswrapper[4820]: I0203 12:54:35.535435 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b390260e-6a1b-4020-95d5-c4275e4a6c4e-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6pwfl\" (UID: \"b390260e-6a1b-4020-95d5-c4275e4a6c4e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6pwfl" Feb 03 12:54:35 crc kubenswrapper[4820]: I0203 12:54:35.639387 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/b390260e-6a1b-4020-95d5-c4275e4a6c4e-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6pwfl\" (UID: \"b390260e-6a1b-4020-95d5-c4275e4a6c4e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6pwfl" Feb 03 12:54:35 crc kubenswrapper[4820]: I0203 12:54:35.639691 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/b390260e-6a1b-4020-95d5-c4275e4a6c4e-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6pwfl\" (UID: \"b390260e-6a1b-4020-95d5-c4275e4a6c4e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6pwfl" Feb 03 12:54:35 crc kubenswrapper[4820]: I0203 12:54:35.639748 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b390260e-6a1b-4020-95d5-c4275e4a6c4e-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6pwfl\" (UID: \"b390260e-6a1b-4020-95d5-c4275e4a6c4e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6pwfl" Feb 03 12:54:35 crc kubenswrapper[4820]: I0203 12:54:35.639793 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" 
(UniqueName: \"kubernetes.io/secret/b390260e-6a1b-4020-95d5-c4275e4a6c4e-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6pwfl\" (UID: \"b390260e-6a1b-4020-95d5-c4275e4a6c4e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6pwfl" Feb 03 12:54:35 crc kubenswrapper[4820]: I0203 12:54:35.639841 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b390260e-6a1b-4020-95d5-c4275e4a6c4e-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6pwfl\" (UID: \"b390260e-6a1b-4020-95d5-c4275e4a6c4e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6pwfl" Feb 03 12:54:35 crc kubenswrapper[4820]: I0203 12:54:35.639995 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ddw6m\" (UniqueName: \"kubernetes.io/projected/b390260e-6a1b-4020-95d5-c4275e4a6c4e-kube-api-access-ddw6m\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6pwfl\" (UID: \"b390260e-6a1b-4020-95d5-c4275e4a6c4e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6pwfl" Feb 03 12:54:35 crc kubenswrapper[4820]: I0203 12:54:35.640062 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/b390260e-6a1b-4020-95d5-c4275e4a6c4e-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6pwfl\" (UID: \"b390260e-6a1b-4020-95d5-c4275e4a6c4e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6pwfl" Feb 03 12:54:35 crc kubenswrapper[4820]: I0203 12:54:35.640087 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/b390260e-6a1b-4020-95d5-c4275e4a6c4e-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6pwfl\" (UID: \"b390260e-6a1b-4020-95d5-c4275e4a6c4e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6pwfl" Feb 03 12:54:35 crc kubenswrapper[4820]: I0203 12:54:35.640126 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/b390260e-6a1b-4020-95d5-c4275e4a6c4e-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6pwfl\" (UID: \"b390260e-6a1b-4020-95d5-c4275e4a6c4e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6pwfl" Feb 03 12:54:35 crc kubenswrapper[4820]: I0203 12:54:35.871782 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/b390260e-6a1b-4020-95d5-c4275e4a6c4e-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6pwfl\" (UID: \"b390260e-6a1b-4020-95d5-c4275e4a6c4e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6pwfl" Feb 03 12:54:35 crc kubenswrapper[4820]: I0203 12:54:35.871822 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/b390260e-6a1b-4020-95d5-c4275e4a6c4e-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6pwfl\" (UID: \"b390260e-6a1b-4020-95d5-c4275e4a6c4e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6pwfl" Feb 03 12:54:35 crc kubenswrapper[4820]: I0203 12:54:35.877757 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: 
\"kubernetes.io/configmap/b390260e-6a1b-4020-95d5-c4275e4a6c4e-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6pwfl\" (UID: \"b390260e-6a1b-4020-95d5-c4275e4a6c4e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6pwfl" Feb 03 12:54:35 crc kubenswrapper[4820]: I0203 12:54:35.878809 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b390260e-6a1b-4020-95d5-c4275e4a6c4e-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6pwfl\" (UID: \"b390260e-6a1b-4020-95d5-c4275e4a6c4e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6pwfl" Feb 03 12:54:35 crc kubenswrapper[4820]: I0203 12:54:35.893623 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/b390260e-6a1b-4020-95d5-c4275e4a6c4e-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6pwfl\" (UID: \"b390260e-6a1b-4020-95d5-c4275e4a6c4e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6pwfl" Feb 03 12:54:35 crc kubenswrapper[4820]: I0203 12:54:35.893772 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b390260e-6a1b-4020-95d5-c4275e4a6c4e-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6pwfl\" (UID: \"b390260e-6a1b-4020-95d5-c4275e4a6c4e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6pwfl" Feb 03 12:54:35 crc kubenswrapper[4820]: I0203 12:54:35.893958 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b390260e-6a1b-4020-95d5-c4275e4a6c4e-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6pwfl\" (UID: \"b390260e-6a1b-4020-95d5-c4275e4a6c4e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6pwfl" Feb 03 12:54:35 crc kubenswrapper[4820]: I0203 12:54:35.893968 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/b390260e-6a1b-4020-95d5-c4275e4a6c4e-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6pwfl\" (UID: \"b390260e-6a1b-4020-95d5-c4275e4a6c4e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6pwfl" Feb 03 12:54:35 crc kubenswrapper[4820]: I0203 12:54:35.898316 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ddw6m\" (UniqueName: \"kubernetes.io/projected/b390260e-6a1b-4020-95d5-c4275e4a6c4e-kube-api-access-ddw6m\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6pwfl\" (UID: \"b390260e-6a1b-4020-95d5-c4275e4a6c4e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6pwfl" Feb 03 12:54:35 crc kubenswrapper[4820]: I0203 12:54:35.976478 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6pwfl" Feb 03 12:54:36 crc kubenswrapper[4820]: I0203 12:54:36.537700 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-6pwfl"] Feb 03 12:54:37 crc kubenswrapper[4820]: I0203 12:54:37.224911 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6pwfl" event={"ID":"b390260e-6a1b-4020-95d5-c4275e4a6c4e","Type":"ContainerStarted","Data":"260c98521324df8638a8c9bb81890b705298bbc3ff9bee21fca206a22ed79d7c"} Feb 03 12:54:38 crc kubenswrapper[4820]: I0203 12:54:38.238737 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6pwfl" event={"ID":"b390260e-6a1b-4020-95d5-c4275e4a6c4e","Type":"ContainerStarted","Data":"e295e84265cfe6e6b5a3b46fefadae1bb994709fac851fa2f7443e42cd387d1e"} Feb 03 12:54:38 crc kubenswrapper[4820]: I0203 12:54:38.263190 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6pwfl" podStartSLOduration=2.8305096499999998 podStartE2EDuration="3.263167495s" podCreationTimestamp="2026-02-03 12:54:35 +0000 UTC" firstStartedPulling="2026-02-03 12:54:36.544198695 +0000 UTC m=+2994.067274559" lastFinishedPulling="2026-02-03 12:54:36.97685654 +0000 UTC m=+2994.499932404" observedRunningTime="2026-02-03 12:54:38.257276108 +0000 UTC m=+2995.780351982" watchObservedRunningTime="2026-02-03 12:54:38.263167495 +0000 UTC m=+2995.786243349" Feb 03 12:54:47 crc kubenswrapper[4820]: I0203 12:54:47.143111 4820 scope.go:117] "RemoveContainer" containerID="108356beea18d8aae66e65b6c7634daebf0864c97b64061528133a985322eb38" Feb 03 12:54:47 crc kubenswrapper[4820]: E0203 12:54:47.143864 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qj7xr_openshift-machine-config-operator(2c02def6-29f2-448e-80ec-0c8ee290f053)\"" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" Feb 03 12:54:58 crc kubenswrapper[4820]: I0203 12:54:58.143066 4820 scope.go:117] "RemoveContainer" containerID="108356beea18d8aae66e65b6c7634daebf0864c97b64061528133a985322eb38" Feb 03 12:54:58 crc kubenswrapper[4820]: E0203 12:54:58.143758 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qj7xr_openshift-machine-config-operator(2c02def6-29f2-448e-80ec-0c8ee290f053)\"" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" Feb 03 12:55:08 crc kubenswrapper[4820]: I0203 12:55:08.080032 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-fxd2k"] Feb 03 12:55:08 crc kubenswrapper[4820]: I0203 12:55:08.084474 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-fxd2k" Feb 03 12:55:08 crc kubenswrapper[4820]: I0203 12:55:08.111009 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fxd2k"] Feb 03 12:55:08 crc kubenswrapper[4820]: I0203 12:55:08.199963 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/790d5bf7-4d53-48d4-b000-501bc69247cc-utilities\") pod \"certified-operators-fxd2k\" (UID: \"790d5bf7-4d53-48d4-b000-501bc69247cc\") " pod="openshift-marketplace/certified-operators-fxd2k" Feb 03 12:55:08 crc kubenswrapper[4820]: I0203 12:55:08.200064 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/790d5bf7-4d53-48d4-b000-501bc69247cc-catalog-content\") pod \"certified-operators-fxd2k\" (UID: \"790d5bf7-4d53-48d4-b000-501bc69247cc\") " pod="openshift-marketplace/certified-operators-fxd2k" Feb 03 12:55:08 crc kubenswrapper[4820]: I0203 12:55:08.200230 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m2jxp\" (UniqueName: \"kubernetes.io/projected/790d5bf7-4d53-48d4-b000-501bc69247cc-kube-api-access-m2jxp\") pod \"certified-operators-fxd2k\" (UID: \"790d5bf7-4d53-48d4-b000-501bc69247cc\") " pod="openshift-marketplace/certified-operators-fxd2k" Feb 03 12:55:08 crc kubenswrapper[4820]: I0203 12:55:08.302810 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m2jxp\" (UniqueName: \"kubernetes.io/projected/790d5bf7-4d53-48d4-b000-501bc69247cc-kube-api-access-m2jxp\") pod \"certified-operators-fxd2k\" (UID: \"790d5bf7-4d53-48d4-b000-501bc69247cc\") " pod="openshift-marketplace/certified-operators-fxd2k" Feb 03 12:55:08 crc kubenswrapper[4820]: I0203 12:55:08.302905 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/790d5bf7-4d53-48d4-b000-501bc69247cc-utilities\") pod \"certified-operators-fxd2k\" (UID: \"790d5bf7-4d53-48d4-b000-501bc69247cc\") " pod="openshift-marketplace/certified-operators-fxd2k" Feb 03 12:55:08 crc kubenswrapper[4820]: I0203 12:55:08.302982 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/790d5bf7-4d53-48d4-b000-501bc69247cc-catalog-content\") pod \"certified-operators-fxd2k\" (UID: \"790d5bf7-4d53-48d4-b000-501bc69247cc\") " pod="openshift-marketplace/certified-operators-fxd2k" Feb 03 12:55:08 crc kubenswrapper[4820]: I0203 12:55:08.303457 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/790d5bf7-4d53-48d4-b000-501bc69247cc-utilities\") pod \"certified-operators-fxd2k\" (UID: \"790d5bf7-4d53-48d4-b000-501bc69247cc\") " pod="openshift-marketplace/certified-operators-fxd2k" Feb 03 12:55:08 crc kubenswrapper[4820]: I0203 12:55:08.303574 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/790d5bf7-4d53-48d4-b000-501bc69247cc-catalog-content\") pod \"certified-operators-fxd2k\" (UID: \"790d5bf7-4d53-48d4-b000-501bc69247cc\") " pod="openshift-marketplace/certified-operators-fxd2k" Feb 03 12:55:08 crc kubenswrapper[4820]: I0203 12:55:08.330069 4820 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-m2jxp\" (UniqueName: \"kubernetes.io/projected/790d5bf7-4d53-48d4-b000-501bc69247cc-kube-api-access-m2jxp\") pod \"certified-operators-fxd2k\" (UID: \"790d5bf7-4d53-48d4-b000-501bc69247cc\") " pod="openshift-marketplace/certified-operators-fxd2k" Feb 03 12:55:08 crc kubenswrapper[4820]: I0203 12:55:08.425870 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fxd2k" Feb 03 12:55:09 crc kubenswrapper[4820]: I0203 12:55:09.893429 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fxd2k"] Feb 03 12:55:10 crc kubenswrapper[4820]: I0203 12:55:10.143548 4820 scope.go:117] "RemoveContainer" containerID="108356beea18d8aae66e65b6c7634daebf0864c97b64061528133a985322eb38" Feb 03 12:55:10 crc kubenswrapper[4820]: E0203 12:55:10.143852 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qj7xr_openshift-machine-config-operator(2c02def6-29f2-448e-80ec-0c8ee290f053)\"" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" Feb 03 12:55:10 crc kubenswrapper[4820]: I0203 12:55:10.156528 4820 generic.go:334] "Generic (PLEG): container finished" podID="790d5bf7-4d53-48d4-b000-501bc69247cc" containerID="2b11a3a57456ebda29cae693680103a2840cb1eb7be3271cf01704127b6fe009" exitCode=0 Feb 03 12:55:10 crc kubenswrapper[4820]: I0203 12:55:10.156574 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fxd2k" event={"ID":"790d5bf7-4d53-48d4-b000-501bc69247cc","Type":"ContainerDied","Data":"2b11a3a57456ebda29cae693680103a2840cb1eb7be3271cf01704127b6fe009"} Feb 03 12:55:10 crc kubenswrapper[4820]: I0203 12:55:10.156600 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fxd2k" event={"ID":"790d5bf7-4d53-48d4-b000-501bc69247cc","Type":"ContainerStarted","Data":"0fafcd647d85a3c9035edcb55de25f19ae657ac1702ce46260b723d07e932dba"} Feb 03 12:55:12 crc kubenswrapper[4820]: I0203 12:55:12.471921 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fxd2k" event={"ID":"790d5bf7-4d53-48d4-b000-501bc69247cc","Type":"ContainerStarted","Data":"7783775a5befcc999ab4c643207b79560df183f46af6e2ac714bee38ade35b2e"} Feb 03 12:55:13 crc kubenswrapper[4820]: I0203 12:55:13.682678 4820 generic.go:334] "Generic (PLEG): container finished" podID="790d5bf7-4d53-48d4-b000-501bc69247cc" containerID="7783775a5befcc999ab4c643207b79560df183f46af6e2ac714bee38ade35b2e" exitCode=0 Feb 03 12:55:13 crc kubenswrapper[4820]: I0203 12:55:13.683014 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fxd2k" event={"ID":"790d5bf7-4d53-48d4-b000-501bc69247cc","Type":"ContainerDied","Data":"7783775a5befcc999ab4c643207b79560df183f46af6e2ac714bee38ade35b2e"} Feb 03 12:55:14 crc kubenswrapper[4820]: I0203 12:55:14.697492 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fxd2k" event={"ID":"790d5bf7-4d53-48d4-b000-501bc69247cc","Type":"ContainerStarted","Data":"cd27e830a63d33d784777c039bd9719bd6906a97fc7cc573f3ecda4625fba2c1"} Feb 03 12:55:14 crc kubenswrapper[4820]: I0203 12:55:14.727858 4820 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-fxd2k" podStartSLOduration=2.754826954 podStartE2EDuration="6.727834274s" podCreationTimestamp="2026-02-03 12:55:08 +0000 UTC" firstStartedPulling="2026-02-03 12:55:10.157933927 +0000 UTC m=+3027.681009791" lastFinishedPulling="2026-02-03 12:55:14.130941247 +0000 UTC m=+3031.654017111" observedRunningTime="2026-02-03 12:55:14.725740249 +0000 UTC m=+3032.248816113" watchObservedRunningTime="2026-02-03 12:55:14.727834274 +0000 UTC m=+3032.250910138" Feb 03 12:55:18 crc kubenswrapper[4820]: I0203 12:55:18.426326 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-fxd2k" Feb 03 12:55:18 crc kubenswrapper[4820]: I0203 12:55:18.426973 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-fxd2k" Feb 03 12:55:18 crc kubenswrapper[4820]: I0203 12:55:18.660301 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-fxd2k" Feb 03 12:55:22 crc kubenswrapper[4820]: I0203 12:55:22.143361 4820 scope.go:117] "RemoveContainer" containerID="108356beea18d8aae66e65b6c7634daebf0864c97b64061528133a985322eb38" Feb 03 12:55:22 crc kubenswrapper[4820]: E0203 12:55:22.144009 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qj7xr_openshift-machine-config-operator(2c02def6-29f2-448e-80ec-0c8ee290f053)\"" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" Feb 03 12:55:27 crc kubenswrapper[4820]: E0203 12:55:27.055882 4820 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/system.slice/rpm-ostreed.service\": RecentStats: unable to find data in memory cache]" Feb 03 12:55:28 crc kubenswrapper[4820]: I0203 12:55:28.506064 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-fxd2k" Feb 03 12:55:28 crc kubenswrapper[4820]: I0203 12:55:28.695411 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-fxd2k"] Feb 03 12:55:28 crc kubenswrapper[4820]: I0203 12:55:28.843246 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-fxd2k" podUID="790d5bf7-4d53-48d4-b000-501bc69247cc" containerName="registry-server" containerID="cri-o://cd27e830a63d33d784777c039bd9719bd6906a97fc7cc573f3ecda4625fba2c1" gracePeriod=2 Feb 03 12:55:29 crc kubenswrapper[4820]: I0203 12:55:29.310012 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-fxd2k" Feb 03 12:55:29 crc kubenswrapper[4820]: I0203 12:55:29.489537 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/790d5bf7-4d53-48d4-b000-501bc69247cc-utilities\") pod \"790d5bf7-4d53-48d4-b000-501bc69247cc\" (UID: \"790d5bf7-4d53-48d4-b000-501bc69247cc\") " Feb 03 12:55:29 crc kubenswrapper[4820]: I0203 12:55:29.489767 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m2jxp\" (UniqueName: \"kubernetes.io/projected/790d5bf7-4d53-48d4-b000-501bc69247cc-kube-api-access-m2jxp\") pod \"790d5bf7-4d53-48d4-b000-501bc69247cc\" (UID: \"790d5bf7-4d53-48d4-b000-501bc69247cc\") " Feb 03 12:55:29 crc kubenswrapper[4820]: I0203 12:55:29.489820 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/790d5bf7-4d53-48d4-b000-501bc69247cc-catalog-content\") pod \"790d5bf7-4d53-48d4-b000-501bc69247cc\" (UID: \"790d5bf7-4d53-48d4-b000-501bc69247cc\") " Feb 03 12:55:29 crc kubenswrapper[4820]: I0203 12:55:29.490657 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/790d5bf7-4d53-48d4-b000-501bc69247cc-utilities" (OuterVolumeSpecName: "utilities") pod "790d5bf7-4d53-48d4-b000-501bc69247cc" (UID: "790d5bf7-4d53-48d4-b000-501bc69247cc"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 12:55:29 crc kubenswrapper[4820]: I0203 12:55:29.497100 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/790d5bf7-4d53-48d4-b000-501bc69247cc-kube-api-access-m2jxp" (OuterVolumeSpecName: "kube-api-access-m2jxp") pod "790d5bf7-4d53-48d4-b000-501bc69247cc" (UID: "790d5bf7-4d53-48d4-b000-501bc69247cc"). InnerVolumeSpecName "kube-api-access-m2jxp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:55:29 crc kubenswrapper[4820]: I0203 12:55:29.538613 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/790d5bf7-4d53-48d4-b000-501bc69247cc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "790d5bf7-4d53-48d4-b000-501bc69247cc" (UID: "790d5bf7-4d53-48d4-b000-501bc69247cc"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 12:55:29 crc kubenswrapper[4820]: I0203 12:55:29.591725 4820 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/790d5bf7-4d53-48d4-b000-501bc69247cc-utilities\") on node \"crc\" DevicePath \"\"" Feb 03 12:55:29 crc kubenswrapper[4820]: I0203 12:55:29.591761 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m2jxp\" (UniqueName: \"kubernetes.io/projected/790d5bf7-4d53-48d4-b000-501bc69247cc-kube-api-access-m2jxp\") on node \"crc\" DevicePath \"\"" Feb 03 12:55:29 crc kubenswrapper[4820]: I0203 12:55:29.591772 4820 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/790d5bf7-4d53-48d4-b000-501bc69247cc-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 03 12:55:29 crc kubenswrapper[4820]: I0203 12:55:29.855820 4820 generic.go:334] "Generic (PLEG): container finished" podID="790d5bf7-4d53-48d4-b000-501bc69247cc" containerID="cd27e830a63d33d784777c039bd9719bd6906a97fc7cc573f3ecda4625fba2c1" exitCode=0 Feb 03 12:55:29 crc kubenswrapper[4820]: I0203 12:55:29.855905 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fxd2k" event={"ID":"790d5bf7-4d53-48d4-b000-501bc69247cc","Type":"ContainerDied","Data":"cd27e830a63d33d784777c039bd9719bd6906a97fc7cc573f3ecda4625fba2c1"} Feb 03 12:55:29 crc kubenswrapper[4820]: I0203 12:55:29.855924 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fxd2k" Feb 03 12:55:29 crc kubenswrapper[4820]: I0203 12:55:29.855967 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fxd2k" event={"ID":"790d5bf7-4d53-48d4-b000-501bc69247cc","Type":"ContainerDied","Data":"0fafcd647d85a3c9035edcb55de25f19ae657ac1702ce46260b723d07e932dba"} Feb 03 12:55:29 crc kubenswrapper[4820]: I0203 12:55:29.856015 4820 scope.go:117] "RemoveContainer" containerID="cd27e830a63d33d784777c039bd9719bd6906a97fc7cc573f3ecda4625fba2c1" Feb 03 12:55:29 crc kubenswrapper[4820]: I0203 12:55:29.889080 4820 scope.go:117] "RemoveContainer" containerID="7783775a5befcc999ab4c643207b79560df183f46af6e2ac714bee38ade35b2e" Feb 03 12:55:29 crc kubenswrapper[4820]: I0203 12:55:29.897492 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-fxd2k"] Feb 03 12:55:29 crc kubenswrapper[4820]: I0203 12:55:29.906690 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-fxd2k"] Feb 03 12:55:29 crc kubenswrapper[4820]: I0203 12:55:29.914962 4820 scope.go:117] "RemoveContainer" containerID="2b11a3a57456ebda29cae693680103a2840cb1eb7be3271cf01704127b6fe009" Feb 03 12:55:29 crc kubenswrapper[4820]: I0203 12:55:29.957543 4820 scope.go:117] "RemoveContainer" containerID="cd27e830a63d33d784777c039bd9719bd6906a97fc7cc573f3ecda4625fba2c1" Feb 03 12:55:29 crc kubenswrapper[4820]: E0203 12:55:29.958221 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cd27e830a63d33d784777c039bd9719bd6906a97fc7cc573f3ecda4625fba2c1\": container with ID starting with cd27e830a63d33d784777c039bd9719bd6906a97fc7cc573f3ecda4625fba2c1 not found: ID does not exist" containerID="cd27e830a63d33d784777c039bd9719bd6906a97fc7cc573f3ecda4625fba2c1" Feb 03 12:55:29 crc kubenswrapper[4820]: I0203 12:55:29.958305 
4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cd27e830a63d33d784777c039bd9719bd6906a97fc7cc573f3ecda4625fba2c1"} err="failed to get container status \"cd27e830a63d33d784777c039bd9719bd6906a97fc7cc573f3ecda4625fba2c1\": rpc error: code = NotFound desc = could not find container \"cd27e830a63d33d784777c039bd9719bd6906a97fc7cc573f3ecda4625fba2c1\": container with ID starting with cd27e830a63d33d784777c039bd9719bd6906a97fc7cc573f3ecda4625fba2c1 not found: ID does not exist" Feb 03 12:55:29 crc kubenswrapper[4820]: I0203 12:55:29.958351 4820 scope.go:117] "RemoveContainer" containerID="7783775a5befcc999ab4c643207b79560df183f46af6e2ac714bee38ade35b2e" Feb 03 12:55:29 crc kubenswrapper[4820]: E0203 12:55:29.958950 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7783775a5befcc999ab4c643207b79560df183f46af6e2ac714bee38ade35b2e\": container with ID starting with 7783775a5befcc999ab4c643207b79560df183f46af6e2ac714bee38ade35b2e not found: ID does not exist" containerID="7783775a5befcc999ab4c643207b79560df183f46af6e2ac714bee38ade35b2e" Feb 03 12:55:29 crc kubenswrapper[4820]: I0203 12:55:29.958989 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7783775a5befcc999ab4c643207b79560df183f46af6e2ac714bee38ade35b2e"} err="failed to get container status \"7783775a5befcc999ab4c643207b79560df183f46af6e2ac714bee38ade35b2e\": rpc error: code = NotFound desc = could not find container \"7783775a5befcc999ab4c643207b79560df183f46af6e2ac714bee38ade35b2e\": container with ID starting with 7783775a5befcc999ab4c643207b79560df183f46af6e2ac714bee38ade35b2e not found: ID does not exist" Feb 03 12:55:29 crc kubenswrapper[4820]: I0203 12:55:29.959019 4820 scope.go:117] "RemoveContainer" containerID="2b11a3a57456ebda29cae693680103a2840cb1eb7be3271cf01704127b6fe009" Feb 03 12:55:29 crc kubenswrapper[4820]: E0203 12:55:29.959674 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2b11a3a57456ebda29cae693680103a2840cb1eb7be3271cf01704127b6fe009\": container with ID starting with 2b11a3a57456ebda29cae693680103a2840cb1eb7be3271cf01704127b6fe009 not found: ID does not exist" containerID="2b11a3a57456ebda29cae693680103a2840cb1eb7be3271cf01704127b6fe009" Feb 03 12:55:29 crc kubenswrapper[4820]: I0203 12:55:29.959750 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2b11a3a57456ebda29cae693680103a2840cb1eb7be3271cf01704127b6fe009"} err="failed to get container status \"2b11a3a57456ebda29cae693680103a2840cb1eb7be3271cf01704127b6fe009\": rpc error: code = NotFound desc = could not find container \"2b11a3a57456ebda29cae693680103a2840cb1eb7be3271cf01704127b6fe009\": container with ID starting with 2b11a3a57456ebda29cae693680103a2840cb1eb7be3271cf01704127b6fe009 not found: ID does not exist" Feb 03 12:55:31 crc kubenswrapper[4820]: I0203 12:55:31.158097 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="790d5bf7-4d53-48d4-b000-501bc69247cc" path="/var/lib/kubelet/pods/790d5bf7-4d53-48d4-b000-501bc69247cc/volumes" Feb 03 12:55:35 crc kubenswrapper[4820]: I0203 12:55:35.143813 4820 scope.go:117] "RemoveContainer" containerID="108356beea18d8aae66e65b6c7634daebf0864c97b64061528133a985322eb38" Feb 03 12:55:35 crc kubenswrapper[4820]: E0203 12:55:35.144774 4820 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qj7xr_openshift-machine-config-operator(2c02def6-29f2-448e-80ec-0c8ee290f053)\"" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" Feb 03 12:55:48 crc kubenswrapper[4820]: I0203 12:55:48.143723 4820 scope.go:117] "RemoveContainer" containerID="108356beea18d8aae66e65b6c7634daebf0864c97b64061528133a985322eb38" Feb 03 12:55:48 crc kubenswrapper[4820]: E0203 12:55:48.144946 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qj7xr_openshift-machine-config-operator(2c02def6-29f2-448e-80ec-0c8ee290f053)\"" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" Feb 03 12:56:02 crc kubenswrapper[4820]: I0203 12:56:02.143248 4820 scope.go:117] "RemoveContainer" containerID="108356beea18d8aae66e65b6c7634daebf0864c97b64061528133a985322eb38" Feb 03 12:56:02 crc kubenswrapper[4820]: E0203 12:56:02.144099 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qj7xr_openshift-machine-config-operator(2c02def6-29f2-448e-80ec-0c8ee290f053)\"" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" Feb 03 12:56:16 crc kubenswrapper[4820]: I0203 12:56:16.142941 4820 scope.go:117] "RemoveContainer" containerID="108356beea18d8aae66e65b6c7634daebf0864c97b64061528133a985322eb38" Feb 03 12:56:16 crc kubenswrapper[4820]: E0203 12:56:16.143870 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qj7xr_openshift-machine-config-operator(2c02def6-29f2-448e-80ec-0c8ee290f053)\"" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" Feb 03 12:56:28 crc kubenswrapper[4820]: I0203 12:56:28.142436 4820 scope.go:117] "RemoveContainer" containerID="108356beea18d8aae66e65b6c7634daebf0864c97b64061528133a985322eb38" Feb 03 12:56:28 crc kubenswrapper[4820]: E0203 12:56:28.143118 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qj7xr_openshift-machine-config-operator(2c02def6-29f2-448e-80ec-0c8ee290f053)\"" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" Feb 03 12:56:43 crc kubenswrapper[4820]: I0203 12:56:43.153641 4820 scope.go:117] "RemoveContainer" containerID="108356beea18d8aae66e65b6c7634daebf0864c97b64061528133a985322eb38" Feb 03 12:56:43 crc kubenswrapper[4820]: E0203 12:56:43.154448 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-qj7xr_openshift-machine-config-operator(2c02def6-29f2-448e-80ec-0c8ee290f053)\"" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" Feb 03 12:56:47 crc kubenswrapper[4820]: I0203 12:56:47.938806 4820 generic.go:334] "Generic (PLEG): container finished" podID="b390260e-6a1b-4020-95d5-c4275e4a6c4e" containerID="e295e84265cfe6e6b5a3b46fefadae1bb994709fac851fa2f7443e42cd387d1e" exitCode=0 Feb 03 12:56:47 crc kubenswrapper[4820]: I0203 12:56:47.939462 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6pwfl" event={"ID":"b390260e-6a1b-4020-95d5-c4275e4a6c4e","Type":"ContainerDied","Data":"e295e84265cfe6e6b5a3b46fefadae1bb994709fac851fa2f7443e42cd387d1e"} Feb 03 12:56:49 crc kubenswrapper[4820]: I0203 12:56:49.401195 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6pwfl" Feb 03 12:56:49 crc kubenswrapper[4820]: I0203 12:56:49.563019 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b390260e-6a1b-4020-95d5-c4275e4a6c4e-inventory\") pod \"b390260e-6a1b-4020-95d5-c4275e4a6c4e\" (UID: \"b390260e-6a1b-4020-95d5-c4275e4a6c4e\") " Feb 03 12:56:49 crc kubenswrapper[4820]: I0203 12:56:49.563147 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ddw6m\" (UniqueName: \"kubernetes.io/projected/b390260e-6a1b-4020-95d5-c4275e4a6c4e-kube-api-access-ddw6m\") pod \"b390260e-6a1b-4020-95d5-c4275e4a6c4e\" (UID: \"b390260e-6a1b-4020-95d5-c4275e4a6c4e\") " Feb 03 12:56:49 crc kubenswrapper[4820]: I0203 12:56:49.563198 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/b390260e-6a1b-4020-95d5-c4275e4a6c4e-nova-extra-config-0\") pod \"b390260e-6a1b-4020-95d5-c4275e4a6c4e\" (UID: \"b390260e-6a1b-4020-95d5-c4275e4a6c4e\") " Feb 03 12:56:49 crc kubenswrapper[4820]: I0203 12:56:49.563218 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b390260e-6a1b-4020-95d5-c4275e4a6c4e-ssh-key-openstack-edpm-ipam\") pod \"b390260e-6a1b-4020-95d5-c4275e4a6c4e\" (UID: \"b390260e-6a1b-4020-95d5-c4275e4a6c4e\") " Feb 03 12:56:49 crc kubenswrapper[4820]: I0203 12:56:49.563263 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/b390260e-6a1b-4020-95d5-c4275e4a6c4e-nova-cell1-compute-config-0\") pod \"b390260e-6a1b-4020-95d5-c4275e4a6c4e\" (UID: \"b390260e-6a1b-4020-95d5-c4275e4a6c4e\") " Feb 03 12:56:49 crc kubenswrapper[4820]: I0203 12:56:49.563374 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/b390260e-6a1b-4020-95d5-c4275e4a6c4e-nova-migration-ssh-key-0\") pod \"b390260e-6a1b-4020-95d5-c4275e4a6c4e\" (UID: \"b390260e-6a1b-4020-95d5-c4275e4a6c4e\") " Feb 03 12:56:49 crc kubenswrapper[4820]: I0203 12:56:49.563427 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/b390260e-6a1b-4020-95d5-c4275e4a6c4e-nova-cell1-compute-config-1\") pod \"b390260e-6a1b-4020-95d5-c4275e4a6c4e\" (UID: 
\"b390260e-6a1b-4020-95d5-c4275e4a6c4e\") " Feb 03 12:56:49 crc kubenswrapper[4820]: I0203 12:56:49.563519 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/b390260e-6a1b-4020-95d5-c4275e4a6c4e-nova-migration-ssh-key-1\") pod \"b390260e-6a1b-4020-95d5-c4275e4a6c4e\" (UID: \"b390260e-6a1b-4020-95d5-c4275e4a6c4e\") " Feb 03 12:56:49 crc kubenswrapper[4820]: I0203 12:56:49.563553 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b390260e-6a1b-4020-95d5-c4275e4a6c4e-nova-combined-ca-bundle\") pod \"b390260e-6a1b-4020-95d5-c4275e4a6c4e\" (UID: \"b390260e-6a1b-4020-95d5-c4275e4a6c4e\") " Feb 03 12:56:49 crc kubenswrapper[4820]: I0203 12:56:49.569290 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b390260e-6a1b-4020-95d5-c4275e4a6c4e-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "b390260e-6a1b-4020-95d5-c4275e4a6c4e" (UID: "b390260e-6a1b-4020-95d5-c4275e4a6c4e"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:56:49 crc kubenswrapper[4820]: I0203 12:56:49.571609 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b390260e-6a1b-4020-95d5-c4275e4a6c4e-kube-api-access-ddw6m" (OuterVolumeSpecName: "kube-api-access-ddw6m") pod "b390260e-6a1b-4020-95d5-c4275e4a6c4e" (UID: "b390260e-6a1b-4020-95d5-c4275e4a6c4e"). InnerVolumeSpecName "kube-api-access-ddw6m". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:56:49 crc kubenswrapper[4820]: I0203 12:56:49.593928 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b390260e-6a1b-4020-95d5-c4275e4a6c4e-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "b390260e-6a1b-4020-95d5-c4275e4a6c4e" (UID: "b390260e-6a1b-4020-95d5-c4275e4a6c4e"). InnerVolumeSpecName "nova-extra-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:56:49 crc kubenswrapper[4820]: I0203 12:56:49.601902 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b390260e-6a1b-4020-95d5-c4275e4a6c4e-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "b390260e-6a1b-4020-95d5-c4275e4a6c4e" (UID: "b390260e-6a1b-4020-95d5-c4275e4a6c4e"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:56:49 crc kubenswrapper[4820]: I0203 12:56:49.603239 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b390260e-6a1b-4020-95d5-c4275e4a6c4e-inventory" (OuterVolumeSpecName: "inventory") pod "b390260e-6a1b-4020-95d5-c4275e4a6c4e" (UID: "b390260e-6a1b-4020-95d5-c4275e4a6c4e"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:56:49 crc kubenswrapper[4820]: I0203 12:56:49.604693 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b390260e-6a1b-4020-95d5-c4275e4a6c4e-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "b390260e-6a1b-4020-95d5-c4275e4a6c4e" (UID: "b390260e-6a1b-4020-95d5-c4275e4a6c4e"). InnerVolumeSpecName "nova-cell1-compute-config-1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:56:49 crc kubenswrapper[4820]: I0203 12:56:49.604991 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b390260e-6a1b-4020-95d5-c4275e4a6c4e-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "b390260e-6a1b-4020-95d5-c4275e4a6c4e" (UID: "b390260e-6a1b-4020-95d5-c4275e4a6c4e"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:56:49 crc kubenswrapper[4820]: I0203 12:56:49.606120 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b390260e-6a1b-4020-95d5-c4275e4a6c4e-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "b390260e-6a1b-4020-95d5-c4275e4a6c4e" (UID: "b390260e-6a1b-4020-95d5-c4275e4a6c4e"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:56:49 crc kubenswrapper[4820]: I0203 12:56:49.616153 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b390260e-6a1b-4020-95d5-c4275e4a6c4e-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "b390260e-6a1b-4020-95d5-c4275e4a6c4e" (UID: "b390260e-6a1b-4020-95d5-c4275e4a6c4e"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:56:49 crc kubenswrapper[4820]: I0203 12:56:49.665567 4820 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/b390260e-6a1b-4020-95d5-c4275e4a6c4e-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Feb 03 12:56:49 crc kubenswrapper[4820]: I0203 12:56:49.665703 4820 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b390260e-6a1b-4020-95d5-c4275e4a6c4e-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 12:56:49 crc kubenswrapper[4820]: I0203 12:56:49.665778 4820 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b390260e-6a1b-4020-95d5-c4275e4a6c4e-inventory\") on node \"crc\" DevicePath \"\"" Feb 03 12:56:49 crc kubenswrapper[4820]: I0203 12:56:49.665851 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ddw6m\" (UniqueName: \"kubernetes.io/projected/b390260e-6a1b-4020-95d5-c4275e4a6c4e-kube-api-access-ddw6m\") on node \"crc\" DevicePath \"\"" Feb 03 12:56:49 crc kubenswrapper[4820]: I0203 12:56:49.665938 4820 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/b390260e-6a1b-4020-95d5-c4275e4a6c4e-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Feb 03 12:56:49 crc kubenswrapper[4820]: I0203 12:56:49.666028 4820 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b390260e-6a1b-4020-95d5-c4275e4a6c4e-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 03 12:56:49 crc kubenswrapper[4820]: I0203 12:56:49.666105 4820 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/b390260e-6a1b-4020-95d5-c4275e4a6c4e-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Feb 03 12:56:49 crc kubenswrapper[4820]: I0203 12:56:49.666246 4820 reconciler_common.go:293] "Volume detached for volume 
\"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/b390260e-6a1b-4020-95d5-c4275e4a6c4e-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Feb 03 12:56:49 crc kubenswrapper[4820]: I0203 12:56:49.666327 4820 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/b390260e-6a1b-4020-95d5-c4275e4a6c4e-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Feb 03 12:56:49 crc kubenswrapper[4820]: I0203 12:56:49.960613 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6pwfl" event={"ID":"b390260e-6a1b-4020-95d5-c4275e4a6c4e","Type":"ContainerDied","Data":"260c98521324df8638a8c9bb81890b705298bbc3ff9bee21fca206a22ed79d7c"} Feb 03 12:56:49 crc kubenswrapper[4820]: I0203 12:56:49.960654 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6pwfl" Feb 03 12:56:49 crc kubenswrapper[4820]: I0203 12:56:49.960674 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="260c98521324df8638a8c9bb81890b705298bbc3ff9bee21fca206a22ed79d7c" Feb 03 12:56:50 crc kubenswrapper[4820]: I0203 12:56:50.079631 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-g98lk"] Feb 03 12:56:50 crc kubenswrapper[4820]: E0203 12:56:50.080156 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="790d5bf7-4d53-48d4-b000-501bc69247cc" containerName="extract-content" Feb 03 12:56:50 crc kubenswrapper[4820]: I0203 12:56:50.080186 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="790d5bf7-4d53-48d4-b000-501bc69247cc" containerName="extract-content" Feb 03 12:56:50 crc kubenswrapper[4820]: E0203 12:56:50.080212 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="790d5bf7-4d53-48d4-b000-501bc69247cc" containerName="extract-utilities" Feb 03 12:56:50 crc kubenswrapper[4820]: I0203 12:56:50.080220 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="790d5bf7-4d53-48d4-b000-501bc69247cc" containerName="extract-utilities" Feb 03 12:56:50 crc kubenswrapper[4820]: E0203 12:56:50.080243 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b390260e-6a1b-4020-95d5-c4275e4a6c4e" containerName="nova-edpm-deployment-openstack-edpm-ipam" Feb 03 12:56:50 crc kubenswrapper[4820]: I0203 12:56:50.080250 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="b390260e-6a1b-4020-95d5-c4275e4a6c4e" containerName="nova-edpm-deployment-openstack-edpm-ipam" Feb 03 12:56:50 crc kubenswrapper[4820]: E0203 12:56:50.080264 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="790d5bf7-4d53-48d4-b000-501bc69247cc" containerName="registry-server" Feb 03 12:56:50 crc kubenswrapper[4820]: I0203 12:56:50.080270 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="790d5bf7-4d53-48d4-b000-501bc69247cc" containerName="registry-server" Feb 03 12:56:50 crc kubenswrapper[4820]: I0203 12:56:50.080529 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="790d5bf7-4d53-48d4-b000-501bc69247cc" containerName="registry-server" Feb 03 12:56:50 crc kubenswrapper[4820]: I0203 12:56:50.080561 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="b390260e-6a1b-4020-95d5-c4275e4a6c4e" containerName="nova-edpm-deployment-openstack-edpm-ipam" Feb 03 12:56:50 crc kubenswrapper[4820]: I0203 12:56:50.081478 4820 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-g98lk" Feb 03 12:56:50 crc kubenswrapper[4820]: I0203 12:56:50.083874 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Feb 03 12:56:50 crc kubenswrapper[4820]: I0203 12:56:50.091276 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 03 12:56:50 crc kubenswrapper[4820]: I0203 12:56:50.091760 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 03 12:56:50 crc kubenswrapper[4820]: I0203 12:56:50.092056 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-q6jrk" Feb 03 12:56:50 crc kubenswrapper[4820]: I0203 12:56:50.092336 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 03 12:56:50 crc kubenswrapper[4820]: I0203 12:56:50.097776 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-g98lk"] Feb 03 12:56:50 crc kubenswrapper[4820]: I0203 12:56:50.277737 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9dba6be1-f601-4959-8c1f-791b7fb032b8-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-g98lk\" (UID: \"9dba6be1-f601-4959-8c1f-791b7fb032b8\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-g98lk" Feb 03 12:56:50 crc kubenswrapper[4820]: I0203 12:56:50.278406 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/9dba6be1-f601-4959-8c1f-791b7fb032b8-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-g98lk\" (UID: \"9dba6be1-f601-4959-8c1f-791b7fb032b8\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-g98lk" Feb 03 12:56:50 crc kubenswrapper[4820]: I0203 12:56:50.278450 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9dba6be1-f601-4959-8c1f-791b7fb032b8-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-g98lk\" (UID: \"9dba6be1-f601-4959-8c1f-791b7fb032b8\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-g98lk" Feb 03 12:56:50 crc kubenswrapper[4820]: I0203 12:56:50.278638 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fcdpw\" (UniqueName: \"kubernetes.io/projected/9dba6be1-f601-4959-8c1f-791b7fb032b8-kube-api-access-fcdpw\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-g98lk\" (UID: \"9dba6be1-f601-4959-8c1f-791b7fb032b8\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-g98lk" Feb 03 12:56:50 crc kubenswrapper[4820]: I0203 12:56:50.278750 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9dba6be1-f601-4959-8c1f-791b7fb032b8-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-g98lk\" (UID: \"9dba6be1-f601-4959-8c1f-791b7fb032b8\") " 
pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-g98lk" Feb 03 12:56:50 crc kubenswrapper[4820]: I0203 12:56:50.278789 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/9dba6be1-f601-4959-8c1f-791b7fb032b8-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-g98lk\" (UID: \"9dba6be1-f601-4959-8c1f-791b7fb032b8\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-g98lk" Feb 03 12:56:50 crc kubenswrapper[4820]: I0203 12:56:50.279119 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/9dba6be1-f601-4959-8c1f-791b7fb032b8-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-g98lk\" (UID: \"9dba6be1-f601-4959-8c1f-791b7fb032b8\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-g98lk" Feb 03 12:56:50 crc kubenswrapper[4820]: I0203 12:56:50.381594 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fcdpw\" (UniqueName: \"kubernetes.io/projected/9dba6be1-f601-4959-8c1f-791b7fb032b8-kube-api-access-fcdpw\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-g98lk\" (UID: \"9dba6be1-f601-4959-8c1f-791b7fb032b8\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-g98lk" Feb 03 12:56:50 crc kubenswrapper[4820]: I0203 12:56:50.381674 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9dba6be1-f601-4959-8c1f-791b7fb032b8-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-g98lk\" (UID: \"9dba6be1-f601-4959-8c1f-791b7fb032b8\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-g98lk" Feb 03 12:56:50 crc kubenswrapper[4820]: I0203 12:56:50.381709 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/9dba6be1-f601-4959-8c1f-791b7fb032b8-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-g98lk\" (UID: \"9dba6be1-f601-4959-8c1f-791b7fb032b8\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-g98lk" Feb 03 12:56:50 crc kubenswrapper[4820]: I0203 12:56:50.381829 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/9dba6be1-f601-4959-8c1f-791b7fb032b8-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-g98lk\" (UID: \"9dba6be1-f601-4959-8c1f-791b7fb032b8\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-g98lk" Feb 03 12:56:50 crc kubenswrapper[4820]: I0203 12:56:50.381865 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9dba6be1-f601-4959-8c1f-791b7fb032b8-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-g98lk\" (UID: \"9dba6be1-f601-4959-8c1f-791b7fb032b8\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-g98lk" Feb 03 12:56:50 crc kubenswrapper[4820]: I0203 12:56:50.381914 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: 
\"kubernetes.io/secret/9dba6be1-f601-4959-8c1f-791b7fb032b8-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-g98lk\" (UID: \"9dba6be1-f601-4959-8c1f-791b7fb032b8\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-g98lk" Feb 03 12:56:50 crc kubenswrapper[4820]: I0203 12:56:50.381944 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9dba6be1-f601-4959-8c1f-791b7fb032b8-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-g98lk\" (UID: \"9dba6be1-f601-4959-8c1f-791b7fb032b8\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-g98lk" Feb 03 12:56:50 crc kubenswrapper[4820]: I0203 12:56:50.385927 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/9dba6be1-f601-4959-8c1f-791b7fb032b8-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-g98lk\" (UID: \"9dba6be1-f601-4959-8c1f-791b7fb032b8\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-g98lk" Feb 03 12:56:50 crc kubenswrapper[4820]: I0203 12:56:50.385972 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/9dba6be1-f601-4959-8c1f-791b7fb032b8-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-g98lk\" (UID: \"9dba6be1-f601-4959-8c1f-791b7fb032b8\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-g98lk" Feb 03 12:56:50 crc kubenswrapper[4820]: I0203 12:56:50.386309 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9dba6be1-f601-4959-8c1f-791b7fb032b8-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-g98lk\" (UID: \"9dba6be1-f601-4959-8c1f-791b7fb032b8\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-g98lk" Feb 03 12:56:50 crc kubenswrapper[4820]: I0203 12:56:50.386869 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/9dba6be1-f601-4959-8c1f-791b7fb032b8-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-g98lk\" (UID: \"9dba6be1-f601-4959-8c1f-791b7fb032b8\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-g98lk" Feb 03 12:56:50 crc kubenswrapper[4820]: I0203 12:56:50.387916 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9dba6be1-f601-4959-8c1f-791b7fb032b8-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-g98lk\" (UID: \"9dba6be1-f601-4959-8c1f-791b7fb032b8\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-g98lk" Feb 03 12:56:50 crc kubenswrapper[4820]: I0203 12:56:50.392746 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9dba6be1-f601-4959-8c1f-791b7fb032b8-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-g98lk\" (UID: \"9dba6be1-f601-4959-8c1f-791b7fb032b8\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-g98lk" Feb 03 12:56:50 crc kubenswrapper[4820]: I0203 12:56:50.410725 4820 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"kube-api-access-fcdpw\" (UniqueName: \"kubernetes.io/projected/9dba6be1-f601-4959-8c1f-791b7fb032b8-kube-api-access-fcdpw\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-g98lk\" (UID: \"9dba6be1-f601-4959-8c1f-791b7fb032b8\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-g98lk" Feb 03 12:56:50 crc kubenswrapper[4820]: I0203 12:56:50.700600 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-g98lk" Feb 03 12:56:51 crc kubenswrapper[4820]: I0203 12:56:51.263216 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-g98lk"] Feb 03 12:56:51 crc kubenswrapper[4820]: I0203 12:56:51.980322 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-g98lk" event={"ID":"9dba6be1-f601-4959-8c1f-791b7fb032b8","Type":"ContainerStarted","Data":"fa3e1a200207b52bcca7ab6cc33e69ef75613e1c665c2751d15e23b783e6d508"} Feb 03 12:56:52 crc kubenswrapper[4820]: I0203 12:56:52.992111 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-g98lk" event={"ID":"9dba6be1-f601-4959-8c1f-791b7fb032b8","Type":"ContainerStarted","Data":"5de8b2716a16a5cefadca765cf0f51ea7f080a381137d9e8e0ba36d5761f7dc1"} Feb 03 12:56:53 crc kubenswrapper[4820]: I0203 12:56:53.022162 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-g98lk" podStartSLOduration=2.571962619 podStartE2EDuration="3.022111061s" podCreationTimestamp="2026-02-03 12:56:50 +0000 UTC" firstStartedPulling="2026-02-03 12:56:51.263359979 +0000 UTC m=+3128.786435843" lastFinishedPulling="2026-02-03 12:56:51.713508411 +0000 UTC m=+3129.236584285" observedRunningTime="2026-02-03 12:56:53.016087621 +0000 UTC m=+3130.539163485" watchObservedRunningTime="2026-02-03 12:56:53.022111061 +0000 UTC m=+3130.545186935" Feb 03 12:56:56 crc kubenswrapper[4820]: I0203 12:56:56.142277 4820 scope.go:117] "RemoveContainer" containerID="108356beea18d8aae66e65b6c7634daebf0864c97b64061528133a985322eb38" Feb 03 12:56:56 crc kubenswrapper[4820]: E0203 12:56:56.142855 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qj7xr_openshift-machine-config-operator(2c02def6-29f2-448e-80ec-0c8ee290f053)\"" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" Feb 03 12:57:10 crc kubenswrapper[4820]: I0203 12:57:10.143590 4820 scope.go:117] "RemoveContainer" containerID="108356beea18d8aae66e65b6c7634daebf0864c97b64061528133a985322eb38" Feb 03 12:57:10 crc kubenswrapper[4820]: E0203 12:57:10.144637 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qj7xr_openshift-machine-config-operator(2c02def6-29f2-448e-80ec-0c8ee290f053)\"" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" Feb 03 12:57:22 crc kubenswrapper[4820]: I0203 12:57:22.143599 4820 scope.go:117] "RemoveContainer" 
containerID="108356beea18d8aae66e65b6c7634daebf0864c97b64061528133a985322eb38" Feb 03 12:57:22 crc kubenswrapper[4820]: E0203 12:57:22.144444 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qj7xr_openshift-machine-config-operator(2c02def6-29f2-448e-80ec-0c8ee290f053)\"" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" Feb 03 12:57:34 crc kubenswrapper[4820]: I0203 12:57:34.143590 4820 scope.go:117] "RemoveContainer" containerID="108356beea18d8aae66e65b6c7634daebf0864c97b64061528133a985322eb38" Feb 03 12:57:34 crc kubenswrapper[4820]: E0203 12:57:34.144324 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qj7xr_openshift-machine-config-operator(2c02def6-29f2-448e-80ec-0c8ee290f053)\"" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" Feb 03 12:57:40 crc kubenswrapper[4820]: I0203 12:57:40.813234 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-pt8f5"] Feb 03 12:57:40 crc kubenswrapper[4820]: I0203 12:57:40.820224 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pt8f5" Feb 03 12:57:40 crc kubenswrapper[4820]: I0203 12:57:40.828115 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-pt8f5"] Feb 03 12:57:40 crc kubenswrapper[4820]: I0203 12:57:40.959560 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q2csn\" (UniqueName: \"kubernetes.io/projected/d25f269d-89d5-442b-b62c-34c7be87fbad-kube-api-access-q2csn\") pod \"redhat-marketplace-pt8f5\" (UID: \"d25f269d-89d5-442b-b62c-34c7be87fbad\") " pod="openshift-marketplace/redhat-marketplace-pt8f5" Feb 03 12:57:40 crc kubenswrapper[4820]: I0203 12:57:40.959703 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d25f269d-89d5-442b-b62c-34c7be87fbad-utilities\") pod \"redhat-marketplace-pt8f5\" (UID: \"d25f269d-89d5-442b-b62c-34c7be87fbad\") " pod="openshift-marketplace/redhat-marketplace-pt8f5" Feb 03 12:57:40 crc kubenswrapper[4820]: I0203 12:57:40.959748 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d25f269d-89d5-442b-b62c-34c7be87fbad-catalog-content\") pod \"redhat-marketplace-pt8f5\" (UID: \"d25f269d-89d5-442b-b62c-34c7be87fbad\") " pod="openshift-marketplace/redhat-marketplace-pt8f5" Feb 03 12:57:41 crc kubenswrapper[4820]: I0203 12:57:41.063865 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d25f269d-89d5-442b-b62c-34c7be87fbad-utilities\") pod \"redhat-marketplace-pt8f5\" (UID: \"d25f269d-89d5-442b-b62c-34c7be87fbad\") " pod="openshift-marketplace/redhat-marketplace-pt8f5" Feb 03 12:57:41 crc kubenswrapper[4820]: I0203 12:57:41.063955 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d25f269d-89d5-442b-b62c-34c7be87fbad-catalog-content\") pod \"redhat-marketplace-pt8f5\" (UID: \"d25f269d-89d5-442b-b62c-34c7be87fbad\") " pod="openshift-marketplace/redhat-marketplace-pt8f5" Feb 03 12:57:41 crc kubenswrapper[4820]: I0203 12:57:41.064416 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q2csn\" (UniqueName: \"kubernetes.io/projected/d25f269d-89d5-442b-b62c-34c7be87fbad-kube-api-access-q2csn\") pod \"redhat-marketplace-pt8f5\" (UID: \"d25f269d-89d5-442b-b62c-34c7be87fbad\") " pod="openshift-marketplace/redhat-marketplace-pt8f5" Feb 03 12:57:41 crc kubenswrapper[4820]: I0203 12:57:41.064754 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d25f269d-89d5-442b-b62c-34c7be87fbad-utilities\") pod \"redhat-marketplace-pt8f5\" (UID: \"d25f269d-89d5-442b-b62c-34c7be87fbad\") " pod="openshift-marketplace/redhat-marketplace-pt8f5" Feb 03 12:57:41 crc kubenswrapper[4820]: I0203 12:57:41.066749 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d25f269d-89d5-442b-b62c-34c7be87fbad-catalog-content\") pod \"redhat-marketplace-pt8f5\" (UID: \"d25f269d-89d5-442b-b62c-34c7be87fbad\") " pod="openshift-marketplace/redhat-marketplace-pt8f5" Feb 03 12:57:41 crc kubenswrapper[4820]: I0203 12:57:41.091782 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q2csn\" (UniqueName: \"kubernetes.io/projected/d25f269d-89d5-442b-b62c-34c7be87fbad-kube-api-access-q2csn\") pod \"redhat-marketplace-pt8f5\" (UID: \"d25f269d-89d5-442b-b62c-34c7be87fbad\") " pod="openshift-marketplace/redhat-marketplace-pt8f5" Feb 03 12:57:41 crc kubenswrapper[4820]: I0203 12:57:41.158785 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pt8f5" Feb 03 12:57:41 crc kubenswrapper[4820]: I0203 12:57:41.945167 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-pt8f5"] Feb 03 12:57:42 crc kubenswrapper[4820]: I0203 12:57:42.909606 4820 generic.go:334] "Generic (PLEG): container finished" podID="d25f269d-89d5-442b-b62c-34c7be87fbad" containerID="6089398c45c599173b3ff9212ad9dcf19e23400868b40b98119c6cfd08584d2c" exitCode=0 Feb 03 12:57:42 crc kubenswrapper[4820]: I0203 12:57:42.909931 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pt8f5" event={"ID":"d25f269d-89d5-442b-b62c-34c7be87fbad","Type":"ContainerDied","Data":"6089398c45c599173b3ff9212ad9dcf19e23400868b40b98119c6cfd08584d2c"} Feb 03 12:57:42 crc kubenswrapper[4820]: I0203 12:57:42.909969 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pt8f5" event={"ID":"d25f269d-89d5-442b-b62c-34c7be87fbad","Type":"ContainerStarted","Data":"59d04b68bf264f115a0cf25e4e34276bdeb1ef70eccee11a9e435105040302a0"} Feb 03 12:57:44 crc kubenswrapper[4820]: I0203 12:57:44.929503 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pt8f5" event={"ID":"d25f269d-89d5-442b-b62c-34c7be87fbad","Type":"ContainerStarted","Data":"f231bd34cc6b71e429c95382852161b201cfd8375df69d97fba24bec0db7b7d9"} Feb 03 12:57:45 crc kubenswrapper[4820]: I0203 12:57:45.942456 4820 generic.go:334] "Generic (PLEG): container finished" podID="d25f269d-89d5-442b-b62c-34c7be87fbad" containerID="f231bd34cc6b71e429c95382852161b201cfd8375df69d97fba24bec0db7b7d9" exitCode=0 Feb 03 12:57:45 crc kubenswrapper[4820]: I0203 12:57:45.942802 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pt8f5" event={"ID":"d25f269d-89d5-442b-b62c-34c7be87fbad","Type":"ContainerDied","Data":"f231bd34cc6b71e429c95382852161b201cfd8375df69d97fba24bec0db7b7d9"} Feb 03 12:57:47 crc kubenswrapper[4820]: I0203 12:57:47.145302 4820 scope.go:117] "RemoveContainer" containerID="108356beea18d8aae66e65b6c7634daebf0864c97b64061528133a985322eb38" Feb 03 12:57:47 crc kubenswrapper[4820]: E0203 12:57:47.145952 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qj7xr_openshift-machine-config-operator(2c02def6-29f2-448e-80ec-0c8ee290f053)\"" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" Feb 03 12:57:47 crc kubenswrapper[4820]: I0203 12:57:47.965713 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pt8f5" event={"ID":"d25f269d-89d5-442b-b62c-34c7be87fbad","Type":"ContainerStarted","Data":"3beee9fe542b56e126a8dcf813e55f3c16b07eb561f92a12f1f3cb837bebff12"} Feb 03 12:57:47 crc kubenswrapper[4820]: I0203 12:57:47.993961 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-pt8f5" podStartSLOduration=4.380719063 podStartE2EDuration="7.99385057s" podCreationTimestamp="2026-02-03 12:57:40 +0000 UTC" firstStartedPulling="2026-02-03 12:57:42.912118444 +0000 UTC m=+3180.435194308" lastFinishedPulling="2026-02-03 12:57:46.525249951 +0000 UTC m=+3184.048325815" observedRunningTime="2026-02-03 
12:57:47.986017952 +0000 UTC m=+3185.509093836" watchObservedRunningTime="2026-02-03 12:57:47.99385057 +0000 UTC m=+3185.516926434" Feb 03 12:57:51 crc kubenswrapper[4820]: I0203 12:57:51.165012 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-pt8f5" Feb 03 12:57:51 crc kubenswrapper[4820]: I0203 12:57:51.165382 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-pt8f5" Feb 03 12:57:51 crc kubenswrapper[4820]: I0203 12:57:51.241656 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-pt8f5" Feb 03 12:57:52 crc kubenswrapper[4820]: I0203 12:57:52.143714 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-pt8f5" Feb 03 12:57:52 crc kubenswrapper[4820]: I0203 12:57:52.201735 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-pt8f5"] Feb 03 12:57:54 crc kubenswrapper[4820]: I0203 12:57:54.113751 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-pt8f5" podUID="d25f269d-89d5-442b-b62c-34c7be87fbad" containerName="registry-server" containerID="cri-o://3beee9fe542b56e126a8dcf813e55f3c16b07eb561f92a12f1f3cb837bebff12" gracePeriod=2 Feb 03 12:57:54 crc kubenswrapper[4820]: I0203 12:57:54.599005 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pt8f5" Feb 03 12:57:54 crc kubenswrapper[4820]: I0203 12:57:54.796192 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q2csn\" (UniqueName: \"kubernetes.io/projected/d25f269d-89d5-442b-b62c-34c7be87fbad-kube-api-access-q2csn\") pod \"d25f269d-89d5-442b-b62c-34c7be87fbad\" (UID: \"d25f269d-89d5-442b-b62c-34c7be87fbad\") " Feb 03 12:57:54 crc kubenswrapper[4820]: I0203 12:57:54.796318 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d25f269d-89d5-442b-b62c-34c7be87fbad-utilities\") pod \"d25f269d-89d5-442b-b62c-34c7be87fbad\" (UID: \"d25f269d-89d5-442b-b62c-34c7be87fbad\") " Feb 03 12:57:54 crc kubenswrapper[4820]: I0203 12:57:54.796427 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d25f269d-89d5-442b-b62c-34c7be87fbad-catalog-content\") pod \"d25f269d-89d5-442b-b62c-34c7be87fbad\" (UID: \"d25f269d-89d5-442b-b62c-34c7be87fbad\") " Feb 03 12:57:54 crc kubenswrapper[4820]: I0203 12:57:54.798736 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d25f269d-89d5-442b-b62c-34c7be87fbad-utilities" (OuterVolumeSpecName: "utilities") pod "d25f269d-89d5-442b-b62c-34c7be87fbad" (UID: "d25f269d-89d5-442b-b62c-34c7be87fbad"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 12:57:54 crc kubenswrapper[4820]: I0203 12:57:54.809385 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d25f269d-89d5-442b-b62c-34c7be87fbad-kube-api-access-q2csn" (OuterVolumeSpecName: "kube-api-access-q2csn") pod "d25f269d-89d5-442b-b62c-34c7be87fbad" (UID: "d25f269d-89d5-442b-b62c-34c7be87fbad"). InnerVolumeSpecName "kube-api-access-q2csn". 
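----- [annotation] -----
The two pod_startup_latency_tracker records above are internally consistent: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration is that E2E figure minus the image-pull window, with the pull window taken from the monotonic (m=+...) readings. A quick check in Go using the telemetry-edpm record's numbers (12:56:53.022111061 - 12:56:50 = 3.022111061s E2E):

package main

import "fmt"

// Monotonic (m=+...) readings from the telemetry-edpm
// pod_startup_latency_tracker record above, in seconds.
const (
	firstStartedPulling = 3128.786435843
	lastFinishedPulling = 3129.236584285
	podStartE2E         = 3.022111061 // watchObservedRunningTime - creation
)

func main() {
	pull := lastFinishedPulling - firstStartedPulling
	slo := podStartE2E - pull
	fmt.Printf("image pull:   %.9fs\n", pull) // 0.450148442s
	fmt.Printf("SLO duration: %.9fs\n", slo)  // 2.571962619s, matching the log
}
----- [end annotation] -----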
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:57:54 crc kubenswrapper[4820]: I0203 12:57:54.876050 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d25f269d-89d5-442b-b62c-34c7be87fbad-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d25f269d-89d5-442b-b62c-34c7be87fbad" (UID: "d25f269d-89d5-442b-b62c-34c7be87fbad"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 12:57:54 crc kubenswrapper[4820]: I0203 12:57:54.899131 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q2csn\" (UniqueName: \"kubernetes.io/projected/d25f269d-89d5-442b-b62c-34c7be87fbad-kube-api-access-q2csn\") on node \"crc\" DevicePath \"\"" Feb 03 12:57:54 crc kubenswrapper[4820]: I0203 12:57:54.899180 4820 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d25f269d-89d5-442b-b62c-34c7be87fbad-utilities\") on node \"crc\" DevicePath \"\"" Feb 03 12:57:54 crc kubenswrapper[4820]: I0203 12:57:54.899193 4820 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d25f269d-89d5-442b-b62c-34c7be87fbad-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 03 12:57:55 crc kubenswrapper[4820]: I0203 12:57:55.136245 4820 generic.go:334] "Generic (PLEG): container finished" podID="d25f269d-89d5-442b-b62c-34c7be87fbad" containerID="3beee9fe542b56e126a8dcf813e55f3c16b07eb561f92a12f1f3cb837bebff12" exitCode=0 Feb 03 12:57:55 crc kubenswrapper[4820]: I0203 12:57:55.136463 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pt8f5" Feb 03 12:57:55 crc kubenswrapper[4820]: I0203 12:57:55.136542 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pt8f5" event={"ID":"d25f269d-89d5-442b-b62c-34c7be87fbad","Type":"ContainerDied","Data":"3beee9fe542b56e126a8dcf813e55f3c16b07eb561f92a12f1f3cb837bebff12"} Feb 03 12:57:55 crc kubenswrapper[4820]: I0203 12:57:55.137469 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pt8f5" event={"ID":"d25f269d-89d5-442b-b62c-34c7be87fbad","Type":"ContainerDied","Data":"59d04b68bf264f115a0cf25e4e34276bdeb1ef70eccee11a9e435105040302a0"} Feb 03 12:57:55 crc kubenswrapper[4820]: I0203 12:57:55.137539 4820 scope.go:117] "RemoveContainer" containerID="3beee9fe542b56e126a8dcf813e55f3c16b07eb561f92a12f1f3cb837bebff12" Feb 03 12:57:55 crc kubenswrapper[4820]: I0203 12:57:55.188611 4820 scope.go:117] "RemoveContainer" containerID="f231bd34cc6b71e429c95382852161b201cfd8375df69d97fba24bec0db7b7d9" Feb 03 12:57:55 crc kubenswrapper[4820]: I0203 12:57:55.193309 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-pt8f5"] Feb 03 12:57:55 crc kubenswrapper[4820]: I0203 12:57:55.211531 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-pt8f5"] Feb 03 12:57:55 crc kubenswrapper[4820]: I0203 12:57:55.218254 4820 scope.go:117] "RemoveContainer" containerID="6089398c45c599173b3ff9212ad9dcf19e23400868b40b98119c6cfd08584d2c" Feb 03 12:57:55 crc kubenswrapper[4820]: I0203 12:57:55.543292 4820 scope.go:117] "RemoveContainer" containerID="3beee9fe542b56e126a8dcf813e55f3c16b07eb561f92a12f1f3cb837bebff12" Feb 03 12:57:55 crc kubenswrapper[4820]: E0203 12:57:55.546689 4820 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3beee9fe542b56e126a8dcf813e55f3c16b07eb561f92a12f1f3cb837bebff12\": container with ID starting with 3beee9fe542b56e126a8dcf813e55f3c16b07eb561f92a12f1f3cb837bebff12 not found: ID does not exist" containerID="3beee9fe542b56e126a8dcf813e55f3c16b07eb561f92a12f1f3cb837bebff12" Feb 03 12:57:55 crc kubenswrapper[4820]: I0203 12:57:55.546729 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3beee9fe542b56e126a8dcf813e55f3c16b07eb561f92a12f1f3cb837bebff12"} err="failed to get container status \"3beee9fe542b56e126a8dcf813e55f3c16b07eb561f92a12f1f3cb837bebff12\": rpc error: code = NotFound desc = could not find container \"3beee9fe542b56e126a8dcf813e55f3c16b07eb561f92a12f1f3cb837bebff12\": container with ID starting with 3beee9fe542b56e126a8dcf813e55f3c16b07eb561f92a12f1f3cb837bebff12 not found: ID does not exist" Feb 03 12:57:55 crc kubenswrapper[4820]: I0203 12:57:55.546755 4820 scope.go:117] "RemoveContainer" containerID="f231bd34cc6b71e429c95382852161b201cfd8375df69d97fba24bec0db7b7d9" Feb 03 12:57:55 crc kubenswrapper[4820]: E0203 12:57:55.547179 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f231bd34cc6b71e429c95382852161b201cfd8375df69d97fba24bec0db7b7d9\": container with ID starting with f231bd34cc6b71e429c95382852161b201cfd8375df69d97fba24bec0db7b7d9 not found: ID does not exist" containerID="f231bd34cc6b71e429c95382852161b201cfd8375df69d97fba24bec0db7b7d9" Feb 03 12:57:55 crc kubenswrapper[4820]: I0203 12:57:55.547204 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f231bd34cc6b71e429c95382852161b201cfd8375df69d97fba24bec0db7b7d9"} err="failed to get container status \"f231bd34cc6b71e429c95382852161b201cfd8375df69d97fba24bec0db7b7d9\": rpc error: code = NotFound desc = could not find container \"f231bd34cc6b71e429c95382852161b201cfd8375df69d97fba24bec0db7b7d9\": container with ID starting with f231bd34cc6b71e429c95382852161b201cfd8375df69d97fba24bec0db7b7d9 not found: ID does not exist" Feb 03 12:57:55 crc kubenswrapper[4820]: I0203 12:57:55.547223 4820 scope.go:117] "RemoveContainer" containerID="6089398c45c599173b3ff9212ad9dcf19e23400868b40b98119c6cfd08584d2c" Feb 03 12:57:55 crc kubenswrapper[4820]: E0203 12:57:55.547627 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6089398c45c599173b3ff9212ad9dcf19e23400868b40b98119c6cfd08584d2c\": container with ID starting with 6089398c45c599173b3ff9212ad9dcf19e23400868b40b98119c6cfd08584d2c not found: ID does not exist" containerID="6089398c45c599173b3ff9212ad9dcf19e23400868b40b98119c6cfd08584d2c" Feb 03 12:57:55 crc kubenswrapper[4820]: I0203 12:57:55.547648 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6089398c45c599173b3ff9212ad9dcf19e23400868b40b98119c6cfd08584d2c"} err="failed to get container status \"6089398c45c599173b3ff9212ad9dcf19e23400868b40b98119c6cfd08584d2c\": rpc error: code = NotFound desc = could not find container \"6089398c45c599173b3ff9212ad9dcf19e23400868b40b98119c6cfd08584d2c\": container with ID starting with 6089398c45c599173b3ff9212ad9dcf19e23400868b40b98119c6cfd08584d2c not found: ID does not exist" Feb 03 12:57:57 crc kubenswrapper[4820]: I0203 12:57:57.160056 4820 kubelet_volumes.go:163] "Cleaned 
up orphaned pod volumes dir" podUID="d25f269d-89d5-442b-b62c-34c7be87fbad" path="/var/lib/kubelet/pods/d25f269d-89d5-442b-b62c-34c7be87fbad/volumes" Feb 03 12:58:01 crc kubenswrapper[4820]: I0203 12:58:01.143810 4820 scope.go:117] "RemoveContainer" containerID="108356beea18d8aae66e65b6c7634daebf0864c97b64061528133a985322eb38" Feb 03 12:58:01 crc kubenswrapper[4820]: E0203 12:58:01.144383 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qj7xr_openshift-machine-config-operator(2c02def6-29f2-448e-80ec-0c8ee290f053)\"" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" Feb 03 12:58:15 crc kubenswrapper[4820]: I0203 12:58:15.143127 4820 scope.go:117] "RemoveContainer" containerID="108356beea18d8aae66e65b6c7634daebf0864c97b64061528133a985322eb38" Feb 03 12:58:15 crc kubenswrapper[4820]: E0203 12:58:15.144121 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qj7xr_openshift-machine-config-operator(2c02def6-29f2-448e-80ec-0c8ee290f053)\"" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" Feb 03 12:58:30 crc kubenswrapper[4820]: I0203 12:58:30.142685 4820 scope.go:117] "RemoveContainer" containerID="108356beea18d8aae66e65b6c7634daebf0864c97b64061528133a985322eb38" Feb 03 12:58:30 crc kubenswrapper[4820]: E0203 12:58:30.143525 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qj7xr_openshift-machine-config-operator(2c02def6-29f2-448e-80ec-0c8ee290f053)\"" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" Feb 03 12:58:41 crc kubenswrapper[4820]: I0203 12:58:41.143664 4820 scope.go:117] "RemoveContainer" containerID="108356beea18d8aae66e65b6c7634daebf0864c97b64061528133a985322eb38" Feb 03 12:58:41 crc kubenswrapper[4820]: E0203 12:58:41.144548 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qj7xr_openshift-machine-config-operator(2c02def6-29f2-448e-80ec-0c8ee290f053)\"" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" Feb 03 12:58:50 crc kubenswrapper[4820]: I0203 12:58:50.869852 4820 generic.go:334] "Generic (PLEG): container finished" podID="9dba6be1-f601-4959-8c1f-791b7fb032b8" containerID="5de8b2716a16a5cefadca765cf0f51ea7f080a381137d9e8e0ba36d5761f7dc1" exitCode=0 Feb 03 12:58:50 crc kubenswrapper[4820]: I0203 12:58:50.869970 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-g98lk" event={"ID":"9dba6be1-f601-4959-8c1f-791b7fb032b8","Type":"ContainerDied","Data":"5de8b2716a16a5cefadca765cf0f51ea7f080a381137d9e8e0ba36d5761f7dc1"} Feb 03 12:58:52 crc kubenswrapper[4820]: I0203 12:58:52.553542 4820 util.go:48] "No ready sandbox for pod can be found. 
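----- [annotation] -----
The "Killing container with a grace period ... gracePeriod=2" record above (12:57:54.113751) is the standard two-phase stop: deliver SIGTERM, give the container the grace period to exit, and escalate to SIGKILL only if it outlives the window. A process-level sketch of that pattern using plain os/exec, not the CRI path the kubelet actually takes; stopWithGrace is our naming:

package main

import (
	"fmt"
	"os/exec"
	"syscall"
	"time"
)

// stopWithGrace mirrors the two-phase kill logged above: SIGTERM first,
// SIGKILL only if the process outlives the grace period.
func stopWithGrace(cmd *exec.Cmd, grace time.Duration) error {
	done := make(chan error, 1)
	go func() { done <- cmd.Wait() }()

	_ = cmd.Process.Signal(syscall.SIGTERM)
	select {
	case err := <-done:
		return err // exited inside the grace window
	case <-time.After(grace):
		_ = cmd.Process.Kill() // grace expired: SIGKILL
		return <-done
	}
}

func main() {
	cmd := exec.Command("sleep", "30")
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	// gracePeriod=2, as in the log record above.
	fmt.Println("stopped:", stopWithGrace(cmd, 2*time.Second))
}
----- [end annotation] -----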
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-g98lk" Feb 03 12:58:52 crc kubenswrapper[4820]: I0203 12:58:52.605063 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9dba6be1-f601-4959-8c1f-791b7fb032b8-telemetry-combined-ca-bundle\") pod \"9dba6be1-f601-4959-8c1f-791b7fb032b8\" (UID: \"9dba6be1-f601-4959-8c1f-791b7fb032b8\") " Feb 03 12:58:52 crc kubenswrapper[4820]: I0203 12:58:52.605143 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/9dba6be1-f601-4959-8c1f-791b7fb032b8-ceilometer-compute-config-data-0\") pod \"9dba6be1-f601-4959-8c1f-791b7fb032b8\" (UID: \"9dba6be1-f601-4959-8c1f-791b7fb032b8\") " Feb 03 12:58:52 crc kubenswrapper[4820]: I0203 12:58:52.605204 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9dba6be1-f601-4959-8c1f-791b7fb032b8-ssh-key-openstack-edpm-ipam\") pod \"9dba6be1-f601-4959-8c1f-791b7fb032b8\" (UID: \"9dba6be1-f601-4959-8c1f-791b7fb032b8\") " Feb 03 12:58:52 crc kubenswrapper[4820]: I0203 12:58:52.605225 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/9dba6be1-f601-4959-8c1f-791b7fb032b8-ceilometer-compute-config-data-2\") pod \"9dba6be1-f601-4959-8c1f-791b7fb032b8\" (UID: \"9dba6be1-f601-4959-8c1f-791b7fb032b8\") " Feb 03 12:58:52 crc kubenswrapper[4820]: I0203 12:58:52.606151 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcdpw\" (UniqueName: \"kubernetes.io/projected/9dba6be1-f601-4959-8c1f-791b7fb032b8-kube-api-access-fcdpw\") pod \"9dba6be1-f601-4959-8c1f-791b7fb032b8\" (UID: \"9dba6be1-f601-4959-8c1f-791b7fb032b8\") " Feb 03 12:58:52 crc kubenswrapper[4820]: I0203 12:58:52.606183 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9dba6be1-f601-4959-8c1f-791b7fb032b8-inventory\") pod \"9dba6be1-f601-4959-8c1f-791b7fb032b8\" (UID: \"9dba6be1-f601-4959-8c1f-791b7fb032b8\") " Feb 03 12:58:52 crc kubenswrapper[4820]: I0203 12:58:52.606638 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/9dba6be1-f601-4959-8c1f-791b7fb032b8-ceilometer-compute-config-data-1\") pod \"9dba6be1-f601-4959-8c1f-791b7fb032b8\" (UID: \"9dba6be1-f601-4959-8c1f-791b7fb032b8\") " Feb 03 12:58:52 crc kubenswrapper[4820]: I0203 12:58:52.611389 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9dba6be1-f601-4959-8c1f-791b7fb032b8-kube-api-access-fcdpw" (OuterVolumeSpecName: "kube-api-access-fcdpw") pod "9dba6be1-f601-4959-8c1f-791b7fb032b8" (UID: "9dba6be1-f601-4959-8c1f-791b7fb032b8"). InnerVolumeSpecName "kube-api-access-fcdpw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:58:52 crc kubenswrapper[4820]: I0203 12:58:52.638665 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9dba6be1-f601-4959-8c1f-791b7fb032b8-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "9dba6be1-f601-4959-8c1f-791b7fb032b8" (UID: "9dba6be1-f601-4959-8c1f-791b7fb032b8"). 
InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:58:52 crc kubenswrapper[4820]: I0203 12:58:52.646819 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9dba6be1-f601-4959-8c1f-791b7fb032b8-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "9dba6be1-f601-4959-8c1f-791b7fb032b8" (UID: "9dba6be1-f601-4959-8c1f-791b7fb032b8"). InnerVolumeSpecName "ceilometer-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:58:52 crc kubenswrapper[4820]: I0203 12:58:52.647197 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9dba6be1-f601-4959-8c1f-791b7fb032b8-inventory" (OuterVolumeSpecName: "inventory") pod "9dba6be1-f601-4959-8c1f-791b7fb032b8" (UID: "9dba6be1-f601-4959-8c1f-791b7fb032b8"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:58:52 crc kubenswrapper[4820]: I0203 12:58:52.656597 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9dba6be1-f601-4959-8c1f-791b7fb032b8-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "9dba6be1-f601-4959-8c1f-791b7fb032b8" (UID: "9dba6be1-f601-4959-8c1f-791b7fb032b8"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:58:52 crc kubenswrapper[4820]: I0203 12:58:52.674469 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9dba6be1-f601-4959-8c1f-791b7fb032b8-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "9dba6be1-f601-4959-8c1f-791b7fb032b8" (UID: "9dba6be1-f601-4959-8c1f-791b7fb032b8"). InnerVolumeSpecName "ceilometer-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:58:52 crc kubenswrapper[4820]: I0203 12:58:52.678207 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9dba6be1-f601-4959-8c1f-791b7fb032b8-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "9dba6be1-f601-4959-8c1f-791b7fb032b8" (UID: "9dba6be1-f601-4959-8c1f-791b7fb032b8"). InnerVolumeSpecName "ceilometer-compute-config-data-2". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:58:52 crc kubenswrapper[4820]: I0203 12:58:52.709741 4820 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9dba6be1-f601-4959-8c1f-791b7fb032b8-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 12:58:52 crc kubenswrapper[4820]: I0203 12:58:52.709787 4820 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/9dba6be1-f601-4959-8c1f-791b7fb032b8-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Feb 03 12:58:52 crc kubenswrapper[4820]: I0203 12:58:52.709802 4820 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9dba6be1-f601-4959-8c1f-791b7fb032b8-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 03 12:58:52 crc kubenswrapper[4820]: I0203 12:58:52.709814 4820 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/9dba6be1-f601-4959-8c1f-791b7fb032b8-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Feb 03 12:58:52 crc kubenswrapper[4820]: I0203 12:58:52.709827 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcdpw\" (UniqueName: \"kubernetes.io/projected/9dba6be1-f601-4959-8c1f-791b7fb032b8-kube-api-access-fcdpw\") on node \"crc\" DevicePath \"\"" Feb 03 12:58:52 crc kubenswrapper[4820]: I0203 12:58:52.709841 4820 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9dba6be1-f601-4959-8c1f-791b7fb032b8-inventory\") on node \"crc\" DevicePath \"\"" Feb 03 12:58:52 crc kubenswrapper[4820]: I0203 12:58:52.709852 4820 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/9dba6be1-f601-4959-8c1f-791b7fb032b8-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Feb 03 12:58:52 crc kubenswrapper[4820]: I0203 12:58:52.896663 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-g98lk" event={"ID":"9dba6be1-f601-4959-8c1f-791b7fb032b8","Type":"ContainerDied","Data":"fa3e1a200207b52bcca7ab6cc33e69ef75613e1c665c2751d15e23b783e6d508"} Feb 03 12:58:52 crc kubenswrapper[4820]: I0203 12:58:52.896717 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fa3e1a200207b52bcca7ab6cc33e69ef75613e1c665c2751d15e23b783e6d508" Feb 03 12:58:52 crc kubenswrapper[4820]: I0203 12:58:52.896760 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-g98lk" Feb 03 12:58:53 crc kubenswrapper[4820]: E0203 12:58:53.018617 4820 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9dba6be1_f601_4959_8c1f_791b7fb032b8.slice\": RecentStats: unable to find data in memory cache]" Feb 03 12:58:56 crc kubenswrapper[4820]: I0203 12:58:56.143698 4820 scope.go:117] "RemoveContainer" containerID="108356beea18d8aae66e65b6c7634daebf0864c97b64061528133a985322eb38" Feb 03 12:58:56 crc kubenswrapper[4820]: E0203 12:58:56.144676 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qj7xr_openshift-machine-config-operator(2c02def6-29f2-448e-80ec-0c8ee290f053)\"" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" Feb 03 12:58:57 crc kubenswrapper[4820]: I0203 12:58:57.987768 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-646ccfdf87-kdlkr" podUID="e530e04a-6fa7-4cc2-be2a-46a26eec64a5" containerName="proxy-server" probeResult="failure" output="HTTP probe failed with statuscode: 502" Feb 03 12:59:10 crc kubenswrapper[4820]: I0203 12:59:10.143688 4820 scope.go:117] "RemoveContainer" containerID="108356beea18d8aae66e65b6c7634daebf0864c97b64061528133a985322eb38" Feb 03 12:59:10 crc kubenswrapper[4820]: E0203 12:59:10.144771 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qj7xr_openshift-machine-config-operator(2c02def6-29f2-448e-80ec-0c8ee290f053)\"" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" Feb 03 12:59:21 crc kubenswrapper[4820]: I0203 12:59:21.144934 4820 scope.go:117] "RemoveContainer" containerID="108356beea18d8aae66e65b6c7634daebf0864c97b64061528133a985322eb38" Feb 03 12:59:21 crc kubenswrapper[4820]: E0203 12:59:21.145774 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qj7xr_openshift-machine-config-operator(2c02def6-29f2-448e-80ec-0c8ee290f053)\"" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" Feb 03 12:59:34 crc kubenswrapper[4820]: I0203 12:59:34.143437 4820 scope.go:117] "RemoveContainer" containerID="108356beea18d8aae66e65b6c7634daebf0864c97b64061528133a985322eb38" Feb 03 12:59:35 crc kubenswrapper[4820]: I0203 12:59:35.154823 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" event={"ID":"2c02def6-29f2-448e-80ec-0c8ee290f053","Type":"ContainerStarted","Data":"6360892bd392241289886290481623c9bd92bace474c07410253fed83ab05298"} Feb 03 12:59:36 crc kubenswrapper[4820]: I0203 12:59:36.311313 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 03 12:59:36 crc kubenswrapper[4820]: I0203 12:59:36.312064 4820 kuberuntime_container.go:808] "Killing container 
with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="0658b201-7c4e-4d71-ba2d-c2cb5dee1553" containerName="prometheus" containerID="cri-o://1b83b16b953f35ac683a42f9df773b773b85c664aa19af779b648cf193bddfb5" gracePeriod=600 Feb 03 12:59:36 crc kubenswrapper[4820]: I0203 12:59:36.312196 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="0658b201-7c4e-4d71-ba2d-c2cb5dee1553" containerName="config-reloader" containerID="cri-o://371f09d729ac39ba178842592d0e3292231fdc93369935a7b6ea07621067ede6" gracePeriod=600 Feb 03 12:59:36 crc kubenswrapper[4820]: I0203 12:59:36.312198 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="0658b201-7c4e-4d71-ba2d-c2cb5dee1553" containerName="thanos-sidecar" containerID="cri-o://20dd722d66e9625364bf54e86deeb632a5b1f627dff6e3f5f890ff2dd6b81942" gracePeriod=600 Feb 03 12:59:37 crc kubenswrapper[4820]: I0203 12:59:37.179373 4820 generic.go:334] "Generic (PLEG): container finished" podID="0658b201-7c4e-4d71-ba2d-c2cb5dee1553" containerID="20dd722d66e9625364bf54e86deeb632a5b1f627dff6e3f5f890ff2dd6b81942" exitCode=0 Feb 03 12:59:37 crc kubenswrapper[4820]: I0203 12:59:37.179691 4820 generic.go:334] "Generic (PLEG): container finished" podID="0658b201-7c4e-4d71-ba2d-c2cb5dee1553" containerID="371f09d729ac39ba178842592d0e3292231fdc93369935a7b6ea07621067ede6" exitCode=0 Feb 03 12:59:37 crc kubenswrapper[4820]: I0203 12:59:37.179706 4820 generic.go:334] "Generic (PLEG): container finished" podID="0658b201-7c4e-4d71-ba2d-c2cb5dee1553" containerID="1b83b16b953f35ac683a42f9df773b773b85c664aa19af779b648cf193bddfb5" exitCode=0 Feb 03 12:59:37 crc kubenswrapper[4820]: I0203 12:59:37.179441 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"0658b201-7c4e-4d71-ba2d-c2cb5dee1553","Type":"ContainerDied","Data":"20dd722d66e9625364bf54e86deeb632a5b1f627dff6e3f5f890ff2dd6b81942"} Feb 03 12:59:37 crc kubenswrapper[4820]: I0203 12:59:37.179757 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"0658b201-7c4e-4d71-ba2d-c2cb5dee1553","Type":"ContainerDied","Data":"371f09d729ac39ba178842592d0e3292231fdc93369935a7b6ea07621067ede6"} Feb 03 12:59:37 crc kubenswrapper[4820]: I0203 12:59:37.179775 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"0658b201-7c4e-4d71-ba2d-c2cb5dee1553","Type":"ContainerDied","Data":"1b83b16b953f35ac683a42f9df773b773b85c664aa19af779b648cf193bddfb5"} Feb 03 12:59:38 crc kubenswrapper[4820]: I0203 12:59:38.008415 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 03 12:59:38 crc kubenswrapper[4820]: I0203 12:59:38.105372 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vgftf\" (UniqueName: \"kubernetes.io/projected/0658b201-7c4e-4d71-ba2d-c2cb5dee1553-kube-api-access-vgftf\") pod \"0658b201-7c4e-4d71-ba2d-c2cb5dee1553\" (UID: \"0658b201-7c4e-4d71-ba2d-c2cb5dee1553\") " Feb 03 12:59:38 crc kubenswrapper[4820]: I0203 12:59:38.105477 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/0658b201-7c4e-4d71-ba2d-c2cb5dee1553-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"0658b201-7c4e-4d71-ba2d-c2cb5dee1553\" (UID: \"0658b201-7c4e-4d71-ba2d-c2cb5dee1553\") " Feb 03 12:59:38 crc kubenswrapper[4820]: I0203 12:59:38.105518 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/0658b201-7c4e-4d71-ba2d-c2cb5dee1553-thanos-prometheus-http-client-file\") pod \"0658b201-7c4e-4d71-ba2d-c2cb5dee1553\" (UID: \"0658b201-7c4e-4d71-ba2d-c2cb5dee1553\") " Feb 03 12:59:38 crc kubenswrapper[4820]: I0203 12:59:38.105552 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/0658b201-7c4e-4d71-ba2d-c2cb5dee1553-config-out\") pod \"0658b201-7c4e-4d71-ba2d-c2cb5dee1553\" (UID: \"0658b201-7c4e-4d71-ba2d-c2cb5dee1553\") " Feb 03 12:59:38 crc kubenswrapper[4820]: I0203 12:59:38.105577 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0658b201-7c4e-4d71-ba2d-c2cb5dee1553-secret-combined-ca-bundle\") pod \"0658b201-7c4e-4d71-ba2d-c2cb5dee1553\" (UID: \"0658b201-7c4e-4d71-ba2d-c2cb5dee1553\") " Feb 03 12:59:38 crc kubenswrapper[4820]: I0203 12:59:38.105603 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/0658b201-7c4e-4d71-ba2d-c2cb5dee1553-prometheus-metric-storage-rulefiles-1\") pod \"0658b201-7c4e-4d71-ba2d-c2cb5dee1553\" (UID: \"0658b201-7c4e-4d71-ba2d-c2cb5dee1553\") " Feb 03 12:59:38 crc kubenswrapper[4820]: I0203 12:59:38.105659 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/0658b201-7c4e-4d71-ba2d-c2cb5dee1553-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"0658b201-7c4e-4d71-ba2d-c2cb5dee1553\" (UID: \"0658b201-7c4e-4d71-ba2d-c2cb5dee1553\") " Feb 03 12:59:38 crc kubenswrapper[4820]: I0203 12:59:38.105679 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/0658b201-7c4e-4d71-ba2d-c2cb5dee1553-tls-assets\") pod \"0658b201-7c4e-4d71-ba2d-c2cb5dee1553\" (UID: \"0658b201-7c4e-4d71-ba2d-c2cb5dee1553\") " Feb 03 12:59:38 crc kubenswrapper[4820]: I0203 12:59:38.105776 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/0658b201-7c4e-4d71-ba2d-c2cb5dee1553-prometheus-metric-storage-rulefiles-0\") pod \"0658b201-7c4e-4d71-ba2d-c2cb5dee1553\" (UID: 
\"0658b201-7c4e-4d71-ba2d-c2cb5dee1553\") " Feb 03 12:59:38 crc kubenswrapper[4820]: I0203 12:59:38.106120 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9d241bdd-b4a8-44a7-af98-0d864047887a\") pod \"0658b201-7c4e-4d71-ba2d-c2cb5dee1553\" (UID: \"0658b201-7c4e-4d71-ba2d-c2cb5dee1553\") " Feb 03 12:59:38 crc kubenswrapper[4820]: I0203 12:59:38.106160 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/0658b201-7c4e-4d71-ba2d-c2cb5dee1553-prometheus-metric-storage-rulefiles-2\") pod \"0658b201-7c4e-4d71-ba2d-c2cb5dee1553\" (UID: \"0658b201-7c4e-4d71-ba2d-c2cb5dee1553\") " Feb 03 12:59:38 crc kubenswrapper[4820]: I0203 12:59:38.106217 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/0658b201-7c4e-4d71-ba2d-c2cb5dee1553-web-config\") pod \"0658b201-7c4e-4d71-ba2d-c2cb5dee1553\" (UID: \"0658b201-7c4e-4d71-ba2d-c2cb5dee1553\") " Feb 03 12:59:38 crc kubenswrapper[4820]: I0203 12:59:38.106259 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/0658b201-7c4e-4d71-ba2d-c2cb5dee1553-config\") pod \"0658b201-7c4e-4d71-ba2d-c2cb5dee1553\" (UID: \"0658b201-7c4e-4d71-ba2d-c2cb5dee1553\") " Feb 03 12:59:38 crc kubenswrapper[4820]: I0203 12:59:38.107805 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0658b201-7c4e-4d71-ba2d-c2cb5dee1553-prometheus-metric-storage-rulefiles-1" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-1") pod "0658b201-7c4e-4d71-ba2d-c2cb5dee1553" (UID: "0658b201-7c4e-4d71-ba2d-c2cb5dee1553"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-1". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:59:38 crc kubenswrapper[4820]: I0203 12:59:38.113223 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0658b201-7c4e-4d71-ba2d-c2cb5dee1553-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod "0658b201-7c4e-4d71-ba2d-c2cb5dee1553" (UID: "0658b201-7c4e-4d71-ba2d-c2cb5dee1553"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:59:38 crc kubenswrapper[4820]: I0203 12:59:38.114328 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0658b201-7c4e-4d71-ba2d-c2cb5dee1553-prometheus-metric-storage-rulefiles-2" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-2") pod "0658b201-7c4e-4d71-ba2d-c2cb5dee1553" (UID: "0658b201-7c4e-4d71-ba2d-c2cb5dee1553"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-2". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:59:38 crc kubenswrapper[4820]: I0203 12:59:38.117295 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0658b201-7c4e-4d71-ba2d-c2cb5dee1553-kube-api-access-vgftf" (OuterVolumeSpecName: "kube-api-access-vgftf") pod "0658b201-7c4e-4d71-ba2d-c2cb5dee1553" (UID: "0658b201-7c4e-4d71-ba2d-c2cb5dee1553"). InnerVolumeSpecName "kube-api-access-vgftf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:59:38 crc kubenswrapper[4820]: I0203 12:59:38.121012 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0658b201-7c4e-4d71-ba2d-c2cb5dee1553-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d" (OuterVolumeSpecName: "web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d") pod "0658b201-7c4e-4d71-ba2d-c2cb5dee1553" (UID: "0658b201-7c4e-4d71-ba2d-c2cb5dee1553"). InnerVolumeSpecName "web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:59:38 crc kubenswrapper[4820]: I0203 12:59:38.121254 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0658b201-7c4e-4d71-ba2d-c2cb5dee1553-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "0658b201-7c4e-4d71-ba2d-c2cb5dee1553" (UID: "0658b201-7c4e-4d71-ba2d-c2cb5dee1553"). InnerVolumeSpecName "thanos-prometheus-http-client-file". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:59:38 crc kubenswrapper[4820]: I0203 12:59:38.122648 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0658b201-7c4e-4d71-ba2d-c2cb5dee1553-config-out" (OuterVolumeSpecName: "config-out") pod "0658b201-7c4e-4d71-ba2d-c2cb5dee1553" (UID: "0658b201-7c4e-4d71-ba2d-c2cb5dee1553"). InnerVolumeSpecName "config-out". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 12:59:38 crc kubenswrapper[4820]: I0203 12:59:38.124777 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0658b201-7c4e-4d71-ba2d-c2cb5dee1553-secret-combined-ca-bundle" (OuterVolumeSpecName: "secret-combined-ca-bundle") pod "0658b201-7c4e-4d71-ba2d-c2cb5dee1553" (UID: "0658b201-7c4e-4d71-ba2d-c2cb5dee1553"). InnerVolumeSpecName "secret-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:59:38 crc kubenswrapper[4820]: I0203 12:59:38.125168 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0658b201-7c4e-4d71-ba2d-c2cb5dee1553-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "0658b201-7c4e-4d71-ba2d-c2cb5dee1553" (UID: "0658b201-7c4e-4d71-ba2d-c2cb5dee1553"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:59:38 crc kubenswrapper[4820]: I0203 12:59:38.126178 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0658b201-7c4e-4d71-ba2d-c2cb5dee1553-config" (OuterVolumeSpecName: "config") pod "0658b201-7c4e-4d71-ba2d-c2cb5dee1553" (UID: "0658b201-7c4e-4d71-ba2d-c2cb5dee1553"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:59:38 crc kubenswrapper[4820]: I0203 12:59:38.130372 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0658b201-7c4e-4d71-ba2d-c2cb5dee1553-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d" (OuterVolumeSpecName: "web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d") pod "0658b201-7c4e-4d71-ba2d-c2cb5dee1553" (UID: "0658b201-7c4e-4d71-ba2d-c2cb5dee1553"). InnerVolumeSpecName "web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:59:38 crc kubenswrapper[4820]: I0203 12:59:38.155113 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9d241bdd-b4a8-44a7-af98-0d864047887a" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "0658b201-7c4e-4d71-ba2d-c2cb5dee1553" (UID: "0658b201-7c4e-4d71-ba2d-c2cb5dee1553"). InnerVolumeSpecName "pvc-9d241bdd-b4a8-44a7-af98-0d864047887a". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 03 12:59:38 crc kubenswrapper[4820]: I0203 12:59:38.202906 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"0658b201-7c4e-4d71-ba2d-c2cb5dee1553","Type":"ContainerDied","Data":"018c7f8d49035e2389fbe253cc45d03b2e94d45ec847c8e01a5bbf78491681d1"} Feb 03 12:59:38 crc kubenswrapper[4820]: I0203 12:59:38.202975 4820 scope.go:117] "RemoveContainer" containerID="20dd722d66e9625364bf54e86deeb632a5b1f627dff6e3f5f890ff2dd6b81942" Feb 03 12:59:38 crc kubenswrapper[4820]: I0203 12:59:38.203200 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 03 12:59:38 crc kubenswrapper[4820]: I0203 12:59:38.215556 4820 reconciler_common.go:293] "Volume detached for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0658b201-7c4e-4d71-ba2d-c2cb5dee1553-secret-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 12:59:38 crc kubenswrapper[4820]: I0203 12:59:38.215585 4820 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/0658b201-7c4e-4d71-ba2d-c2cb5dee1553-prometheus-metric-storage-rulefiles-1\") on node \"crc\" DevicePath \"\"" Feb 03 12:59:38 crc kubenswrapper[4820]: I0203 12:59:38.215599 4820 reconciler_common.go:293] "Volume detached for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/0658b201-7c4e-4d71-ba2d-c2cb5dee1553-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") on node \"crc\" DevicePath \"\"" Feb 03 12:59:38 crc kubenswrapper[4820]: I0203 12:59:38.215615 4820 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/0658b201-7c4e-4d71-ba2d-c2cb5dee1553-tls-assets\") on node \"crc\" DevicePath \"\"" Feb 03 12:59:38 crc kubenswrapper[4820]: I0203 12:59:38.215624 4820 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/0658b201-7c4e-4d71-ba2d-c2cb5dee1553-prometheus-metric-storage-rulefiles-0\") on node \"crc\" DevicePath \"\"" Feb 03 12:59:38 crc kubenswrapper[4820]: I0203 12:59:38.215660 4820 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-9d241bdd-b4a8-44a7-af98-0d864047887a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9d241bdd-b4a8-44a7-af98-0d864047887a\") on node \"crc\" " Feb 03 12:59:38 crc kubenswrapper[4820]: I0203 12:59:38.215670 4820 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/0658b201-7c4e-4d71-ba2d-c2cb5dee1553-prometheus-metric-storage-rulefiles-2\") on node \"crc\" DevicePath \"\"" Feb 03 12:59:38 crc kubenswrapper[4820]: I0203 12:59:38.215680 4820 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/secret/0658b201-7c4e-4d71-ba2d-c2cb5dee1553-config\") on node \"crc\" DevicePath \"\"" Feb 03 12:59:38 crc kubenswrapper[4820]: I0203 12:59:38.215688 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vgftf\" (UniqueName: \"kubernetes.io/projected/0658b201-7c4e-4d71-ba2d-c2cb5dee1553-kube-api-access-vgftf\") on node \"crc\" DevicePath \"\"" Feb 03 12:59:38 crc kubenswrapper[4820]: I0203 12:59:38.215698 4820 reconciler_common.go:293] "Volume detached for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/0658b201-7c4e-4d71-ba2d-c2cb5dee1553-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") on node \"crc\" DevicePath \"\"" Feb 03 12:59:38 crc kubenswrapper[4820]: I0203 12:59:38.215707 4820 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/0658b201-7c4e-4d71-ba2d-c2cb5dee1553-thanos-prometheus-http-client-file\") on node \"crc\" DevicePath \"\"" Feb 03 12:59:38 crc kubenswrapper[4820]: I0203 12:59:38.215715 4820 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/0658b201-7c4e-4d71-ba2d-c2cb5dee1553-config-out\") on node \"crc\" DevicePath \"\"" Feb 03 12:59:38 crc kubenswrapper[4820]: I0203 12:59:38.236168 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0658b201-7c4e-4d71-ba2d-c2cb5dee1553-web-config" (OuterVolumeSpecName: "web-config") pod "0658b201-7c4e-4d71-ba2d-c2cb5dee1553" (UID: "0658b201-7c4e-4d71-ba2d-c2cb5dee1553"). InnerVolumeSpecName "web-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:59:38 crc kubenswrapper[4820]: I0203 12:59:38.255958 4820 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Feb 03 12:59:38 crc kubenswrapper[4820]: I0203 12:59:38.256212 4820 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-9d241bdd-b4a8-44a7-af98-0d864047887a" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9d241bdd-b4a8-44a7-af98-0d864047887a") on node "crc" Feb 03 12:59:38 crc kubenswrapper[4820]: I0203 12:59:38.317749 4820 reconciler_common.go:293] "Volume detached for volume \"pvc-9d241bdd-b4a8-44a7-af98-0d864047887a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9d241bdd-b4a8-44a7-af98-0d864047887a\") on node \"crc\" DevicePath \"\"" Feb 03 12:59:38 crc kubenswrapper[4820]: I0203 12:59:38.318003 4820 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/0658b201-7c4e-4d71-ba2d-c2cb5dee1553-web-config\") on node \"crc\" DevicePath \"\"" Feb 03 12:59:38 crc kubenswrapper[4820]: I0203 12:59:38.323082 4820 scope.go:117] "RemoveContainer" containerID="371f09d729ac39ba178842592d0e3292231fdc93369935a7b6ea07621067ede6" Feb 03 12:59:38 crc kubenswrapper[4820]: I0203 12:59:38.353591 4820 scope.go:117] "RemoveContainer" containerID="1b83b16b953f35ac683a42f9df773b773b85c664aa19af779b648cf193bddfb5" Feb 03 12:59:38 crc kubenswrapper[4820]: I0203 12:59:38.377768 4820 scope.go:117] "RemoveContainer" containerID="203e10bb8ec4cd8d38391a67e6322fed75f528ffc84047efd3a54eb07c57c7ab" Feb 03 12:59:38 crc kubenswrapper[4820]: I0203 12:59:38.557235 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 03 12:59:38 crc kubenswrapper[4820]: I0203 12:59:38.567967 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 03 12:59:38 crc kubenswrapper[4820]: I0203 12:59:38.631377 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 03 12:59:38 crc kubenswrapper[4820]: E0203 12:59:38.632089 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0658b201-7c4e-4d71-ba2d-c2cb5dee1553" containerName="config-reloader" Feb 03 12:59:38 crc kubenswrapper[4820]: I0203 12:59:38.632138 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="0658b201-7c4e-4d71-ba2d-c2cb5dee1553" containerName="config-reloader" Feb 03 12:59:38 crc kubenswrapper[4820]: E0203 12:59:38.632159 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d25f269d-89d5-442b-b62c-34c7be87fbad" containerName="registry-server" Feb 03 12:59:38 crc kubenswrapper[4820]: I0203 12:59:38.632167 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="d25f269d-89d5-442b-b62c-34c7be87fbad" containerName="registry-server" Feb 03 12:59:38 crc kubenswrapper[4820]: E0203 12:59:38.632179 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0658b201-7c4e-4d71-ba2d-c2cb5dee1553" containerName="thanos-sidecar" Feb 03 12:59:38 crc kubenswrapper[4820]: I0203 12:59:38.632187 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="0658b201-7c4e-4d71-ba2d-c2cb5dee1553" containerName="thanos-sidecar" Feb 03 12:59:38 crc kubenswrapper[4820]: E0203 12:59:38.632202 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d25f269d-89d5-442b-b62c-34c7be87fbad" containerName="extract-utilities" Feb 03 12:59:38 crc kubenswrapper[4820]: I0203 12:59:38.632211 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="d25f269d-89d5-442b-b62c-34c7be87fbad" containerName="extract-utilities" Feb 03 12:59:38 crc kubenswrapper[4820]: E0203 12:59:38.632247 4820 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9dba6be1-f601-4959-8c1f-791b7fb032b8" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Feb 03 12:59:38 crc kubenswrapper[4820]: I0203 12:59:38.632255 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="9dba6be1-f601-4959-8c1f-791b7fb032b8" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Feb 03 12:59:38 crc kubenswrapper[4820]: E0203 12:59:38.632275 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d25f269d-89d5-442b-b62c-34c7be87fbad" containerName="extract-content" Feb 03 12:59:38 crc kubenswrapper[4820]: I0203 12:59:38.632281 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="d25f269d-89d5-442b-b62c-34c7be87fbad" containerName="extract-content" Feb 03 12:59:38 crc kubenswrapper[4820]: E0203 12:59:38.632297 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0658b201-7c4e-4d71-ba2d-c2cb5dee1553" containerName="init-config-reloader" Feb 03 12:59:38 crc kubenswrapper[4820]: I0203 12:59:38.632305 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="0658b201-7c4e-4d71-ba2d-c2cb5dee1553" containerName="init-config-reloader" Feb 03 12:59:38 crc kubenswrapper[4820]: E0203 12:59:38.632316 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0658b201-7c4e-4d71-ba2d-c2cb5dee1553" containerName="prometheus" Feb 03 12:59:38 crc kubenswrapper[4820]: I0203 12:59:38.632322 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="0658b201-7c4e-4d71-ba2d-c2cb5dee1553" containerName="prometheus" Feb 03 12:59:38 crc kubenswrapper[4820]: I0203 12:59:38.632595 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="d25f269d-89d5-442b-b62c-34c7be87fbad" containerName="registry-server" Feb 03 12:59:38 crc kubenswrapper[4820]: I0203 12:59:38.632616 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="0658b201-7c4e-4d71-ba2d-c2cb5dee1553" containerName="prometheus" Feb 03 12:59:38 crc kubenswrapper[4820]: I0203 12:59:38.632629 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="0658b201-7c4e-4d71-ba2d-c2cb5dee1553" containerName="thanos-sidecar" Feb 03 12:59:38 crc kubenswrapper[4820]: I0203 12:59:38.632640 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="0658b201-7c4e-4d71-ba2d-c2cb5dee1553" containerName="config-reloader" Feb 03 12:59:38 crc kubenswrapper[4820]: I0203 12:59:38.632655 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="9dba6be1-f601-4959-8c1f-791b7fb032b8" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Feb 03 12:59:38 crc kubenswrapper[4820]: I0203 12:59:38.634713 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 03 12:59:38 crc kubenswrapper[4820]: I0203 12:59:38.640000 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Feb 03 12:59:38 crc kubenswrapper[4820]: I0203 12:59:38.640333 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Feb 03 12:59:38 crc kubenswrapper[4820]: I0203 12:59:38.641184 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Feb 03 12:59:38 crc kubenswrapper[4820]: I0203 12:59:38.641409 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Feb 03 12:59:38 crc kubenswrapper[4820]: I0203 12:59:38.641557 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Feb 03 12:59:38 crc kubenswrapper[4820]: I0203 12:59:38.641705 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Feb 03 12:59:38 crc kubenswrapper[4820]: I0203 12:59:38.642122 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-7hkds" Feb 03 12:59:38 crc kubenswrapper[4820]: I0203 12:59:38.651084 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Feb 03 12:59:38 crc kubenswrapper[4820]: I0203 12:59:38.669378 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 03 12:59:38 crc kubenswrapper[4820]: I0203 12:59:38.736682 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6a9118b-1d0e-4baf-92ca-c4024a45dd2e-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"f6a9118b-1d0e-4baf-92ca-c4024a45dd2e\") " pod="openstack/prometheus-metric-storage-0" Feb 03 12:59:38 crc kubenswrapper[4820]: I0203 12:59:38.736737 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/f6a9118b-1d0e-4baf-92ca-c4024a45dd2e-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"f6a9118b-1d0e-4baf-92ca-c4024a45dd2e\") " pod="openstack/prometheus-metric-storage-0" Feb 03 12:59:38 crc kubenswrapper[4820]: I0203 12:59:38.736800 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/f6a9118b-1d0e-4baf-92ca-c4024a45dd2e-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"f6a9118b-1d0e-4baf-92ca-c4024a45dd2e\") " pod="openstack/prometheus-metric-storage-0" Feb 03 12:59:38 crc kubenswrapper[4820]: I0203 12:59:38.736840 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/f6a9118b-1d0e-4baf-92ca-c4024a45dd2e-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"f6a9118b-1d0e-4baf-92ca-c4024a45dd2e\") " pod="openstack/prometheus-metric-storage-0" Feb 03 12:59:38 crc kubenswrapper[4820]: I0203 12:59:38.737144 4820 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/f6a9118b-1d0e-4baf-92ca-c4024a45dd2e-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"f6a9118b-1d0e-4baf-92ca-c4024a45dd2e\") " pod="openstack/prometheus-metric-storage-0" Feb 03 12:59:38 crc kubenswrapper[4820]: I0203 12:59:38.737229 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/f6a9118b-1d0e-4baf-92ca-c4024a45dd2e-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"f6a9118b-1d0e-4baf-92ca-c4024a45dd2e\") " pod="openstack/prometheus-metric-storage-0" Feb 03 12:59:38 crc kubenswrapper[4820]: I0203 12:59:38.737266 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/f6a9118b-1d0e-4baf-92ca-c4024a45dd2e-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"f6a9118b-1d0e-4baf-92ca-c4024a45dd2e\") " pod="openstack/prometheus-metric-storage-0" Feb 03 12:59:38 crc kubenswrapper[4820]: I0203 12:59:38.737312 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/f6a9118b-1d0e-4baf-92ca-c4024a45dd2e-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"f6a9118b-1d0e-4baf-92ca-c4024a45dd2e\") " pod="openstack/prometheus-metric-storage-0" Feb 03 12:59:38 crc kubenswrapper[4820]: I0203 12:59:38.737664 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-9d241bdd-b4a8-44a7-af98-0d864047887a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9d241bdd-b4a8-44a7-af98-0d864047887a\") pod \"prometheus-metric-storage-0\" (UID: \"f6a9118b-1d0e-4baf-92ca-c4024a45dd2e\") " pod="openstack/prometheus-metric-storage-0" Feb 03 12:59:38 crc kubenswrapper[4820]: I0203 12:59:38.737970 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fvzqw\" (UniqueName: \"kubernetes.io/projected/f6a9118b-1d0e-4baf-92ca-c4024a45dd2e-kube-api-access-fvzqw\") pod \"prometheus-metric-storage-0\" (UID: \"f6a9118b-1d0e-4baf-92ca-c4024a45dd2e\") " pod="openstack/prometheus-metric-storage-0" Feb 03 12:59:38 crc kubenswrapper[4820]: I0203 12:59:38.738158 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/f6a9118b-1d0e-4baf-92ca-c4024a45dd2e-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"f6a9118b-1d0e-4baf-92ca-c4024a45dd2e\") " pod="openstack/prometheus-metric-storage-0" Feb 03 12:59:38 crc kubenswrapper[4820]: I0203 12:59:38.738332 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/f6a9118b-1d0e-4baf-92ca-c4024a45dd2e-config\") pod \"prometheus-metric-storage-0\" (UID: \"f6a9118b-1d0e-4baf-92ca-c4024a45dd2e\") " pod="openstack/prometheus-metric-storage-0" Feb 03 12:59:38 crc kubenswrapper[4820]: I0203 12:59:38.738479 4820 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/f6a9118b-1d0e-4baf-92ca-c4024a45dd2e-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"f6a9118b-1d0e-4baf-92ca-c4024a45dd2e\") " pod="openstack/prometheus-metric-storage-0" Feb 03 12:59:38 crc kubenswrapper[4820]: I0203 12:59:38.840439 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6a9118b-1d0e-4baf-92ca-c4024a45dd2e-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"f6a9118b-1d0e-4baf-92ca-c4024a45dd2e\") " pod="openstack/prometheus-metric-storage-0" Feb 03 12:59:38 crc kubenswrapper[4820]: I0203 12:59:38.840841 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/f6a9118b-1d0e-4baf-92ca-c4024a45dd2e-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"f6a9118b-1d0e-4baf-92ca-c4024a45dd2e\") " pod="openstack/prometheus-metric-storage-0" Feb 03 12:59:38 crc kubenswrapper[4820]: I0203 12:59:38.840992 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/f6a9118b-1d0e-4baf-92ca-c4024a45dd2e-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"f6a9118b-1d0e-4baf-92ca-c4024a45dd2e\") " pod="openstack/prometheus-metric-storage-0" Feb 03 12:59:38 crc kubenswrapper[4820]: I0203 12:59:38.841137 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/f6a9118b-1d0e-4baf-92ca-c4024a45dd2e-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"f6a9118b-1d0e-4baf-92ca-c4024a45dd2e\") " pod="openstack/prometheus-metric-storage-0" Feb 03 12:59:38 crc kubenswrapper[4820]: I0203 12:59:38.841312 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/f6a9118b-1d0e-4baf-92ca-c4024a45dd2e-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"f6a9118b-1d0e-4baf-92ca-c4024a45dd2e\") " pod="openstack/prometheus-metric-storage-0" Feb 03 12:59:38 crc kubenswrapper[4820]: I0203 12:59:38.841460 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/f6a9118b-1d0e-4baf-92ca-c4024a45dd2e-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"f6a9118b-1d0e-4baf-92ca-c4024a45dd2e\") " pod="openstack/prometheus-metric-storage-0" Feb 03 12:59:38 crc kubenswrapper[4820]: I0203 12:59:38.841549 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/f6a9118b-1d0e-4baf-92ca-c4024a45dd2e-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"f6a9118b-1d0e-4baf-92ca-c4024a45dd2e\") " pod="openstack/prometheus-metric-storage-0" Feb 03 12:59:38 crc kubenswrapper[4820]: I0203 12:59:38.841637 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: 
\"kubernetes.io/configmap/f6a9118b-1d0e-4baf-92ca-c4024a45dd2e-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"f6a9118b-1d0e-4baf-92ca-c4024a45dd2e\") " pod="openstack/prometheus-metric-storage-0" Feb 03 12:59:38 crc kubenswrapper[4820]: I0203 12:59:38.841838 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-9d241bdd-b4a8-44a7-af98-0d864047887a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9d241bdd-b4a8-44a7-af98-0d864047887a\") pod \"prometheus-metric-storage-0\" (UID: \"f6a9118b-1d0e-4baf-92ca-c4024a45dd2e\") " pod="openstack/prometheus-metric-storage-0" Feb 03 12:59:38 crc kubenswrapper[4820]: I0203 12:59:38.841975 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fvzqw\" (UniqueName: \"kubernetes.io/projected/f6a9118b-1d0e-4baf-92ca-c4024a45dd2e-kube-api-access-fvzqw\") pod \"prometheus-metric-storage-0\" (UID: \"f6a9118b-1d0e-4baf-92ca-c4024a45dd2e\") " pod="openstack/prometheus-metric-storage-0" Feb 03 12:59:38 crc kubenswrapper[4820]: I0203 12:59:38.842047 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/f6a9118b-1d0e-4baf-92ca-c4024a45dd2e-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"f6a9118b-1d0e-4baf-92ca-c4024a45dd2e\") " pod="openstack/prometheus-metric-storage-0" Feb 03 12:59:38 crc kubenswrapper[4820]: I0203 12:59:38.841981 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/f6a9118b-1d0e-4baf-92ca-c4024a45dd2e-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"f6a9118b-1d0e-4baf-92ca-c4024a45dd2e\") " pod="openstack/prometheus-metric-storage-0" Feb 03 12:59:38 crc kubenswrapper[4820]: I0203 12:59:38.842187 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/f6a9118b-1d0e-4baf-92ca-c4024a45dd2e-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"f6a9118b-1d0e-4baf-92ca-c4024a45dd2e\") " pod="openstack/prometheus-metric-storage-0" Feb 03 12:59:38 crc kubenswrapper[4820]: I0203 12:59:38.842310 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/f6a9118b-1d0e-4baf-92ca-c4024a45dd2e-config\") pod \"prometheus-metric-storage-0\" (UID: \"f6a9118b-1d0e-4baf-92ca-c4024a45dd2e\") " pod="openstack/prometheus-metric-storage-0" Feb 03 12:59:38 crc kubenswrapper[4820]: I0203 12:59:38.842307 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/f6a9118b-1d0e-4baf-92ca-c4024a45dd2e-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"f6a9118b-1d0e-4baf-92ca-c4024a45dd2e\") " pod="openstack/prometheus-metric-storage-0" Feb 03 12:59:38 crc kubenswrapper[4820]: I0203 12:59:38.842638 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/f6a9118b-1d0e-4baf-92ca-c4024a45dd2e-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"f6a9118b-1d0e-4baf-92ca-c4024a45dd2e\") " 
pod="openstack/prometheus-metric-storage-0" Feb 03 12:59:38 crc kubenswrapper[4820]: I0203 12:59:38.844711 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/f6a9118b-1d0e-4baf-92ca-c4024a45dd2e-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"f6a9118b-1d0e-4baf-92ca-c4024a45dd2e\") " pod="openstack/prometheus-metric-storage-0" Feb 03 12:59:38 crc kubenswrapper[4820]: I0203 12:59:38.845000 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6a9118b-1d0e-4baf-92ca-c4024a45dd2e-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"f6a9118b-1d0e-4baf-92ca-c4024a45dd2e\") " pod="openstack/prometheus-metric-storage-0" Feb 03 12:59:38 crc kubenswrapper[4820]: I0203 12:59:38.845261 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/f6a9118b-1d0e-4baf-92ca-c4024a45dd2e-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"f6a9118b-1d0e-4baf-92ca-c4024a45dd2e\") " pod="openstack/prometheus-metric-storage-0" Feb 03 12:59:38 crc kubenswrapper[4820]: I0203 12:59:38.845653 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/f6a9118b-1d0e-4baf-92ca-c4024a45dd2e-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"f6a9118b-1d0e-4baf-92ca-c4024a45dd2e\") " pod="openstack/prometheus-metric-storage-0" Feb 03 12:59:38 crc kubenswrapper[4820]: I0203 12:59:38.845931 4820 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 03 12:59:38 crc kubenswrapper[4820]: I0203 12:59:38.845966 4820 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-9d241bdd-b4a8-44a7-af98-0d864047887a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9d241bdd-b4a8-44a7-af98-0d864047887a\") pod \"prometheus-metric-storage-0\" (UID: \"f6a9118b-1d0e-4baf-92ca-c4024a45dd2e\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/f3f5a1a6665956e69b060824525b6e14f682a7b73f5e11dfb7e9e70ac872e663/globalmount\"" pod="openstack/prometheus-metric-storage-0" Feb 03 12:59:38 crc kubenswrapper[4820]: I0203 12:59:38.846179 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/f6a9118b-1d0e-4baf-92ca-c4024a45dd2e-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"f6a9118b-1d0e-4baf-92ca-c4024a45dd2e\") " pod="openstack/prometheus-metric-storage-0" Feb 03 12:59:38 crc kubenswrapper[4820]: I0203 12:59:38.846353 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/f6a9118b-1d0e-4baf-92ca-c4024a45dd2e-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"f6a9118b-1d0e-4baf-92ca-c4024a45dd2e\") " pod="openstack/prometheus-metric-storage-0" Feb 03 12:59:38 crc kubenswrapper[4820]: I0203 12:59:38.846950 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/f6a9118b-1d0e-4baf-92ca-c4024a45dd2e-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"f6a9118b-1d0e-4baf-92ca-c4024a45dd2e\") " pod="openstack/prometheus-metric-storage-0" Feb 03 12:59:38 crc kubenswrapper[4820]: I0203 12:59:38.847163 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/f6a9118b-1d0e-4baf-92ca-c4024a45dd2e-config\") pod \"prometheus-metric-storage-0\" (UID: \"f6a9118b-1d0e-4baf-92ca-c4024a45dd2e\") " pod="openstack/prometheus-metric-storage-0" Feb 03 12:59:38 crc kubenswrapper[4820]: I0203 12:59:38.860461 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fvzqw\" (UniqueName: \"kubernetes.io/projected/f6a9118b-1d0e-4baf-92ca-c4024a45dd2e-kube-api-access-fvzqw\") pod \"prometheus-metric-storage-0\" (UID: \"f6a9118b-1d0e-4baf-92ca-c4024a45dd2e\") " pod="openstack/prometheus-metric-storage-0" Feb 03 12:59:38 crc kubenswrapper[4820]: I0203 12:59:38.883616 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-9d241bdd-b4a8-44a7-af98-0d864047887a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9d241bdd-b4a8-44a7-af98-0d864047887a\") pod \"prometheus-metric-storage-0\" (UID: \"f6a9118b-1d0e-4baf-92ca-c4024a45dd2e\") " pod="openstack/prometheus-metric-storage-0" Feb 03 12:59:38 crc kubenswrapper[4820]: I0203 12:59:38.962154 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 03 12:59:39 crc kubenswrapper[4820]: I0203 12:59:39.172279 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0658b201-7c4e-4d71-ba2d-c2cb5dee1553" path="/var/lib/kubelet/pods/0658b201-7c4e-4d71-ba2d-c2cb5dee1553/volumes" Feb 03 12:59:39 crc kubenswrapper[4820]: I0203 12:59:39.283947 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 03 12:59:40 crc kubenswrapper[4820]: I0203 12:59:40.239097 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"f6a9118b-1d0e-4baf-92ca-c4024a45dd2e","Type":"ContainerStarted","Data":"1bd14e90db77477bc40610b99e67c9d6f415b6c9af09c28f9e2d8b331a638170"} Feb 03 12:59:49 crc kubenswrapper[4820]: I0203 12:59:49.335867 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"f6a9118b-1d0e-4baf-92ca-c4024a45dd2e","Type":"ContainerStarted","Data":"18f54609a77d529319159cc65720f4d77a94841786b10f9d4fec554786ed2c69"} Feb 03 13:00:00 crc kubenswrapper[4820]: I0203 13:00:00.162537 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29502060-4cv5q"] Feb 03 13:00:00 crc kubenswrapper[4820]: I0203 13:00:00.164339 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29502060-4cv5q" Feb 03 13:00:00 crc kubenswrapper[4820]: I0203 13:00:00.167453 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 03 13:00:00 crc kubenswrapper[4820]: I0203 13:00:00.171003 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 03 13:00:00 crc kubenswrapper[4820]: I0203 13:00:00.176140 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29502060-4cv5q"] Feb 03 13:00:00 crc kubenswrapper[4820]: I0203 13:00:00.214504 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bbkcf\" (UniqueName: \"kubernetes.io/projected/21f14082-d158-4532-810b-ac2fa83e4455-kube-api-access-bbkcf\") pod \"collect-profiles-29502060-4cv5q\" (UID: \"21f14082-d158-4532-810b-ac2fa83e4455\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29502060-4cv5q" Feb 03 13:00:00 crc kubenswrapper[4820]: I0203 13:00:00.214831 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/21f14082-d158-4532-810b-ac2fa83e4455-config-volume\") pod \"collect-profiles-29502060-4cv5q\" (UID: \"21f14082-d158-4532-810b-ac2fa83e4455\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29502060-4cv5q" Feb 03 13:00:00 crc kubenswrapper[4820]: I0203 13:00:00.214874 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/21f14082-d158-4532-810b-ac2fa83e4455-secret-volume\") pod \"collect-profiles-29502060-4cv5q\" (UID: \"21f14082-d158-4532-810b-ac2fa83e4455\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29502060-4cv5q" Feb 03 13:00:00 crc kubenswrapper[4820]: I0203 13:00:00.317939 4820 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-bbkcf\" (UniqueName: \"kubernetes.io/projected/21f14082-d158-4532-810b-ac2fa83e4455-kube-api-access-bbkcf\") pod \"collect-profiles-29502060-4cv5q\" (UID: \"21f14082-d158-4532-810b-ac2fa83e4455\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29502060-4cv5q" Feb 03 13:00:00 crc kubenswrapper[4820]: I0203 13:00:00.318137 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/21f14082-d158-4532-810b-ac2fa83e4455-config-volume\") pod \"collect-profiles-29502060-4cv5q\" (UID: \"21f14082-d158-4532-810b-ac2fa83e4455\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29502060-4cv5q" Feb 03 13:00:00 crc kubenswrapper[4820]: I0203 13:00:00.318211 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/21f14082-d158-4532-810b-ac2fa83e4455-secret-volume\") pod \"collect-profiles-29502060-4cv5q\" (UID: \"21f14082-d158-4532-810b-ac2fa83e4455\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29502060-4cv5q" Feb 03 13:00:00 crc kubenswrapper[4820]: I0203 13:00:00.320474 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/21f14082-d158-4532-810b-ac2fa83e4455-config-volume\") pod \"collect-profiles-29502060-4cv5q\" (UID: \"21f14082-d158-4532-810b-ac2fa83e4455\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29502060-4cv5q" Feb 03 13:00:00 crc kubenswrapper[4820]: I0203 13:00:00.329468 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/21f14082-d158-4532-810b-ac2fa83e4455-secret-volume\") pod \"collect-profiles-29502060-4cv5q\" (UID: \"21f14082-d158-4532-810b-ac2fa83e4455\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29502060-4cv5q" Feb 03 13:00:00 crc kubenswrapper[4820]: I0203 13:00:00.336329 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bbkcf\" (UniqueName: \"kubernetes.io/projected/21f14082-d158-4532-810b-ac2fa83e4455-kube-api-access-bbkcf\") pod \"collect-profiles-29502060-4cv5q\" (UID: \"21f14082-d158-4532-810b-ac2fa83e4455\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29502060-4cv5q" Feb 03 13:00:00 crc kubenswrapper[4820]: I0203 13:00:00.484849 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29502060-4cv5q" Feb 03 13:00:00 crc kubenswrapper[4820]: I0203 13:00:00.706306 4820 generic.go:334] "Generic (PLEG): container finished" podID="f6a9118b-1d0e-4baf-92ca-c4024a45dd2e" containerID="18f54609a77d529319159cc65720f4d77a94841786b10f9d4fec554786ed2c69" exitCode=0 Feb 03 13:00:00 crc kubenswrapper[4820]: I0203 13:00:00.706644 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"f6a9118b-1d0e-4baf-92ca-c4024a45dd2e","Type":"ContainerDied","Data":"18f54609a77d529319159cc65720f4d77a94841786b10f9d4fec554786ed2c69"} Feb 03 13:00:01 crc kubenswrapper[4820]: I0203 13:00:01.191516 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29502060-4cv5q"] Feb 03 13:00:01 crc kubenswrapper[4820]: W0203 13:00:01.202104 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod21f14082_d158_4532_810b_ac2fa83e4455.slice/crio-74b5782e4ae3290baaed016020b809b8821bb94890bb1aa056cf3032d7187341 WatchSource:0}: Error finding container 74b5782e4ae3290baaed016020b809b8821bb94890bb1aa056cf3032d7187341: Status 404 returned error can't find the container with id 74b5782e4ae3290baaed016020b809b8821bb94890bb1aa056cf3032d7187341 Feb 03 13:00:01 crc kubenswrapper[4820]: I0203 13:00:01.717203 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29502060-4cv5q" event={"ID":"21f14082-d158-4532-810b-ac2fa83e4455","Type":"ContainerStarted","Data":"fa8d40271f6afa031e706b87372fd2dec63b7292f4ec3ce299c5dd85b8f9af81"} Feb 03 13:00:01 crc kubenswrapper[4820]: I0203 13:00:01.717588 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29502060-4cv5q" event={"ID":"21f14082-d158-4532-810b-ac2fa83e4455","Type":"ContainerStarted","Data":"74b5782e4ae3290baaed016020b809b8821bb94890bb1aa056cf3032d7187341"} Feb 03 13:00:01 crc kubenswrapper[4820]: I0203 13:00:01.719833 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"f6a9118b-1d0e-4baf-92ca-c4024a45dd2e","Type":"ContainerStarted","Data":"4253ed85a365f435d968c7558b0ce0ed98f13c3abc39ad61a23ed866401cde76"} Feb 03 13:00:01 crc kubenswrapper[4820]: I0203 13:00:01.740610 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29502060-4cv5q" podStartSLOduration=1.740575956 podStartE2EDuration="1.740575956s" podCreationTimestamp="2026-02-03 13:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 13:00:01.736443806 +0000 UTC m=+3319.259519670" watchObservedRunningTime="2026-02-03 13:00:01.740575956 +0000 UTC m=+3319.263651810" Feb 03 13:00:02 crc kubenswrapper[4820]: I0203 13:00:02.730475 4820 generic.go:334] "Generic (PLEG): container finished" podID="21f14082-d158-4532-810b-ac2fa83e4455" containerID="fa8d40271f6afa031e706b87372fd2dec63b7292f4ec3ce299c5dd85b8f9af81" exitCode=0 Feb 03 13:00:02 crc kubenswrapper[4820]: I0203 13:00:02.730527 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29502060-4cv5q" 
event={"ID":"21f14082-d158-4532-810b-ac2fa83e4455","Type":"ContainerDied","Data":"fa8d40271f6afa031e706b87372fd2dec63b7292f4ec3ce299c5dd85b8f9af81"} Feb 03 13:00:04 crc kubenswrapper[4820]: I0203 13:00:04.361042 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29502060-4cv5q" Feb 03 13:00:04 crc kubenswrapper[4820]: I0203 13:00:04.526391 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/21f14082-d158-4532-810b-ac2fa83e4455-config-volume\") pod \"21f14082-d158-4532-810b-ac2fa83e4455\" (UID: \"21f14082-d158-4532-810b-ac2fa83e4455\") " Feb 03 13:00:04 crc kubenswrapper[4820]: I0203 13:00:04.526559 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bbkcf\" (UniqueName: \"kubernetes.io/projected/21f14082-d158-4532-810b-ac2fa83e4455-kube-api-access-bbkcf\") pod \"21f14082-d158-4532-810b-ac2fa83e4455\" (UID: \"21f14082-d158-4532-810b-ac2fa83e4455\") " Feb 03 13:00:04 crc kubenswrapper[4820]: I0203 13:00:04.526831 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/21f14082-d158-4532-810b-ac2fa83e4455-secret-volume\") pod \"21f14082-d158-4532-810b-ac2fa83e4455\" (UID: \"21f14082-d158-4532-810b-ac2fa83e4455\") " Feb 03 13:00:04 crc kubenswrapper[4820]: I0203 13:00:04.527065 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/21f14082-d158-4532-810b-ac2fa83e4455-config-volume" (OuterVolumeSpecName: "config-volume") pod "21f14082-d158-4532-810b-ac2fa83e4455" (UID: "21f14082-d158-4532-810b-ac2fa83e4455"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 13:00:04 crc kubenswrapper[4820]: I0203 13:00:04.527404 4820 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/21f14082-d158-4532-810b-ac2fa83e4455-config-volume\") on node \"crc\" DevicePath \"\"" Feb 03 13:00:04 crc kubenswrapper[4820]: I0203 13:00:04.532537 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21f14082-d158-4532-810b-ac2fa83e4455-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "21f14082-d158-4532-810b-ac2fa83e4455" (UID: "21f14082-d158-4532-810b-ac2fa83e4455"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:00:04 crc kubenswrapper[4820]: I0203 13:00:04.532785 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/21f14082-d158-4532-810b-ac2fa83e4455-kube-api-access-bbkcf" (OuterVolumeSpecName: "kube-api-access-bbkcf") pod "21f14082-d158-4532-810b-ac2fa83e4455" (UID: "21f14082-d158-4532-810b-ac2fa83e4455"). InnerVolumeSpecName "kube-api-access-bbkcf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:00:04 crc kubenswrapper[4820]: I0203 13:00:04.629744 4820 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/21f14082-d158-4532-810b-ac2fa83e4455-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 03 13:00:04 crc kubenswrapper[4820]: I0203 13:00:04.629798 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bbkcf\" (UniqueName: \"kubernetes.io/projected/21f14082-d158-4532-810b-ac2fa83e4455-kube-api-access-bbkcf\") on node \"crc\" DevicePath \"\"" Feb 03 13:00:04 crc kubenswrapper[4820]: I0203 13:00:04.770340 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29502060-4cv5q" event={"ID":"21f14082-d158-4532-810b-ac2fa83e4455","Type":"ContainerDied","Data":"74b5782e4ae3290baaed016020b809b8821bb94890bb1aa056cf3032d7187341"} Feb 03 13:00:04 crc kubenswrapper[4820]: I0203 13:00:04.770442 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="74b5782e4ae3290baaed016020b809b8821bb94890bb1aa056cf3032d7187341" Feb 03 13:00:04 crc kubenswrapper[4820]: I0203 13:00:04.770633 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29502060-4cv5q" Feb 03 13:00:04 crc kubenswrapper[4820]: E0203 13:00:04.940732 4820 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod21f14082_d158_4532_810b_ac2fa83e4455.slice\": RecentStats: unable to find data in memory cache]" Feb 03 13:00:05 crc kubenswrapper[4820]: I0203 13:00:05.457685 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29502015-6rqhv"] Feb 03 13:00:05 crc kubenswrapper[4820]: I0203 13:00:05.466949 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29502015-6rqhv"] Feb 03 13:00:05 crc kubenswrapper[4820]: I0203 13:00:05.781658 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"f6a9118b-1d0e-4baf-92ca-c4024a45dd2e","Type":"ContainerStarted","Data":"003c0d3c4edd5f5f2c6760d3a7099c32810f7e5bdb759cb3a86897acbc83aa6e"} Feb 03 13:00:05 crc kubenswrapper[4820]: I0203 13:00:05.781706 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"f6a9118b-1d0e-4baf-92ca-c4024a45dd2e","Type":"ContainerStarted","Data":"48e116d0f45c8829ceda23add89437ac2b460a5cf7e55d25e97f5a9461bb4242"} Feb 03 13:00:05 crc kubenswrapper[4820]: I0203 13:00:05.827563 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=27.827535496 podStartE2EDuration="27.827535496s" podCreationTimestamp="2026-02-03 12:59:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 13:00:05.818642989 +0000 UTC m=+3323.341718863" watchObservedRunningTime="2026-02-03 13:00:05.827535496 +0000 UTC m=+3323.350611370" Feb 03 13:00:07 crc kubenswrapper[4820]: I0203 13:00:07.161733 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eca46b09-00ea-4c46-b9d1-3a297633f397" path="/var/lib/kubelet/pods/eca46b09-00ea-4c46-b9d1-3a297633f397/volumes" 
Feb 03 13:00:08 crc kubenswrapper[4820]: I0203 13:00:08.962374 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Feb 03 13:00:08 crc kubenswrapper[4820]: I0203 13:00:08.962626 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Feb 03 13:00:08 crc kubenswrapper[4820]: I0203 13:00:08.970050 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Feb 03 13:00:09 crc kubenswrapper[4820]: I0203 13:00:09.819456 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Feb 03 13:00:29 crc kubenswrapper[4820]: I0203 13:00:29.992843 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"] Feb 03 13:00:29 crc kubenswrapper[4820]: E0203 13:00:29.993715 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21f14082-d158-4532-810b-ac2fa83e4455" containerName="collect-profiles" Feb 03 13:00:29 crc kubenswrapper[4820]: I0203 13:00:29.993730 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="21f14082-d158-4532-810b-ac2fa83e4455" containerName="collect-profiles" Feb 03 13:00:29 crc kubenswrapper[4820]: I0203 13:00:29.993939 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="21f14082-d158-4532-810b-ac2fa83e4455" containerName="collect-profiles" Feb 03 13:00:29 crc kubenswrapper[4820]: I0203 13:00:29.996568 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 03 13:00:30 crc kubenswrapper[4820]: I0203 13:00:30.001468 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Feb 03 13:00:30 crc kubenswrapper[4820]: I0203 13:00:30.001524 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Feb 03 13:00:30 crc kubenswrapper[4820]: I0203 13:00:30.001629 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Feb 03 13:00:30 crc kubenswrapper[4820]: I0203 13:00:30.001478 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-brtb9" Feb 03 13:00:30 crc kubenswrapper[4820]: I0203 13:00:30.013573 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Feb 03 13:00:30 crc kubenswrapper[4820]: I0203 13:00:30.130180 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2f4wq\" (UniqueName: \"kubernetes.io/projected/a52d7dcc-1107-47d1-b270-0601e9dc2b1b-kube-api-access-2f4wq\") pod \"tempest-tests-tempest\" (UID: \"a52d7dcc-1107-47d1-b270-0601e9dc2b1b\") " pod="openstack/tempest-tests-tempest" Feb 03 13:00:30 crc kubenswrapper[4820]: I0203 13:00:30.130439 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/a52d7dcc-1107-47d1-b270-0601e9dc2b1b-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"a52d7dcc-1107-47d1-b270-0601e9dc2b1b\") " pod="openstack/tempest-tests-tempest" Feb 03 13:00:30 crc kubenswrapper[4820]: I0203 13:00:30.130733 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/secret/a52d7dcc-1107-47d1-b270-0601e9dc2b1b-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"a52d7dcc-1107-47d1-b270-0601e9dc2b1b\") " pod="openstack/tempest-tests-tempest" Feb 03 13:00:30 crc kubenswrapper[4820]: I0203 13:00:30.130859 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a52d7dcc-1107-47d1-b270-0601e9dc2b1b-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"a52d7dcc-1107-47d1-b270-0601e9dc2b1b\") " pod="openstack/tempest-tests-tempest" Feb 03 13:00:30 crc kubenswrapper[4820]: I0203 13:00:30.130966 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/a52d7dcc-1107-47d1-b270-0601e9dc2b1b-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"a52d7dcc-1107-47d1-b270-0601e9dc2b1b\") " pod="openstack/tempest-tests-tempest" Feb 03 13:00:30 crc kubenswrapper[4820]: I0203 13:00:30.130997 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"tempest-tests-tempest\" (UID: \"a52d7dcc-1107-47d1-b270-0601e9dc2b1b\") " pod="openstack/tempest-tests-tempest" Feb 03 13:00:30 crc kubenswrapper[4820]: I0203 13:00:30.131016 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a52d7dcc-1107-47d1-b270-0601e9dc2b1b-config-data\") pod \"tempest-tests-tempest\" (UID: \"a52d7dcc-1107-47d1-b270-0601e9dc2b1b\") " pod="openstack/tempest-tests-tempest" Feb 03 13:00:30 crc kubenswrapper[4820]: I0203 13:00:30.131080 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/a52d7dcc-1107-47d1-b270-0601e9dc2b1b-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"a52d7dcc-1107-47d1-b270-0601e9dc2b1b\") " pod="openstack/tempest-tests-tempest" Feb 03 13:00:30 crc kubenswrapper[4820]: I0203 13:00:30.131105 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/a52d7dcc-1107-47d1-b270-0601e9dc2b1b-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"a52d7dcc-1107-47d1-b270-0601e9dc2b1b\") " pod="openstack/tempest-tests-tempest" Feb 03 13:00:30 crc kubenswrapper[4820]: I0203 13:00:30.233176 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/a52d7dcc-1107-47d1-b270-0601e9dc2b1b-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"a52d7dcc-1107-47d1-b270-0601e9dc2b1b\") " pod="openstack/tempest-tests-tempest" Feb 03 13:00:30 crc kubenswrapper[4820]: I0203 13:00:30.233335 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/a52d7dcc-1107-47d1-b270-0601e9dc2b1b-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"a52d7dcc-1107-47d1-b270-0601e9dc2b1b\") " pod="openstack/tempest-tests-tempest" Feb 03 13:00:30 crc kubenswrapper[4820]: I0203 13:00:30.233406 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: 
\"kubernetes.io/secret/a52d7dcc-1107-47d1-b270-0601e9dc2b1b-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"a52d7dcc-1107-47d1-b270-0601e9dc2b1b\") " pod="openstack/tempest-tests-tempest" Feb 03 13:00:30 crc kubenswrapper[4820]: I0203 13:00:30.234071 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/a52d7dcc-1107-47d1-b270-0601e9dc2b1b-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"a52d7dcc-1107-47d1-b270-0601e9dc2b1b\") " pod="openstack/tempest-tests-tempest" Feb 03 13:00:30 crc kubenswrapper[4820]: I0203 13:00:30.234175 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"tempest-tests-tempest\" (UID: \"a52d7dcc-1107-47d1-b270-0601e9dc2b1b\") " pod="openstack/tempest-tests-tempest" Feb 03 13:00:30 crc kubenswrapper[4820]: I0203 13:00:30.234227 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a52d7dcc-1107-47d1-b270-0601e9dc2b1b-config-data\") pod \"tempest-tests-tempest\" (UID: \"a52d7dcc-1107-47d1-b270-0601e9dc2b1b\") " pod="openstack/tempest-tests-tempest" Feb 03 13:00:30 crc kubenswrapper[4820]: I0203 13:00:30.234364 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/a52d7dcc-1107-47d1-b270-0601e9dc2b1b-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"a52d7dcc-1107-47d1-b270-0601e9dc2b1b\") " pod="openstack/tempest-tests-tempest" Feb 03 13:00:30 crc kubenswrapper[4820]: I0203 13:00:30.234372 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/a52d7dcc-1107-47d1-b270-0601e9dc2b1b-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"a52d7dcc-1107-47d1-b270-0601e9dc2b1b\") " pod="openstack/tempest-tests-tempest" Feb 03 13:00:30 crc kubenswrapper[4820]: I0203 13:00:30.234420 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/a52d7dcc-1107-47d1-b270-0601e9dc2b1b-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"a52d7dcc-1107-47d1-b270-0601e9dc2b1b\") " pod="openstack/tempest-tests-tempest" Feb 03 13:00:30 crc kubenswrapper[4820]: I0203 13:00:30.234513 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2f4wq\" (UniqueName: \"kubernetes.io/projected/a52d7dcc-1107-47d1-b270-0601e9dc2b1b-kube-api-access-2f4wq\") pod \"tempest-tests-tempest\" (UID: \"a52d7dcc-1107-47d1-b270-0601e9dc2b1b\") " pod="openstack/tempest-tests-tempest" Feb 03 13:00:30 crc kubenswrapper[4820]: I0203 13:00:30.238069 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/a52d7dcc-1107-47d1-b270-0601e9dc2b1b-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"a52d7dcc-1107-47d1-b270-0601e9dc2b1b\") " pod="openstack/tempest-tests-tempest" Feb 03 13:00:30 crc kubenswrapper[4820]: I0203 13:00:30.238072 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: 
\"kubernetes.io/empty-dir/a52d7dcc-1107-47d1-b270-0601e9dc2b1b-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"a52d7dcc-1107-47d1-b270-0601e9dc2b1b\") " pod="openstack/tempest-tests-tempest" Feb 03 13:00:30 crc kubenswrapper[4820]: I0203 13:00:30.238684 4820 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"tempest-tests-tempest\" (UID: \"a52d7dcc-1107-47d1-b270-0601e9dc2b1b\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/tempest-tests-tempest" Feb 03 13:00:30 crc kubenswrapper[4820]: I0203 13:00:30.239827 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/a52d7dcc-1107-47d1-b270-0601e9dc2b1b-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"a52d7dcc-1107-47d1-b270-0601e9dc2b1b\") " pod="openstack/tempest-tests-tempest" Feb 03 13:00:30 crc kubenswrapper[4820]: I0203 13:00:30.240257 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a52d7dcc-1107-47d1-b270-0601e9dc2b1b-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"a52d7dcc-1107-47d1-b270-0601e9dc2b1b\") " pod="openstack/tempest-tests-tempest" Feb 03 13:00:30 crc kubenswrapper[4820]: I0203 13:00:30.240601 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/a52d7dcc-1107-47d1-b270-0601e9dc2b1b-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"a52d7dcc-1107-47d1-b270-0601e9dc2b1b\") " pod="openstack/tempest-tests-tempest" Feb 03 13:00:30 crc kubenswrapper[4820]: I0203 13:00:30.262331 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a52d7dcc-1107-47d1-b270-0601e9dc2b1b-config-data\") pod \"tempest-tests-tempest\" (UID: \"a52d7dcc-1107-47d1-b270-0601e9dc2b1b\") " pod="openstack/tempest-tests-tempest" Feb 03 13:00:30 crc kubenswrapper[4820]: I0203 13:00:30.270282 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2f4wq\" (UniqueName: \"kubernetes.io/projected/a52d7dcc-1107-47d1-b270-0601e9dc2b1b-kube-api-access-2f4wq\") pod \"tempest-tests-tempest\" (UID: \"a52d7dcc-1107-47d1-b270-0601e9dc2b1b\") " pod="openstack/tempest-tests-tempest" Feb 03 13:00:30 crc kubenswrapper[4820]: I0203 13:00:30.297111 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"tempest-tests-tempest\" (UID: \"a52d7dcc-1107-47d1-b270-0601e9dc2b1b\") " pod="openstack/tempest-tests-tempest" Feb 03 13:00:30 crc kubenswrapper[4820]: I0203 13:00:30.320384 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 03 13:00:30 crc kubenswrapper[4820]: I0203 13:00:30.817175 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Feb 03 13:00:30 crc kubenswrapper[4820]: W0203 13:00:30.821807 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda52d7dcc_1107_47d1_b270_0601e9dc2b1b.slice/crio-d44e57f0d0a177d690dceb39812d340ad88e959cc7a762ac253bb5230e006d7a WatchSource:0}: Error finding container d44e57f0d0a177d690dceb39812d340ad88e959cc7a762ac253bb5230e006d7a: Status 404 returned error can't find the container with id d44e57f0d0a177d690dceb39812d340ad88e959cc7a762ac253bb5230e006d7a Feb 03 13:00:30 crc kubenswrapper[4820]: I0203 13:00:30.826147 4820 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 03 13:00:31 crc kubenswrapper[4820]: I0203 13:00:31.020304 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"a52d7dcc-1107-47d1-b270-0601e9dc2b1b","Type":"ContainerStarted","Data":"d44e57f0d0a177d690dceb39812d340ad88e959cc7a762ac253bb5230e006d7a"} Feb 03 13:00:31 crc kubenswrapper[4820]: I0203 13:00:31.143444 4820 scope.go:117] "RemoveContainer" containerID="3864b7c76f9063a1034a8c3b5ddcd22fa6251d83edf96069fdde52222f0ee0d2" Feb 03 13:00:44 crc kubenswrapper[4820]: E0203 13:00:44.795350 4820 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.50:5001/podified-epoxy-centos9/openstack-tempest-all:watcher_latest" Feb 03 13:00:44 crc kubenswrapper[4820]: E0203 13:00:44.796050 4820 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.50:5001/podified-epoxy-centos9/openstack-tempest-all:watcher_latest" Feb 03 13:00:44 crc kubenswrapper[4820]: E0203 13:00:44.796423 4820 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:tempest-tests-tempest-tests-runner,Image:38.102.83.50:5001/podified-epoxy-centos9/openstack-tempest-all:watcher_latest,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/test_operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tempest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tempest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/tempest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/var/lib/tempest/id_ecdsa,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2f4wq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42480,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42480,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-custom-data-s0,},Optional:nil,},SecretRef:nil,},EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-env-vars-s0,},Optional:nil,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod tempest-tests-tempest_openstack(a52d7dcc-1107-47d1-b270-0601e9dc2b1b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 03 13:00:44 crc kubenswrapper[4820]: E0203 13:00:44.797544 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/tempest-tests-tempest" 
podUID="a52d7dcc-1107-47d1-b270-0601e9dc2b1b" Feb 03 13:00:45 crc kubenswrapper[4820]: E0203 13:00:45.506038 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.50:5001/podified-epoxy-centos9/openstack-tempest-all:watcher_latest\\\"\"" pod="openstack/tempest-tests-tempest" podUID="a52d7dcc-1107-47d1-b270-0601e9dc2b1b" Feb 03 13:00:59 crc kubenswrapper[4820]: I0203 13:00:59.725029 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Feb 03 13:01:00 crc kubenswrapper[4820]: I0203 13:01:00.161399 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29502061-76zjl"] Feb 03 13:01:00 crc kubenswrapper[4820]: I0203 13:01:00.162712 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29502061-76zjl" Feb 03 13:01:00 crc kubenswrapper[4820]: I0203 13:01:00.177537 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29502061-76zjl"] Feb 03 13:01:00 crc kubenswrapper[4820]: I0203 13:01:00.317361 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/fe4eea03-b3c4-427a-acc9-7b73142f1723-fernet-keys\") pod \"keystone-cron-29502061-76zjl\" (UID: \"fe4eea03-b3c4-427a-acc9-7b73142f1723\") " pod="openstack/keystone-cron-29502061-76zjl" Feb 03 13:01:00 crc kubenswrapper[4820]: I0203 13:01:00.317697 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe4eea03-b3c4-427a-acc9-7b73142f1723-combined-ca-bundle\") pod \"keystone-cron-29502061-76zjl\" (UID: \"fe4eea03-b3c4-427a-acc9-7b73142f1723\") " pod="openstack/keystone-cron-29502061-76zjl" Feb 03 13:01:00 crc kubenswrapper[4820]: I0203 13:01:00.318080 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jpz65\" (UniqueName: \"kubernetes.io/projected/fe4eea03-b3c4-427a-acc9-7b73142f1723-kube-api-access-jpz65\") pod \"keystone-cron-29502061-76zjl\" (UID: \"fe4eea03-b3c4-427a-acc9-7b73142f1723\") " pod="openstack/keystone-cron-29502061-76zjl" Feb 03 13:01:00 crc kubenswrapper[4820]: I0203 13:01:00.318573 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe4eea03-b3c4-427a-acc9-7b73142f1723-config-data\") pod \"keystone-cron-29502061-76zjl\" (UID: \"fe4eea03-b3c4-427a-acc9-7b73142f1723\") " pod="openstack/keystone-cron-29502061-76zjl" Feb 03 13:01:00 crc kubenswrapper[4820]: I0203 13:01:00.421619 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/fe4eea03-b3c4-427a-acc9-7b73142f1723-fernet-keys\") pod \"keystone-cron-29502061-76zjl\" (UID: \"fe4eea03-b3c4-427a-acc9-7b73142f1723\") " pod="openstack/keystone-cron-29502061-76zjl" Feb 03 13:01:00 crc kubenswrapper[4820]: I0203 13:01:00.421664 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe4eea03-b3c4-427a-acc9-7b73142f1723-combined-ca-bundle\") pod \"keystone-cron-29502061-76zjl\" (UID: \"fe4eea03-b3c4-427a-acc9-7b73142f1723\") " 
pod="openstack/keystone-cron-29502061-76zjl" Feb 03 13:01:00 crc kubenswrapper[4820]: I0203 13:01:00.422464 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jpz65\" (UniqueName: \"kubernetes.io/projected/fe4eea03-b3c4-427a-acc9-7b73142f1723-kube-api-access-jpz65\") pod \"keystone-cron-29502061-76zjl\" (UID: \"fe4eea03-b3c4-427a-acc9-7b73142f1723\") " pod="openstack/keystone-cron-29502061-76zjl" Feb 03 13:01:00 crc kubenswrapper[4820]: I0203 13:01:00.422565 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe4eea03-b3c4-427a-acc9-7b73142f1723-config-data\") pod \"keystone-cron-29502061-76zjl\" (UID: \"fe4eea03-b3c4-427a-acc9-7b73142f1723\") " pod="openstack/keystone-cron-29502061-76zjl" Feb 03 13:01:00 crc kubenswrapper[4820]: I0203 13:01:00.431014 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe4eea03-b3c4-427a-acc9-7b73142f1723-combined-ca-bundle\") pod \"keystone-cron-29502061-76zjl\" (UID: \"fe4eea03-b3c4-427a-acc9-7b73142f1723\") " pod="openstack/keystone-cron-29502061-76zjl" Feb 03 13:01:00 crc kubenswrapper[4820]: I0203 13:01:00.431122 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe4eea03-b3c4-427a-acc9-7b73142f1723-config-data\") pod \"keystone-cron-29502061-76zjl\" (UID: \"fe4eea03-b3c4-427a-acc9-7b73142f1723\") " pod="openstack/keystone-cron-29502061-76zjl" Feb 03 13:01:00 crc kubenswrapper[4820]: I0203 13:01:00.432631 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/fe4eea03-b3c4-427a-acc9-7b73142f1723-fernet-keys\") pod \"keystone-cron-29502061-76zjl\" (UID: \"fe4eea03-b3c4-427a-acc9-7b73142f1723\") " pod="openstack/keystone-cron-29502061-76zjl" Feb 03 13:01:00 crc kubenswrapper[4820]: I0203 13:01:00.453822 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jpz65\" (UniqueName: \"kubernetes.io/projected/fe4eea03-b3c4-427a-acc9-7b73142f1723-kube-api-access-jpz65\") pod \"keystone-cron-29502061-76zjl\" (UID: \"fe4eea03-b3c4-427a-acc9-7b73142f1723\") " pod="openstack/keystone-cron-29502061-76zjl" Feb 03 13:01:00 crc kubenswrapper[4820]: I0203 13:01:00.487809 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29502061-76zjl" Feb 03 13:01:00 crc kubenswrapper[4820]: W0203 13:01:00.964421 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfe4eea03_b3c4_427a_acc9_7b73142f1723.slice/crio-10706ffdc6c22381838f9607e66b83f1c33e235ca7ff89c44edecc6e00c5f882 WatchSource:0}: Error finding container 10706ffdc6c22381838f9607e66b83f1c33e235ca7ff89c44edecc6e00c5f882: Status 404 returned error can't find the container with id 10706ffdc6c22381838f9607e66b83f1c33e235ca7ff89c44edecc6e00c5f882 Feb 03 13:01:00 crc kubenswrapper[4820]: I0203 13:01:00.964442 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29502061-76zjl"] Feb 03 13:01:00 crc kubenswrapper[4820]: I0203 13:01:00.991481 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29502061-76zjl" event={"ID":"fe4eea03-b3c4-427a-acc9-7b73142f1723","Type":"ContainerStarted","Data":"10706ffdc6c22381838f9607e66b83f1c33e235ca7ff89c44edecc6e00c5f882"} Feb 03 13:01:01 crc kubenswrapper[4820]: I0203 13:01:01.001139 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"a52d7dcc-1107-47d1-b270-0601e9dc2b1b","Type":"ContainerStarted","Data":"06c43d26a46f211d8df4b5f1113886b401332ce6aa4cc388dd3f4ae0154ab738"} Feb 03 13:01:01 crc kubenswrapper[4820]: I0203 13:01:01.038935 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=4.143093464 podStartE2EDuration="33.038908949s" podCreationTimestamp="2026-02-03 13:00:28 +0000 UTC" firstStartedPulling="2026-02-03 13:00:30.825769292 +0000 UTC m=+3348.348845156" lastFinishedPulling="2026-02-03 13:00:59.721584767 +0000 UTC m=+3377.244660641" observedRunningTime="2026-02-03 13:01:01.027855724 +0000 UTC m=+3378.550931598" watchObservedRunningTime="2026-02-03 13:01:01.038908949 +0000 UTC m=+3378.561984813" Feb 03 13:01:02 crc kubenswrapper[4820]: I0203 13:01:02.016011 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29502061-76zjl" event={"ID":"fe4eea03-b3c4-427a-acc9-7b73142f1723","Type":"ContainerStarted","Data":"0d7a82d9d52d05cfb6b41515b9aa8a4e2d5e8f32134d7ddb0410a046316dff7a"} Feb 03 13:01:02 crc kubenswrapper[4820]: I0203 13:01:02.041630 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29502061-76zjl" podStartSLOduration=2.041606831 podStartE2EDuration="2.041606831s" podCreationTimestamp="2026-02-03 13:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 13:01:02.037282605 +0000 UTC m=+3379.560358479" watchObservedRunningTime="2026-02-03 13:01:02.041606831 +0000 UTC m=+3379.564682695" Feb 03 13:01:05 crc kubenswrapper[4820]: I0203 13:01:05.123790 4820 generic.go:334] "Generic (PLEG): container finished" podID="fe4eea03-b3c4-427a-acc9-7b73142f1723" containerID="0d7a82d9d52d05cfb6b41515b9aa8a4e2d5e8f32134d7ddb0410a046316dff7a" exitCode=0 Feb 03 13:01:05 crc kubenswrapper[4820]: I0203 13:01:05.124143 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29502061-76zjl" event={"ID":"fe4eea03-b3c4-427a-acc9-7b73142f1723","Type":"ContainerDied","Data":"0d7a82d9d52d05cfb6b41515b9aa8a4e2d5e8f32134d7ddb0410a046316dff7a"} Feb 03 13:01:06 crc kubenswrapper[4820]: I0203 13:01:06.844575 4820 util.go:48] "No 
ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29502061-76zjl" Feb 03 13:01:07 crc kubenswrapper[4820]: I0203 13:01:07.014510 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jpz65\" (UniqueName: \"kubernetes.io/projected/fe4eea03-b3c4-427a-acc9-7b73142f1723-kube-api-access-jpz65\") pod \"fe4eea03-b3c4-427a-acc9-7b73142f1723\" (UID: \"fe4eea03-b3c4-427a-acc9-7b73142f1723\") " Feb 03 13:01:07 crc kubenswrapper[4820]: I0203 13:01:07.014662 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/fe4eea03-b3c4-427a-acc9-7b73142f1723-fernet-keys\") pod \"fe4eea03-b3c4-427a-acc9-7b73142f1723\" (UID: \"fe4eea03-b3c4-427a-acc9-7b73142f1723\") " Feb 03 13:01:07 crc kubenswrapper[4820]: I0203 13:01:07.014807 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe4eea03-b3c4-427a-acc9-7b73142f1723-combined-ca-bundle\") pod \"fe4eea03-b3c4-427a-acc9-7b73142f1723\" (UID: \"fe4eea03-b3c4-427a-acc9-7b73142f1723\") " Feb 03 13:01:07 crc kubenswrapper[4820]: I0203 13:01:07.015060 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe4eea03-b3c4-427a-acc9-7b73142f1723-config-data\") pod \"fe4eea03-b3c4-427a-acc9-7b73142f1723\" (UID: \"fe4eea03-b3c4-427a-acc9-7b73142f1723\") " Feb 03 13:01:07 crc kubenswrapper[4820]: I0203 13:01:07.025721 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe4eea03-b3c4-427a-acc9-7b73142f1723-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "fe4eea03-b3c4-427a-acc9-7b73142f1723" (UID: "fe4eea03-b3c4-427a-acc9-7b73142f1723"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:01:07 crc kubenswrapper[4820]: I0203 13:01:07.025845 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe4eea03-b3c4-427a-acc9-7b73142f1723-kube-api-access-jpz65" (OuterVolumeSpecName: "kube-api-access-jpz65") pod "fe4eea03-b3c4-427a-acc9-7b73142f1723" (UID: "fe4eea03-b3c4-427a-acc9-7b73142f1723"). InnerVolumeSpecName "kube-api-access-jpz65". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:01:07 crc kubenswrapper[4820]: I0203 13:01:07.053450 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe4eea03-b3c4-427a-acc9-7b73142f1723-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fe4eea03-b3c4-427a-acc9-7b73142f1723" (UID: "fe4eea03-b3c4-427a-acc9-7b73142f1723"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:01:07 crc kubenswrapper[4820]: I0203 13:01:07.083831 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe4eea03-b3c4-427a-acc9-7b73142f1723-config-data" (OuterVolumeSpecName: "config-data") pod "fe4eea03-b3c4-427a-acc9-7b73142f1723" (UID: "fe4eea03-b3c4-427a-acc9-7b73142f1723"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:01:07 crc kubenswrapper[4820]: I0203 13:01:07.117515 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe4eea03-b3c4-427a-acc9-7b73142f1723-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 13:01:07 crc kubenswrapper[4820]: I0203 13:01:07.117565 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jpz65\" (UniqueName: \"kubernetes.io/projected/fe4eea03-b3c4-427a-acc9-7b73142f1723-kube-api-access-jpz65\") on node \"crc\" DevicePath \"\"" Feb 03 13:01:07 crc kubenswrapper[4820]: I0203 13:01:07.117579 4820 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/fe4eea03-b3c4-427a-acc9-7b73142f1723-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 03 13:01:07 crc kubenswrapper[4820]: I0203 13:01:07.117588 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe4eea03-b3c4-427a-acc9-7b73142f1723-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 13:01:07 crc kubenswrapper[4820]: I0203 13:01:07.153365 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29502061-76zjl" Feb 03 13:01:07 crc kubenswrapper[4820]: I0203 13:01:07.157835 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29502061-76zjl" event={"ID":"fe4eea03-b3c4-427a-acc9-7b73142f1723","Type":"ContainerDied","Data":"10706ffdc6c22381838f9607e66b83f1c33e235ca7ff89c44edecc6e00c5f882"} Feb 03 13:01:07 crc kubenswrapper[4820]: I0203 13:01:07.157878 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="10706ffdc6c22381838f9607e66b83f1c33e235ca7ff89c44edecc6e00c5f882" Feb 03 13:02:01 crc kubenswrapper[4820]: I0203 13:02:01.538635 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qj7xr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 13:02:01 crc kubenswrapper[4820]: I0203 13:02:01.539210 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 13:02:31 crc kubenswrapper[4820]: I0203 13:02:31.365521 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qj7xr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 13:02:31 crc kubenswrapper[4820]: I0203 13:02:31.366471 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 13:03:01 crc kubenswrapper[4820]: I0203 13:03:01.365435 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qj7xr container/machine-config-daemon namespace/openshift-machine-config-operator: 
Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 13:03:01 crc kubenswrapper[4820]: I0203 13:03:01.366314 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 13:03:01 crc kubenswrapper[4820]: I0203 13:03:01.366499 4820 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" Feb 03 13:03:01 crc kubenswrapper[4820]: I0203 13:03:01.367879 4820 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6360892bd392241289886290481623c9bd92bace474c07410253fed83ab05298"} pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 03 13:03:01 crc kubenswrapper[4820]: I0203 13:03:01.367981 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" containerName="machine-config-daemon" containerID="cri-o://6360892bd392241289886290481623c9bd92bace474c07410253fed83ab05298" gracePeriod=600 Feb 03 13:03:01 crc kubenswrapper[4820]: I0203 13:03:01.503332 4820 generic.go:334] "Generic (PLEG): container finished" podID="2c02def6-29f2-448e-80ec-0c8ee290f053" containerID="6360892bd392241289886290481623c9bd92bace474c07410253fed83ab05298" exitCode=0 Feb 03 13:03:01 crc kubenswrapper[4820]: I0203 13:03:01.503376 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" event={"ID":"2c02def6-29f2-448e-80ec-0c8ee290f053","Type":"ContainerDied","Data":"6360892bd392241289886290481623c9bd92bace474c07410253fed83ab05298"} Feb 03 13:03:01 crc kubenswrapper[4820]: I0203 13:03:01.503420 4820 scope.go:117] "RemoveContainer" containerID="108356beea18d8aae66e65b6c7634daebf0864c97b64061528133a985322eb38" Feb 03 13:03:02 crc kubenswrapper[4820]: I0203 13:03:02.515615 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" event={"ID":"2c02def6-29f2-448e-80ec-0c8ee290f053","Type":"ContainerStarted","Data":"bc515a23be524fcd005e7330d6085c1912ede4a0e028795898e112821e86b187"} Feb 03 13:03:05 crc kubenswrapper[4820]: I0203 13:03:05.876372 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-zcjjt"] Feb 03 13:03:05 crc kubenswrapper[4820]: E0203 13:03:05.877453 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe4eea03-b3c4-427a-acc9-7b73142f1723" containerName="keystone-cron" Feb 03 13:03:05 crc kubenswrapper[4820]: I0203 13:03:05.877492 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe4eea03-b3c4-427a-acc9-7b73142f1723" containerName="keystone-cron" Feb 03 13:03:05 crc kubenswrapper[4820]: I0203 13:03:05.877798 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe4eea03-b3c4-427a-acc9-7b73142f1723" containerName="keystone-cron" Feb 03 13:03:05 crc kubenswrapper[4820]: I0203 13:03:05.879786 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-zcjjt" Feb 03 13:03:05 crc kubenswrapper[4820]: I0203 13:03:05.899530 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zcjjt"] Feb 03 13:03:06 crc kubenswrapper[4820]: I0203 13:03:06.084361 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e339651-119f-4c82-84aa-d6a981283fcb-utilities\") pod \"redhat-operators-zcjjt\" (UID: \"4e339651-119f-4c82-84aa-d6a981283fcb\") " pod="openshift-marketplace/redhat-operators-zcjjt" Feb 03 13:03:06 crc kubenswrapper[4820]: I0203 13:03:06.084684 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s7ddp\" (UniqueName: \"kubernetes.io/projected/4e339651-119f-4c82-84aa-d6a981283fcb-kube-api-access-s7ddp\") pod \"redhat-operators-zcjjt\" (UID: \"4e339651-119f-4c82-84aa-d6a981283fcb\") " pod="openshift-marketplace/redhat-operators-zcjjt" Feb 03 13:03:06 crc kubenswrapper[4820]: I0203 13:03:06.084808 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e339651-119f-4c82-84aa-d6a981283fcb-catalog-content\") pod \"redhat-operators-zcjjt\" (UID: \"4e339651-119f-4c82-84aa-d6a981283fcb\") " pod="openshift-marketplace/redhat-operators-zcjjt" Feb 03 13:03:06 crc kubenswrapper[4820]: I0203 13:03:06.186401 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s7ddp\" (UniqueName: \"kubernetes.io/projected/4e339651-119f-4c82-84aa-d6a981283fcb-kube-api-access-s7ddp\") pod \"redhat-operators-zcjjt\" (UID: \"4e339651-119f-4c82-84aa-d6a981283fcb\") " pod="openshift-marketplace/redhat-operators-zcjjt" Feb 03 13:03:06 crc kubenswrapper[4820]: I0203 13:03:06.186552 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e339651-119f-4c82-84aa-d6a981283fcb-catalog-content\") pod \"redhat-operators-zcjjt\" (UID: \"4e339651-119f-4c82-84aa-d6a981283fcb\") " pod="openshift-marketplace/redhat-operators-zcjjt" Feb 03 13:03:06 crc kubenswrapper[4820]: I0203 13:03:06.186746 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e339651-119f-4c82-84aa-d6a981283fcb-utilities\") pod \"redhat-operators-zcjjt\" (UID: \"4e339651-119f-4c82-84aa-d6a981283fcb\") " pod="openshift-marketplace/redhat-operators-zcjjt" Feb 03 13:03:06 crc kubenswrapper[4820]: I0203 13:03:06.187366 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e339651-119f-4c82-84aa-d6a981283fcb-utilities\") pod \"redhat-operators-zcjjt\" (UID: \"4e339651-119f-4c82-84aa-d6a981283fcb\") " pod="openshift-marketplace/redhat-operators-zcjjt" Feb 03 13:03:06 crc kubenswrapper[4820]: I0203 13:03:06.187735 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e339651-119f-4c82-84aa-d6a981283fcb-catalog-content\") pod \"redhat-operators-zcjjt\" (UID: \"4e339651-119f-4c82-84aa-d6a981283fcb\") " pod="openshift-marketplace/redhat-operators-zcjjt" Feb 03 13:03:06 crc kubenswrapper[4820]: I0203 13:03:06.207454 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-s7ddp\" (UniqueName: \"kubernetes.io/projected/4e339651-119f-4c82-84aa-d6a981283fcb-kube-api-access-s7ddp\") pod \"redhat-operators-zcjjt\" (UID: \"4e339651-119f-4c82-84aa-d6a981283fcb\") " pod="openshift-marketplace/redhat-operators-zcjjt" Feb 03 13:03:06 crc kubenswrapper[4820]: I0203 13:03:06.395316 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zcjjt" Feb 03 13:03:06 crc kubenswrapper[4820]: I0203 13:03:06.938438 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zcjjt"] Feb 03 13:03:07 crc kubenswrapper[4820]: I0203 13:03:07.583844 4820 generic.go:334] "Generic (PLEG): container finished" podID="4e339651-119f-4c82-84aa-d6a981283fcb" containerID="5b3c50c0fd07949195efc42e1fb3421bdba2aeddef21f0228553736636869594" exitCode=0 Feb 03 13:03:07 crc kubenswrapper[4820]: I0203 13:03:07.583947 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zcjjt" event={"ID":"4e339651-119f-4c82-84aa-d6a981283fcb","Type":"ContainerDied","Data":"5b3c50c0fd07949195efc42e1fb3421bdba2aeddef21f0228553736636869594"} Feb 03 13:03:07 crc kubenswrapper[4820]: I0203 13:03:07.584220 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zcjjt" event={"ID":"4e339651-119f-4c82-84aa-d6a981283fcb","Type":"ContainerStarted","Data":"35211db29b046eb17135373ce8723c1c76f6d15ae8e0f261e456126cceb8109f"} Feb 03 13:03:08 crc kubenswrapper[4820]: I0203 13:03:08.597299 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zcjjt" event={"ID":"4e339651-119f-4c82-84aa-d6a981283fcb","Type":"ContainerStarted","Data":"a7fc2460e64cd03c651045acb478be8d099e80e6519071ece991562032acb91f"} Feb 03 13:03:11 crc kubenswrapper[4820]: I0203 13:03:11.628291 4820 generic.go:334] "Generic (PLEG): container finished" podID="4e339651-119f-4c82-84aa-d6a981283fcb" containerID="a7fc2460e64cd03c651045acb478be8d099e80e6519071ece991562032acb91f" exitCode=0 Feb 03 13:03:11 crc kubenswrapper[4820]: I0203 13:03:11.628363 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zcjjt" event={"ID":"4e339651-119f-4c82-84aa-d6a981283fcb","Type":"ContainerDied","Data":"a7fc2460e64cd03c651045acb478be8d099e80e6519071ece991562032acb91f"} Feb 03 13:03:12 crc kubenswrapper[4820]: I0203 13:03:12.648477 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zcjjt" event={"ID":"4e339651-119f-4c82-84aa-d6a981283fcb","Type":"ContainerStarted","Data":"245c050c733736e05b24bf603154bff86e188f79f78e4ff6d1c9ae81dc8b52ab"} Feb 03 13:03:12 crc kubenswrapper[4820]: I0203 13:03:12.680587 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-zcjjt" podStartSLOduration=3.206496234 podStartE2EDuration="7.680551961s" podCreationTimestamp="2026-02-03 13:03:05 +0000 UTC" firstStartedPulling="2026-02-03 13:03:07.587053315 +0000 UTC m=+3505.110129179" lastFinishedPulling="2026-02-03 13:03:12.061109042 +0000 UTC m=+3509.584184906" observedRunningTime="2026-02-03 13:03:12.67006306 +0000 UTC m=+3510.193138934" watchObservedRunningTime="2026-02-03 13:03:12.680551961 +0000 UTC m=+3510.203627845" Feb 03 13:03:16 crc kubenswrapper[4820]: I0203 13:03:16.396671 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-zcjjt" Feb 03 
Feb 03 13:03:16 crc kubenswrapper[4820]: I0203 13:03:16.396671 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-zcjjt"
Feb 03 13:03:16 crc kubenswrapper[4820]: I0203 13:03:16.397550 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-zcjjt"
Feb 03 13:03:17 crc kubenswrapper[4820]: I0203 13:03:17.450189 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-zcjjt" podUID="4e339651-119f-4c82-84aa-d6a981283fcb" containerName="registry-server" probeResult="failure" output=<
Feb 03 13:03:17 crc kubenswrapper[4820]: timeout: failed to connect service ":50051" within 1s
Feb 03 13:03:17 crc kubenswrapper[4820]: >
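The startup-probe failure above is expected while the registry-server's catalog is still loading: the probe output has the shape of a grpc-health-probe run against a gRPC endpoint on port 50051 that could not connect within its 1s timeout, and ten seconds later the same probe flips to "started". A sketch of an equivalent check in Go, assuming a standard grpc_health_v1 health service on localhost:50051 (both the exec'd probe binary and the health service are inferred from the output, not confirmed by the log):

package main

import (
	"context"
	"fmt"
	"os"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	healthpb "google.golang.org/grpc/health/grpc_health_v1"
)

func main() {
	// Mirror the probe's behavior: fail if no connection within 1s.
	ctx, cancel := context.WithTimeout(context.Background(), time.Second)
	defer cancel()

	conn, err := grpc.DialContext(ctx, "localhost:50051",
		grpc.WithTransportCredentials(insecure.NewCredentials()),
		grpc.WithBlock())
	if err != nil {
		fmt.Fprintf(os.Stderr, "timeout: failed to connect service %q within 1s\n", ":50051")
		os.Exit(1)
	}
	defer conn.Close()

	resp, err := healthpb.NewHealthClient(conn).Check(ctx, &healthpb.HealthCheckRequest{})
	if err != nil || resp.GetStatus() != healthpb.HealthCheckResponse_SERVING {
		os.Exit(1)
	}
}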
Feb 03 13:03:26 crc kubenswrapper[4820]: I0203 13:03:26.452143 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-zcjjt"
Feb 03 13:03:26 crc kubenswrapper[4820]: I0203 13:03:26.505555 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-zcjjt"
Feb 03 13:03:26 crc kubenswrapper[4820]: I0203 13:03:26.693424 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zcjjt"]
Feb 03 13:03:27 crc kubenswrapper[4820]: I0203 13:03:27.964550 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-zcjjt" podUID="4e339651-119f-4c82-84aa-d6a981283fcb" containerName="registry-server" containerID="cri-o://245c050c733736e05b24bf603154bff86e188f79f78e4ff6d1c9ae81dc8b52ab" gracePeriod=2
Feb 03 13:03:28 crc kubenswrapper[4820]: I0203 13:03:28.545793 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zcjjt"
Feb 03 13:03:29 crc kubenswrapper[4820]: I0203 13:03:29.015419 4820 generic.go:334] "Generic (PLEG): container finished" podID="4e339651-119f-4c82-84aa-d6a981283fcb" containerID="245c050c733736e05b24bf603154bff86e188f79f78e4ff6d1c9ae81dc8b52ab" exitCode=0
Feb 03 13:03:29 crc kubenswrapper[4820]: I0203 13:03:29.015486 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zcjjt" event={"ID":"4e339651-119f-4c82-84aa-d6a981283fcb","Type":"ContainerDied","Data":"245c050c733736e05b24bf603154bff86e188f79f78e4ff6d1c9ae81dc8b52ab"}
Feb 03 13:03:29 crc kubenswrapper[4820]: I0203 13:03:29.015520 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zcjjt" event={"ID":"4e339651-119f-4c82-84aa-d6a981283fcb","Type":"ContainerDied","Data":"35211db29b046eb17135373ce8723c1c76f6d15ae8e0f261e456126cceb8109f"}
Feb 03 13:03:29 crc kubenswrapper[4820]: I0203 13:03:29.015573 4820 scope.go:117] "RemoveContainer" containerID="245c050c733736e05b24bf603154bff86e188f79f78e4ff6d1c9ae81dc8b52ab"
Feb 03 13:03:29 crc kubenswrapper[4820]: I0203 13:03:29.015841 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zcjjt"
Feb 03 13:03:29 crc kubenswrapper[4820]: I0203 13:03:29.060280 4820 scope.go:117] "RemoveContainer" containerID="a7fc2460e64cd03c651045acb478be8d099e80e6519071ece991562032acb91f"
Feb 03 13:03:29 crc kubenswrapper[4820]: I0203 13:03:29.100299 4820 scope.go:117] "RemoveContainer" containerID="5b3c50c0fd07949195efc42e1fb3421bdba2aeddef21f0228553736636869594"
Feb 03 13:03:29 crc kubenswrapper[4820]: I0203 13:03:29.105785 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e339651-119f-4c82-84aa-d6a981283fcb-utilities\") pod \"4e339651-119f-4c82-84aa-d6a981283fcb\" (UID: \"4e339651-119f-4c82-84aa-d6a981283fcb\") "
Feb 03 13:03:29 crc kubenswrapper[4820]: I0203 13:03:29.106027 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s7ddp\" (UniqueName: \"kubernetes.io/projected/4e339651-119f-4c82-84aa-d6a981283fcb-kube-api-access-s7ddp\") pod \"4e339651-119f-4c82-84aa-d6a981283fcb\" (UID: \"4e339651-119f-4c82-84aa-d6a981283fcb\") "
Feb 03 13:03:29 crc kubenswrapper[4820]: I0203 13:03:29.106123 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e339651-119f-4c82-84aa-d6a981283fcb-catalog-content\") pod \"4e339651-119f-4c82-84aa-d6a981283fcb\" (UID: \"4e339651-119f-4c82-84aa-d6a981283fcb\") "
Feb 03 13:03:29 crc kubenswrapper[4820]: I0203 13:03:29.106981 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4e339651-119f-4c82-84aa-d6a981283fcb-utilities" (OuterVolumeSpecName: "utilities") pod "4e339651-119f-4c82-84aa-d6a981283fcb" (UID: "4e339651-119f-4c82-84aa-d6a981283fcb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 03 13:03:29 crc kubenswrapper[4820]: I0203 13:03:29.119938 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e339651-119f-4c82-84aa-d6a981283fcb-kube-api-access-s7ddp" (OuterVolumeSpecName: "kube-api-access-s7ddp") pod "4e339651-119f-4c82-84aa-d6a981283fcb" (UID: "4e339651-119f-4c82-84aa-d6a981283fcb"). InnerVolumeSpecName "kube-api-access-s7ddp". PluginName "kubernetes.io/projected", VolumeGidValue ""
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:03:29 crc kubenswrapper[4820]: I0203 13:03:29.208364 4820 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e339651-119f-4c82-84aa-d6a981283fcb-utilities\") on node \"crc\" DevicePath \"\"" Feb 03 13:03:29 crc kubenswrapper[4820]: I0203 13:03:29.208421 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s7ddp\" (UniqueName: \"kubernetes.io/projected/4e339651-119f-4c82-84aa-d6a981283fcb-kube-api-access-s7ddp\") on node \"crc\" DevicePath \"\"" Feb 03 13:03:29 crc kubenswrapper[4820]: I0203 13:03:29.224776 4820 scope.go:117] "RemoveContainer" containerID="245c050c733736e05b24bf603154bff86e188f79f78e4ff6d1c9ae81dc8b52ab" Feb 03 13:03:29 crc kubenswrapper[4820]: E0203 13:03:29.228091 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"245c050c733736e05b24bf603154bff86e188f79f78e4ff6d1c9ae81dc8b52ab\": container with ID starting with 245c050c733736e05b24bf603154bff86e188f79f78e4ff6d1c9ae81dc8b52ab not found: ID does not exist" containerID="245c050c733736e05b24bf603154bff86e188f79f78e4ff6d1c9ae81dc8b52ab" Feb 03 13:03:29 crc kubenswrapper[4820]: I0203 13:03:29.228173 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"245c050c733736e05b24bf603154bff86e188f79f78e4ff6d1c9ae81dc8b52ab"} err="failed to get container status \"245c050c733736e05b24bf603154bff86e188f79f78e4ff6d1c9ae81dc8b52ab\": rpc error: code = NotFound desc = could not find container \"245c050c733736e05b24bf603154bff86e188f79f78e4ff6d1c9ae81dc8b52ab\": container with ID starting with 245c050c733736e05b24bf603154bff86e188f79f78e4ff6d1c9ae81dc8b52ab not found: ID does not exist" Feb 03 13:03:29 crc kubenswrapper[4820]: I0203 13:03:29.228215 4820 scope.go:117] "RemoveContainer" containerID="a7fc2460e64cd03c651045acb478be8d099e80e6519071ece991562032acb91f" Feb 03 13:03:29 crc kubenswrapper[4820]: E0203 13:03:29.232248 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a7fc2460e64cd03c651045acb478be8d099e80e6519071ece991562032acb91f\": container with ID starting with a7fc2460e64cd03c651045acb478be8d099e80e6519071ece991562032acb91f not found: ID does not exist" containerID="a7fc2460e64cd03c651045acb478be8d099e80e6519071ece991562032acb91f" Feb 03 13:03:29 crc kubenswrapper[4820]: I0203 13:03:29.232339 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a7fc2460e64cd03c651045acb478be8d099e80e6519071ece991562032acb91f"} err="failed to get container status \"a7fc2460e64cd03c651045acb478be8d099e80e6519071ece991562032acb91f\": rpc error: code = NotFound desc = could not find container \"a7fc2460e64cd03c651045acb478be8d099e80e6519071ece991562032acb91f\": container with ID starting with a7fc2460e64cd03c651045acb478be8d099e80e6519071ece991562032acb91f not found: ID does not exist" Feb 03 13:03:29 crc kubenswrapper[4820]: I0203 13:03:29.232398 4820 scope.go:117] "RemoveContainer" containerID="5b3c50c0fd07949195efc42e1fb3421bdba2aeddef21f0228553736636869594" Feb 03 13:03:29 crc kubenswrapper[4820]: E0203 13:03:29.233289 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5b3c50c0fd07949195efc42e1fb3421bdba2aeddef21f0228553736636869594\": container with ID starting with 
Feb 03 13:03:29 crc kubenswrapper[4820]: I0203 13:03:29.270568 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4e339651-119f-4c82-84aa-d6a981283fcb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4e339651-119f-4c82-84aa-d6a981283fcb" (UID: "4e339651-119f-4c82-84aa-d6a981283fcb"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 03 13:03:29 crc kubenswrapper[4820]: I0203 13:03:29.311162 4820 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e339651-119f-4c82-84aa-d6a981283fcb-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 03 13:03:29 crc kubenswrapper[4820]: I0203 13:03:29.351280 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zcjjt"]
Feb 03 13:03:29 crc kubenswrapper[4820]: I0203 13:03:29.366305 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-zcjjt"]
Feb 03 13:03:31 crc kubenswrapper[4820]: I0203 13:03:31.156650 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4e339651-119f-4c82-84aa-d6a981283fcb" path="/var/lib/kubelet/pods/4e339651-119f-4c82-84aa-d6a981283fcb/volumes"
Feb 03 13:04:51 crc kubenswrapper[4820]: I0203 13:04:51.891226 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-shnjm"]
Feb 03 13:04:51 crc kubenswrapper[4820]: E0203 13:04:51.892366 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e339651-119f-4c82-84aa-d6a981283fcb" containerName="registry-server"
Feb 03 13:04:51 crc kubenswrapper[4820]: I0203 13:04:51.892391 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e339651-119f-4c82-84aa-d6a981283fcb" containerName="registry-server"
Feb 03 13:04:51 crc kubenswrapper[4820]: E0203 13:04:51.892433 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e339651-119f-4c82-84aa-d6a981283fcb" containerName="extract-utilities"
Feb 03 13:04:51 crc kubenswrapper[4820]: I0203 13:04:51.892441 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e339651-119f-4c82-84aa-d6a981283fcb" containerName="extract-utilities"
Feb 03 13:04:51 crc kubenswrapper[4820]: E0203 13:04:51.892474 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e339651-119f-4c82-84aa-d6a981283fcb" containerName="extract-content"
Feb 03 13:04:51 crc kubenswrapper[4820]: I0203 13:04:51.892483 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e339651-119f-4c82-84aa-d6a981283fcb" containerName="extract-content"
Feb 03 13:04:51 crc kubenswrapper[4820]: I0203 13:04:51.892861 4820 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="4e339651-119f-4c82-84aa-d6a981283fcb" containerName="registry-server" Feb 03 13:04:51 crc kubenswrapper[4820]: I0203 13:04:51.894923 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-shnjm" Feb 03 13:04:51 crc kubenswrapper[4820]: I0203 13:04:51.907671 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-shnjm"] Feb 03 13:04:51 crc kubenswrapper[4820]: I0203 13:04:51.938892 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w5mgz\" (UniqueName: \"kubernetes.io/projected/3eb05ccb-8400-4ac9-ad38-4da887039621-kube-api-access-w5mgz\") pod \"community-operators-shnjm\" (UID: \"3eb05ccb-8400-4ac9-ad38-4da887039621\") " pod="openshift-marketplace/community-operators-shnjm" Feb 03 13:04:51 crc kubenswrapper[4820]: I0203 13:04:51.939019 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3eb05ccb-8400-4ac9-ad38-4da887039621-utilities\") pod \"community-operators-shnjm\" (UID: \"3eb05ccb-8400-4ac9-ad38-4da887039621\") " pod="openshift-marketplace/community-operators-shnjm" Feb 03 13:04:51 crc kubenswrapper[4820]: I0203 13:04:51.939536 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3eb05ccb-8400-4ac9-ad38-4da887039621-catalog-content\") pod \"community-operators-shnjm\" (UID: \"3eb05ccb-8400-4ac9-ad38-4da887039621\") " pod="openshift-marketplace/community-operators-shnjm" Feb 03 13:04:52 crc kubenswrapper[4820]: I0203 13:04:52.040864 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3eb05ccb-8400-4ac9-ad38-4da887039621-catalog-content\") pod \"community-operators-shnjm\" (UID: \"3eb05ccb-8400-4ac9-ad38-4da887039621\") " pod="openshift-marketplace/community-operators-shnjm" Feb 03 13:04:52 crc kubenswrapper[4820]: I0203 13:04:52.040969 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w5mgz\" (UniqueName: \"kubernetes.io/projected/3eb05ccb-8400-4ac9-ad38-4da887039621-kube-api-access-w5mgz\") pod \"community-operators-shnjm\" (UID: \"3eb05ccb-8400-4ac9-ad38-4da887039621\") " pod="openshift-marketplace/community-operators-shnjm" Feb 03 13:04:52 crc kubenswrapper[4820]: I0203 13:04:52.041012 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3eb05ccb-8400-4ac9-ad38-4da887039621-utilities\") pod \"community-operators-shnjm\" (UID: \"3eb05ccb-8400-4ac9-ad38-4da887039621\") " pod="openshift-marketplace/community-operators-shnjm" Feb 03 13:04:52 crc kubenswrapper[4820]: I0203 13:04:52.041568 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3eb05ccb-8400-4ac9-ad38-4da887039621-catalog-content\") pod \"community-operators-shnjm\" (UID: \"3eb05ccb-8400-4ac9-ad38-4da887039621\") " pod="openshift-marketplace/community-operators-shnjm" Feb 03 13:04:52 crc kubenswrapper[4820]: I0203 13:04:52.041649 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3eb05ccb-8400-4ac9-ad38-4da887039621-utilities\") pod \"community-operators-shnjm\" 
(UID: \"3eb05ccb-8400-4ac9-ad38-4da887039621\") " pod="openshift-marketplace/community-operators-shnjm" Feb 03 13:04:52 crc kubenswrapper[4820]: I0203 13:04:52.066238 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w5mgz\" (UniqueName: \"kubernetes.io/projected/3eb05ccb-8400-4ac9-ad38-4da887039621-kube-api-access-w5mgz\") pod \"community-operators-shnjm\" (UID: \"3eb05ccb-8400-4ac9-ad38-4da887039621\") " pod="openshift-marketplace/community-operators-shnjm" Feb 03 13:04:52 crc kubenswrapper[4820]: I0203 13:04:52.218024 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-shnjm" Feb 03 13:04:53 crc kubenswrapper[4820]: I0203 13:04:53.198529 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-shnjm"] Feb 03 13:04:53 crc kubenswrapper[4820]: I0203 13:04:53.858470 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-shnjm" event={"ID":"3eb05ccb-8400-4ac9-ad38-4da887039621","Type":"ContainerStarted","Data":"2287a5760c1106a462defa9beb3e1e845139be1aff413575de70a428cbe3e34f"} Feb 03 13:04:55 crc kubenswrapper[4820]: I0203 13:04:55.154969 4820 generic.go:334] "Generic (PLEG): container finished" podID="3eb05ccb-8400-4ac9-ad38-4da887039621" containerID="dc9716c063a0aca0cd5d6bfa1c549d6d08bd4ec41ae9ba63f9b3dfbba2068e8e" exitCode=0 Feb 03 13:04:55 crc kubenswrapper[4820]: I0203 13:04:55.174237 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-shnjm" event={"ID":"3eb05ccb-8400-4ac9-ad38-4da887039621","Type":"ContainerDied","Data":"dc9716c063a0aca0cd5d6bfa1c549d6d08bd4ec41ae9ba63f9b3dfbba2068e8e"} Feb 03 13:04:57 crc kubenswrapper[4820]: I0203 13:04:57.185057 4820 generic.go:334] "Generic (PLEG): container finished" podID="3eb05ccb-8400-4ac9-ad38-4da887039621" containerID="aa657936b29246e650eb033455cec18b45ec303b4e77ed4580922b755aeaec6f" exitCode=0 Feb 03 13:04:57 crc kubenswrapper[4820]: I0203 13:04:57.185174 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-shnjm" event={"ID":"3eb05ccb-8400-4ac9-ad38-4da887039621","Type":"ContainerDied","Data":"aa657936b29246e650eb033455cec18b45ec303b4e77ed4580922b755aeaec6f"} Feb 03 13:04:58 crc kubenswrapper[4820]: I0203 13:04:58.658298 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-shnjm" event={"ID":"3eb05ccb-8400-4ac9-ad38-4da887039621","Type":"ContainerStarted","Data":"0b8cddbeb079d885a854f27f083e03ed85885f8ea5145aee210846f730799202"} Feb 03 13:04:58 crc kubenswrapper[4820]: I0203 13:04:58.695646 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-shnjm" podStartSLOduration=5.19370807 podStartE2EDuration="7.695621572s" podCreationTimestamp="2026-02-03 13:04:51 +0000 UTC" firstStartedPulling="2026-02-03 13:04:55.161058475 +0000 UTC m=+3612.684134339" lastFinishedPulling="2026-02-03 13:04:57.662971977 +0000 UTC m=+3615.186047841" observedRunningTime="2026-02-03 13:04:58.690545526 +0000 UTC m=+3616.213621410" watchObservedRunningTime="2026-02-03 13:04:58.695621572 +0000 UTC m=+3616.218697436" Feb 03 13:05:01 crc kubenswrapper[4820]: I0203 13:05:01.365988 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qj7xr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure 
output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 13:05:01 crc kubenswrapper[4820]: I0203 13:05:01.366403 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 13:05:02 crc kubenswrapper[4820]: I0203 13:05:02.218355 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-shnjm" Feb 03 13:05:02 crc kubenswrapper[4820]: I0203 13:05:02.218696 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-shnjm" Feb 03 13:05:02 crc kubenswrapper[4820]: I0203 13:05:02.277906 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-shnjm" Feb 03 13:05:02 crc kubenswrapper[4820]: I0203 13:05:02.865425 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-shnjm" Feb 03 13:05:02 crc kubenswrapper[4820]: I0203 13:05:02.923294 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-shnjm"] Feb 03 13:05:05 crc kubenswrapper[4820]: I0203 13:05:05.083954 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-shnjm" podUID="3eb05ccb-8400-4ac9-ad38-4da887039621" containerName="registry-server" containerID="cri-o://0b8cddbeb079d885a854f27f083e03ed85885f8ea5145aee210846f730799202" gracePeriod=2 Feb 03 13:05:05 crc kubenswrapper[4820]: I0203 13:05:05.615702 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-shnjm" Feb 03 13:05:05 crc kubenswrapper[4820]: I0203 13:05:05.657879 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w5mgz\" (UniqueName: \"kubernetes.io/projected/3eb05ccb-8400-4ac9-ad38-4da887039621-kube-api-access-w5mgz\") pod \"3eb05ccb-8400-4ac9-ad38-4da887039621\" (UID: \"3eb05ccb-8400-4ac9-ad38-4da887039621\") " Feb 03 13:05:05 crc kubenswrapper[4820]: I0203 13:05:05.657975 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3eb05ccb-8400-4ac9-ad38-4da887039621-catalog-content\") pod \"3eb05ccb-8400-4ac9-ad38-4da887039621\" (UID: \"3eb05ccb-8400-4ac9-ad38-4da887039621\") " Feb 03 13:05:05 crc kubenswrapper[4820]: I0203 13:05:05.658176 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3eb05ccb-8400-4ac9-ad38-4da887039621-utilities\") pod \"3eb05ccb-8400-4ac9-ad38-4da887039621\" (UID: \"3eb05ccb-8400-4ac9-ad38-4da887039621\") " Feb 03 13:05:05 crc kubenswrapper[4820]: I0203 13:05:05.660181 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3eb05ccb-8400-4ac9-ad38-4da887039621-utilities" (OuterVolumeSpecName: "utilities") pod "3eb05ccb-8400-4ac9-ad38-4da887039621" (UID: "3eb05ccb-8400-4ac9-ad38-4da887039621"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 13:05:05 crc kubenswrapper[4820]: I0203 13:05:05.668237 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3eb05ccb-8400-4ac9-ad38-4da887039621-kube-api-access-w5mgz" (OuterVolumeSpecName: "kube-api-access-w5mgz") pod "3eb05ccb-8400-4ac9-ad38-4da887039621" (UID: "3eb05ccb-8400-4ac9-ad38-4da887039621"). InnerVolumeSpecName "kube-api-access-w5mgz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:05:05 crc kubenswrapper[4820]: I0203 13:05:05.726569 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3eb05ccb-8400-4ac9-ad38-4da887039621-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3eb05ccb-8400-4ac9-ad38-4da887039621" (UID: "3eb05ccb-8400-4ac9-ad38-4da887039621"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 13:05:05 crc kubenswrapper[4820]: I0203 13:05:05.761850 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w5mgz\" (UniqueName: \"kubernetes.io/projected/3eb05ccb-8400-4ac9-ad38-4da887039621-kube-api-access-w5mgz\") on node \"crc\" DevicePath \"\"" Feb 03 13:05:05 crc kubenswrapper[4820]: I0203 13:05:05.761922 4820 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3eb05ccb-8400-4ac9-ad38-4da887039621-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 03 13:05:05 crc kubenswrapper[4820]: I0203 13:05:05.761936 4820 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3eb05ccb-8400-4ac9-ad38-4da887039621-utilities\") on node \"crc\" DevicePath \"\"" Feb 03 13:05:06 crc kubenswrapper[4820]: I0203 13:05:06.094732 4820 generic.go:334] "Generic (PLEG): container finished" podID="3eb05ccb-8400-4ac9-ad38-4da887039621" containerID="0b8cddbeb079d885a854f27f083e03ed85885f8ea5145aee210846f730799202" exitCode=0 Feb 03 13:05:06 crc kubenswrapper[4820]: I0203 13:05:06.094786 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-shnjm" event={"ID":"3eb05ccb-8400-4ac9-ad38-4da887039621","Type":"ContainerDied","Data":"0b8cddbeb079d885a854f27f083e03ed85885f8ea5145aee210846f730799202"} Feb 03 13:05:06 crc kubenswrapper[4820]: I0203 13:05:06.094814 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-shnjm" Feb 03 13:05:06 crc kubenswrapper[4820]: I0203 13:05:06.094832 4820 scope.go:117] "RemoveContainer" containerID="0b8cddbeb079d885a854f27f083e03ed85885f8ea5145aee210846f730799202" Feb 03 13:05:06 crc kubenswrapper[4820]: I0203 13:05:06.094819 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-shnjm" event={"ID":"3eb05ccb-8400-4ac9-ad38-4da887039621","Type":"ContainerDied","Data":"2287a5760c1106a462defa9beb3e1e845139be1aff413575de70a428cbe3e34f"} Feb 03 13:05:06 crc kubenswrapper[4820]: I0203 13:05:06.132103 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-shnjm"] Feb 03 13:05:06 crc kubenswrapper[4820]: I0203 13:05:06.137585 4820 scope.go:117] "RemoveContainer" containerID="aa657936b29246e650eb033455cec18b45ec303b4e77ed4580922b755aeaec6f" Feb 03 13:05:06 crc kubenswrapper[4820]: I0203 13:05:06.142730 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-shnjm"] Feb 03 13:05:06 crc kubenswrapper[4820]: I0203 13:05:06.179243 4820 scope.go:117] "RemoveContainer" containerID="dc9716c063a0aca0cd5d6bfa1c549d6d08bd4ec41ae9ba63f9b3dfbba2068e8e" Feb 03 13:05:06 crc kubenswrapper[4820]: I0203 13:05:06.219371 4820 scope.go:117] "RemoveContainer" containerID="0b8cddbeb079d885a854f27f083e03ed85885f8ea5145aee210846f730799202" Feb 03 13:05:06 crc kubenswrapper[4820]: E0203 13:05:06.220141 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0b8cddbeb079d885a854f27f083e03ed85885f8ea5145aee210846f730799202\": container with ID starting with 0b8cddbeb079d885a854f27f083e03ed85885f8ea5145aee210846f730799202 not found: ID does not exist" containerID="0b8cddbeb079d885a854f27f083e03ed85885f8ea5145aee210846f730799202" Feb 03 13:05:06 crc kubenswrapper[4820]: I0203 13:05:06.220191 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0b8cddbeb079d885a854f27f083e03ed85885f8ea5145aee210846f730799202"} err="failed to get container status \"0b8cddbeb079d885a854f27f083e03ed85885f8ea5145aee210846f730799202\": rpc error: code = NotFound desc = could not find container \"0b8cddbeb079d885a854f27f083e03ed85885f8ea5145aee210846f730799202\": container with ID starting with 0b8cddbeb079d885a854f27f083e03ed85885f8ea5145aee210846f730799202 not found: ID does not exist" Feb 03 13:05:06 crc kubenswrapper[4820]: I0203 13:05:06.220223 4820 scope.go:117] "RemoveContainer" containerID="aa657936b29246e650eb033455cec18b45ec303b4e77ed4580922b755aeaec6f" Feb 03 13:05:06 crc kubenswrapper[4820]: E0203 13:05:06.220771 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aa657936b29246e650eb033455cec18b45ec303b4e77ed4580922b755aeaec6f\": container with ID starting with aa657936b29246e650eb033455cec18b45ec303b4e77ed4580922b755aeaec6f not found: ID does not exist" containerID="aa657936b29246e650eb033455cec18b45ec303b4e77ed4580922b755aeaec6f" Feb 03 13:05:06 crc kubenswrapper[4820]: I0203 13:05:06.220825 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aa657936b29246e650eb033455cec18b45ec303b4e77ed4580922b755aeaec6f"} err="failed to get container status \"aa657936b29246e650eb033455cec18b45ec303b4e77ed4580922b755aeaec6f\": rpc error: code = NotFound desc = could not find 
container \"aa657936b29246e650eb033455cec18b45ec303b4e77ed4580922b755aeaec6f\": container with ID starting with aa657936b29246e650eb033455cec18b45ec303b4e77ed4580922b755aeaec6f not found: ID does not exist" Feb 03 13:05:06 crc kubenswrapper[4820]: I0203 13:05:06.220868 4820 scope.go:117] "RemoveContainer" containerID="dc9716c063a0aca0cd5d6bfa1c549d6d08bd4ec41ae9ba63f9b3dfbba2068e8e" Feb 03 13:05:06 crc kubenswrapper[4820]: E0203 13:05:06.221151 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dc9716c063a0aca0cd5d6bfa1c549d6d08bd4ec41ae9ba63f9b3dfbba2068e8e\": container with ID starting with dc9716c063a0aca0cd5d6bfa1c549d6d08bd4ec41ae9ba63f9b3dfbba2068e8e not found: ID does not exist" containerID="dc9716c063a0aca0cd5d6bfa1c549d6d08bd4ec41ae9ba63f9b3dfbba2068e8e" Feb 03 13:05:06 crc kubenswrapper[4820]: I0203 13:05:06.221174 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc9716c063a0aca0cd5d6bfa1c549d6d08bd4ec41ae9ba63f9b3dfbba2068e8e"} err="failed to get container status \"dc9716c063a0aca0cd5d6bfa1c549d6d08bd4ec41ae9ba63f9b3dfbba2068e8e\": rpc error: code = NotFound desc = could not find container \"dc9716c063a0aca0cd5d6bfa1c549d6d08bd4ec41ae9ba63f9b3dfbba2068e8e\": container with ID starting with dc9716c063a0aca0cd5d6bfa1c549d6d08bd4ec41ae9ba63f9b3dfbba2068e8e not found: ID does not exist" Feb 03 13:05:07 crc kubenswrapper[4820]: I0203 13:05:07.281013 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3eb05ccb-8400-4ac9-ad38-4da887039621" path="/var/lib/kubelet/pods/3eb05ccb-8400-4ac9-ad38-4da887039621/volumes" Feb 03 13:05:17 crc kubenswrapper[4820]: I0203 13:05:17.527234 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-cb5fj"] Feb 03 13:05:17 crc kubenswrapper[4820]: E0203 13:05:17.528074 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3eb05ccb-8400-4ac9-ad38-4da887039621" containerName="extract-content" Feb 03 13:05:17 crc kubenswrapper[4820]: I0203 13:05:17.528092 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="3eb05ccb-8400-4ac9-ad38-4da887039621" containerName="extract-content" Feb 03 13:05:17 crc kubenswrapper[4820]: E0203 13:05:17.528104 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3eb05ccb-8400-4ac9-ad38-4da887039621" containerName="registry-server" Feb 03 13:05:17 crc kubenswrapper[4820]: I0203 13:05:17.528110 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="3eb05ccb-8400-4ac9-ad38-4da887039621" containerName="registry-server" Feb 03 13:05:17 crc kubenswrapper[4820]: E0203 13:05:17.528121 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3eb05ccb-8400-4ac9-ad38-4da887039621" containerName="extract-utilities" Feb 03 13:05:17 crc kubenswrapper[4820]: I0203 13:05:17.528129 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="3eb05ccb-8400-4ac9-ad38-4da887039621" containerName="extract-utilities" Feb 03 13:05:17 crc kubenswrapper[4820]: I0203 13:05:17.528358 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="3eb05ccb-8400-4ac9-ad38-4da887039621" containerName="registry-server" Feb 03 13:05:17 crc kubenswrapper[4820]: I0203 13:05:17.529752 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-cb5fj" Feb 03 13:05:17 crc kubenswrapper[4820]: I0203 13:05:17.553040 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-cb5fj"] Feb 03 13:05:17 crc kubenswrapper[4820]: I0203 13:05:17.681340 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gm6tr\" (UniqueName: \"kubernetes.io/projected/e7c8f7af-adb5-4678-a815-5400f356a76c-kube-api-access-gm6tr\") pod \"certified-operators-cb5fj\" (UID: \"e7c8f7af-adb5-4678-a815-5400f356a76c\") " pod="openshift-marketplace/certified-operators-cb5fj" Feb 03 13:05:17 crc kubenswrapper[4820]: I0203 13:05:17.681702 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e7c8f7af-adb5-4678-a815-5400f356a76c-catalog-content\") pod \"certified-operators-cb5fj\" (UID: \"e7c8f7af-adb5-4678-a815-5400f356a76c\") " pod="openshift-marketplace/certified-operators-cb5fj" Feb 03 13:05:17 crc kubenswrapper[4820]: I0203 13:05:17.681852 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e7c8f7af-adb5-4678-a815-5400f356a76c-utilities\") pod \"certified-operators-cb5fj\" (UID: \"e7c8f7af-adb5-4678-a815-5400f356a76c\") " pod="openshift-marketplace/certified-operators-cb5fj" Feb 03 13:05:17 crc kubenswrapper[4820]: I0203 13:05:17.784582 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e7c8f7af-adb5-4678-a815-5400f356a76c-catalog-content\") pod \"certified-operators-cb5fj\" (UID: \"e7c8f7af-adb5-4678-a815-5400f356a76c\") " pod="openshift-marketplace/certified-operators-cb5fj" Feb 03 13:05:17 crc kubenswrapper[4820]: I0203 13:05:17.784738 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e7c8f7af-adb5-4678-a815-5400f356a76c-utilities\") pod \"certified-operators-cb5fj\" (UID: \"e7c8f7af-adb5-4678-a815-5400f356a76c\") " pod="openshift-marketplace/certified-operators-cb5fj" Feb 03 13:05:17 crc kubenswrapper[4820]: I0203 13:05:17.784871 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gm6tr\" (UniqueName: \"kubernetes.io/projected/e7c8f7af-adb5-4678-a815-5400f356a76c-kube-api-access-gm6tr\") pod \"certified-operators-cb5fj\" (UID: \"e7c8f7af-adb5-4678-a815-5400f356a76c\") " pod="openshift-marketplace/certified-operators-cb5fj" Feb 03 13:05:17 crc kubenswrapper[4820]: I0203 13:05:17.785325 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e7c8f7af-adb5-4678-a815-5400f356a76c-catalog-content\") pod \"certified-operators-cb5fj\" (UID: \"e7c8f7af-adb5-4678-a815-5400f356a76c\") " pod="openshift-marketplace/certified-operators-cb5fj" Feb 03 13:05:17 crc kubenswrapper[4820]: I0203 13:05:17.785438 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e7c8f7af-adb5-4678-a815-5400f356a76c-utilities\") pod \"certified-operators-cb5fj\" (UID: \"e7c8f7af-adb5-4678-a815-5400f356a76c\") " pod="openshift-marketplace/certified-operators-cb5fj" Feb 03 13:05:17 crc kubenswrapper[4820]: I0203 13:05:17.808389 4820 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-gm6tr\" (UniqueName: \"kubernetes.io/projected/e7c8f7af-adb5-4678-a815-5400f356a76c-kube-api-access-gm6tr\") pod \"certified-operators-cb5fj\" (UID: \"e7c8f7af-adb5-4678-a815-5400f356a76c\") " pod="openshift-marketplace/certified-operators-cb5fj" Feb 03 13:05:17 crc kubenswrapper[4820]: I0203 13:05:17.854104 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cb5fj" Feb 03 13:05:18 crc kubenswrapper[4820]: I0203 13:05:18.585906 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-cb5fj"] Feb 03 13:05:18 crc kubenswrapper[4820]: I0203 13:05:18.846328 4820 generic.go:334] "Generic (PLEG): container finished" podID="e7c8f7af-adb5-4678-a815-5400f356a76c" containerID="f3e661f8410dacc95a38bc3f110d74362d606327eed247b8f4ba3e218f3742ca" exitCode=0 Feb 03 13:05:18 crc kubenswrapper[4820]: I0203 13:05:18.846398 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cb5fj" event={"ID":"e7c8f7af-adb5-4678-a815-5400f356a76c","Type":"ContainerDied","Data":"f3e661f8410dacc95a38bc3f110d74362d606327eed247b8f4ba3e218f3742ca"} Feb 03 13:05:18 crc kubenswrapper[4820]: I0203 13:05:18.846475 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cb5fj" event={"ID":"e7c8f7af-adb5-4678-a815-5400f356a76c","Type":"ContainerStarted","Data":"4f6d0d5abc3ca41e88fc5a2284ca20714102993081902fb1b263810909d4b65c"} Feb 03 13:05:19 crc kubenswrapper[4820]: I0203 13:05:19.859725 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cb5fj" event={"ID":"e7c8f7af-adb5-4678-a815-5400f356a76c","Type":"ContainerStarted","Data":"0dc9e2c9a7195822c1c64b186ca8947d6d29578ac4f0cdb54e688a441e552cc9"} Feb 03 13:05:21 crc kubenswrapper[4820]: I0203 13:05:21.979666 4820 generic.go:334] "Generic (PLEG): container finished" podID="e7c8f7af-adb5-4678-a815-5400f356a76c" containerID="0dc9e2c9a7195822c1c64b186ca8947d6d29578ac4f0cdb54e688a441e552cc9" exitCode=0 Feb 03 13:05:21 crc kubenswrapper[4820]: I0203 13:05:21.980247 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cb5fj" event={"ID":"e7c8f7af-adb5-4678-a815-5400f356a76c","Type":"ContainerDied","Data":"0dc9e2c9a7195822c1c64b186ca8947d6d29578ac4f0cdb54e688a441e552cc9"} Feb 03 13:05:22 crc kubenswrapper[4820]: I0203 13:05:22.994037 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cb5fj" event={"ID":"e7c8f7af-adb5-4678-a815-5400f356a76c","Type":"ContainerStarted","Data":"262c853b7299c58b1d3e655bbe77004d50f1a5713e6dd94e72321dbab123e487"} Feb 03 13:05:23 crc kubenswrapper[4820]: I0203 13:05:23.016878 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-cb5fj" podStartSLOduration=2.329179855 podStartE2EDuration="6.01685847s" podCreationTimestamp="2026-02-03 13:05:17 +0000 UTC" firstStartedPulling="2026-02-03 13:05:18.848861893 +0000 UTC m=+3636.371937757" lastFinishedPulling="2026-02-03 13:05:22.536540508 +0000 UTC m=+3640.059616372" observedRunningTime="2026-02-03 13:05:23.011812656 +0000 UTC m=+3640.534888540" watchObservedRunningTime="2026-02-03 13:05:23.01685847 +0000 UTC m=+3640.539934334" Feb 03 13:05:27 crc kubenswrapper[4820]: I0203 13:05:27.855115 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-marketplace/certified-operators-cb5fj" Feb 03 13:05:27 crc kubenswrapper[4820]: I0203 13:05:27.855648 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-cb5fj" Feb 03 13:05:27 crc kubenswrapper[4820]: I0203 13:05:27.914547 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-cb5fj" Feb 03 13:05:28 crc kubenswrapper[4820]: I0203 13:05:28.098864 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-cb5fj" Feb 03 13:05:29 crc kubenswrapper[4820]: I0203 13:05:29.305281 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-cb5fj"] Feb 03 13:05:30 crc kubenswrapper[4820]: I0203 13:05:30.091415 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-cb5fj" podUID="e7c8f7af-adb5-4678-a815-5400f356a76c" containerName="registry-server" containerID="cri-o://262c853b7299c58b1d3e655bbe77004d50f1a5713e6dd94e72321dbab123e487" gracePeriod=2 Feb 03 13:05:30 crc kubenswrapper[4820]: I0203 13:05:30.837312 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cb5fj" Feb 03 13:05:30 crc kubenswrapper[4820]: I0203 13:05:30.887107 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gm6tr\" (UniqueName: \"kubernetes.io/projected/e7c8f7af-adb5-4678-a815-5400f356a76c-kube-api-access-gm6tr\") pod \"e7c8f7af-adb5-4678-a815-5400f356a76c\" (UID: \"e7c8f7af-adb5-4678-a815-5400f356a76c\") " Feb 03 13:05:30 crc kubenswrapper[4820]: I0203 13:05:30.887289 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e7c8f7af-adb5-4678-a815-5400f356a76c-utilities\") pod \"e7c8f7af-adb5-4678-a815-5400f356a76c\" (UID: \"e7c8f7af-adb5-4678-a815-5400f356a76c\") " Feb 03 13:05:30 crc kubenswrapper[4820]: I0203 13:05:30.887576 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e7c8f7af-adb5-4678-a815-5400f356a76c-catalog-content\") pod \"e7c8f7af-adb5-4678-a815-5400f356a76c\" (UID: \"e7c8f7af-adb5-4678-a815-5400f356a76c\") " Feb 03 13:05:30 crc kubenswrapper[4820]: I0203 13:05:30.888295 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e7c8f7af-adb5-4678-a815-5400f356a76c-utilities" (OuterVolumeSpecName: "utilities") pod "e7c8f7af-adb5-4678-a815-5400f356a76c" (UID: "e7c8f7af-adb5-4678-a815-5400f356a76c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 13:05:30 crc kubenswrapper[4820]: I0203 13:05:30.899361 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7c8f7af-adb5-4678-a815-5400f356a76c-kube-api-access-gm6tr" (OuterVolumeSpecName: "kube-api-access-gm6tr") pod "e7c8f7af-adb5-4678-a815-5400f356a76c" (UID: "e7c8f7af-adb5-4678-a815-5400f356a76c"). InnerVolumeSpecName "kube-api-access-gm6tr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:05:30 crc kubenswrapper[4820]: I0203 13:05:30.989268 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gm6tr\" (UniqueName: \"kubernetes.io/projected/e7c8f7af-adb5-4678-a815-5400f356a76c-kube-api-access-gm6tr\") on node \"crc\" DevicePath \"\"" Feb 03 13:05:30 crc kubenswrapper[4820]: I0203 13:05:30.989579 4820 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e7c8f7af-adb5-4678-a815-5400f356a76c-utilities\") on node \"crc\" DevicePath \"\"" Feb 03 13:05:31 crc kubenswrapper[4820]: I0203 13:05:31.103763 4820 generic.go:334] "Generic (PLEG): container finished" podID="e7c8f7af-adb5-4678-a815-5400f356a76c" containerID="262c853b7299c58b1d3e655bbe77004d50f1a5713e6dd94e72321dbab123e487" exitCode=0 Feb 03 13:05:31 crc kubenswrapper[4820]: I0203 13:05:31.103825 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cb5fj" Feb 03 13:05:31 crc kubenswrapper[4820]: I0203 13:05:31.103827 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cb5fj" event={"ID":"e7c8f7af-adb5-4678-a815-5400f356a76c","Type":"ContainerDied","Data":"262c853b7299c58b1d3e655bbe77004d50f1a5713e6dd94e72321dbab123e487"} Feb 03 13:05:31 crc kubenswrapper[4820]: I0203 13:05:31.104001 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cb5fj" event={"ID":"e7c8f7af-adb5-4678-a815-5400f356a76c","Type":"ContainerDied","Data":"4f6d0d5abc3ca41e88fc5a2284ca20714102993081902fb1b263810909d4b65c"} Feb 03 13:05:31 crc kubenswrapper[4820]: I0203 13:05:31.104031 4820 scope.go:117] "RemoveContainer" containerID="262c853b7299c58b1d3e655bbe77004d50f1a5713e6dd94e72321dbab123e487" Feb 03 13:05:31 crc kubenswrapper[4820]: I0203 13:05:31.108880 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e7c8f7af-adb5-4678-a815-5400f356a76c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e7c8f7af-adb5-4678-a815-5400f356a76c" (UID: "e7c8f7af-adb5-4678-a815-5400f356a76c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 13:05:31 crc kubenswrapper[4820]: I0203 13:05:31.129055 4820 scope.go:117] "RemoveContainer" containerID="0dc9e2c9a7195822c1c64b186ca8947d6d29578ac4f0cdb54e688a441e552cc9" Feb 03 13:05:31 crc kubenswrapper[4820]: I0203 13:05:31.154210 4820 scope.go:117] "RemoveContainer" containerID="f3e661f8410dacc95a38bc3f110d74362d606327eed247b8f4ba3e218f3742ca" Feb 03 13:05:31 crc kubenswrapper[4820]: I0203 13:05:31.192203 4820 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e7c8f7af-adb5-4678-a815-5400f356a76c-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 03 13:05:31 crc kubenswrapper[4820]: I0203 13:05:31.224583 4820 scope.go:117] "RemoveContainer" containerID="262c853b7299c58b1d3e655bbe77004d50f1a5713e6dd94e72321dbab123e487" Feb 03 13:05:31 crc kubenswrapper[4820]: E0203 13:05:31.225549 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"262c853b7299c58b1d3e655bbe77004d50f1a5713e6dd94e72321dbab123e487\": container with ID starting with 262c853b7299c58b1d3e655bbe77004d50f1a5713e6dd94e72321dbab123e487 not found: ID does not exist" containerID="262c853b7299c58b1d3e655bbe77004d50f1a5713e6dd94e72321dbab123e487" Feb 03 13:05:31 crc kubenswrapper[4820]: I0203 13:05:31.225590 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"262c853b7299c58b1d3e655bbe77004d50f1a5713e6dd94e72321dbab123e487"} err="failed to get container status \"262c853b7299c58b1d3e655bbe77004d50f1a5713e6dd94e72321dbab123e487\": rpc error: code = NotFound desc = could not find container \"262c853b7299c58b1d3e655bbe77004d50f1a5713e6dd94e72321dbab123e487\": container with ID starting with 262c853b7299c58b1d3e655bbe77004d50f1a5713e6dd94e72321dbab123e487 not found: ID does not exist" Feb 03 13:05:31 crc kubenswrapper[4820]: I0203 13:05:31.225620 4820 scope.go:117] "RemoveContainer" containerID="0dc9e2c9a7195822c1c64b186ca8947d6d29578ac4f0cdb54e688a441e552cc9" Feb 03 13:05:31 crc kubenswrapper[4820]: E0203 13:05:31.226132 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0dc9e2c9a7195822c1c64b186ca8947d6d29578ac4f0cdb54e688a441e552cc9\": container with ID starting with 0dc9e2c9a7195822c1c64b186ca8947d6d29578ac4f0cdb54e688a441e552cc9 not found: ID does not exist" containerID="0dc9e2c9a7195822c1c64b186ca8947d6d29578ac4f0cdb54e688a441e552cc9" Feb 03 13:05:31 crc kubenswrapper[4820]: I0203 13:05:31.226170 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0dc9e2c9a7195822c1c64b186ca8947d6d29578ac4f0cdb54e688a441e552cc9"} err="failed to get container status \"0dc9e2c9a7195822c1c64b186ca8947d6d29578ac4f0cdb54e688a441e552cc9\": rpc error: code = NotFound desc = could not find container \"0dc9e2c9a7195822c1c64b186ca8947d6d29578ac4f0cdb54e688a441e552cc9\": container with ID starting with 0dc9e2c9a7195822c1c64b186ca8947d6d29578ac4f0cdb54e688a441e552cc9 not found: ID does not exist" Feb 03 13:05:31 crc kubenswrapper[4820]: I0203 13:05:31.226192 4820 scope.go:117] "RemoveContainer" containerID="f3e661f8410dacc95a38bc3f110d74362d606327eed247b8f4ba3e218f3742ca" Feb 03 13:05:31 crc kubenswrapper[4820]: E0203 13:05:31.226519 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"f3e661f8410dacc95a38bc3f110d74362d606327eed247b8f4ba3e218f3742ca\": container with ID starting with f3e661f8410dacc95a38bc3f110d74362d606327eed247b8f4ba3e218f3742ca not found: ID does not exist" containerID="f3e661f8410dacc95a38bc3f110d74362d606327eed247b8f4ba3e218f3742ca" Feb 03 13:05:31 crc kubenswrapper[4820]: I0203 13:05:31.226584 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f3e661f8410dacc95a38bc3f110d74362d606327eed247b8f4ba3e218f3742ca"} err="failed to get container status \"f3e661f8410dacc95a38bc3f110d74362d606327eed247b8f4ba3e218f3742ca\": rpc error: code = NotFound desc = could not find container \"f3e661f8410dacc95a38bc3f110d74362d606327eed247b8f4ba3e218f3742ca\": container with ID starting with f3e661f8410dacc95a38bc3f110d74362d606327eed247b8f4ba3e218f3742ca not found: ID does not exist" Feb 03 13:05:31 crc kubenswrapper[4820]: I0203 13:05:31.366482 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qj7xr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 13:05:31 crc kubenswrapper[4820]: I0203 13:05:31.366553 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 13:05:31 crc kubenswrapper[4820]: I0203 13:05:31.434594 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-cb5fj"] Feb 03 13:05:31 crc kubenswrapper[4820]: I0203 13:05:31.445074 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-cb5fj"] Feb 03 13:05:33 crc kubenswrapper[4820]: I0203 13:05:33.156796 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7c8f7af-adb5-4678-a815-5400f356a76c" path="/var/lib/kubelet/pods/e7c8f7af-adb5-4678-a815-5400f356a76c/volumes" Feb 03 13:06:01 crc kubenswrapper[4820]: I0203 13:06:01.365595 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qj7xr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 13:06:01 crc kubenswrapper[4820]: I0203 13:06:01.366145 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 13:06:01 crc kubenswrapper[4820]: I0203 13:06:01.366208 4820 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" Feb 03 13:06:01 crc kubenswrapper[4820]: I0203 13:06:01.367025 4820 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"bc515a23be524fcd005e7330d6085c1912ede4a0e028795898e112821e86b187"} pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" containerMessage="Container 
Feb 03 13:06:01 crc kubenswrapper[4820]: I0203 13:06:01.367025 4820 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"bc515a23be524fcd005e7330d6085c1912ede4a0e028795898e112821e86b187"} pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 03 13:06:01 crc kubenswrapper[4820]: I0203 13:06:01.367081 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" containerName="machine-config-daemon" containerID="cri-o://bc515a23be524fcd005e7330d6085c1912ede4a0e028795898e112821e86b187" gracePeriod=600
Feb 03 13:06:01 crc kubenswrapper[4820]: E0203 13:06:01.502194 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qj7xr_openshift-machine-config-operator(2c02def6-29f2-448e-80ec-0c8ee290f053)\"" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053"
Feb 03 13:06:01 crc kubenswrapper[4820]: I0203 13:06:01.739597 4820 generic.go:334] "Generic (PLEG): container finished" podID="2c02def6-29f2-448e-80ec-0c8ee290f053" containerID="bc515a23be524fcd005e7330d6085c1912ede4a0e028795898e112821e86b187" exitCode=0
Feb 03 13:06:01 crc kubenswrapper[4820]: I0203 13:06:01.739643 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" event={"ID":"2c02def6-29f2-448e-80ec-0c8ee290f053","Type":"ContainerDied","Data":"bc515a23be524fcd005e7330d6085c1912ede4a0e028795898e112821e86b187"}
Feb 03 13:06:01 crc kubenswrapper[4820]: I0203 13:06:01.739679 4820 scope.go:117] "RemoveContainer" containerID="6360892bd392241289886290481623c9bd92bace474c07410253fed83ab05298"
Feb 03 13:06:01 crc kubenswrapper[4820]: I0203 13:06:01.740585 4820 scope.go:117] "RemoveContainer" containerID="bc515a23be524fcd005e7330d6085c1912ede4a0e028795898e112821e86b187"
Feb 03 13:06:01 crc kubenswrapper[4820]: E0203 13:06:01.741070 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qj7xr_openshift-machine-config-operator(2c02def6-29f2-448e-80ec-0c8ee290f053)\"" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053"
Feb 03 13:06:17 crc kubenswrapper[4820]: I0203 13:06:17.143172 4820 scope.go:117] "RemoveContainer" containerID="bc515a23be524fcd005e7330d6085c1912ede4a0e028795898e112821e86b187"
Feb 03 13:06:17 crc kubenswrapper[4820]: E0203 13:06:17.143987 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qj7xr_openshift-machine-config-operator(2c02def6-29f2-448e-80ec-0c8ee290f053)\"" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053"
Feb 03 13:06:28 crc kubenswrapper[4820]: I0203 13:06:28.143042 4820 scope.go:117] "RemoveContainer" containerID="bc515a23be524fcd005e7330d6085c1912ede4a0e028795898e112821e86b187"
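The "back-off 5m0s" in the CrashLoopBackOff errors above and below is the kubelet's container restart backoff at its cap: the delay starts at 10s and doubles on each crash up to a 5m maximum, so the repeated "Error syncing pod, skipping" entries are pod-worker syncs being refused while the same backoff window is still open, not new crashes. A sketch of the doubling schedule (base 10s, factor 2, cap 5m are the kubelet's documented defaults; the loop is illustrative, not its implementation):

package main

import (
	"fmt"
	"time"
)

func main() {
	const (
		base     = 10 * time.Second // initial restart delay
		maxDelay = 5 * time.Minute  // the "back-off 5m0s" in the log
	)
	delay := base
	for crash := 1; crash <= 7; crash++ {
		fmt.Printf("after crash %d: wait %v\n", crash, delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
	// Prints 10s, 20s, 40s, 1m20s, 2m40s, then 5m0s from the sixth crash on.
}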
Feb 03 13:06:28 crc kubenswrapper[4820]: E0203 13:06:28.144016 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qj7xr_openshift-machine-config-operator(2c02def6-29f2-448e-80ec-0c8ee290f053)\"" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053"
Feb 03 13:06:41 crc kubenswrapper[4820]: I0203 13:06:41.142381 4820 scope.go:117] "RemoveContainer" containerID="bc515a23be524fcd005e7330d6085c1912ede4a0e028795898e112821e86b187"
Feb 03 13:06:41 crc kubenswrapper[4820]: E0203 13:06:41.143206 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qj7xr_openshift-machine-config-operator(2c02def6-29f2-448e-80ec-0c8ee290f053)\"" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053"
Feb 03 13:06:56 crc kubenswrapper[4820]: I0203 13:06:56.142970 4820 scope.go:117] "RemoveContainer" containerID="bc515a23be524fcd005e7330d6085c1912ede4a0e028795898e112821e86b187"
Feb 03 13:06:56 crc kubenswrapper[4820]: E0203 13:06:56.143743 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qj7xr_openshift-machine-config-operator(2c02def6-29f2-448e-80ec-0c8ee290f053)\"" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053"
Feb 03 13:07:10 crc kubenswrapper[4820]: I0203 13:07:10.143406 4820 scope.go:117] "RemoveContainer" containerID="bc515a23be524fcd005e7330d6085c1912ede4a0e028795898e112821e86b187"
Feb 03 13:07:10 crc kubenswrapper[4820]: E0203 13:07:10.144230 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qj7xr_openshift-machine-config-operator(2c02def6-29f2-448e-80ec-0c8ee290f053)\"" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053"
Feb 03 13:07:23 crc kubenswrapper[4820]: I0203 13:07:23.153630 4820 scope.go:117] "RemoveContainer" containerID="bc515a23be524fcd005e7330d6085c1912ede4a0e028795898e112821e86b187"
Feb 03 13:07:23 crc kubenswrapper[4820]: E0203 13:07:23.154418 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qj7xr_openshift-machine-config-operator(2c02def6-29f2-448e-80ec-0c8ee290f053)\"" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053"
Feb 03 13:07:38 crc kubenswrapper[4820]: I0203 13:07:38.143107 4820 scope.go:117] "RemoveContainer" containerID="bc515a23be524fcd005e7330d6085c1912ede4a0e028795898e112821e86b187"
Feb 03 13:07:38 crc kubenswrapper[4820]: E0203 13:07:38.143915 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qj7xr_openshift-machine-config-operator(2c02def6-29f2-448e-80ec-0c8ee290f053)\"" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" 
podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" Feb 03 13:07:49 crc kubenswrapper[4820]: I0203 13:07:49.143240 4820 scope.go:117] "RemoveContainer" containerID="bc515a23be524fcd005e7330d6085c1912ede4a0e028795898e112821e86b187" Feb 03 13:07:49 crc kubenswrapper[4820]: E0203 13:07:49.144227 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qj7xr_openshift-machine-config-operator(2c02def6-29f2-448e-80ec-0c8ee290f053)\"" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" Feb 03 13:07:52 crc kubenswrapper[4820]: I0203 13:07:52.703268 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-sq7tx"] Feb 03 13:07:52 crc kubenswrapper[4820]: E0203 13:07:52.705467 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7c8f7af-adb5-4678-a815-5400f356a76c" containerName="registry-server" Feb 03 13:07:52 crc kubenswrapper[4820]: I0203 13:07:52.705505 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7c8f7af-adb5-4678-a815-5400f356a76c" containerName="registry-server" Feb 03 13:07:52 crc kubenswrapper[4820]: E0203 13:07:52.705531 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7c8f7af-adb5-4678-a815-5400f356a76c" containerName="extract-utilities" Feb 03 13:07:52 crc kubenswrapper[4820]: I0203 13:07:52.705544 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7c8f7af-adb5-4678-a815-5400f356a76c" containerName="extract-utilities" Feb 03 13:07:52 crc kubenswrapper[4820]: E0203 13:07:52.705602 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7c8f7af-adb5-4678-a815-5400f356a76c" containerName="extract-content" Feb 03 13:07:52 crc kubenswrapper[4820]: I0203 13:07:52.705612 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7c8f7af-adb5-4678-a815-5400f356a76c" containerName="extract-content" Feb 03 13:07:52 crc kubenswrapper[4820]: I0203 13:07:52.705967 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="e7c8f7af-adb5-4678-a815-5400f356a76c" containerName="registry-server" Feb 03 13:07:52 crc kubenswrapper[4820]: I0203 13:07:52.707923 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sq7tx" Feb 03 13:07:52 crc kubenswrapper[4820]: I0203 13:07:52.714906 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-sq7tx"] Feb 03 13:07:52 crc kubenswrapper[4820]: I0203 13:07:52.799447 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fbb597b7-f476-412f-994b-d28267283ea9-utilities\") pod \"redhat-marketplace-sq7tx\" (UID: \"fbb597b7-f476-412f-994b-d28267283ea9\") " pod="openshift-marketplace/redhat-marketplace-sq7tx" Feb 03 13:07:52 crc kubenswrapper[4820]: I0203 13:07:52.799491 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k7sz5\" (UniqueName: \"kubernetes.io/projected/fbb597b7-f476-412f-994b-d28267283ea9-kube-api-access-k7sz5\") pod \"redhat-marketplace-sq7tx\" (UID: \"fbb597b7-f476-412f-994b-d28267283ea9\") " pod="openshift-marketplace/redhat-marketplace-sq7tx" Feb 03 13:07:52 crc kubenswrapper[4820]: I0203 13:07:52.799542 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fbb597b7-f476-412f-994b-d28267283ea9-catalog-content\") pod \"redhat-marketplace-sq7tx\" (UID: \"fbb597b7-f476-412f-994b-d28267283ea9\") " pod="openshift-marketplace/redhat-marketplace-sq7tx" Feb 03 13:07:53 crc kubenswrapper[4820]: I0203 13:07:53.086187 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fbb597b7-f476-412f-994b-d28267283ea9-utilities\") pod \"redhat-marketplace-sq7tx\" (UID: \"fbb597b7-f476-412f-994b-d28267283ea9\") " pod="openshift-marketplace/redhat-marketplace-sq7tx" Feb 03 13:07:53 crc kubenswrapper[4820]: I0203 13:07:53.086236 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k7sz5\" (UniqueName: \"kubernetes.io/projected/fbb597b7-f476-412f-994b-d28267283ea9-kube-api-access-k7sz5\") pod \"redhat-marketplace-sq7tx\" (UID: \"fbb597b7-f476-412f-994b-d28267283ea9\") " pod="openshift-marketplace/redhat-marketplace-sq7tx" Feb 03 13:07:53 crc kubenswrapper[4820]: I0203 13:07:53.086296 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fbb597b7-f476-412f-994b-d28267283ea9-catalog-content\") pod \"redhat-marketplace-sq7tx\" (UID: \"fbb597b7-f476-412f-994b-d28267283ea9\") " pod="openshift-marketplace/redhat-marketplace-sq7tx" Feb 03 13:07:53 crc kubenswrapper[4820]: I0203 13:07:53.086948 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fbb597b7-f476-412f-994b-d28267283ea9-catalog-content\") pod \"redhat-marketplace-sq7tx\" (UID: \"fbb597b7-f476-412f-994b-d28267283ea9\") " pod="openshift-marketplace/redhat-marketplace-sq7tx" Feb 03 13:07:53 crc kubenswrapper[4820]: I0203 13:07:53.087059 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fbb597b7-f476-412f-994b-d28267283ea9-utilities\") pod \"redhat-marketplace-sq7tx\" (UID: \"fbb597b7-f476-412f-994b-d28267283ea9\") " pod="openshift-marketplace/redhat-marketplace-sq7tx" Feb 03 13:07:53 crc kubenswrapper[4820]: I0203 13:07:53.113333 4820 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-k7sz5\" (UniqueName: \"kubernetes.io/projected/fbb597b7-f476-412f-994b-d28267283ea9-kube-api-access-k7sz5\") pod \"redhat-marketplace-sq7tx\" (UID: \"fbb597b7-f476-412f-994b-d28267283ea9\") " pod="openshift-marketplace/redhat-marketplace-sq7tx" Feb 03 13:07:53 crc kubenswrapper[4820]: I0203 13:07:53.330412 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sq7tx" Feb 03 13:07:53 crc kubenswrapper[4820]: I0203 13:07:53.822236 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-sq7tx"] Feb 03 13:07:54 crc kubenswrapper[4820]: I0203 13:07:54.528125 4820 generic.go:334] "Generic (PLEG): container finished" podID="fbb597b7-f476-412f-994b-d28267283ea9" containerID="edb90366eda80535d1feec287e6c6a9801bae4ad008198ea115e58e365bb2f26" exitCode=0 Feb 03 13:07:54 crc kubenswrapper[4820]: I0203 13:07:54.528233 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sq7tx" event={"ID":"fbb597b7-f476-412f-994b-d28267283ea9","Type":"ContainerDied","Data":"edb90366eda80535d1feec287e6c6a9801bae4ad008198ea115e58e365bb2f26"} Feb 03 13:07:54 crc kubenswrapper[4820]: I0203 13:07:54.528470 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sq7tx" event={"ID":"fbb597b7-f476-412f-994b-d28267283ea9","Type":"ContainerStarted","Data":"f48ed8a41f9a03fb79d1f0d05628e304f97e78a4eb21e5fff9ee5dc84bd64291"} Feb 03 13:07:54 crc kubenswrapper[4820]: I0203 13:07:54.532104 4820 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 03 13:07:55 crc kubenswrapper[4820]: I0203 13:07:55.539113 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sq7tx" event={"ID":"fbb597b7-f476-412f-994b-d28267283ea9","Type":"ContainerStarted","Data":"2e27a7d3df040e50c0af916e6c5a0dec5594d35101366c8ca0ba328ed69c70c2"} Feb 03 13:07:56 crc kubenswrapper[4820]: I0203 13:07:56.553706 4820 generic.go:334] "Generic (PLEG): container finished" podID="fbb597b7-f476-412f-994b-d28267283ea9" containerID="2e27a7d3df040e50c0af916e6c5a0dec5594d35101366c8ca0ba328ed69c70c2" exitCode=0 Feb 03 13:07:56 crc kubenswrapper[4820]: I0203 13:07:56.553791 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sq7tx" event={"ID":"fbb597b7-f476-412f-994b-d28267283ea9","Type":"ContainerDied","Data":"2e27a7d3df040e50c0af916e6c5a0dec5594d35101366c8ca0ba328ed69c70c2"} Feb 03 13:07:57 crc kubenswrapper[4820]: I0203 13:07:57.569717 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sq7tx" event={"ID":"fbb597b7-f476-412f-994b-d28267283ea9","Type":"ContainerStarted","Data":"0a496dbe6a1d78a8f471db0678246504d76ab0a23b20461ac6db7947bf2883b7"} Feb 03 13:07:57 crc kubenswrapper[4820]: I0203 13:07:57.596133 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-sq7tx" podStartSLOduration=3.023771203 podStartE2EDuration="5.59607852s" podCreationTimestamp="2026-02-03 13:07:52 +0000 UTC" firstStartedPulling="2026-02-03 13:07:54.531342005 +0000 UTC m=+3792.054417879" lastFinishedPulling="2026-02-03 13:07:57.103649332 +0000 UTC m=+3794.626725196" observedRunningTime="2026-02-03 13:07:57.591163918 +0000 UTC m=+3795.114239782" watchObservedRunningTime="2026-02-03 13:07:57.59607852 +0000 UTC 
m=+3795.119154384" Feb 03 13:08:03 crc kubenswrapper[4820]: I0203 13:08:03.330912 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-sq7tx" Feb 03 13:08:03 crc kubenswrapper[4820]: I0203 13:08:03.331491 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-sq7tx" Feb 03 13:08:03 crc kubenswrapper[4820]: I0203 13:08:03.406798 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-sq7tx" Feb 03 13:08:03 crc kubenswrapper[4820]: I0203 13:08:03.671472 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-sq7tx" Feb 03 13:08:03 crc kubenswrapper[4820]: I0203 13:08:03.722342 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-sq7tx"] Feb 03 13:08:04 crc kubenswrapper[4820]: I0203 13:08:04.142549 4820 scope.go:117] "RemoveContainer" containerID="bc515a23be524fcd005e7330d6085c1912ede4a0e028795898e112821e86b187" Feb 03 13:08:04 crc kubenswrapper[4820]: E0203 13:08:04.142791 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qj7xr_openshift-machine-config-operator(2c02def6-29f2-448e-80ec-0c8ee290f053)\"" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" Feb 03 13:08:05 crc kubenswrapper[4820]: I0203 13:08:05.642660 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-sq7tx" podUID="fbb597b7-f476-412f-994b-d28267283ea9" containerName="registry-server" containerID="cri-o://0a496dbe6a1d78a8f471db0678246504d76ab0a23b20461ac6db7947bf2883b7" gracePeriod=2 Feb 03 13:08:06 crc kubenswrapper[4820]: I0203 13:08:06.145942 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sq7tx" Feb 03 13:08:06 crc kubenswrapper[4820]: I0203 13:08:06.278660 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fbb597b7-f476-412f-994b-d28267283ea9-catalog-content\") pod \"fbb597b7-f476-412f-994b-d28267283ea9\" (UID: \"fbb597b7-f476-412f-994b-d28267283ea9\") " Feb 03 13:08:06 crc kubenswrapper[4820]: I0203 13:08:06.280031 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k7sz5\" (UniqueName: \"kubernetes.io/projected/fbb597b7-f476-412f-994b-d28267283ea9-kube-api-access-k7sz5\") pod \"fbb597b7-f476-412f-994b-d28267283ea9\" (UID: \"fbb597b7-f476-412f-994b-d28267283ea9\") " Feb 03 13:08:06 crc kubenswrapper[4820]: I0203 13:08:06.281051 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fbb597b7-f476-412f-994b-d28267283ea9-utilities\") pod \"fbb597b7-f476-412f-994b-d28267283ea9\" (UID: \"fbb597b7-f476-412f-994b-d28267283ea9\") " Feb 03 13:08:06 crc kubenswrapper[4820]: I0203 13:08:06.282467 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fbb597b7-f476-412f-994b-d28267283ea9-utilities" (OuterVolumeSpecName: "utilities") pod "fbb597b7-f476-412f-994b-d28267283ea9" (UID: "fbb597b7-f476-412f-994b-d28267283ea9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 13:08:06 crc kubenswrapper[4820]: I0203 13:08:06.283228 4820 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fbb597b7-f476-412f-994b-d28267283ea9-utilities\") on node \"crc\" DevicePath \"\"" Feb 03 13:08:06 crc kubenswrapper[4820]: I0203 13:08:06.293080 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fbb597b7-f476-412f-994b-d28267283ea9-kube-api-access-k7sz5" (OuterVolumeSpecName: "kube-api-access-k7sz5") pod "fbb597b7-f476-412f-994b-d28267283ea9" (UID: "fbb597b7-f476-412f-994b-d28267283ea9"). InnerVolumeSpecName "kube-api-access-k7sz5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:08:06 crc kubenswrapper[4820]: I0203 13:08:06.303983 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fbb597b7-f476-412f-994b-d28267283ea9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fbb597b7-f476-412f-994b-d28267283ea9" (UID: "fbb597b7-f476-412f-994b-d28267283ea9"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 13:08:06 crc kubenswrapper[4820]: I0203 13:08:06.387358 4820 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fbb597b7-f476-412f-994b-d28267283ea9-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 03 13:08:06 crc kubenswrapper[4820]: I0203 13:08:06.387405 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k7sz5\" (UniqueName: \"kubernetes.io/projected/fbb597b7-f476-412f-994b-d28267283ea9-kube-api-access-k7sz5\") on node \"crc\" DevicePath \"\"" Feb 03 13:08:06 crc kubenswrapper[4820]: I0203 13:08:06.654129 4820 generic.go:334] "Generic (PLEG): container finished" podID="fbb597b7-f476-412f-994b-d28267283ea9" containerID="0a496dbe6a1d78a8f471db0678246504d76ab0a23b20461ac6db7947bf2883b7" exitCode=0 Feb 03 13:08:06 crc kubenswrapper[4820]: I0203 13:08:06.654174 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sq7tx" event={"ID":"fbb597b7-f476-412f-994b-d28267283ea9","Type":"ContainerDied","Data":"0a496dbe6a1d78a8f471db0678246504d76ab0a23b20461ac6db7947bf2883b7"} Feb 03 13:08:06 crc kubenswrapper[4820]: I0203 13:08:06.654189 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sq7tx" Feb 03 13:08:06 crc kubenswrapper[4820]: I0203 13:08:06.654209 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sq7tx" event={"ID":"fbb597b7-f476-412f-994b-d28267283ea9","Type":"ContainerDied","Data":"f48ed8a41f9a03fb79d1f0d05628e304f97e78a4eb21e5fff9ee5dc84bd64291"} Feb 03 13:08:06 crc kubenswrapper[4820]: I0203 13:08:06.654230 4820 scope.go:117] "RemoveContainer" containerID="0a496dbe6a1d78a8f471db0678246504d76ab0a23b20461ac6db7947bf2883b7" Feb 03 13:08:06 crc kubenswrapper[4820]: I0203 13:08:06.690621 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-sq7tx"] Feb 03 13:08:06 crc kubenswrapper[4820]: I0203 13:08:06.694586 4820 scope.go:117] "RemoveContainer" containerID="2e27a7d3df040e50c0af916e6c5a0dec5594d35101366c8ca0ba328ed69c70c2" Feb 03 13:08:06 crc kubenswrapper[4820]: I0203 13:08:06.705322 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-sq7tx"] Feb 03 13:08:06 crc kubenswrapper[4820]: I0203 13:08:06.721846 4820 scope.go:117] "RemoveContainer" containerID="edb90366eda80535d1feec287e6c6a9801bae4ad008198ea115e58e365bb2f26" Feb 03 13:08:06 crc kubenswrapper[4820]: I0203 13:08:06.775577 4820 scope.go:117] "RemoveContainer" containerID="0a496dbe6a1d78a8f471db0678246504d76ab0a23b20461ac6db7947bf2883b7" Feb 03 13:08:06 crc kubenswrapper[4820]: E0203 13:08:06.776352 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0a496dbe6a1d78a8f471db0678246504d76ab0a23b20461ac6db7947bf2883b7\": container with ID starting with 0a496dbe6a1d78a8f471db0678246504d76ab0a23b20461ac6db7947bf2883b7 not found: ID does not exist" containerID="0a496dbe6a1d78a8f471db0678246504d76ab0a23b20461ac6db7947bf2883b7" Feb 03 13:08:06 crc kubenswrapper[4820]: I0203 13:08:06.776437 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0a496dbe6a1d78a8f471db0678246504d76ab0a23b20461ac6db7947bf2883b7"} err="failed to get container status 
\"0a496dbe6a1d78a8f471db0678246504d76ab0a23b20461ac6db7947bf2883b7\": rpc error: code = NotFound desc = could not find container \"0a496dbe6a1d78a8f471db0678246504d76ab0a23b20461ac6db7947bf2883b7\": container with ID starting with 0a496dbe6a1d78a8f471db0678246504d76ab0a23b20461ac6db7947bf2883b7 not found: ID does not exist" Feb 03 13:08:06 crc kubenswrapper[4820]: I0203 13:08:06.776473 4820 scope.go:117] "RemoveContainer" containerID="2e27a7d3df040e50c0af916e6c5a0dec5594d35101366c8ca0ba328ed69c70c2" Feb 03 13:08:06 crc kubenswrapper[4820]: E0203 13:08:06.776795 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2e27a7d3df040e50c0af916e6c5a0dec5594d35101366c8ca0ba328ed69c70c2\": container with ID starting with 2e27a7d3df040e50c0af916e6c5a0dec5594d35101366c8ca0ba328ed69c70c2 not found: ID does not exist" containerID="2e27a7d3df040e50c0af916e6c5a0dec5594d35101366c8ca0ba328ed69c70c2" Feb 03 13:08:06 crc kubenswrapper[4820]: I0203 13:08:06.776837 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e27a7d3df040e50c0af916e6c5a0dec5594d35101366c8ca0ba328ed69c70c2"} err="failed to get container status \"2e27a7d3df040e50c0af916e6c5a0dec5594d35101366c8ca0ba328ed69c70c2\": rpc error: code = NotFound desc = could not find container \"2e27a7d3df040e50c0af916e6c5a0dec5594d35101366c8ca0ba328ed69c70c2\": container with ID starting with 2e27a7d3df040e50c0af916e6c5a0dec5594d35101366c8ca0ba328ed69c70c2 not found: ID does not exist" Feb 03 13:08:06 crc kubenswrapper[4820]: I0203 13:08:06.776863 4820 scope.go:117] "RemoveContainer" containerID="edb90366eda80535d1feec287e6c6a9801bae4ad008198ea115e58e365bb2f26" Feb 03 13:08:06 crc kubenswrapper[4820]: E0203 13:08:06.777266 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"edb90366eda80535d1feec287e6c6a9801bae4ad008198ea115e58e365bb2f26\": container with ID starting with edb90366eda80535d1feec287e6c6a9801bae4ad008198ea115e58e365bb2f26 not found: ID does not exist" containerID="edb90366eda80535d1feec287e6c6a9801bae4ad008198ea115e58e365bb2f26" Feb 03 13:08:06 crc kubenswrapper[4820]: I0203 13:08:06.777291 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"edb90366eda80535d1feec287e6c6a9801bae4ad008198ea115e58e365bb2f26"} err="failed to get container status \"edb90366eda80535d1feec287e6c6a9801bae4ad008198ea115e58e365bb2f26\": rpc error: code = NotFound desc = could not find container \"edb90366eda80535d1feec287e6c6a9801bae4ad008198ea115e58e365bb2f26\": container with ID starting with edb90366eda80535d1feec287e6c6a9801bae4ad008198ea115e58e365bb2f26 not found: ID does not exist" Feb 03 13:08:07 crc kubenswrapper[4820]: I0203 13:08:07.156845 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fbb597b7-f476-412f-994b-d28267283ea9" path="/var/lib/kubelet/pods/fbb597b7-f476-412f-994b-d28267283ea9/volumes" Feb 03 13:08:15 crc kubenswrapper[4820]: I0203 13:08:15.142811 4820 scope.go:117] "RemoveContainer" containerID="bc515a23be524fcd005e7330d6085c1912ede4a0e028795898e112821e86b187" Feb 03 13:08:15 crc kubenswrapper[4820]: E0203 13:08:15.144735 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-qj7xr_openshift-machine-config-operator(2c02def6-29f2-448e-80ec-0c8ee290f053)\"" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" Feb 03 13:08:27 crc kubenswrapper[4820]: I0203 13:08:27.142851 4820 scope.go:117] "RemoveContainer" containerID="bc515a23be524fcd005e7330d6085c1912ede4a0e028795898e112821e86b187" Feb 03 13:08:27 crc kubenswrapper[4820]: E0203 13:08:27.143783 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qj7xr_openshift-machine-config-operator(2c02def6-29f2-448e-80ec-0c8ee290f053)\"" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" Feb 03 13:08:41 crc kubenswrapper[4820]: I0203 13:08:41.143935 4820 scope.go:117] "RemoveContainer" containerID="bc515a23be524fcd005e7330d6085c1912ede4a0e028795898e112821e86b187" Feb 03 13:08:41 crc kubenswrapper[4820]: E0203 13:08:41.144714 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qj7xr_openshift-machine-config-operator(2c02def6-29f2-448e-80ec-0c8ee290f053)\"" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" Feb 03 13:08:54 crc kubenswrapper[4820]: I0203 13:08:54.349750 4820 scope.go:117] "RemoveContainer" containerID="bc515a23be524fcd005e7330d6085c1912ede4a0e028795898e112821e86b187" Feb 03 13:08:54 crc kubenswrapper[4820]: E0203 13:08:54.350941 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qj7xr_openshift-machine-config-operator(2c02def6-29f2-448e-80ec-0c8ee290f053)\"" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" Feb 03 13:09:06 crc kubenswrapper[4820]: I0203 13:09:06.143138 4820 scope.go:117] "RemoveContainer" containerID="bc515a23be524fcd005e7330d6085c1912ede4a0e028795898e112821e86b187" Feb 03 13:09:06 crc kubenswrapper[4820]: E0203 13:09:06.143929 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qj7xr_openshift-machine-config-operator(2c02def6-29f2-448e-80ec-0c8ee290f053)\"" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" Feb 03 13:09:20 crc kubenswrapper[4820]: I0203 13:09:20.144072 4820 scope.go:117] "RemoveContainer" containerID="bc515a23be524fcd005e7330d6085c1912ede4a0e028795898e112821e86b187" Feb 03 13:09:20 crc kubenswrapper[4820]: E0203 13:09:20.145549 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qj7xr_openshift-machine-config-operator(2c02def6-29f2-448e-80ec-0c8ee290f053)\"" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" 
podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" Feb 03 13:09:35 crc kubenswrapper[4820]: I0203 13:09:35.143065 4820 scope.go:117] "RemoveContainer" containerID="bc515a23be524fcd005e7330d6085c1912ede4a0e028795898e112821e86b187" Feb 03 13:09:35 crc kubenswrapper[4820]: E0203 13:09:35.144070 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qj7xr_openshift-machine-config-operator(2c02def6-29f2-448e-80ec-0c8ee290f053)\"" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" Feb 03 13:09:47 crc kubenswrapper[4820]: I0203 13:09:47.142593 4820 scope.go:117] "RemoveContainer" containerID="bc515a23be524fcd005e7330d6085c1912ede4a0e028795898e112821e86b187" Feb 03 13:09:47 crc kubenswrapper[4820]: E0203 13:09:47.143233 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qj7xr_openshift-machine-config-operator(2c02def6-29f2-448e-80ec-0c8ee290f053)\"" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" Feb 03 13:09:58 crc kubenswrapper[4820]: I0203 13:09:58.142484 4820 scope.go:117] "RemoveContainer" containerID="bc515a23be524fcd005e7330d6085c1912ede4a0e028795898e112821e86b187" Feb 03 13:09:58 crc kubenswrapper[4820]: E0203 13:09:58.143463 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qj7xr_openshift-machine-config-operator(2c02def6-29f2-448e-80ec-0c8ee290f053)\"" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" Feb 03 13:10:13 crc kubenswrapper[4820]: I0203 13:10:13.160748 4820 scope.go:117] "RemoveContainer" containerID="bc515a23be524fcd005e7330d6085c1912ede4a0e028795898e112821e86b187" Feb 03 13:10:13 crc kubenswrapper[4820]: E0203 13:10:13.161598 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qj7xr_openshift-machine-config-operator(2c02def6-29f2-448e-80ec-0c8ee290f053)\"" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" Feb 03 13:10:27 crc kubenswrapper[4820]: I0203 13:10:27.143130 4820 scope.go:117] "RemoveContainer" containerID="bc515a23be524fcd005e7330d6085c1912ede4a0e028795898e112821e86b187" Feb 03 13:10:27 crc kubenswrapper[4820]: E0203 13:10:27.143900 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qj7xr_openshift-machine-config-operator(2c02def6-29f2-448e-80ec-0c8ee290f053)\"" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" Feb 03 13:10:41 crc kubenswrapper[4820]: I0203 13:10:41.144301 4820 scope.go:117] "RemoveContainer" 
containerID="bc515a23be524fcd005e7330d6085c1912ede4a0e028795898e112821e86b187" Feb 03 13:10:41 crc kubenswrapper[4820]: E0203 13:10:41.145254 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qj7xr_openshift-machine-config-operator(2c02def6-29f2-448e-80ec-0c8ee290f053)\"" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" Feb 03 13:10:52 crc kubenswrapper[4820]: I0203 13:10:52.142734 4820 scope.go:117] "RemoveContainer" containerID="bc515a23be524fcd005e7330d6085c1912ede4a0e028795898e112821e86b187" Feb 03 13:10:52 crc kubenswrapper[4820]: E0203 13:10:52.143553 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qj7xr_openshift-machine-config-operator(2c02def6-29f2-448e-80ec-0c8ee290f053)\"" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" Feb 03 13:11:05 crc kubenswrapper[4820]: I0203 13:11:05.143105 4820 scope.go:117] "RemoveContainer" containerID="bc515a23be524fcd005e7330d6085c1912ede4a0e028795898e112821e86b187" Feb 03 13:11:06 crc kubenswrapper[4820]: I0203 13:11:06.310806 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" event={"ID":"2c02def6-29f2-448e-80ec-0c8ee290f053","Type":"ContainerStarted","Data":"f5152e771cae7873613e52c0fe6409fb9c277d69a21d89b557616fcf979b6606"} Feb 03 13:13:08 crc kubenswrapper[4820]: I0203 13:13:08.298835 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-cvs64"] Feb 03 13:13:08 crc kubenswrapper[4820]: E0203 13:13:08.300007 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fbb597b7-f476-412f-994b-d28267283ea9" containerName="registry-server" Feb 03 13:13:08 crc kubenswrapper[4820]: I0203 13:13:08.300031 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="fbb597b7-f476-412f-994b-d28267283ea9" containerName="registry-server" Feb 03 13:13:08 crc kubenswrapper[4820]: E0203 13:13:08.300065 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fbb597b7-f476-412f-994b-d28267283ea9" containerName="extract-content" Feb 03 13:13:08 crc kubenswrapper[4820]: I0203 13:13:08.300074 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="fbb597b7-f476-412f-994b-d28267283ea9" containerName="extract-content" Feb 03 13:13:08 crc kubenswrapper[4820]: E0203 13:13:08.300097 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fbb597b7-f476-412f-994b-d28267283ea9" containerName="extract-utilities" Feb 03 13:13:08 crc kubenswrapper[4820]: I0203 13:13:08.300106 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="fbb597b7-f476-412f-994b-d28267283ea9" containerName="extract-utilities" Feb 03 13:13:08 crc kubenswrapper[4820]: I0203 13:13:08.300768 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="fbb597b7-f476-412f-994b-d28267283ea9" containerName="registry-server" Feb 03 13:13:08 crc kubenswrapper[4820]: I0203 13:13:08.302740 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-cvs64" Feb 03 13:13:08 crc kubenswrapper[4820]: I0203 13:13:08.330621 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-cvs64"] Feb 03 13:13:08 crc kubenswrapper[4820]: I0203 13:13:08.347326 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4679b70-6d4c-47db-96eb-0bc13e2469d8-catalog-content\") pod \"redhat-operators-cvs64\" (UID: \"c4679b70-6d4c-47db-96eb-0bc13e2469d8\") " pod="openshift-marketplace/redhat-operators-cvs64" Feb 03 13:13:08 crc kubenswrapper[4820]: I0203 13:13:08.347397 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrr6v\" (UniqueName: \"kubernetes.io/projected/c4679b70-6d4c-47db-96eb-0bc13e2469d8-kube-api-access-wrr6v\") pod \"redhat-operators-cvs64\" (UID: \"c4679b70-6d4c-47db-96eb-0bc13e2469d8\") " pod="openshift-marketplace/redhat-operators-cvs64" Feb 03 13:13:08 crc kubenswrapper[4820]: I0203 13:13:08.347539 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4679b70-6d4c-47db-96eb-0bc13e2469d8-utilities\") pod \"redhat-operators-cvs64\" (UID: \"c4679b70-6d4c-47db-96eb-0bc13e2469d8\") " pod="openshift-marketplace/redhat-operators-cvs64" Feb 03 13:13:08 crc kubenswrapper[4820]: I0203 13:13:08.450136 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4679b70-6d4c-47db-96eb-0bc13e2469d8-catalog-content\") pod \"redhat-operators-cvs64\" (UID: \"c4679b70-6d4c-47db-96eb-0bc13e2469d8\") " pod="openshift-marketplace/redhat-operators-cvs64" Feb 03 13:13:08 crc kubenswrapper[4820]: I0203 13:13:08.450493 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wrr6v\" (UniqueName: \"kubernetes.io/projected/c4679b70-6d4c-47db-96eb-0bc13e2469d8-kube-api-access-wrr6v\") pod \"redhat-operators-cvs64\" (UID: \"c4679b70-6d4c-47db-96eb-0bc13e2469d8\") " pod="openshift-marketplace/redhat-operators-cvs64" Feb 03 13:13:08 crc kubenswrapper[4820]: I0203 13:13:08.450772 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4679b70-6d4c-47db-96eb-0bc13e2469d8-utilities\") pod \"redhat-operators-cvs64\" (UID: \"c4679b70-6d4c-47db-96eb-0bc13e2469d8\") " pod="openshift-marketplace/redhat-operators-cvs64" Feb 03 13:13:08 crc kubenswrapper[4820]: I0203 13:13:08.451058 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4679b70-6d4c-47db-96eb-0bc13e2469d8-catalog-content\") pod \"redhat-operators-cvs64\" (UID: \"c4679b70-6d4c-47db-96eb-0bc13e2469d8\") " pod="openshift-marketplace/redhat-operators-cvs64" Feb 03 13:13:08 crc kubenswrapper[4820]: I0203 13:13:08.451192 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4679b70-6d4c-47db-96eb-0bc13e2469d8-utilities\") pod \"redhat-operators-cvs64\" (UID: \"c4679b70-6d4c-47db-96eb-0bc13e2469d8\") " pod="openshift-marketplace/redhat-operators-cvs64" Feb 03 13:13:08 crc kubenswrapper[4820]: I0203 13:13:08.480049 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-wrr6v\" (UniqueName: \"kubernetes.io/projected/c4679b70-6d4c-47db-96eb-0bc13e2469d8-kube-api-access-wrr6v\") pod \"redhat-operators-cvs64\" (UID: \"c4679b70-6d4c-47db-96eb-0bc13e2469d8\") " pod="openshift-marketplace/redhat-operators-cvs64" Feb 03 13:13:08 crc kubenswrapper[4820]: I0203 13:13:08.624735 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cvs64" Feb 03 13:13:09 crc kubenswrapper[4820]: I0203 13:13:09.156327 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-cvs64"] Feb 03 13:13:09 crc kubenswrapper[4820]: I0203 13:13:09.967786 4820 generic.go:334] "Generic (PLEG): container finished" podID="c4679b70-6d4c-47db-96eb-0bc13e2469d8" containerID="8d616b37cb751a0a9724811c0a0044210042561998968dd175619fdd9813e094" exitCode=0 Feb 03 13:13:09 crc kubenswrapper[4820]: I0203 13:13:09.967861 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cvs64" event={"ID":"c4679b70-6d4c-47db-96eb-0bc13e2469d8","Type":"ContainerDied","Data":"8d616b37cb751a0a9724811c0a0044210042561998968dd175619fdd9813e094"} Feb 03 13:13:09 crc kubenswrapper[4820]: I0203 13:13:09.967924 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cvs64" event={"ID":"c4679b70-6d4c-47db-96eb-0bc13e2469d8","Type":"ContainerStarted","Data":"e57a6f1c79b2071a21204dba39c0f1a9b12a5bbcb6be2c279766df742aa311f0"} Feb 03 13:13:09 crc kubenswrapper[4820]: I0203 13:13:09.971127 4820 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 03 13:13:31 crc kubenswrapper[4820]: I0203 13:13:31.365818 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qj7xr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 13:13:31 crc kubenswrapper[4820]: I0203 13:13:31.366331 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 13:13:45 crc kubenswrapper[4820]: E0203 13:13:45.636095 4820 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Feb 03 13:13:45 crc kubenswrapper[4820]: E0203 13:13:45.636720 4820 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wrr6v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-cvs64_openshift-marketplace(c4679b70-6d4c-47db-96eb-0bc13e2469d8): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 03 13:13:45 crc kubenswrapper[4820]: E0203 13:13:45.638057 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-cvs64" podUID="c4679b70-6d4c-47db-96eb-0bc13e2469d8" Feb 03 13:13:45 crc kubenswrapper[4820]: E0203 13:13:45.773533 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-cvs64" podUID="c4679b70-6d4c-47db-96eb-0bc13e2469d8" Feb 03 13:14:00 crc kubenswrapper[4820]: I0203 13:14:00.085616 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cvs64" event={"ID":"c4679b70-6d4c-47db-96eb-0bc13e2469d8","Type":"ContainerStarted","Data":"31eeecf46c591cf1d3c4f703c9c4683d2d3b02ee5a94103ee723f6782a032b11"} Feb 03 13:14:01 crc kubenswrapper[4820]: I0203 13:14:01.366395 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qj7xr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 13:14:01 crc kubenswrapper[4820]: I0203 13:14:01.366503 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 13:14:02 crc kubenswrapper[4820]: I0203 13:14:02.108389 4820 generic.go:334] "Generic (PLEG): container finished" 
podID="c4679b70-6d4c-47db-96eb-0bc13e2469d8" containerID="31eeecf46c591cf1d3c4f703c9c4683d2d3b02ee5a94103ee723f6782a032b11" exitCode=0 Feb 03 13:14:02 crc kubenswrapper[4820]: I0203 13:14:02.108482 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cvs64" event={"ID":"c4679b70-6d4c-47db-96eb-0bc13e2469d8","Type":"ContainerDied","Data":"31eeecf46c591cf1d3c4f703c9c4683d2d3b02ee5a94103ee723f6782a032b11"} Feb 03 13:14:04 crc kubenswrapper[4820]: I0203 13:14:04.136204 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cvs64" event={"ID":"c4679b70-6d4c-47db-96eb-0bc13e2469d8","Type":"ContainerStarted","Data":"c2b39da05e8caa4a353eea9c162813256e6fac719e2a111814944772c66f27b0"} Feb 03 13:14:04 crc kubenswrapper[4820]: I0203 13:14:04.172654 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-cvs64" podStartSLOduration=2.779534574 podStartE2EDuration="56.17252165s" podCreationTimestamp="2026-02-03 13:13:08 +0000 UTC" firstStartedPulling="2026-02-03 13:13:09.970766266 +0000 UTC m=+4107.493842120" lastFinishedPulling="2026-02-03 13:14:03.363753292 +0000 UTC m=+4160.886829196" observedRunningTime="2026-02-03 13:14:04.159880734 +0000 UTC m=+4161.682956618" watchObservedRunningTime="2026-02-03 13:14:04.17252165 +0000 UTC m=+4161.695597514" Feb 03 13:14:08 crc kubenswrapper[4820]: I0203 13:14:08.625925 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-cvs64" Feb 03 13:14:08 crc kubenswrapper[4820]: I0203 13:14:08.626612 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-cvs64" Feb 03 13:14:09 crc kubenswrapper[4820]: I0203 13:14:09.679724 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-cvs64" podUID="c4679b70-6d4c-47db-96eb-0bc13e2469d8" containerName="registry-server" probeResult="failure" output=< Feb 03 13:14:09 crc kubenswrapper[4820]: timeout: failed to connect service ":50051" within 1s Feb 03 13:14:09 crc kubenswrapper[4820]: > Feb 03 13:14:18 crc kubenswrapper[4820]: I0203 13:14:18.690760 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-cvs64" Feb 03 13:14:18 crc kubenswrapper[4820]: I0203 13:14:18.747444 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-cvs64" Feb 03 13:14:18 crc kubenswrapper[4820]: I0203 13:14:18.856746 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-cvs64"] Feb 03 13:14:18 crc kubenswrapper[4820]: I0203 13:14:18.937739 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-864d5"] Feb 03 13:14:18 crc kubenswrapper[4820]: I0203 13:14:18.938142 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-864d5" podUID="bfe48ea3-13a9-476b-9906-9c98aae0604c" containerName="registry-server" containerID="cri-o://d8a2d29baa9249ff13ad1753d5867f064975cd54a5bcb9b331dc252d4cf7cbad" gracePeriod=2 Feb 03 13:14:19 crc kubenswrapper[4820]: I0203 13:14:19.309249 4820 generic.go:334] "Generic (PLEG): container finished" podID="bfe48ea3-13a9-476b-9906-9c98aae0604c" containerID="d8a2d29baa9249ff13ad1753d5867f064975cd54a5bcb9b331dc252d4cf7cbad" exitCode=0 Feb 03 13:14:19 crc 
kubenswrapper[4820]: I0203 13:14:19.309383 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-864d5" event={"ID":"bfe48ea3-13a9-476b-9906-9c98aae0604c","Type":"ContainerDied","Data":"d8a2d29baa9249ff13ad1753d5867f064975cd54a5bcb9b331dc252d4cf7cbad"} Feb 03 13:14:19 crc kubenswrapper[4820]: I0203 13:14:19.503060 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-864d5" Feb 03 13:14:19 crc kubenswrapper[4820]: I0203 13:14:19.536745 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-95swq\" (UniqueName: \"kubernetes.io/projected/bfe48ea3-13a9-476b-9906-9c98aae0604c-kube-api-access-95swq\") pod \"bfe48ea3-13a9-476b-9906-9c98aae0604c\" (UID: \"bfe48ea3-13a9-476b-9906-9c98aae0604c\") " Feb 03 13:14:19 crc kubenswrapper[4820]: I0203 13:14:19.537297 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bfe48ea3-13a9-476b-9906-9c98aae0604c-utilities\") pod \"bfe48ea3-13a9-476b-9906-9c98aae0604c\" (UID: \"bfe48ea3-13a9-476b-9906-9c98aae0604c\") " Feb 03 13:14:19 crc kubenswrapper[4820]: I0203 13:14:19.537572 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bfe48ea3-13a9-476b-9906-9c98aae0604c-catalog-content\") pod \"bfe48ea3-13a9-476b-9906-9c98aae0604c\" (UID: \"bfe48ea3-13a9-476b-9906-9c98aae0604c\") " Feb 03 13:14:19 crc kubenswrapper[4820]: I0203 13:14:19.543470 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bfe48ea3-13a9-476b-9906-9c98aae0604c-utilities" (OuterVolumeSpecName: "utilities") pod "bfe48ea3-13a9-476b-9906-9c98aae0604c" (UID: "bfe48ea3-13a9-476b-9906-9c98aae0604c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 13:14:19 crc kubenswrapper[4820]: I0203 13:14:19.554920 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bfe48ea3-13a9-476b-9906-9c98aae0604c-kube-api-access-95swq" (OuterVolumeSpecName: "kube-api-access-95swq") pod "bfe48ea3-13a9-476b-9906-9c98aae0604c" (UID: "bfe48ea3-13a9-476b-9906-9c98aae0604c"). InnerVolumeSpecName "kube-api-access-95swq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:14:20 crc kubenswrapper[4820]: I0203 13:14:20.006036 4820 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bfe48ea3-13a9-476b-9906-9c98aae0604c-utilities\") on node \"crc\" DevicePath \"\"" Feb 03 13:14:20 crc kubenswrapper[4820]: I0203 13:14:20.006248 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-95swq\" (UniqueName: \"kubernetes.io/projected/bfe48ea3-13a9-476b-9906-9c98aae0604c-kube-api-access-95swq\") on node \"crc\" DevicePath \"\"" Feb 03 13:14:20 crc kubenswrapper[4820]: I0203 13:14:20.184481 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bfe48ea3-13a9-476b-9906-9c98aae0604c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bfe48ea3-13a9-476b-9906-9c98aae0604c" (UID: "bfe48ea3-13a9-476b-9906-9c98aae0604c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 13:14:20 crc kubenswrapper[4820]: I0203 13:14:20.212759 4820 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bfe48ea3-13a9-476b-9906-9c98aae0604c-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 03 13:14:20 crc kubenswrapper[4820]: I0203 13:14:20.326575 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-864d5" event={"ID":"bfe48ea3-13a9-476b-9906-9c98aae0604c","Type":"ContainerDied","Data":"02f1f1879aab04372157ac7f70e5e567ed008e582c2b57166e6bae9074f8b56f"} Feb 03 13:14:20 crc kubenswrapper[4820]: I0203 13:14:20.326609 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-864d5" Feb 03 13:14:20 crc kubenswrapper[4820]: I0203 13:14:20.326663 4820 scope.go:117] "RemoveContainer" containerID="d8a2d29baa9249ff13ad1753d5867f064975cd54a5bcb9b331dc252d4cf7cbad" Feb 03 13:14:20 crc kubenswrapper[4820]: I0203 13:14:20.379327 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-864d5"] Feb 03 13:14:20 crc kubenswrapper[4820]: I0203 13:14:20.386726 4820 scope.go:117] "RemoveContainer" containerID="788e9291dbccb903ad11f7618b2679a978d05e7158c76a7623331888d4d8d632" Feb 03 13:14:20 crc kubenswrapper[4820]: I0203 13:14:20.388122 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-864d5"] Feb 03 13:14:20 crc kubenswrapper[4820]: I0203 13:14:20.430309 4820 scope.go:117] "RemoveContainer" containerID="e1197927532153fe42fd679d1d1c8608b15aba2c492fe84f1ed1842cbd6f1836" Feb 03 13:14:21 crc kubenswrapper[4820]: I0203 13:14:21.156508 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bfe48ea3-13a9-476b-9906-9c98aae0604c" path="/var/lib/kubelet/pods/bfe48ea3-13a9-476b-9906-9c98aae0604c/volumes" Feb 03 13:14:31 crc kubenswrapper[4820]: I0203 13:14:31.365847 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qj7xr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 13:14:31 crc kubenswrapper[4820]: I0203 13:14:31.366492 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 13:14:31 crc kubenswrapper[4820]: I0203 13:14:31.366551 4820 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" Feb 03 13:14:31 crc kubenswrapper[4820]: I0203 13:14:31.367631 4820 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f5152e771cae7873613e52c0fe6409fb9c277d69a21d89b557616fcf979b6606"} pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 03 13:14:31 crc kubenswrapper[4820]: I0203 13:14:31.367702 4820 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" containerName="machine-config-daemon" containerID="cri-o://f5152e771cae7873613e52c0fe6409fb9c277d69a21d89b557616fcf979b6606" gracePeriod=600 Feb 03 13:14:32 crc kubenswrapper[4820]: I0203 13:14:32.467375 4820 generic.go:334] "Generic (PLEG): container finished" podID="2c02def6-29f2-448e-80ec-0c8ee290f053" containerID="f5152e771cae7873613e52c0fe6409fb9c277d69a21d89b557616fcf979b6606" exitCode=0 Feb 03 13:14:32 crc kubenswrapper[4820]: I0203 13:14:32.467472 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" event={"ID":"2c02def6-29f2-448e-80ec-0c8ee290f053","Type":"ContainerDied","Data":"f5152e771cae7873613e52c0fe6409fb9c277d69a21d89b557616fcf979b6606"} Feb 03 13:14:32 crc kubenswrapper[4820]: I0203 13:14:32.467972 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" event={"ID":"2c02def6-29f2-448e-80ec-0c8ee290f053","Type":"ContainerStarted","Data":"069c71e1527e3ec0d572448feb9efcb6041921dec784f111a6ccc2a7e1988376"} Feb 03 13:14:32 crc kubenswrapper[4820]: I0203 13:14:32.467998 4820 scope.go:117] "RemoveContainer" containerID="bc515a23be524fcd005e7330d6085c1912ede4a0e028795898e112821e86b187" Feb 03 13:15:00 crc kubenswrapper[4820]: I0203 13:15:00.199816 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29502075-wj466"] Feb 03 13:15:00 crc kubenswrapper[4820]: E0203 13:15:00.200743 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bfe48ea3-13a9-476b-9906-9c98aae0604c" containerName="extract-utilities" Feb 03 13:15:00 crc kubenswrapper[4820]: I0203 13:15:00.200769 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="bfe48ea3-13a9-476b-9906-9c98aae0604c" containerName="extract-utilities" Feb 03 13:15:00 crc kubenswrapper[4820]: E0203 13:15:00.200791 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bfe48ea3-13a9-476b-9906-9c98aae0604c" containerName="registry-server" Feb 03 13:15:00 crc kubenswrapper[4820]: I0203 13:15:00.200797 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="bfe48ea3-13a9-476b-9906-9c98aae0604c" containerName="registry-server" Feb 03 13:15:00 crc kubenswrapper[4820]: E0203 13:15:00.200823 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bfe48ea3-13a9-476b-9906-9c98aae0604c" containerName="extract-content" Feb 03 13:15:00 crc kubenswrapper[4820]: I0203 13:15:00.200829 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="bfe48ea3-13a9-476b-9906-9c98aae0604c" containerName="extract-content" Feb 03 13:15:00 crc kubenswrapper[4820]: I0203 13:15:00.201076 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="bfe48ea3-13a9-476b-9906-9c98aae0604c" containerName="registry-server" Feb 03 13:15:00 crc kubenswrapper[4820]: I0203 13:15:00.201870 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29502075-wj466" Feb 03 13:15:00 crc kubenswrapper[4820]: I0203 13:15:00.206725 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 03 13:15:00 crc kubenswrapper[4820]: I0203 13:15:00.208768 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 03 13:15:00 crc kubenswrapper[4820]: I0203 13:15:00.215164 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29502075-wj466"] Feb 03 13:15:00 crc kubenswrapper[4820]: I0203 13:15:00.242371 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rt9lj\" (UniqueName: \"kubernetes.io/projected/32f686e0-eb63-47b2-8fc5-2acad2c32dab-kube-api-access-rt9lj\") pod \"collect-profiles-29502075-wj466\" (UID: \"32f686e0-eb63-47b2-8fc5-2acad2c32dab\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29502075-wj466" Feb 03 13:15:00 crc kubenswrapper[4820]: I0203 13:15:00.242703 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/32f686e0-eb63-47b2-8fc5-2acad2c32dab-secret-volume\") pod \"collect-profiles-29502075-wj466\" (UID: \"32f686e0-eb63-47b2-8fc5-2acad2c32dab\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29502075-wj466" Feb 03 13:15:00 crc kubenswrapper[4820]: I0203 13:15:00.242976 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/32f686e0-eb63-47b2-8fc5-2acad2c32dab-config-volume\") pod \"collect-profiles-29502075-wj466\" (UID: \"32f686e0-eb63-47b2-8fc5-2acad2c32dab\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29502075-wj466" Feb 03 13:15:00 crc kubenswrapper[4820]: I0203 13:15:00.353998 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/32f686e0-eb63-47b2-8fc5-2acad2c32dab-config-volume\") pod \"collect-profiles-29502075-wj466\" (UID: \"32f686e0-eb63-47b2-8fc5-2acad2c32dab\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29502075-wj466" Feb 03 13:15:00 crc kubenswrapper[4820]: I0203 13:15:00.354314 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rt9lj\" (UniqueName: \"kubernetes.io/projected/32f686e0-eb63-47b2-8fc5-2acad2c32dab-kube-api-access-rt9lj\") pod \"collect-profiles-29502075-wj466\" (UID: \"32f686e0-eb63-47b2-8fc5-2acad2c32dab\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29502075-wj466" Feb 03 13:15:00 crc kubenswrapper[4820]: I0203 13:15:00.354415 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/32f686e0-eb63-47b2-8fc5-2acad2c32dab-secret-volume\") pod \"collect-profiles-29502075-wj466\" (UID: \"32f686e0-eb63-47b2-8fc5-2acad2c32dab\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29502075-wj466" Feb 03 13:15:00 crc kubenswrapper[4820]: I0203 13:15:00.363449 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/32f686e0-eb63-47b2-8fc5-2acad2c32dab-secret-volume\") pod 
\"collect-profiles-29502075-wj466\" (UID: \"32f686e0-eb63-47b2-8fc5-2acad2c32dab\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29502075-wj466" Feb 03 13:15:00 crc kubenswrapper[4820]: I0203 13:15:00.363510 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/32f686e0-eb63-47b2-8fc5-2acad2c32dab-config-volume\") pod \"collect-profiles-29502075-wj466\" (UID: \"32f686e0-eb63-47b2-8fc5-2acad2c32dab\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29502075-wj466" Feb 03 13:15:00 crc kubenswrapper[4820]: I0203 13:15:00.383472 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rt9lj\" (UniqueName: \"kubernetes.io/projected/32f686e0-eb63-47b2-8fc5-2acad2c32dab-kube-api-access-rt9lj\") pod \"collect-profiles-29502075-wj466\" (UID: \"32f686e0-eb63-47b2-8fc5-2acad2c32dab\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29502075-wj466" Feb 03 13:15:00 crc kubenswrapper[4820]: I0203 13:15:00.536926 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29502075-wj466" Feb 03 13:15:01 crc kubenswrapper[4820]: I0203 13:15:01.458064 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29502075-wj466"] Feb 03 13:15:02 crc kubenswrapper[4820]: I0203 13:15:02.393803 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29502075-wj466" event={"ID":"32f686e0-eb63-47b2-8fc5-2acad2c32dab","Type":"ContainerStarted","Data":"9ac7d22648e146e47553ea0717456fe6c676c9787f3f0540c88c96a9d3cde8bd"} Feb 03 13:15:02 crc kubenswrapper[4820]: I0203 13:15:02.394187 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29502075-wj466" event={"ID":"32f686e0-eb63-47b2-8fc5-2acad2c32dab","Type":"ContainerStarted","Data":"3ae14c371ab70cc4c29aeba9ac32224896da41215a2318951695a42d14ba6e59"} Feb 03 13:15:02 crc kubenswrapper[4820]: I0203 13:15:02.416014 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29502075-wj466" podStartSLOduration=2.415991601 podStartE2EDuration="2.415991601s" podCreationTimestamp="2026-02-03 13:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 13:15:02.414157263 +0000 UTC m=+4219.937233127" watchObservedRunningTime="2026-02-03 13:15:02.415991601 +0000 UTC m=+4219.939067465" Feb 03 13:15:04 crc kubenswrapper[4820]: I0203 13:15:04.442627 4820 generic.go:334] "Generic (PLEG): container finished" podID="32f686e0-eb63-47b2-8fc5-2acad2c32dab" containerID="9ac7d22648e146e47553ea0717456fe6c676c9787f3f0540c88c96a9d3cde8bd" exitCode=0 Feb 03 13:15:04 crc kubenswrapper[4820]: I0203 13:15:04.442749 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29502075-wj466" event={"ID":"32f686e0-eb63-47b2-8fc5-2acad2c32dab","Type":"ContainerDied","Data":"9ac7d22648e146e47553ea0717456fe6c676c9787f3f0540c88c96a9d3cde8bd"} Feb 03 13:15:05 crc kubenswrapper[4820]: I0203 13:15:05.817410 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29502075-wj466" Feb 03 13:15:05 crc kubenswrapper[4820]: I0203 13:15:05.967817 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/32f686e0-eb63-47b2-8fc5-2acad2c32dab-secret-volume\") pod \"32f686e0-eb63-47b2-8fc5-2acad2c32dab\" (UID: \"32f686e0-eb63-47b2-8fc5-2acad2c32dab\") " Feb 03 13:15:05 crc kubenswrapper[4820]: I0203 13:15:05.968114 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rt9lj\" (UniqueName: \"kubernetes.io/projected/32f686e0-eb63-47b2-8fc5-2acad2c32dab-kube-api-access-rt9lj\") pod \"32f686e0-eb63-47b2-8fc5-2acad2c32dab\" (UID: \"32f686e0-eb63-47b2-8fc5-2acad2c32dab\") " Feb 03 13:15:05 crc kubenswrapper[4820]: I0203 13:15:05.968315 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/32f686e0-eb63-47b2-8fc5-2acad2c32dab-config-volume\") pod \"32f686e0-eb63-47b2-8fc5-2acad2c32dab\" (UID: \"32f686e0-eb63-47b2-8fc5-2acad2c32dab\") " Feb 03 13:15:05 crc kubenswrapper[4820]: I0203 13:15:05.968926 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/32f686e0-eb63-47b2-8fc5-2acad2c32dab-config-volume" (OuterVolumeSpecName: "config-volume") pod "32f686e0-eb63-47b2-8fc5-2acad2c32dab" (UID: "32f686e0-eb63-47b2-8fc5-2acad2c32dab"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 13:15:05 crc kubenswrapper[4820]: I0203 13:15:05.969138 4820 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/32f686e0-eb63-47b2-8fc5-2acad2c32dab-config-volume\") on node \"crc\" DevicePath \"\"" Feb 03 13:15:05 crc kubenswrapper[4820]: I0203 13:15:05.974279 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32f686e0-eb63-47b2-8fc5-2acad2c32dab-kube-api-access-rt9lj" (OuterVolumeSpecName: "kube-api-access-rt9lj") pod "32f686e0-eb63-47b2-8fc5-2acad2c32dab" (UID: "32f686e0-eb63-47b2-8fc5-2acad2c32dab"). InnerVolumeSpecName "kube-api-access-rt9lj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:15:05 crc kubenswrapper[4820]: I0203 13:15:05.974432 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32f686e0-eb63-47b2-8fc5-2acad2c32dab-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "32f686e0-eb63-47b2-8fc5-2acad2c32dab" (UID: "32f686e0-eb63-47b2-8fc5-2acad2c32dab"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:15:06 crc kubenswrapper[4820]: I0203 13:15:06.072064 4820 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/32f686e0-eb63-47b2-8fc5-2acad2c32dab-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 03 13:15:06 crc kubenswrapper[4820]: I0203 13:15:06.072137 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rt9lj\" (UniqueName: \"kubernetes.io/projected/32f686e0-eb63-47b2-8fc5-2acad2c32dab-kube-api-access-rt9lj\") on node \"crc\" DevicePath \"\"" Feb 03 13:15:06 crc kubenswrapper[4820]: I0203 13:15:06.607514 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29502075-wj466" event={"ID":"32f686e0-eb63-47b2-8fc5-2acad2c32dab","Type":"ContainerDied","Data":"3ae14c371ab70cc4c29aeba9ac32224896da41215a2318951695a42d14ba6e59"} Feb 03 13:15:06 crc kubenswrapper[4820]: I0203 13:15:06.607581 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3ae14c371ab70cc4c29aeba9ac32224896da41215a2318951695a42d14ba6e59" Feb 03 13:15:06 crc kubenswrapper[4820]: I0203 13:15:06.607656 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29502075-wj466" Feb 03 13:15:06 crc kubenswrapper[4820]: I0203 13:15:06.667021 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29502030-q2tlf"] Feb 03 13:15:06 crc kubenswrapper[4820]: I0203 13:15:06.677123 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29502030-q2tlf"] Feb 03 13:15:07 crc kubenswrapper[4820]: I0203 13:15:07.158737 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="daf546c3-f063-47ae-8ab1-d9ee325ebae9" path="/var/lib/kubelet/pods/daf546c3-f063-47ae-8ab1-d9ee325ebae9/volumes" Feb 03 13:15:31 crc kubenswrapper[4820]: I0203 13:15:31.468215 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-4dfd4"] Feb 03 13:15:31 crc kubenswrapper[4820]: E0203 13:15:31.469249 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32f686e0-eb63-47b2-8fc5-2acad2c32dab" containerName="collect-profiles" Feb 03 13:15:31 crc kubenswrapper[4820]: I0203 13:15:31.469267 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="32f686e0-eb63-47b2-8fc5-2acad2c32dab" containerName="collect-profiles" Feb 03 13:15:31 crc kubenswrapper[4820]: I0203 13:15:31.469520 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="32f686e0-eb63-47b2-8fc5-2acad2c32dab" containerName="collect-profiles" Feb 03 13:15:31 crc kubenswrapper[4820]: I0203 13:15:31.471131 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-4dfd4" Feb 03 13:15:31 crc kubenswrapper[4820]: I0203 13:15:31.496435 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/73dc9ce4-8aa6-47dc-ac60-7232738f9b0d-catalog-content\") pod \"certified-operators-4dfd4\" (UID: \"73dc9ce4-8aa6-47dc-ac60-7232738f9b0d\") " pod="openshift-marketplace/certified-operators-4dfd4" Feb 03 13:15:31 crc kubenswrapper[4820]: I0203 13:15:31.496596 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zc66d\" (UniqueName: \"kubernetes.io/projected/73dc9ce4-8aa6-47dc-ac60-7232738f9b0d-kube-api-access-zc66d\") pod \"certified-operators-4dfd4\" (UID: \"73dc9ce4-8aa6-47dc-ac60-7232738f9b0d\") " pod="openshift-marketplace/certified-operators-4dfd4" Feb 03 13:15:31 crc kubenswrapper[4820]: I0203 13:15:31.496667 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/73dc9ce4-8aa6-47dc-ac60-7232738f9b0d-utilities\") pod \"certified-operators-4dfd4\" (UID: \"73dc9ce4-8aa6-47dc-ac60-7232738f9b0d\") " pod="openshift-marketplace/certified-operators-4dfd4" Feb 03 13:15:31 crc kubenswrapper[4820]: I0203 13:15:31.501924 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4dfd4"] Feb 03 13:15:31 crc kubenswrapper[4820]: I0203 13:15:31.600046 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/73dc9ce4-8aa6-47dc-ac60-7232738f9b0d-catalog-content\") pod \"certified-operators-4dfd4\" (UID: \"73dc9ce4-8aa6-47dc-ac60-7232738f9b0d\") " pod="openshift-marketplace/certified-operators-4dfd4" Feb 03 13:15:31 crc kubenswrapper[4820]: I0203 13:15:31.600199 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zc66d\" (UniqueName: \"kubernetes.io/projected/73dc9ce4-8aa6-47dc-ac60-7232738f9b0d-kube-api-access-zc66d\") pod \"certified-operators-4dfd4\" (UID: \"73dc9ce4-8aa6-47dc-ac60-7232738f9b0d\") " pod="openshift-marketplace/certified-operators-4dfd4" Feb 03 13:15:31 crc kubenswrapper[4820]: I0203 13:15:31.600257 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/73dc9ce4-8aa6-47dc-ac60-7232738f9b0d-utilities\") pod \"certified-operators-4dfd4\" (UID: \"73dc9ce4-8aa6-47dc-ac60-7232738f9b0d\") " pod="openshift-marketplace/certified-operators-4dfd4" Feb 03 13:15:31 crc kubenswrapper[4820]: I0203 13:15:31.600753 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/73dc9ce4-8aa6-47dc-ac60-7232738f9b0d-utilities\") pod \"certified-operators-4dfd4\" (UID: \"73dc9ce4-8aa6-47dc-ac60-7232738f9b0d\") " pod="openshift-marketplace/certified-operators-4dfd4" Feb 03 13:15:31 crc kubenswrapper[4820]: I0203 13:15:31.601039 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/73dc9ce4-8aa6-47dc-ac60-7232738f9b0d-catalog-content\") pod \"certified-operators-4dfd4\" (UID: \"73dc9ce4-8aa6-47dc-ac60-7232738f9b0d\") " pod="openshift-marketplace/certified-operators-4dfd4" Feb 03 13:15:31 crc kubenswrapper[4820]: I0203 13:15:31.611910 4820 scope.go:117] 
"RemoveContainer" containerID="b204e150f2c7bc3f9c89ce24e71e2e3ccf127e220c69d16114ac279fd5ba17e5" Feb 03 13:15:31 crc kubenswrapper[4820]: I0203 13:15:31.630818 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zc66d\" (UniqueName: \"kubernetes.io/projected/73dc9ce4-8aa6-47dc-ac60-7232738f9b0d-kube-api-access-zc66d\") pod \"certified-operators-4dfd4\" (UID: \"73dc9ce4-8aa6-47dc-ac60-7232738f9b0d\") " pod="openshift-marketplace/certified-operators-4dfd4" Feb 03 13:15:31 crc kubenswrapper[4820]: I0203 13:15:31.670435 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-pl75z"] Feb 03 13:15:31 crc kubenswrapper[4820]: I0203 13:15:31.673363 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pl75z" Feb 03 13:15:31 crc kubenswrapper[4820]: I0203 13:15:31.688003 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-pl75z"] Feb 03 13:15:31 crc kubenswrapper[4820]: I0203 13:15:31.797513 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4dfd4" Feb 03 13:15:31 crc kubenswrapper[4820]: I0203 13:15:31.804374 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v2hm4\" (UniqueName: \"kubernetes.io/projected/424dd878-90cb-48c9-897b-0fb45d37a08f-kube-api-access-v2hm4\") pod \"community-operators-pl75z\" (UID: \"424dd878-90cb-48c9-897b-0fb45d37a08f\") " pod="openshift-marketplace/community-operators-pl75z" Feb 03 13:15:31 crc kubenswrapper[4820]: I0203 13:15:31.804463 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/424dd878-90cb-48c9-897b-0fb45d37a08f-utilities\") pod \"community-operators-pl75z\" (UID: \"424dd878-90cb-48c9-897b-0fb45d37a08f\") " pod="openshift-marketplace/community-operators-pl75z" Feb 03 13:15:31 crc kubenswrapper[4820]: I0203 13:15:31.804736 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/424dd878-90cb-48c9-897b-0fb45d37a08f-catalog-content\") pod \"community-operators-pl75z\" (UID: \"424dd878-90cb-48c9-897b-0fb45d37a08f\") " pod="openshift-marketplace/community-operators-pl75z" Feb 03 13:15:32 crc kubenswrapper[4820]: I0203 13:15:32.076512 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v2hm4\" (UniqueName: \"kubernetes.io/projected/424dd878-90cb-48c9-897b-0fb45d37a08f-kube-api-access-v2hm4\") pod \"community-operators-pl75z\" (UID: \"424dd878-90cb-48c9-897b-0fb45d37a08f\") " pod="openshift-marketplace/community-operators-pl75z" Feb 03 13:15:32 crc kubenswrapper[4820]: I0203 13:15:32.076558 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/424dd878-90cb-48c9-897b-0fb45d37a08f-utilities\") pod \"community-operators-pl75z\" (UID: \"424dd878-90cb-48c9-897b-0fb45d37a08f\") " pod="openshift-marketplace/community-operators-pl75z" Feb 03 13:15:32 crc kubenswrapper[4820]: I0203 13:15:32.076578 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/424dd878-90cb-48c9-897b-0fb45d37a08f-catalog-content\") pod 
\"community-operators-pl75z\" (UID: \"424dd878-90cb-48c9-897b-0fb45d37a08f\") " pod="openshift-marketplace/community-operators-pl75z" Feb 03 13:15:32 crc kubenswrapper[4820]: I0203 13:15:32.077139 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/424dd878-90cb-48c9-897b-0fb45d37a08f-catalog-content\") pod \"community-operators-pl75z\" (UID: \"424dd878-90cb-48c9-897b-0fb45d37a08f\") " pod="openshift-marketplace/community-operators-pl75z" Feb 03 13:15:32 crc kubenswrapper[4820]: I0203 13:15:32.077219 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/424dd878-90cb-48c9-897b-0fb45d37a08f-utilities\") pod \"community-operators-pl75z\" (UID: \"424dd878-90cb-48c9-897b-0fb45d37a08f\") " pod="openshift-marketplace/community-operators-pl75z" Feb 03 13:15:32 crc kubenswrapper[4820]: I0203 13:15:32.107100 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v2hm4\" (UniqueName: \"kubernetes.io/projected/424dd878-90cb-48c9-897b-0fb45d37a08f-kube-api-access-v2hm4\") pod \"community-operators-pl75z\" (UID: \"424dd878-90cb-48c9-897b-0fb45d37a08f\") " pod="openshift-marketplace/community-operators-pl75z" Feb 03 13:15:32 crc kubenswrapper[4820]: I0203 13:15:32.351854 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pl75z" Feb 03 13:15:32 crc kubenswrapper[4820]: I0203 13:15:32.638765 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4dfd4"] Feb 03 13:15:32 crc kubenswrapper[4820]: W0203 13:15:32.918549 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod424dd878_90cb_48c9_897b_0fb45d37a08f.slice/crio-754143216c785bfa70f64eb1739a9ff4467445dfaeac4085ea5a96e331882715 WatchSource:0}: Error finding container 754143216c785bfa70f64eb1739a9ff4467445dfaeac4085ea5a96e331882715: Status 404 returned error can't find the container with id 754143216c785bfa70f64eb1739a9ff4467445dfaeac4085ea5a96e331882715 Feb 03 13:15:32 crc kubenswrapper[4820]: I0203 13:15:32.920217 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-pl75z"] Feb 03 13:15:33 crc kubenswrapper[4820]: I0203 13:15:33.514523 4820 generic.go:334] "Generic (PLEG): container finished" podID="73dc9ce4-8aa6-47dc-ac60-7232738f9b0d" containerID="af872cd68963d42c1c78a51c3ec457ddabab66bc3eab954f30fcab96855d7911" exitCode=0 Feb 03 13:15:33 crc kubenswrapper[4820]: I0203 13:15:33.514732 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4dfd4" event={"ID":"73dc9ce4-8aa6-47dc-ac60-7232738f9b0d","Type":"ContainerDied","Data":"af872cd68963d42c1c78a51c3ec457ddabab66bc3eab954f30fcab96855d7911"} Feb 03 13:15:33 crc kubenswrapper[4820]: I0203 13:15:33.514998 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4dfd4" event={"ID":"73dc9ce4-8aa6-47dc-ac60-7232738f9b0d","Type":"ContainerStarted","Data":"57dd96be175437a017ef90884267c0b77c2fd711f3dee2ef2054bac726ef2b04"} Feb 03 13:15:33 crc kubenswrapper[4820]: I0203 13:15:33.517512 4820 generic.go:334] "Generic (PLEG): container finished" podID="424dd878-90cb-48c9-897b-0fb45d37a08f" containerID="e4fea22cdcdeb67ad74c8b434af1ef21dad56fae01a126f04a62081eb556f928" exitCode=0 Feb 03 13:15:33 
crc kubenswrapper[4820]: I0203 13:15:33.517565 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pl75z" event={"ID":"424dd878-90cb-48c9-897b-0fb45d37a08f","Type":"ContainerDied","Data":"e4fea22cdcdeb67ad74c8b434af1ef21dad56fae01a126f04a62081eb556f928"} Feb 03 13:15:33 crc kubenswrapper[4820]: I0203 13:15:33.517611 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pl75z" event={"ID":"424dd878-90cb-48c9-897b-0fb45d37a08f","Type":"ContainerStarted","Data":"754143216c785bfa70f64eb1739a9ff4467445dfaeac4085ea5a96e331882715"} Feb 03 13:15:35 crc kubenswrapper[4820]: I0203 13:15:35.538254 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4dfd4" event={"ID":"73dc9ce4-8aa6-47dc-ac60-7232738f9b0d","Type":"ContainerStarted","Data":"74b61fa292ba8e997e50b940fbfaa59d7219f0b0bcca8dc5f41c4b4b44008244"} Feb 03 13:15:35 crc kubenswrapper[4820]: I0203 13:15:35.541167 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pl75z" event={"ID":"424dd878-90cb-48c9-897b-0fb45d37a08f","Type":"ContainerStarted","Data":"8010fb14bdeff27e912aeec426011b32c0f2f0d35cc10c20d6122a27bcc6cb61"} Feb 03 13:15:39 crc kubenswrapper[4820]: I0203 13:15:39.968695 4820 generic.go:334] "Generic (PLEG): container finished" podID="73dc9ce4-8aa6-47dc-ac60-7232738f9b0d" containerID="74b61fa292ba8e997e50b940fbfaa59d7219f0b0bcca8dc5f41c4b4b44008244" exitCode=0 Feb 03 13:15:39 crc kubenswrapper[4820]: I0203 13:15:39.968768 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4dfd4" event={"ID":"73dc9ce4-8aa6-47dc-ac60-7232738f9b0d","Type":"ContainerDied","Data":"74b61fa292ba8e997e50b940fbfaa59d7219f0b0bcca8dc5f41c4b4b44008244"} Feb 03 13:15:39 crc kubenswrapper[4820]: I0203 13:15:39.971894 4820 generic.go:334] "Generic (PLEG): container finished" podID="424dd878-90cb-48c9-897b-0fb45d37a08f" containerID="8010fb14bdeff27e912aeec426011b32c0f2f0d35cc10c20d6122a27bcc6cb61" exitCode=0 Feb 03 13:15:39 crc kubenswrapper[4820]: I0203 13:15:39.971932 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pl75z" event={"ID":"424dd878-90cb-48c9-897b-0fb45d37a08f","Type":"ContainerDied","Data":"8010fb14bdeff27e912aeec426011b32c0f2f0d35cc10c20d6122a27bcc6cb61"} Feb 03 13:15:42 crc kubenswrapper[4820]: I0203 13:15:42.053670 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4dfd4" event={"ID":"73dc9ce4-8aa6-47dc-ac60-7232738f9b0d","Type":"ContainerStarted","Data":"d5ac765e15b14832570652ac6df0d2f4afebbc2df991323686d42169ac2cbb4c"} Feb 03 13:15:42 crc kubenswrapper[4820]: I0203 13:15:42.057825 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pl75z" event={"ID":"424dd878-90cb-48c9-897b-0fb45d37a08f","Type":"ContainerStarted","Data":"c4ca2f691cfe8d0af8774838f9bb98a3841d10f824593aca25d11ef99473783f"} Feb 03 13:15:42 crc kubenswrapper[4820]: I0203 13:15:42.091564 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-4dfd4" podStartSLOduration=3.773607078 podStartE2EDuration="11.091531408s" podCreationTimestamp="2026-02-03 13:15:31 +0000 UTC" firstStartedPulling="2026-02-03 13:15:33.51763496 +0000 UTC m=+4251.040710824" lastFinishedPulling="2026-02-03 13:15:40.83555929 +0000 UTC m=+4258.358635154" 
observedRunningTime="2026-02-03 13:15:42.088664182 +0000 UTC m=+4259.611740046" watchObservedRunningTime="2026-02-03 13:15:42.091531408 +0000 UTC m=+4259.614607272" Feb 03 13:15:42 crc kubenswrapper[4820]: I0203 13:15:42.112676 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-pl75z" podStartSLOduration=3.7127564570000002 podStartE2EDuration="11.112647831s" podCreationTimestamp="2026-02-03 13:15:31 +0000 UTC" firstStartedPulling="2026-02-03 13:15:33.518693527 +0000 UTC m=+4251.041769391" lastFinishedPulling="2026-02-03 13:15:40.918584891 +0000 UTC m=+4258.441660765" observedRunningTime="2026-02-03 13:15:42.106288932 +0000 UTC m=+4259.629364796" watchObservedRunningTime="2026-02-03 13:15:42.112647831 +0000 UTC m=+4259.635723695" Feb 03 13:15:42 crc kubenswrapper[4820]: I0203 13:15:42.352881 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-pl75z" Feb 03 13:15:42 crc kubenswrapper[4820]: I0203 13:15:42.353243 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-pl75z" Feb 03 13:15:43 crc kubenswrapper[4820]: I0203 13:15:43.561564 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-pl75z" podUID="424dd878-90cb-48c9-897b-0fb45d37a08f" containerName="registry-server" probeResult="failure" output=< Feb 03 13:15:43 crc kubenswrapper[4820]: timeout: failed to connect service ":50051" within 1s Feb 03 13:15:43 crc kubenswrapper[4820]: > Feb 03 13:15:51 crc kubenswrapper[4820]: I0203 13:15:51.797981 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-4dfd4" Feb 03 13:15:51 crc kubenswrapper[4820]: I0203 13:15:51.798669 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-4dfd4" Feb 03 13:15:51 crc kubenswrapper[4820]: I0203 13:15:51.848281 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-4dfd4" Feb 03 13:15:52 crc kubenswrapper[4820]: I0203 13:15:52.229705 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-4dfd4" Feb 03 13:15:52 crc kubenswrapper[4820]: I0203 13:15:52.314175 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4dfd4"] Feb 03 13:15:52 crc kubenswrapper[4820]: I0203 13:15:52.399375 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-pl75z" Feb 03 13:15:52 crc kubenswrapper[4820]: I0203 13:15:52.449182 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-pl75z" Feb 03 13:15:54 crc kubenswrapper[4820]: I0203 13:15:54.195366 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-4dfd4" podUID="73dc9ce4-8aa6-47dc-ac60-7232738f9b0d" containerName="registry-server" containerID="cri-o://d5ac765e15b14832570652ac6df0d2f4afebbc2df991323686d42169ac2cbb4c" gracePeriod=2 Feb 03 13:15:54 crc kubenswrapper[4820]: I0203 13:15:54.591133 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-pl75z"] Feb 03 13:15:54 crc kubenswrapper[4820]: I0203 13:15:54.591369 4820 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-pl75z" podUID="424dd878-90cb-48c9-897b-0fb45d37a08f" containerName="registry-server" containerID="cri-o://c4ca2f691cfe8d0af8774838f9bb98a3841d10f824593aca25d11ef99473783f" gracePeriod=2 Feb 03 13:15:54 crc kubenswrapper[4820]: I0203 13:15:54.966362 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4dfd4" Feb 03 13:15:55 crc kubenswrapper[4820]: I0203 13:15:55.087323 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/73dc9ce4-8aa6-47dc-ac60-7232738f9b0d-catalog-content\") pod \"73dc9ce4-8aa6-47dc-ac60-7232738f9b0d\" (UID: \"73dc9ce4-8aa6-47dc-ac60-7232738f9b0d\") " Feb 03 13:15:55 crc kubenswrapper[4820]: I0203 13:15:55.087594 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zc66d\" (UniqueName: \"kubernetes.io/projected/73dc9ce4-8aa6-47dc-ac60-7232738f9b0d-kube-api-access-zc66d\") pod \"73dc9ce4-8aa6-47dc-ac60-7232738f9b0d\" (UID: \"73dc9ce4-8aa6-47dc-ac60-7232738f9b0d\") " Feb 03 13:15:55 crc kubenswrapper[4820]: I0203 13:15:55.087714 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/73dc9ce4-8aa6-47dc-ac60-7232738f9b0d-utilities\") pod \"73dc9ce4-8aa6-47dc-ac60-7232738f9b0d\" (UID: \"73dc9ce4-8aa6-47dc-ac60-7232738f9b0d\") " Feb 03 13:15:55 crc kubenswrapper[4820]: I0203 13:15:55.088810 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/73dc9ce4-8aa6-47dc-ac60-7232738f9b0d-utilities" (OuterVolumeSpecName: "utilities") pod "73dc9ce4-8aa6-47dc-ac60-7232738f9b0d" (UID: "73dc9ce4-8aa6-47dc-ac60-7232738f9b0d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 13:15:55 crc kubenswrapper[4820]: I0203 13:15:55.095955 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/73dc9ce4-8aa6-47dc-ac60-7232738f9b0d-kube-api-access-zc66d" (OuterVolumeSpecName: "kube-api-access-zc66d") pod "73dc9ce4-8aa6-47dc-ac60-7232738f9b0d" (UID: "73dc9ce4-8aa6-47dc-ac60-7232738f9b0d"). InnerVolumeSpecName "kube-api-access-zc66d". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:15:55 crc kubenswrapper[4820]: I0203 13:15:55.107885 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pl75z" Feb 03 13:15:55 crc kubenswrapper[4820]: I0203 13:15:55.157798 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/73dc9ce4-8aa6-47dc-ac60-7232738f9b0d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "73dc9ce4-8aa6-47dc-ac60-7232738f9b0d" (UID: "73dc9ce4-8aa6-47dc-ac60-7232738f9b0d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 13:15:55 crc kubenswrapper[4820]: I0203 13:15:55.189682 4820 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/73dc9ce4-8aa6-47dc-ac60-7232738f9b0d-utilities\") on node \"crc\" DevicePath \"\"" Feb 03 13:15:55 crc kubenswrapper[4820]: I0203 13:15:55.189862 4820 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/73dc9ce4-8aa6-47dc-ac60-7232738f9b0d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 03 13:15:55 crc kubenswrapper[4820]: I0203 13:15:55.189880 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zc66d\" (UniqueName: \"kubernetes.io/projected/73dc9ce4-8aa6-47dc-ac60-7232738f9b0d-kube-api-access-zc66d\") on node \"crc\" DevicePath \"\"" Feb 03 13:15:55 crc kubenswrapper[4820]: I0203 13:15:55.208305 4820 generic.go:334] "Generic (PLEG): container finished" podID="73dc9ce4-8aa6-47dc-ac60-7232738f9b0d" containerID="d5ac765e15b14832570652ac6df0d2f4afebbc2df991323686d42169ac2cbb4c" exitCode=0 Feb 03 13:15:55 crc kubenswrapper[4820]: I0203 13:15:55.208401 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4dfd4" Feb 03 13:15:55 crc kubenswrapper[4820]: I0203 13:15:55.208412 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4dfd4" event={"ID":"73dc9ce4-8aa6-47dc-ac60-7232738f9b0d","Type":"ContainerDied","Data":"d5ac765e15b14832570652ac6df0d2f4afebbc2df991323686d42169ac2cbb4c"} Feb 03 13:15:55 crc kubenswrapper[4820]: I0203 13:15:55.208467 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4dfd4" event={"ID":"73dc9ce4-8aa6-47dc-ac60-7232738f9b0d","Type":"ContainerDied","Data":"57dd96be175437a017ef90884267c0b77c2fd711f3dee2ef2054bac726ef2b04"} Feb 03 13:15:55 crc kubenswrapper[4820]: I0203 13:15:55.208486 4820 scope.go:117] "RemoveContainer" containerID="d5ac765e15b14832570652ac6df0d2f4afebbc2df991323686d42169ac2cbb4c" Feb 03 13:15:55 crc kubenswrapper[4820]: I0203 13:15:55.212133 4820 generic.go:334] "Generic (PLEG): container finished" podID="424dd878-90cb-48c9-897b-0fb45d37a08f" containerID="c4ca2f691cfe8d0af8774838f9bb98a3841d10f824593aca25d11ef99473783f" exitCode=0 Feb 03 13:15:55 crc kubenswrapper[4820]: I0203 13:15:55.212177 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pl75z" event={"ID":"424dd878-90cb-48c9-897b-0fb45d37a08f","Type":"ContainerDied","Data":"c4ca2f691cfe8d0af8774838f9bb98a3841d10f824593aca25d11ef99473783f"} Feb 03 13:15:55 crc kubenswrapper[4820]: I0203 13:15:55.212210 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pl75z" event={"ID":"424dd878-90cb-48c9-897b-0fb45d37a08f","Type":"ContainerDied","Data":"754143216c785bfa70f64eb1739a9ff4467445dfaeac4085ea5a96e331882715"} Feb 03 13:15:55 crc kubenswrapper[4820]: I0203 13:15:55.212246 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-pl75z" Feb 03 13:15:55 crc kubenswrapper[4820]: I0203 13:15:55.238921 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4dfd4"] Feb 03 13:15:55 crc kubenswrapper[4820]: I0203 13:15:55.241960 4820 scope.go:117] "RemoveContainer" containerID="74b61fa292ba8e997e50b940fbfaa59d7219f0b0bcca8dc5f41c4b4b44008244" Feb 03 13:15:55 crc kubenswrapper[4820]: I0203 13:15:55.248730 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-4dfd4"] Feb 03 13:15:55 crc kubenswrapper[4820]: I0203 13:15:55.266375 4820 scope.go:117] "RemoveContainer" containerID="af872cd68963d42c1c78a51c3ec457ddabab66bc3eab954f30fcab96855d7911" Feb 03 13:15:55 crc kubenswrapper[4820]: I0203 13:15:55.289270 4820 scope.go:117] "RemoveContainer" containerID="d5ac765e15b14832570652ac6df0d2f4afebbc2df991323686d42169ac2cbb4c" Feb 03 13:15:55 crc kubenswrapper[4820]: E0203 13:15:55.289777 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d5ac765e15b14832570652ac6df0d2f4afebbc2df991323686d42169ac2cbb4c\": container with ID starting with d5ac765e15b14832570652ac6df0d2f4afebbc2df991323686d42169ac2cbb4c not found: ID does not exist" containerID="d5ac765e15b14832570652ac6df0d2f4afebbc2df991323686d42169ac2cbb4c" Feb 03 13:15:55 crc kubenswrapper[4820]: I0203 13:15:55.289828 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d5ac765e15b14832570652ac6df0d2f4afebbc2df991323686d42169ac2cbb4c"} err="failed to get container status \"d5ac765e15b14832570652ac6df0d2f4afebbc2df991323686d42169ac2cbb4c\": rpc error: code = NotFound desc = could not find container \"d5ac765e15b14832570652ac6df0d2f4afebbc2df991323686d42169ac2cbb4c\": container with ID starting with d5ac765e15b14832570652ac6df0d2f4afebbc2df991323686d42169ac2cbb4c not found: ID does not exist" Feb 03 13:15:55 crc kubenswrapper[4820]: I0203 13:15:55.289851 4820 scope.go:117] "RemoveContainer" containerID="74b61fa292ba8e997e50b940fbfaa59d7219f0b0bcca8dc5f41c4b4b44008244" Feb 03 13:15:55 crc kubenswrapper[4820]: E0203 13:15:55.290098 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"74b61fa292ba8e997e50b940fbfaa59d7219f0b0bcca8dc5f41c4b4b44008244\": container with ID starting with 74b61fa292ba8e997e50b940fbfaa59d7219f0b0bcca8dc5f41c4b4b44008244 not found: ID does not exist" containerID="74b61fa292ba8e997e50b940fbfaa59d7219f0b0bcca8dc5f41c4b4b44008244" Feb 03 13:15:55 crc kubenswrapper[4820]: I0203 13:15:55.290125 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"74b61fa292ba8e997e50b940fbfaa59d7219f0b0bcca8dc5f41c4b4b44008244"} err="failed to get container status \"74b61fa292ba8e997e50b940fbfaa59d7219f0b0bcca8dc5f41c4b4b44008244\": rpc error: code = NotFound desc = could not find container \"74b61fa292ba8e997e50b940fbfaa59d7219f0b0bcca8dc5f41c4b4b44008244\": container with ID starting with 74b61fa292ba8e997e50b940fbfaa59d7219f0b0bcca8dc5f41c4b4b44008244 not found: ID does not exist" Feb 03 13:15:55 crc kubenswrapper[4820]: I0203 13:15:55.290143 4820 scope.go:117] "RemoveContainer" containerID="af872cd68963d42c1c78a51c3ec457ddabab66bc3eab954f30fcab96855d7911" Feb 03 13:15:55 crc kubenswrapper[4820]: E0203 13:15:55.290344 4820 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"af872cd68963d42c1c78a51c3ec457ddabab66bc3eab954f30fcab96855d7911\": container with ID starting with af872cd68963d42c1c78a51c3ec457ddabab66bc3eab954f30fcab96855d7911 not found: ID does not exist" containerID="af872cd68963d42c1c78a51c3ec457ddabab66bc3eab954f30fcab96855d7911" Feb 03 13:15:55 crc kubenswrapper[4820]: I0203 13:15:55.290371 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"af872cd68963d42c1c78a51c3ec457ddabab66bc3eab954f30fcab96855d7911"} err="failed to get container status \"af872cd68963d42c1c78a51c3ec457ddabab66bc3eab954f30fcab96855d7911\": rpc error: code = NotFound desc = could not find container \"af872cd68963d42c1c78a51c3ec457ddabab66bc3eab954f30fcab96855d7911\": container with ID starting with af872cd68963d42c1c78a51c3ec457ddabab66bc3eab954f30fcab96855d7911 not found: ID does not exist" Feb 03 13:15:55 crc kubenswrapper[4820]: I0203 13:15:55.290392 4820 scope.go:117] "RemoveContainer" containerID="c4ca2f691cfe8d0af8774838f9bb98a3841d10f824593aca25d11ef99473783f" Feb 03 13:15:55 crc kubenswrapper[4820]: I0203 13:15:55.290953 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/424dd878-90cb-48c9-897b-0fb45d37a08f-utilities\") pod \"424dd878-90cb-48c9-897b-0fb45d37a08f\" (UID: \"424dd878-90cb-48c9-897b-0fb45d37a08f\") " Feb 03 13:15:55 crc kubenswrapper[4820]: I0203 13:15:55.291088 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v2hm4\" (UniqueName: \"kubernetes.io/projected/424dd878-90cb-48c9-897b-0fb45d37a08f-kube-api-access-v2hm4\") pod \"424dd878-90cb-48c9-897b-0fb45d37a08f\" (UID: \"424dd878-90cb-48c9-897b-0fb45d37a08f\") " Feb 03 13:15:55 crc kubenswrapper[4820]: I0203 13:15:55.291112 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/424dd878-90cb-48c9-897b-0fb45d37a08f-catalog-content\") pod \"424dd878-90cb-48c9-897b-0fb45d37a08f\" (UID: \"424dd878-90cb-48c9-897b-0fb45d37a08f\") " Feb 03 13:15:55 crc kubenswrapper[4820]: I0203 13:15:55.291525 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/424dd878-90cb-48c9-897b-0fb45d37a08f-utilities" (OuterVolumeSpecName: "utilities") pod "424dd878-90cb-48c9-897b-0fb45d37a08f" (UID: "424dd878-90cb-48c9-897b-0fb45d37a08f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 13:15:55 crc kubenswrapper[4820]: I0203 13:15:55.291874 4820 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/424dd878-90cb-48c9-897b-0fb45d37a08f-utilities\") on node \"crc\" DevicePath \"\"" Feb 03 13:15:55 crc kubenswrapper[4820]: I0203 13:15:55.294519 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/424dd878-90cb-48c9-897b-0fb45d37a08f-kube-api-access-v2hm4" (OuterVolumeSpecName: "kube-api-access-v2hm4") pod "424dd878-90cb-48c9-897b-0fb45d37a08f" (UID: "424dd878-90cb-48c9-897b-0fb45d37a08f"). InnerVolumeSpecName "kube-api-access-v2hm4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:15:55 crc kubenswrapper[4820]: I0203 13:15:55.325549 4820 scope.go:117] "RemoveContainer" containerID="8010fb14bdeff27e912aeec426011b32c0f2f0d35cc10c20d6122a27bcc6cb61" Feb 03 13:15:55 crc kubenswrapper[4820]: I0203 13:15:55.357301 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/424dd878-90cb-48c9-897b-0fb45d37a08f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "424dd878-90cb-48c9-897b-0fb45d37a08f" (UID: "424dd878-90cb-48c9-897b-0fb45d37a08f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 13:15:55 crc kubenswrapper[4820]: I0203 13:15:55.385441 4820 scope.go:117] "RemoveContainer" containerID="e4fea22cdcdeb67ad74c8b434af1ef21dad56fae01a126f04a62081eb556f928" Feb 03 13:15:55 crc kubenswrapper[4820]: I0203 13:15:55.394318 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v2hm4\" (UniqueName: \"kubernetes.io/projected/424dd878-90cb-48c9-897b-0fb45d37a08f-kube-api-access-v2hm4\") on node \"crc\" DevicePath \"\"" Feb 03 13:15:55 crc kubenswrapper[4820]: I0203 13:15:55.394348 4820 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/424dd878-90cb-48c9-897b-0fb45d37a08f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 03 13:15:55 crc kubenswrapper[4820]: I0203 13:15:55.441581 4820 scope.go:117] "RemoveContainer" containerID="c4ca2f691cfe8d0af8774838f9bb98a3841d10f824593aca25d11ef99473783f" Feb 03 13:15:55 crc kubenswrapper[4820]: E0203 13:15:55.442217 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c4ca2f691cfe8d0af8774838f9bb98a3841d10f824593aca25d11ef99473783f\": container with ID starting with c4ca2f691cfe8d0af8774838f9bb98a3841d10f824593aca25d11ef99473783f not found: ID does not exist" containerID="c4ca2f691cfe8d0af8774838f9bb98a3841d10f824593aca25d11ef99473783f" Feb 03 13:15:55 crc kubenswrapper[4820]: I0203 13:15:55.442254 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c4ca2f691cfe8d0af8774838f9bb98a3841d10f824593aca25d11ef99473783f"} err="failed to get container status \"c4ca2f691cfe8d0af8774838f9bb98a3841d10f824593aca25d11ef99473783f\": rpc error: code = NotFound desc = could not find container \"c4ca2f691cfe8d0af8774838f9bb98a3841d10f824593aca25d11ef99473783f\": container with ID starting with c4ca2f691cfe8d0af8774838f9bb98a3841d10f824593aca25d11ef99473783f not found: ID does not exist" Feb 03 13:15:55 crc kubenswrapper[4820]: I0203 13:15:55.442304 4820 scope.go:117] "RemoveContainer" containerID="8010fb14bdeff27e912aeec426011b32c0f2f0d35cc10c20d6122a27bcc6cb61" Feb 03 13:15:55 crc kubenswrapper[4820]: E0203 13:15:55.443055 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8010fb14bdeff27e912aeec426011b32c0f2f0d35cc10c20d6122a27bcc6cb61\": container with ID starting with 8010fb14bdeff27e912aeec426011b32c0f2f0d35cc10c20d6122a27bcc6cb61 not found: ID does not exist" containerID="8010fb14bdeff27e912aeec426011b32c0f2f0d35cc10c20d6122a27bcc6cb61" Feb 03 13:15:55 crc kubenswrapper[4820]: I0203 13:15:55.443085 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8010fb14bdeff27e912aeec426011b32c0f2f0d35cc10c20d6122a27bcc6cb61"} err="failed to get container 
status \"8010fb14bdeff27e912aeec426011b32c0f2f0d35cc10c20d6122a27bcc6cb61\": rpc error: code = NotFound desc = could not find container \"8010fb14bdeff27e912aeec426011b32c0f2f0d35cc10c20d6122a27bcc6cb61\": container with ID starting with 8010fb14bdeff27e912aeec426011b32c0f2f0d35cc10c20d6122a27bcc6cb61 not found: ID does not exist" Feb 03 13:15:55 crc kubenswrapper[4820]: I0203 13:15:55.443111 4820 scope.go:117] "RemoveContainer" containerID="e4fea22cdcdeb67ad74c8b434af1ef21dad56fae01a126f04a62081eb556f928" Feb 03 13:15:55 crc kubenswrapper[4820]: E0203 13:15:55.443584 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e4fea22cdcdeb67ad74c8b434af1ef21dad56fae01a126f04a62081eb556f928\": container with ID starting with e4fea22cdcdeb67ad74c8b434af1ef21dad56fae01a126f04a62081eb556f928 not found: ID does not exist" containerID="e4fea22cdcdeb67ad74c8b434af1ef21dad56fae01a126f04a62081eb556f928" Feb 03 13:15:55 crc kubenswrapper[4820]: I0203 13:15:55.443651 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e4fea22cdcdeb67ad74c8b434af1ef21dad56fae01a126f04a62081eb556f928"} err="failed to get container status \"e4fea22cdcdeb67ad74c8b434af1ef21dad56fae01a126f04a62081eb556f928\": rpc error: code = NotFound desc = could not find container \"e4fea22cdcdeb67ad74c8b434af1ef21dad56fae01a126f04a62081eb556f928\": container with ID starting with e4fea22cdcdeb67ad74c8b434af1ef21dad56fae01a126f04a62081eb556f928 not found: ID does not exist" Feb 03 13:15:55 crc kubenswrapper[4820]: I0203 13:15:55.562776 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-pl75z"] Feb 03 13:15:55 crc kubenswrapper[4820]: I0203 13:15:55.571461 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-pl75z"] Feb 03 13:15:57 crc kubenswrapper[4820]: I0203 13:15:57.154811 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="424dd878-90cb-48c9-897b-0fb45d37a08f" path="/var/lib/kubelet/pods/424dd878-90cb-48c9-897b-0fb45d37a08f/volumes" Feb 03 13:15:57 crc kubenswrapper[4820]: I0203 13:15:57.156003 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="73dc9ce4-8aa6-47dc-ac60-7232738f9b0d" path="/var/lib/kubelet/pods/73dc9ce4-8aa6-47dc-ac60-7232738f9b0d/volumes" Feb 03 13:17:01 crc kubenswrapper[4820]: I0203 13:17:01.365535 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qj7xr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 13:17:01 crc kubenswrapper[4820]: I0203 13:17:01.366627 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 13:17:31 crc kubenswrapper[4820]: I0203 13:17:31.365835 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qj7xr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 13:17:31 crc 
kubenswrapper[4820]: I0203 13:17:31.366462 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 13:18:01 crc kubenswrapper[4820]: I0203 13:18:01.365527 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qj7xr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 13:18:01 crc kubenswrapper[4820]: I0203 13:18:01.366122 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 13:18:01 crc kubenswrapper[4820]: I0203 13:18:01.366235 4820 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" Feb 03 13:18:01 crc kubenswrapper[4820]: I0203 13:18:01.367168 4820 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"069c71e1527e3ec0d572448feb9efcb6041921dec784f111a6ccc2a7e1988376"} pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 03 13:18:01 crc kubenswrapper[4820]: I0203 13:18:01.367265 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" containerName="machine-config-daemon" containerID="cri-o://069c71e1527e3ec0d572448feb9efcb6041921dec784f111a6ccc2a7e1988376" gracePeriod=600 Feb 03 13:18:01 crc kubenswrapper[4820]: E0203 13:18:01.490329 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qj7xr_openshift-machine-config-operator(2c02def6-29f2-448e-80ec-0c8ee290f053)\"" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" Feb 03 13:18:01 crc kubenswrapper[4820]: I0203 13:18:01.638731 4820 generic.go:334] "Generic (PLEG): container finished" podID="2c02def6-29f2-448e-80ec-0c8ee290f053" containerID="069c71e1527e3ec0d572448feb9efcb6041921dec784f111a6ccc2a7e1988376" exitCode=0 Feb 03 13:18:01 crc kubenswrapper[4820]: I0203 13:18:01.638818 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" event={"ID":"2c02def6-29f2-448e-80ec-0c8ee290f053","Type":"ContainerDied","Data":"069c71e1527e3ec0d572448feb9efcb6041921dec784f111a6ccc2a7e1988376"} Feb 03 13:18:01 crc kubenswrapper[4820]: I0203 13:18:01.639112 4820 scope.go:117] "RemoveContainer" containerID="f5152e771cae7873613e52c0fe6409fb9c277d69a21d89b557616fcf979b6606" Feb 03 13:18:01 crc kubenswrapper[4820]: I0203 13:18:01.640040 4820 scope.go:117] "RemoveContainer" 
containerID="069c71e1527e3ec0d572448feb9efcb6041921dec784f111a6ccc2a7e1988376" Feb 03 13:18:01 crc kubenswrapper[4820]: E0203 13:18:01.640404 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qj7xr_openshift-machine-config-operator(2c02def6-29f2-448e-80ec-0c8ee290f053)\"" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" Feb 03 13:18:03 crc kubenswrapper[4820]: I0203 13:18:03.405427 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-92m48"] Feb 03 13:18:03 crc kubenswrapper[4820]: E0203 13:18:03.406633 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="424dd878-90cb-48c9-897b-0fb45d37a08f" containerName="extract-content" Feb 03 13:18:03 crc kubenswrapper[4820]: I0203 13:18:03.406654 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="424dd878-90cb-48c9-897b-0fb45d37a08f" containerName="extract-content" Feb 03 13:18:03 crc kubenswrapper[4820]: E0203 13:18:03.406691 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73dc9ce4-8aa6-47dc-ac60-7232738f9b0d" containerName="registry-server" Feb 03 13:18:03 crc kubenswrapper[4820]: I0203 13:18:03.406701 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="73dc9ce4-8aa6-47dc-ac60-7232738f9b0d" containerName="registry-server" Feb 03 13:18:03 crc kubenswrapper[4820]: E0203 13:18:03.406711 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="424dd878-90cb-48c9-897b-0fb45d37a08f" containerName="registry-server" Feb 03 13:18:03 crc kubenswrapper[4820]: I0203 13:18:03.406718 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="424dd878-90cb-48c9-897b-0fb45d37a08f" containerName="registry-server" Feb 03 13:18:03 crc kubenswrapper[4820]: E0203 13:18:03.406738 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73dc9ce4-8aa6-47dc-ac60-7232738f9b0d" containerName="extract-content" Feb 03 13:18:03 crc kubenswrapper[4820]: I0203 13:18:03.406744 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="73dc9ce4-8aa6-47dc-ac60-7232738f9b0d" containerName="extract-content" Feb 03 13:18:03 crc kubenswrapper[4820]: E0203 13:18:03.406780 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="424dd878-90cb-48c9-897b-0fb45d37a08f" containerName="extract-utilities" Feb 03 13:18:03 crc kubenswrapper[4820]: I0203 13:18:03.406787 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="424dd878-90cb-48c9-897b-0fb45d37a08f" containerName="extract-utilities" Feb 03 13:18:03 crc kubenswrapper[4820]: E0203 13:18:03.406798 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73dc9ce4-8aa6-47dc-ac60-7232738f9b0d" containerName="extract-utilities" Feb 03 13:18:03 crc kubenswrapper[4820]: I0203 13:18:03.406805 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="73dc9ce4-8aa6-47dc-ac60-7232738f9b0d" containerName="extract-utilities" Feb 03 13:18:03 crc kubenswrapper[4820]: I0203 13:18:03.407379 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="73dc9ce4-8aa6-47dc-ac60-7232738f9b0d" containerName="registry-server" Feb 03 13:18:03 crc kubenswrapper[4820]: I0203 13:18:03.407551 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="424dd878-90cb-48c9-897b-0fb45d37a08f" containerName="registry-server" Feb 03 13:18:03 
crc kubenswrapper[4820]: I0203 13:18:03.410260 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-92m48" Feb 03 13:18:03 crc kubenswrapper[4820]: I0203 13:18:03.430475 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-92m48"] Feb 03 13:18:03 crc kubenswrapper[4820]: I0203 13:18:03.537569 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1163c2ee-dfc2-4376-b628-beeff3dbc115-catalog-content\") pod \"redhat-marketplace-92m48\" (UID: \"1163c2ee-dfc2-4376-b628-beeff3dbc115\") " pod="openshift-marketplace/redhat-marketplace-92m48" Feb 03 13:18:03 crc kubenswrapper[4820]: I0203 13:18:03.538970 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1163c2ee-dfc2-4376-b628-beeff3dbc115-utilities\") pod \"redhat-marketplace-92m48\" (UID: \"1163c2ee-dfc2-4376-b628-beeff3dbc115\") " pod="openshift-marketplace/redhat-marketplace-92m48" Feb 03 13:18:03 crc kubenswrapper[4820]: I0203 13:18:03.539045 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l7grw\" (UniqueName: \"kubernetes.io/projected/1163c2ee-dfc2-4376-b628-beeff3dbc115-kube-api-access-l7grw\") pod \"redhat-marketplace-92m48\" (UID: \"1163c2ee-dfc2-4376-b628-beeff3dbc115\") " pod="openshift-marketplace/redhat-marketplace-92m48" Feb 03 13:18:03 crc kubenswrapper[4820]: I0203 13:18:03.641731 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l7grw\" (UniqueName: \"kubernetes.io/projected/1163c2ee-dfc2-4376-b628-beeff3dbc115-kube-api-access-l7grw\") pod \"redhat-marketplace-92m48\" (UID: \"1163c2ee-dfc2-4376-b628-beeff3dbc115\") " pod="openshift-marketplace/redhat-marketplace-92m48" Feb 03 13:18:03 crc kubenswrapper[4820]: I0203 13:18:03.642033 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1163c2ee-dfc2-4376-b628-beeff3dbc115-catalog-content\") pod \"redhat-marketplace-92m48\" (UID: \"1163c2ee-dfc2-4376-b628-beeff3dbc115\") " pod="openshift-marketplace/redhat-marketplace-92m48" Feb 03 13:18:03 crc kubenswrapper[4820]: I0203 13:18:03.642080 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1163c2ee-dfc2-4376-b628-beeff3dbc115-utilities\") pod \"redhat-marketplace-92m48\" (UID: \"1163c2ee-dfc2-4376-b628-beeff3dbc115\") " pod="openshift-marketplace/redhat-marketplace-92m48" Feb 03 13:18:03 crc kubenswrapper[4820]: I0203 13:18:03.642598 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1163c2ee-dfc2-4376-b628-beeff3dbc115-catalog-content\") pod \"redhat-marketplace-92m48\" (UID: \"1163c2ee-dfc2-4376-b628-beeff3dbc115\") " pod="openshift-marketplace/redhat-marketplace-92m48" Feb 03 13:18:03 crc kubenswrapper[4820]: I0203 13:18:03.642691 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1163c2ee-dfc2-4376-b628-beeff3dbc115-utilities\") pod \"redhat-marketplace-92m48\" (UID: \"1163c2ee-dfc2-4376-b628-beeff3dbc115\") " pod="openshift-marketplace/redhat-marketplace-92m48" Feb 03 13:18:03 
crc kubenswrapper[4820]: I0203 13:18:03.668023 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l7grw\" (UniqueName: \"kubernetes.io/projected/1163c2ee-dfc2-4376-b628-beeff3dbc115-kube-api-access-l7grw\") pod \"redhat-marketplace-92m48\" (UID: \"1163c2ee-dfc2-4376-b628-beeff3dbc115\") " pod="openshift-marketplace/redhat-marketplace-92m48" Feb 03 13:18:03 crc kubenswrapper[4820]: I0203 13:18:03.745179 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-92m48" Feb 03 13:18:04 crc kubenswrapper[4820]: I0203 13:18:04.408577 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-92m48"] Feb 03 13:18:04 crc kubenswrapper[4820]: I0203 13:18:04.669531 4820 generic.go:334] "Generic (PLEG): container finished" podID="1163c2ee-dfc2-4376-b628-beeff3dbc115" containerID="76d6d65d0ac1b4b7352ce64937f2e6dabb913c2bea7829ccbca2d3c7b27f5896" exitCode=0 Feb 03 13:18:04 crc kubenswrapper[4820]: I0203 13:18:04.669587 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-92m48" event={"ID":"1163c2ee-dfc2-4376-b628-beeff3dbc115","Type":"ContainerDied","Data":"76d6d65d0ac1b4b7352ce64937f2e6dabb913c2bea7829ccbca2d3c7b27f5896"} Feb 03 13:18:04 crc kubenswrapper[4820]: I0203 13:18:04.669844 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-92m48" event={"ID":"1163c2ee-dfc2-4376-b628-beeff3dbc115","Type":"ContainerStarted","Data":"ccd9adb7af0bb357665c7a8880c7c56b55dd1837bced5ec1939704480da3f6a8"} Feb 03 13:18:06 crc kubenswrapper[4820]: I0203 13:18:06.689561 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-92m48" event={"ID":"1163c2ee-dfc2-4376-b628-beeff3dbc115","Type":"ContainerStarted","Data":"b0d0148b6124202e045cea051e120095fe4f7fe86f0869c83d7a818ccd8dd969"} Feb 03 13:18:07 crc kubenswrapper[4820]: I0203 13:18:07.701038 4820 generic.go:334] "Generic (PLEG): container finished" podID="1163c2ee-dfc2-4376-b628-beeff3dbc115" containerID="b0d0148b6124202e045cea051e120095fe4f7fe86f0869c83d7a818ccd8dd969" exitCode=0 Feb 03 13:18:07 crc kubenswrapper[4820]: I0203 13:18:07.701111 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-92m48" event={"ID":"1163c2ee-dfc2-4376-b628-beeff3dbc115","Type":"ContainerDied","Data":"b0d0148b6124202e045cea051e120095fe4f7fe86f0869c83d7a818ccd8dd969"} Feb 03 13:18:08 crc kubenswrapper[4820]: I0203 13:18:08.716360 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-92m48" event={"ID":"1163c2ee-dfc2-4376-b628-beeff3dbc115","Type":"ContainerStarted","Data":"ce31e7d763e0079da29dfafd17cf358b2d4d8940bd6bb203a7d2671b215f6da6"} Feb 03 13:18:08 crc kubenswrapper[4820]: I0203 13:18:08.764797 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-92m48" podStartSLOduration=2.307847984 podStartE2EDuration="5.764738797s" podCreationTimestamp="2026-02-03 13:18:03 +0000 UTC" firstStartedPulling="2026-02-03 13:18:04.671270811 +0000 UTC m=+4402.194346675" lastFinishedPulling="2026-02-03 13:18:08.128161634 +0000 UTC m=+4405.651237488" observedRunningTime="2026-02-03 13:18:08.758568313 +0000 UTC m=+4406.281644197" watchObservedRunningTime="2026-02-03 13:18:08.764738797 +0000 UTC m=+4406.287814671" Feb 03 13:18:13 crc kubenswrapper[4820]: I0203 
Feb 03 13:18:13 crc kubenswrapper[4820]: E0203 13:18:13.152006 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qj7xr_openshift-machine-config-operator(2c02def6-29f2-448e-80ec-0c8ee290f053)\"" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053"
Feb 03 13:18:13 crc kubenswrapper[4820]: I0203 13:18:13.745406 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-92m48"
Feb 03 13:18:13 crc kubenswrapper[4820]: I0203 13:18:13.745714 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-92m48"
Feb 03 13:18:13 crc kubenswrapper[4820]: I0203 13:18:13.796416 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-92m48"
Feb 03 13:18:14 crc kubenswrapper[4820]: I0203 13:18:14.840145 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-92m48"
Feb 03 13:18:14 crc kubenswrapper[4820]: I0203 13:18:14.912035 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-92m48"]
Feb 03 13:18:16 crc kubenswrapper[4820]: I0203 13:18:16.805183 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-92m48" podUID="1163c2ee-dfc2-4376-b628-beeff3dbc115" containerName="registry-server" containerID="cri-o://ce31e7d763e0079da29dfafd17cf358b2d4d8940bd6bb203a7d2671b215f6da6" gracePeriod=2
Feb 03 13:18:17 crc kubenswrapper[4820]: I0203 13:18:17.821515 4820 generic.go:334] "Generic (PLEG): container finished" podID="1163c2ee-dfc2-4376-b628-beeff3dbc115" containerID="ce31e7d763e0079da29dfafd17cf358b2d4d8940bd6bb203a7d2671b215f6da6" exitCode=0
Feb 03 13:18:17 crc kubenswrapper[4820]: I0203 13:18:17.821665 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-92m48" event={"ID":"1163c2ee-dfc2-4376-b628-beeff3dbc115","Type":"ContainerDied","Data":"ce31e7d763e0079da29dfafd17cf358b2d4d8940bd6bb203a7d2671b215f6da6"}
Feb 03 13:18:17 crc kubenswrapper[4820]: I0203 13:18:17.985409 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-92m48"
Feb 03 13:18:18 crc kubenswrapper[4820]: I0203 13:18:18.061234 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1163c2ee-dfc2-4376-b628-beeff3dbc115-catalog-content\") pod \"1163c2ee-dfc2-4376-b628-beeff3dbc115\" (UID: \"1163c2ee-dfc2-4376-b628-beeff3dbc115\") "
Feb 03 13:18:18 crc kubenswrapper[4820]: I0203 13:18:18.061353 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1163c2ee-dfc2-4376-b628-beeff3dbc115-utilities\") pod \"1163c2ee-dfc2-4376-b628-beeff3dbc115\" (UID: \"1163c2ee-dfc2-4376-b628-beeff3dbc115\") "
Feb 03 13:18:18 crc kubenswrapper[4820]: I0203 13:18:18.061549 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l7grw\" (UniqueName: \"kubernetes.io/projected/1163c2ee-dfc2-4376-b628-beeff3dbc115-kube-api-access-l7grw\") pod \"1163c2ee-dfc2-4376-b628-beeff3dbc115\" (UID: \"1163c2ee-dfc2-4376-b628-beeff3dbc115\") "
Feb 03 13:18:18 crc kubenswrapper[4820]: I0203 13:18:18.062459 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1163c2ee-dfc2-4376-b628-beeff3dbc115-utilities" (OuterVolumeSpecName: "utilities") pod "1163c2ee-dfc2-4376-b628-beeff3dbc115" (UID: "1163c2ee-dfc2-4376-b628-beeff3dbc115"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 03 13:18:18 crc kubenswrapper[4820]: I0203 13:18:18.067212 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1163c2ee-dfc2-4376-b628-beeff3dbc115-kube-api-access-l7grw" (OuterVolumeSpecName: "kube-api-access-l7grw") pod "1163c2ee-dfc2-4376-b628-beeff3dbc115" (UID: "1163c2ee-dfc2-4376-b628-beeff3dbc115"). InnerVolumeSpecName "kube-api-access-l7grw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 13:18:18 crc kubenswrapper[4820]: I0203 13:18:18.095632 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1163c2ee-dfc2-4376-b628-beeff3dbc115-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1163c2ee-dfc2-4376-b628-beeff3dbc115" (UID: "1163c2ee-dfc2-4376-b628-beeff3dbc115"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 03 13:18:18 crc kubenswrapper[4820]: I0203 13:18:18.164827 4820 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1163c2ee-dfc2-4376-b628-beeff3dbc115-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 03 13:18:18 crc kubenswrapper[4820]: I0203 13:18:18.164925 4820 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1163c2ee-dfc2-4376-b628-beeff3dbc115-utilities\") on node \"crc\" DevicePath \"\""
Feb 03 13:18:18 crc kubenswrapper[4820]: I0203 13:18:18.164942 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l7grw\" (UniqueName: \"kubernetes.io/projected/1163c2ee-dfc2-4376-b628-beeff3dbc115-kube-api-access-l7grw\") on node \"crc\" DevicePath \"\""
Feb 03 13:18:18 crc kubenswrapper[4820]: I0203 13:18:18.834240 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-92m48" event={"ID":"1163c2ee-dfc2-4376-b628-beeff3dbc115","Type":"ContainerDied","Data":"ccd9adb7af0bb357665c7a8880c7c56b55dd1837bced5ec1939704480da3f6a8"}
Feb 03 13:18:18 crc kubenswrapper[4820]: I0203 13:18:18.834349 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-92m48"
Feb 03 13:18:18 crc kubenswrapper[4820]: I0203 13:18:18.834529 4820 scope.go:117] "RemoveContainer" containerID="ce31e7d763e0079da29dfafd17cf358b2d4d8940bd6bb203a7d2671b215f6da6"
Feb 03 13:18:18 crc kubenswrapper[4820]: I0203 13:18:18.860475 4820 scope.go:117] "RemoveContainer" containerID="b0d0148b6124202e045cea051e120095fe4f7fe86f0869c83d7a818ccd8dd969"
Feb 03 13:18:18 crc kubenswrapper[4820]: I0203 13:18:18.884268 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-92m48"]
Feb 03 13:18:18 crc kubenswrapper[4820]: I0203 13:18:18.889964 4820 scope.go:117] "RemoveContainer" containerID="76d6d65d0ac1b4b7352ce64937f2e6dabb913c2bea7829ccbca2d3c7b27f5896"
Feb 03 13:18:18 crc kubenswrapper[4820]: I0203 13:18:18.898393 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-92m48"]
Feb 03 13:18:19 crc kubenswrapper[4820]: I0203 13:18:19.162001 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1163c2ee-dfc2-4376-b628-beeff3dbc115" path="/var/lib/kubelet/pods/1163c2ee-dfc2-4376-b628-beeff3dbc115/volumes"
Feb 03 13:18:27 crc kubenswrapper[4820]: I0203 13:18:27.143580 4820 scope.go:117] "RemoveContainer" containerID="069c71e1527e3ec0d572448feb9efcb6041921dec784f111a6ccc2a7e1988376"
Feb 03 13:18:27 crc kubenswrapper[4820]: E0203 13:18:27.144485 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qj7xr_openshift-machine-config-operator(2c02def6-29f2-448e-80ec-0c8ee290f053)\"" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053"
Feb 03 13:18:42 crc kubenswrapper[4820]: I0203 13:18:42.143381 4820 scope.go:117] "RemoveContainer" containerID="069c71e1527e3ec0d572448feb9efcb6041921dec784f111a6ccc2a7e1988376"
Feb 03 13:18:42 crc kubenswrapper[4820]: E0203 13:18:42.144286 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qj7xr_openshift-machine-config-operator(2c02def6-29f2-448e-80ec-0c8ee290f053)\"" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053"
Feb 03 13:18:55 crc kubenswrapper[4820]: I0203 13:18:55.149683 4820 scope.go:117] "RemoveContainer" containerID="069c71e1527e3ec0d572448feb9efcb6041921dec784f111a6ccc2a7e1988376"
Feb 03 13:18:55 crc kubenswrapper[4820]: E0203 13:18:55.151479 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qj7xr_openshift-machine-config-operator(2c02def6-29f2-448e-80ec-0c8ee290f053)\"" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053"
Feb 03 13:19:10 crc kubenswrapper[4820]: I0203 13:19:10.144091 4820 scope.go:117] "RemoveContainer" containerID="069c71e1527e3ec0d572448feb9efcb6041921dec784f111a6ccc2a7e1988376"
Feb 03 13:19:10 crc kubenswrapper[4820]: E0203 13:19:10.144920 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qj7xr_openshift-machine-config-operator(2c02def6-29f2-448e-80ec-0c8ee290f053)\"" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053"
Feb 03 13:19:22 crc kubenswrapper[4820]: I0203 13:19:22.143501 4820 scope.go:117] "RemoveContainer" containerID="069c71e1527e3ec0d572448feb9efcb6041921dec784f111a6ccc2a7e1988376"
Feb 03 13:19:22 crc kubenswrapper[4820]: E0203 13:19:22.144765 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qj7xr_openshift-machine-config-operator(2c02def6-29f2-448e-80ec-0c8ee290f053)\"" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053"
Feb 03 13:19:36 crc kubenswrapper[4820]: I0203 13:19:36.142725 4820 scope.go:117] "RemoveContainer" containerID="069c71e1527e3ec0d572448feb9efcb6041921dec784f111a6ccc2a7e1988376"
Feb 03 13:19:36 crc kubenswrapper[4820]: E0203 13:19:36.143644 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qj7xr_openshift-machine-config-operator(2c02def6-29f2-448e-80ec-0c8ee290f053)\"" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053"
Feb 03 13:19:49 crc kubenswrapper[4820]: I0203 13:19:49.143413 4820 scope.go:117] "RemoveContainer" containerID="069c71e1527e3ec0d572448feb9efcb6041921dec784f111a6ccc2a7e1988376"
Feb 03 13:19:49 crc kubenswrapper[4820]: E0203 13:19:49.144140 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qj7xr_openshift-machine-config-operator(2c02def6-29f2-448e-80ec-0c8ee290f053)\"" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053"
Feb 03 13:20:04 crc kubenswrapper[4820]: I0203 13:20:04.143618 4820 scope.go:117] "RemoveContainer" containerID="069c71e1527e3ec0d572448feb9efcb6041921dec784f111a6ccc2a7e1988376"
Feb 03 13:20:04 crc kubenswrapper[4820]: E0203 13:20:04.144541 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qj7xr_openshift-machine-config-operator(2c02def6-29f2-448e-80ec-0c8ee290f053)\"" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053"
Feb 03 13:20:16 crc kubenswrapper[4820]: I0203 13:20:16.143212 4820 scope.go:117] "RemoveContainer" containerID="069c71e1527e3ec0d572448feb9efcb6041921dec784f111a6ccc2a7e1988376"
Feb 03 13:20:16 crc kubenswrapper[4820]: E0203 13:20:16.144125 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qj7xr_openshift-machine-config-operator(2c02def6-29f2-448e-80ec-0c8ee290f053)\"" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053"
Feb 03 13:20:29 crc kubenswrapper[4820]: I0203 13:20:29.143511 4820 scope.go:117] "RemoveContainer" containerID="069c71e1527e3ec0d572448feb9efcb6041921dec784f111a6ccc2a7e1988376"
Feb 03 13:20:29 crc kubenswrapper[4820]: E0203 13:20:29.144577 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qj7xr_openshift-machine-config-operator(2c02def6-29f2-448e-80ec-0c8ee290f053)\"" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053"
Feb 03 13:20:44 crc kubenswrapper[4820]: I0203 13:20:44.143564 4820 scope.go:117] "RemoveContainer" containerID="069c71e1527e3ec0d572448feb9efcb6041921dec784f111a6ccc2a7e1988376"
Feb 03 13:20:44 crc kubenswrapper[4820]: E0203 13:20:44.144311 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qj7xr_openshift-machine-config-operator(2c02def6-29f2-448e-80ec-0c8ee290f053)\"" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053"
Feb 03 13:20:58 crc kubenswrapper[4820]: I0203 13:20:58.146071 4820 scope.go:117] "RemoveContainer" containerID="069c71e1527e3ec0d572448feb9efcb6041921dec784f111a6ccc2a7e1988376"
Feb 03 13:20:58 crc kubenswrapper[4820]: E0203 13:20:58.147147 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qj7xr_openshift-machine-config-operator(2c02def6-29f2-448e-80ec-0c8ee290f053)\"" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053"
Feb 03 13:21:13 crc kubenswrapper[4820]: I0203 13:21:13.150283 4820 scope.go:117] "RemoveContainer" containerID="069c71e1527e3ec0d572448feb9efcb6041921dec784f111a6ccc2a7e1988376"
Feb 03 13:21:13 crc kubenswrapper[4820]: E0203 13:21:13.151227 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qj7xr_openshift-machine-config-operator(2c02def6-29f2-448e-80ec-0c8ee290f053)\"" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053"
Feb 03 13:21:26 crc kubenswrapper[4820]: I0203 13:21:26.331634 4820 scope.go:117] "RemoveContainer" containerID="069c71e1527e3ec0d572448feb9efcb6041921dec784f111a6ccc2a7e1988376"
Feb 03 13:21:26 crc kubenswrapper[4820]: E0203 13:21:26.332557 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qj7xr_openshift-machine-config-operator(2c02def6-29f2-448e-80ec-0c8ee290f053)\"" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053"
Feb 03 13:21:39 crc kubenswrapper[4820]: I0203 13:21:39.143396 4820 scope.go:117] "RemoveContainer" containerID="069c71e1527e3ec0d572448feb9efcb6041921dec784f111a6ccc2a7e1988376"
Feb 03 13:21:39 crc kubenswrapper[4820]: E0203 13:21:39.144275 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qj7xr_openshift-machine-config-operator(2c02def6-29f2-448e-80ec-0c8ee290f053)\"" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053"
Feb 03 13:21:52 crc kubenswrapper[4820]: I0203 13:21:52.143523 4820 scope.go:117] "RemoveContainer" containerID="069c71e1527e3ec0d572448feb9efcb6041921dec784f111a6ccc2a7e1988376"
Feb 03 13:21:52 crc kubenswrapper[4820]: E0203 13:21:52.144401 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qj7xr_openshift-machine-config-operator(2c02def6-29f2-448e-80ec-0c8ee290f053)\"" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053"
Feb 03 13:22:07 crc kubenswrapper[4820]: I0203 13:22:07.142853 4820 scope.go:117] "RemoveContainer" containerID="069c71e1527e3ec0d572448feb9efcb6041921dec784f111a6ccc2a7e1988376"
Feb 03 13:22:07 crc kubenswrapper[4820]: E0203 13:22:07.143800 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qj7xr_openshift-machine-config-operator(2c02def6-29f2-448e-80ec-0c8ee290f053)\"" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053"
Feb 03 13:22:18 crc kubenswrapper[4820]: I0203 13:22:18.143260 4820 scope.go:117] "RemoveContainer" containerID="069c71e1527e3ec0d572448feb9efcb6041921dec784f111a6ccc2a7e1988376"
Feb 03 13:22:18 crc kubenswrapper[4820]: E0203 13:22:18.144339 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qj7xr_openshift-machine-config-operator(2c02def6-29f2-448e-80ec-0c8ee290f053)\"" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053"
Feb 03 13:22:31 crc kubenswrapper[4820]: I0203 13:22:31.145877 4820 scope.go:117] "RemoveContainer" containerID="069c71e1527e3ec0d572448feb9efcb6041921dec784f111a6ccc2a7e1988376"
Feb 03 13:22:31 crc kubenswrapper[4820]: E0203 13:22:31.146687 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qj7xr_openshift-machine-config-operator(2c02def6-29f2-448e-80ec-0c8ee290f053)\"" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053"
Feb 03 13:22:44 crc kubenswrapper[4820]: I0203 13:22:44.143231 4820 scope.go:117] "RemoveContainer" containerID="069c71e1527e3ec0d572448feb9efcb6041921dec784f111a6ccc2a7e1988376"
Feb 03 13:22:44 crc kubenswrapper[4820]: E0203 13:22:44.143941 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qj7xr_openshift-machine-config-operator(2c02def6-29f2-448e-80ec-0c8ee290f053)\"" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053"
Feb 03 13:22:58 crc kubenswrapper[4820]: I0203 13:22:58.144097 4820 scope.go:117] "RemoveContainer" containerID="069c71e1527e3ec0d572448feb9efcb6041921dec784f111a6ccc2a7e1988376"
Feb 03 13:22:58 crc kubenswrapper[4820]: E0203 13:22:58.145059 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qj7xr_openshift-machine-config-operator(2c02def6-29f2-448e-80ec-0c8ee290f053)\"" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053"
Feb 03 13:23:10 crc kubenswrapper[4820]: I0203 13:23:10.142472 4820 scope.go:117] "RemoveContainer" containerID="069c71e1527e3ec0d572448feb9efcb6041921dec784f111a6ccc2a7e1988376"
Feb 03 13:23:11 crc kubenswrapper[4820]: I0203 13:23:11.226573 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" event={"ID":"2c02def6-29f2-448e-80ec-0c8ee290f053","Type":"ContainerStarted","Data":"6136771e846a33033d96d5790c917029ada03c3fd7ec26d72b93b2c6cf3cc1bc"}
Feb 03 13:23:19 crc kubenswrapper[4820]: I0203 13:23:19.451136 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-btszd"]
Feb 03 13:23:19 crc kubenswrapper[4820]: E0203 13:23:19.452385 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1163c2ee-dfc2-4376-b628-beeff3dbc115" containerName="extract-content"
Feb 03 13:23:19 crc kubenswrapper[4820]: I0203 13:23:19.452415 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="1163c2ee-dfc2-4376-b628-beeff3dbc115" containerName="extract-content"
Feb 03 13:23:19 crc kubenswrapper[4820]: E0203 13:23:19.452457 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1163c2ee-dfc2-4376-b628-beeff3dbc115" containerName="extract-utilities"
Feb 03 13:23:19 crc kubenswrapper[4820]: I0203 13:23:19.452466 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="1163c2ee-dfc2-4376-b628-beeff3dbc115" containerName="extract-utilities"
Feb 03 13:23:19 crc kubenswrapper[4820]: E0203 13:23:19.452489 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1163c2ee-dfc2-4376-b628-beeff3dbc115" containerName="registry-server"
Feb 03 13:23:19 crc kubenswrapper[4820]: I0203 13:23:19.452502 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="1163c2ee-dfc2-4376-b628-beeff3dbc115" containerName="registry-server"
Feb 03 13:23:19 crc kubenswrapper[4820]: I0203 13:23:19.452829 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="1163c2ee-dfc2-4376-b628-beeff3dbc115" containerName="registry-server"
Feb 03 13:23:19 crc kubenswrapper[4820]: I0203 13:23:19.455170 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-btszd"
Feb 03 13:23:19 crc kubenswrapper[4820]: I0203 13:23:19.470471 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-btszd"]
Feb 03 13:23:19 crc kubenswrapper[4820]: I0203 13:23:19.649836 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/977ea1bf-f3ac-40c3-8061-bbf78da368c1-utilities\") pod \"redhat-operators-btszd\" (UID: \"977ea1bf-f3ac-40c3-8061-bbf78da368c1\") " pod="openshift-marketplace/redhat-operators-btszd"
Feb 03 13:23:19 crc kubenswrapper[4820]: I0203 13:23:19.649881 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/977ea1bf-f3ac-40c3-8061-bbf78da368c1-catalog-content\") pod \"redhat-operators-btszd\" (UID: \"977ea1bf-f3ac-40c3-8061-bbf78da368c1\") " pod="openshift-marketplace/redhat-operators-btszd"
Feb 03 13:23:19 crc kubenswrapper[4820]: I0203 13:23:19.650003 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fv7db\" (UniqueName: \"kubernetes.io/projected/977ea1bf-f3ac-40c3-8061-bbf78da368c1-kube-api-access-fv7db\") pod \"redhat-operators-btszd\" (UID: \"977ea1bf-f3ac-40c3-8061-bbf78da368c1\") " pod="openshift-marketplace/redhat-operators-btszd"
Feb 03 13:23:19 crc kubenswrapper[4820]: I0203 13:23:19.752243 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/977ea1bf-f3ac-40c3-8061-bbf78da368c1-utilities\") pod \"redhat-operators-btszd\" (UID: \"977ea1bf-f3ac-40c3-8061-bbf78da368c1\") " pod="openshift-marketplace/redhat-operators-btszd"
Feb 03 13:23:19 crc kubenswrapper[4820]: I0203 13:23:19.752846 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/977ea1bf-f3ac-40c3-8061-bbf78da368c1-catalog-content\") pod \"redhat-operators-btszd\" (UID: \"977ea1bf-f3ac-40c3-8061-bbf78da368c1\") " pod="openshift-marketplace/redhat-operators-btszd"
Feb 03 13:23:19 crc kubenswrapper[4820]: I0203 13:23:19.753467 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/977ea1bf-f3ac-40c3-8061-bbf78da368c1-catalog-content\") pod \"redhat-operators-btszd\" (UID: \"977ea1bf-f3ac-40c3-8061-bbf78da368c1\") " pod="openshift-marketplace/redhat-operators-btszd"
Feb 03 13:23:19 crc kubenswrapper[4820]: I0203 13:23:19.753599 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fv7db\" (UniqueName: \"kubernetes.io/projected/977ea1bf-f3ac-40c3-8061-bbf78da368c1-kube-api-access-fv7db\") pod \"redhat-operators-btszd\" (UID: \"977ea1bf-f3ac-40c3-8061-bbf78da368c1\") " pod="openshift-marketplace/redhat-operators-btszd"
Feb 03 13:23:19 crc kubenswrapper[4820]: I0203 13:23:19.761186 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/977ea1bf-f3ac-40c3-8061-bbf78da368c1-utilities\") pod \"redhat-operators-btszd\" (UID: \"977ea1bf-f3ac-40c3-8061-bbf78da368c1\") " pod="openshift-marketplace/redhat-operators-btszd"
Feb 03 13:23:19 crc kubenswrapper[4820]: I0203 13:23:19.789865 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fv7db\" (UniqueName: \"kubernetes.io/projected/977ea1bf-f3ac-40c3-8061-bbf78da368c1-kube-api-access-fv7db\") pod \"redhat-operators-btszd\" (UID: \"977ea1bf-f3ac-40c3-8061-bbf78da368c1\") " pod="openshift-marketplace/redhat-operators-btszd"
Feb 03 13:23:20 crc kubenswrapper[4820]: I0203 13:23:20.083955 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-btszd"
Feb 03 13:23:20 crc kubenswrapper[4820]: I0203 13:23:20.645368 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-btszd"]
Feb 03 13:23:21 crc kubenswrapper[4820]: I0203 13:23:21.521222 4820 generic.go:334] "Generic (PLEG): container finished" podID="977ea1bf-f3ac-40c3-8061-bbf78da368c1" containerID="3898635e2ef78bd39fb96a17d17f769d6f8c4645e5778e7eeb3815b90525b8ba" exitCode=0
Feb 03 13:23:21 crc kubenswrapper[4820]: I0203 13:23:21.521496 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-btszd" event={"ID":"977ea1bf-f3ac-40c3-8061-bbf78da368c1","Type":"ContainerDied","Data":"3898635e2ef78bd39fb96a17d17f769d6f8c4645e5778e7eeb3815b90525b8ba"}
Feb 03 13:23:21 crc kubenswrapper[4820]: I0203 13:23:21.522112 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-btszd" event={"ID":"977ea1bf-f3ac-40c3-8061-bbf78da368c1","Type":"ContainerStarted","Data":"201e625ebcc9f6c7aace1edcac65207d5ed3ea480094a5acd85f62b27898a822"}
Feb 03 13:23:21 crc kubenswrapper[4820]: I0203 13:23:21.523551 4820 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 03 13:23:35 crc kubenswrapper[4820]: I0203 13:23:35.681359 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-btszd" event={"ID":"977ea1bf-f3ac-40c3-8061-bbf78da368c1","Type":"ContainerStarted","Data":"6c0d1d7dda0f295797bc0e37d4a9e9cafb145da841b0bff3b144c6b1b5f2c8fa"}
Feb 03 13:23:38 crc kubenswrapper[4820]: I0203 13:23:38.712072 4820 generic.go:334] "Generic (PLEG): container finished" podID="977ea1bf-f3ac-40c3-8061-bbf78da368c1" containerID="6c0d1d7dda0f295797bc0e37d4a9e9cafb145da841b0bff3b144c6b1b5f2c8fa" exitCode=0
Feb 03 13:23:38 crc kubenswrapper[4820]: I0203 13:23:38.712164 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-btszd" event={"ID":"977ea1bf-f3ac-40c3-8061-bbf78da368c1","Type":"ContainerDied","Data":"6c0d1d7dda0f295797bc0e37d4a9e9cafb145da841b0bff3b144c6b1b5f2c8fa"}
Feb 03 13:23:39 crc kubenswrapper[4820]: I0203 13:23:39.725615 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-btszd" event={"ID":"977ea1bf-f3ac-40c3-8061-bbf78da368c1","Type":"ContainerStarted","Data":"2c45dca3ab6522ae99aac87dd1212f6b1c4543335d5dd5a64c9c29442d3a53b2"}
Feb 03 13:23:39 crc kubenswrapper[4820]: I0203 13:23:39.744571 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-btszd" podStartSLOduration=3.00630919 podStartE2EDuration="20.744532589s" podCreationTimestamp="2026-02-03 13:23:19 +0000 UTC" firstStartedPulling="2026-02-03 13:23:21.523137693 +0000 UTC m=+4719.046213557" lastFinishedPulling="2026-02-03 13:23:39.261361092 +0000 UTC m=+4736.784436956" observedRunningTime="2026-02-03 13:23:39.741382386 +0000 UTC m=+4737.264458260" watchObservedRunningTime="2026-02-03 13:23:39.744532589 +0000 UTC m=+4737.267608453"
Feb 03 13:23:40 crc kubenswrapper[4820]: I0203 13:23:40.084033 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-btszd"
Feb 03 13:23:40 crc kubenswrapper[4820]: I0203 13:23:40.084080 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-btszd"
Feb 03 13:23:41 crc kubenswrapper[4820]: I0203 13:23:41.586056 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-btszd" podUID="977ea1bf-f3ac-40c3-8061-bbf78da368c1" containerName="registry-server" probeResult="failure" output=<
Feb 03 13:23:41 crc kubenswrapper[4820]: timeout: failed to connect service ":50051" within 1s
Feb 03 13:23:41 crc kubenswrapper[4820]: >
Feb 03 13:23:50 crc kubenswrapper[4820]: I0203 13:23:50.145000 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-btszd"
Feb 03 13:23:50 crc kubenswrapper[4820]: I0203 13:23:50.204721 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-btszd"
Feb 03 13:23:50 crc kubenswrapper[4820]: I0203 13:23:50.282133 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-btszd"]
Feb 03 13:23:50 crc kubenswrapper[4820]: I0203 13:23:50.399720 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-cvs64"]
Feb 03 13:23:50 crc kubenswrapper[4820]: I0203 13:23:50.400076 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-cvs64" podUID="c4679b70-6d4c-47db-96eb-0bc13e2469d8" containerName="registry-server" containerID="cri-o://c2b39da05e8caa4a353eea9c162813256e6fac719e2a111814944772c66f27b0" gracePeriod=2
Feb 03 13:23:50 crc kubenswrapper[4820]: I0203 13:23:50.832996 4820 generic.go:334] "Generic (PLEG): container finished" podID="c4679b70-6d4c-47db-96eb-0bc13e2469d8" containerID="c2b39da05e8caa4a353eea9c162813256e6fac719e2a111814944772c66f27b0" exitCode=0
Feb 03 13:23:50 crc kubenswrapper[4820]: I0203 13:23:50.834124 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cvs64" event={"ID":"c4679b70-6d4c-47db-96eb-0bc13e2469d8","Type":"ContainerDied","Data":"c2b39da05e8caa4a353eea9c162813256e6fac719e2a111814944772c66f27b0"}
Feb 03 13:23:51 crc kubenswrapper[4820]: I0203 13:23:51.089414 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cvs64"
Feb 03 13:23:51 crc kubenswrapper[4820]: I0203 13:23:51.199265 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wrr6v\" (UniqueName: \"kubernetes.io/projected/c4679b70-6d4c-47db-96eb-0bc13e2469d8-kube-api-access-wrr6v\") pod \"c4679b70-6d4c-47db-96eb-0bc13e2469d8\" (UID: \"c4679b70-6d4c-47db-96eb-0bc13e2469d8\") "
Feb 03 13:23:51 crc kubenswrapper[4820]: I0203 13:23:51.199461 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4679b70-6d4c-47db-96eb-0bc13e2469d8-catalog-content\") pod \"c4679b70-6d4c-47db-96eb-0bc13e2469d8\" (UID: \"c4679b70-6d4c-47db-96eb-0bc13e2469d8\") "
Feb 03 13:23:51 crc kubenswrapper[4820]: I0203 13:23:51.199702 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4679b70-6d4c-47db-96eb-0bc13e2469d8-utilities\") pod \"c4679b70-6d4c-47db-96eb-0bc13e2469d8\" (UID: \"c4679b70-6d4c-47db-96eb-0bc13e2469d8\") "
Feb 03 13:23:51 crc kubenswrapper[4820]: I0203 13:23:51.200774 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c4679b70-6d4c-47db-96eb-0bc13e2469d8-utilities" (OuterVolumeSpecName: "utilities") pod "c4679b70-6d4c-47db-96eb-0bc13e2469d8" (UID: "c4679b70-6d4c-47db-96eb-0bc13e2469d8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 03 13:23:51 crc kubenswrapper[4820]: I0203 13:23:51.201339 4820 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4679b70-6d4c-47db-96eb-0bc13e2469d8-utilities\") on node \"crc\" DevicePath \"\""
Feb 03 13:23:51 crc kubenswrapper[4820]: I0203 13:23:51.206271 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4679b70-6d4c-47db-96eb-0bc13e2469d8-kube-api-access-wrr6v" (OuterVolumeSpecName: "kube-api-access-wrr6v") pod "c4679b70-6d4c-47db-96eb-0bc13e2469d8" (UID: "c4679b70-6d4c-47db-96eb-0bc13e2469d8"). InnerVolumeSpecName "kube-api-access-wrr6v". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 13:23:51 crc kubenswrapper[4820]: I0203 13:23:51.303507 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wrr6v\" (UniqueName: \"kubernetes.io/projected/c4679b70-6d4c-47db-96eb-0bc13e2469d8-kube-api-access-wrr6v\") on node \"crc\" DevicePath \"\""
Feb 03 13:23:51 crc kubenswrapper[4820]: I0203 13:23:51.306691 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c4679b70-6d4c-47db-96eb-0bc13e2469d8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c4679b70-6d4c-47db-96eb-0bc13e2469d8" (UID: "c4679b70-6d4c-47db-96eb-0bc13e2469d8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 03 13:23:51 crc kubenswrapper[4820]: I0203 13:23:51.405937 4820 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4679b70-6d4c-47db-96eb-0bc13e2469d8-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 03 13:23:51 crc kubenswrapper[4820]: I0203 13:23:51.845566 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cvs64" event={"ID":"c4679b70-6d4c-47db-96eb-0bc13e2469d8","Type":"ContainerDied","Data":"e57a6f1c79b2071a21204dba39c0f1a9b12a5bbcb6be2c279766df742aa311f0"}
Feb 03 13:23:51 crc kubenswrapper[4820]: I0203 13:23:51.845609 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cvs64"
Feb 03 13:23:51 crc kubenswrapper[4820]: I0203 13:23:51.845642 4820 scope.go:117] "RemoveContainer" containerID="c2b39da05e8caa4a353eea9c162813256e6fac719e2a111814944772c66f27b0"
Feb 03 13:23:51 crc kubenswrapper[4820]: I0203 13:23:51.878736 4820 scope.go:117] "RemoveContainer" containerID="31eeecf46c591cf1d3c4f703c9c4683d2d3b02ee5a94103ee723f6782a032b11"
Feb 03 13:23:51 crc kubenswrapper[4820]: I0203 13:23:51.894013 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-cvs64"]
Feb 03 13:23:51 crc kubenswrapper[4820]: I0203 13:23:51.907141 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-cvs64"]
Feb 03 13:23:52 crc kubenswrapper[4820]: I0203 13:23:52.064464 4820 scope.go:117] "RemoveContainer" containerID="8d616b37cb751a0a9724811c0a0044210042561998968dd175619fdd9813e094"
Feb 03 13:23:53 crc kubenswrapper[4820]: I0203 13:23:53.154622 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c4679b70-6d4c-47db-96eb-0bc13e2469d8" path="/var/lib/kubelet/pods/c4679b70-6d4c-47db-96eb-0bc13e2469d8/volumes"
Feb 03 13:25:31 crc kubenswrapper[4820]: I0203 13:25:31.365248 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qj7xr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 03 13:25:31 crc kubenswrapper[4820]: I0203 13:25:31.365951 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 03 13:26:01 crc kubenswrapper[4820]: I0203 13:26:01.365443 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qj7xr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 03 13:26:01 crc kubenswrapper[4820]: I0203 13:26:01.366086 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 03 13:26:31 crc kubenswrapper[4820]: I0203 13:26:31.365451 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qj7xr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 03 13:26:31 crc kubenswrapper[4820]: I0203 13:26:31.366100 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 03 13:26:31 crc kubenswrapper[4820]: I0203 13:26:31.366181 4820 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr"
Feb 03 13:26:31 crc kubenswrapper[4820]: I0203 13:26:31.367222 4820 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6136771e846a33033d96d5790c917029ada03c3fd7ec26d72b93b2c6cf3cc1bc"} pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 03 13:26:31 crc kubenswrapper[4820]: I0203 13:26:31.367290 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" containerName="machine-config-daemon" containerID="cri-o://6136771e846a33033d96d5790c917029ada03c3fd7ec26d72b93b2c6cf3cc1bc" gracePeriod=600
Feb 03 13:26:31 crc kubenswrapper[4820]: I0203 13:26:31.695535 4820 generic.go:334] "Generic (PLEG): container finished" podID="2c02def6-29f2-448e-80ec-0c8ee290f053" containerID="6136771e846a33033d96d5790c917029ada03c3fd7ec26d72b93b2c6cf3cc1bc" exitCode=0
Feb 03 13:26:31 crc kubenswrapper[4820]: I0203 13:26:31.695621 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" event={"ID":"2c02def6-29f2-448e-80ec-0c8ee290f053","Type":"ContainerDied","Data":"6136771e846a33033d96d5790c917029ada03c3fd7ec26d72b93b2c6cf3cc1bc"}
Feb 03 13:26:31 crc kubenswrapper[4820]: I0203 13:26:31.695841 4820 scope.go:117] "RemoveContainer" containerID="069c71e1527e3ec0d572448feb9efcb6041921dec784f111a6ccc2a7e1988376"
Feb 03 13:26:32 crc kubenswrapper[4820]: I0203 13:26:32.710552 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" event={"ID":"2c02def6-29f2-448e-80ec-0c8ee290f053","Type":"ContainerStarted","Data":"29e62cbb021fe96e31a97c5a8fed00638c88a00cae134f18c7cf36231623d398"}
Feb 03 13:26:41 crc kubenswrapper[4820]: I0203 13:26:41.170949 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-nr45p"]
Feb 03 13:26:41 crc kubenswrapper[4820]: E0203 13:26:41.172025 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4679b70-6d4c-47db-96eb-0bc13e2469d8" containerName="registry-server"
Feb 03 13:26:41 crc kubenswrapper[4820]: I0203 13:26:41.172049 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4679b70-6d4c-47db-96eb-0bc13e2469d8" containerName="registry-server"
Feb 03 13:26:41 crc kubenswrapper[4820]: E0203 13:26:41.172067 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4679b70-6d4c-47db-96eb-0bc13e2469d8" containerName="extract-utilities"
containerName="extract-utilities" Feb 03 13:26:41 crc kubenswrapper[4820]: I0203 13:26:41.172074 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4679b70-6d4c-47db-96eb-0bc13e2469d8" containerName="extract-utilities" Feb 03 13:26:41 crc kubenswrapper[4820]: E0203 13:26:41.172112 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4679b70-6d4c-47db-96eb-0bc13e2469d8" containerName="extract-content" Feb 03 13:26:41 crc kubenswrapper[4820]: I0203 13:26:41.172120 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4679b70-6d4c-47db-96eb-0bc13e2469d8" containerName="extract-content" Feb 03 13:26:41 crc kubenswrapper[4820]: I0203 13:26:41.172379 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4679b70-6d4c-47db-96eb-0bc13e2469d8" containerName="registry-server" Feb 03 13:26:41 crc kubenswrapper[4820]: I0203 13:26:41.174123 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-nr45p" Feb 03 13:26:41 crc kubenswrapper[4820]: I0203 13:26:41.198622 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-nr45p"] Feb 03 13:26:41 crc kubenswrapper[4820]: I0203 13:26:41.345270 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qd8qr\" (UniqueName: \"kubernetes.io/projected/ad97d10d-71b4-42bb-974f-16643101d61c-kube-api-access-qd8qr\") pod \"certified-operators-nr45p\" (UID: \"ad97d10d-71b4-42bb-974f-16643101d61c\") " pod="openshift-marketplace/certified-operators-nr45p" Feb 03 13:26:41 crc kubenswrapper[4820]: I0203 13:26:41.345828 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad97d10d-71b4-42bb-974f-16643101d61c-utilities\") pod \"certified-operators-nr45p\" (UID: \"ad97d10d-71b4-42bb-974f-16643101d61c\") " pod="openshift-marketplace/certified-operators-nr45p" Feb 03 13:26:41 crc kubenswrapper[4820]: I0203 13:26:41.345903 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad97d10d-71b4-42bb-974f-16643101d61c-catalog-content\") pod \"certified-operators-nr45p\" (UID: \"ad97d10d-71b4-42bb-974f-16643101d61c\") " pod="openshift-marketplace/certified-operators-nr45p" Feb 03 13:26:41 crc kubenswrapper[4820]: I0203 13:26:41.448452 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qd8qr\" (UniqueName: \"kubernetes.io/projected/ad97d10d-71b4-42bb-974f-16643101d61c-kube-api-access-qd8qr\") pod \"certified-operators-nr45p\" (UID: \"ad97d10d-71b4-42bb-974f-16643101d61c\") " pod="openshift-marketplace/certified-operators-nr45p" Feb 03 13:26:41 crc kubenswrapper[4820]: I0203 13:26:41.448844 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad97d10d-71b4-42bb-974f-16643101d61c-utilities\") pod \"certified-operators-nr45p\" (UID: \"ad97d10d-71b4-42bb-974f-16643101d61c\") " pod="openshift-marketplace/certified-operators-nr45p" Feb 03 13:26:41 crc kubenswrapper[4820]: I0203 13:26:41.448921 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad97d10d-71b4-42bb-974f-16643101d61c-catalog-content\") pod \"certified-operators-nr45p\" (UID: 
\"ad97d10d-71b4-42bb-974f-16643101d61c\") " pod="openshift-marketplace/certified-operators-nr45p" Feb 03 13:26:41 crc kubenswrapper[4820]: I0203 13:26:41.449333 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad97d10d-71b4-42bb-974f-16643101d61c-utilities\") pod \"certified-operators-nr45p\" (UID: \"ad97d10d-71b4-42bb-974f-16643101d61c\") " pod="openshift-marketplace/certified-operators-nr45p" Feb 03 13:26:41 crc kubenswrapper[4820]: I0203 13:26:41.449481 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad97d10d-71b4-42bb-974f-16643101d61c-catalog-content\") pod \"certified-operators-nr45p\" (UID: \"ad97d10d-71b4-42bb-974f-16643101d61c\") " pod="openshift-marketplace/certified-operators-nr45p" Feb 03 13:26:41 crc kubenswrapper[4820]: I0203 13:26:41.472427 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qd8qr\" (UniqueName: \"kubernetes.io/projected/ad97d10d-71b4-42bb-974f-16643101d61c-kube-api-access-qd8qr\") pod \"certified-operators-nr45p\" (UID: \"ad97d10d-71b4-42bb-974f-16643101d61c\") " pod="openshift-marketplace/certified-operators-nr45p" Feb 03 13:26:41 crc kubenswrapper[4820]: I0203 13:26:41.501775 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-nr45p" Feb 03 13:26:42 crc kubenswrapper[4820]: I0203 13:26:42.285239 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-nr45p"] Feb 03 13:26:42 crc kubenswrapper[4820]: I0203 13:26:42.948555 4820 generic.go:334] "Generic (PLEG): container finished" podID="ad97d10d-71b4-42bb-974f-16643101d61c" containerID="a6d04e1bb0f8886e2cd264ed1c16679c20bc68aa2c5e3ebb6d041ca25964d8cc" exitCode=0 Feb 03 13:26:42 crc kubenswrapper[4820]: I0203 13:26:42.948658 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nr45p" event={"ID":"ad97d10d-71b4-42bb-974f-16643101d61c","Type":"ContainerDied","Data":"a6d04e1bb0f8886e2cd264ed1c16679c20bc68aa2c5e3ebb6d041ca25964d8cc"} Feb 03 13:26:42 crc kubenswrapper[4820]: I0203 13:26:42.948875 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nr45p" event={"ID":"ad97d10d-71b4-42bb-974f-16643101d61c","Type":"ContainerStarted","Data":"f2cf3ae18e34478d7d26effd9b2179f84dff68f4167ef09ba06d48ecd9963402"} Feb 03 13:26:43 crc kubenswrapper[4820]: I0203 13:26:43.982159 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nr45p" event={"ID":"ad97d10d-71b4-42bb-974f-16643101d61c","Type":"ContainerStarted","Data":"cf9c63a7cbdddb129868bc066559bafc6f559a5213c356988543655fb12c4aeb"} Feb 03 13:26:44 crc kubenswrapper[4820]: I0203 13:26:44.996461 4820 generic.go:334] "Generic (PLEG): container finished" podID="ad97d10d-71b4-42bb-974f-16643101d61c" containerID="cf9c63a7cbdddb129868bc066559bafc6f559a5213c356988543655fb12c4aeb" exitCode=0 Feb 03 13:26:44 crc kubenswrapper[4820]: I0203 13:26:44.996565 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nr45p" event={"ID":"ad97d10d-71b4-42bb-974f-16643101d61c","Type":"ContainerDied","Data":"cf9c63a7cbdddb129868bc066559bafc6f559a5213c356988543655fb12c4aeb"} Feb 03 13:26:46 crc kubenswrapper[4820]: I0203 13:26:46.018881 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-nr45p" event={"ID":"ad97d10d-71b4-42bb-974f-16643101d61c","Type":"ContainerStarted","Data":"7c9d9964eb565369aadd8f82f742c027549c2e60e9827e68d4f75e3127e0ab8c"} Feb 03 13:26:46 crc kubenswrapper[4820]: I0203 13:26:46.063458 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-nr45p" podStartSLOduration=2.6022989020000002 podStartE2EDuration="5.063426335s" podCreationTimestamp="2026-02-03 13:26:41 +0000 UTC" firstStartedPulling="2026-02-03 13:26:42.950498263 +0000 UTC m=+4920.473574127" lastFinishedPulling="2026-02-03 13:26:45.411625696 +0000 UTC m=+4922.934701560" observedRunningTime="2026-02-03 13:26:46.045749114 +0000 UTC m=+4923.568824988" watchObservedRunningTime="2026-02-03 13:26:46.063426335 +0000 UTC m=+4923.586502199" Feb 03 13:26:51 crc kubenswrapper[4820]: I0203 13:26:51.502127 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-nr45p" Feb 03 13:26:51 crc kubenswrapper[4820]: I0203 13:26:51.502712 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-nr45p" Feb 03 13:26:51 crc kubenswrapper[4820]: I0203 13:26:51.992782 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-nr45p" Feb 03 13:26:52 crc kubenswrapper[4820]: I0203 13:26:52.130505 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-nr45p" Feb 03 13:26:52 crc kubenswrapper[4820]: I0203 13:26:52.237610 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-nr45p"] Feb 03 13:26:54 crc kubenswrapper[4820]: I0203 13:26:54.088860 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-nr45p" podUID="ad97d10d-71b4-42bb-974f-16643101d61c" containerName="registry-server" containerID="cri-o://7c9d9964eb565369aadd8f82f742c027549c2e60e9827e68d4f75e3127e0ab8c" gracePeriod=2 Feb 03 13:26:54 crc kubenswrapper[4820]: I0203 13:26:54.732119 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-nr45p" Feb 03 13:26:54 crc kubenswrapper[4820]: I0203 13:26:54.917096 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qd8qr\" (UniqueName: \"kubernetes.io/projected/ad97d10d-71b4-42bb-974f-16643101d61c-kube-api-access-qd8qr\") pod \"ad97d10d-71b4-42bb-974f-16643101d61c\" (UID: \"ad97d10d-71b4-42bb-974f-16643101d61c\") " Feb 03 13:26:54 crc kubenswrapper[4820]: I0203 13:26:54.917608 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad97d10d-71b4-42bb-974f-16643101d61c-utilities\") pod \"ad97d10d-71b4-42bb-974f-16643101d61c\" (UID: \"ad97d10d-71b4-42bb-974f-16643101d61c\") " Feb 03 13:26:54 crc kubenswrapper[4820]: I0203 13:26:54.917657 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad97d10d-71b4-42bb-974f-16643101d61c-catalog-content\") pod \"ad97d10d-71b4-42bb-974f-16643101d61c\" (UID: \"ad97d10d-71b4-42bb-974f-16643101d61c\") " Feb 03 13:26:54 crc kubenswrapper[4820]: I0203 13:26:54.918355 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ad97d10d-71b4-42bb-974f-16643101d61c-utilities" (OuterVolumeSpecName: "utilities") pod "ad97d10d-71b4-42bb-974f-16643101d61c" (UID: "ad97d10d-71b4-42bb-974f-16643101d61c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 13:26:54 crc kubenswrapper[4820]: I0203 13:26:54.923241 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad97d10d-71b4-42bb-974f-16643101d61c-kube-api-access-qd8qr" (OuterVolumeSpecName: "kube-api-access-qd8qr") pod "ad97d10d-71b4-42bb-974f-16643101d61c" (UID: "ad97d10d-71b4-42bb-974f-16643101d61c"). InnerVolumeSpecName "kube-api-access-qd8qr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:26:54 crc kubenswrapper[4820]: I0203 13:26:54.969382 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ad97d10d-71b4-42bb-974f-16643101d61c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ad97d10d-71b4-42bb-974f-16643101d61c" (UID: "ad97d10d-71b4-42bb-974f-16643101d61c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 13:26:55 crc kubenswrapper[4820]: I0203 13:26:55.222408 4820 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad97d10d-71b4-42bb-974f-16643101d61c-utilities\") on node \"crc\" DevicePath \"\"" Feb 03 13:26:55 crc kubenswrapper[4820]: I0203 13:26:55.223504 4820 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad97d10d-71b4-42bb-974f-16643101d61c-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 03 13:26:55 crc kubenswrapper[4820]: I0203 13:26:55.223535 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qd8qr\" (UniqueName: \"kubernetes.io/projected/ad97d10d-71b4-42bb-974f-16643101d61c-kube-api-access-qd8qr\") on node \"crc\" DevicePath \"\"" Feb 03 13:26:55 crc kubenswrapper[4820]: I0203 13:26:55.290212 4820 generic.go:334] "Generic (PLEG): container finished" podID="ad97d10d-71b4-42bb-974f-16643101d61c" containerID="7c9d9964eb565369aadd8f82f742c027549c2e60e9827e68d4f75e3127e0ab8c" exitCode=0 Feb 03 13:26:55 crc kubenswrapper[4820]: I0203 13:26:55.290511 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nr45p" event={"ID":"ad97d10d-71b4-42bb-974f-16643101d61c","Type":"ContainerDied","Data":"7c9d9964eb565369aadd8f82f742c027549c2e60e9827e68d4f75e3127e0ab8c"} Feb 03 13:26:55 crc kubenswrapper[4820]: I0203 13:26:55.290595 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nr45p" event={"ID":"ad97d10d-71b4-42bb-974f-16643101d61c","Type":"ContainerDied","Data":"f2cf3ae18e34478d7d26effd9b2179f84dff68f4167ef09ba06d48ecd9963402"} Feb 03 13:26:55 crc kubenswrapper[4820]: I0203 13:26:55.290672 4820 scope.go:117] "RemoveContainer" containerID="7c9d9964eb565369aadd8f82f742c027549c2e60e9827e68d4f75e3127e0ab8c" Feb 03 13:26:55 crc kubenswrapper[4820]: I0203 13:26:55.290942 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-nr45p" Feb 03 13:26:55 crc kubenswrapper[4820]: I0203 13:26:55.376168 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-nr45p"] Feb 03 13:26:55 crc kubenswrapper[4820]: I0203 13:26:55.385822 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-nr45p"] Feb 03 13:26:55 crc kubenswrapper[4820]: I0203 13:26:55.388087 4820 scope.go:117] "RemoveContainer" containerID="cf9c63a7cbdddb129868bc066559bafc6f559a5213c356988543655fb12c4aeb" Feb 03 13:26:55 crc kubenswrapper[4820]: I0203 13:26:55.469532 4820 scope.go:117] "RemoveContainer" containerID="a6d04e1bb0f8886e2cd264ed1c16679c20bc68aa2c5e3ebb6d041ca25964d8cc" Feb 03 13:26:55 crc kubenswrapper[4820]: I0203 13:26:55.509208 4820 scope.go:117] "RemoveContainer" containerID="7c9d9964eb565369aadd8f82f742c027549c2e60e9827e68d4f75e3127e0ab8c" Feb 03 13:26:55 crc kubenswrapper[4820]: E0203 13:26:55.509923 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7c9d9964eb565369aadd8f82f742c027549c2e60e9827e68d4f75e3127e0ab8c\": container with ID starting with 7c9d9964eb565369aadd8f82f742c027549c2e60e9827e68d4f75e3127e0ab8c not found: ID does not exist" containerID="7c9d9964eb565369aadd8f82f742c027549c2e60e9827e68d4f75e3127e0ab8c" Feb 03 13:26:55 crc kubenswrapper[4820]: I0203 13:26:55.509964 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c9d9964eb565369aadd8f82f742c027549c2e60e9827e68d4f75e3127e0ab8c"} err="failed to get container status \"7c9d9964eb565369aadd8f82f742c027549c2e60e9827e68d4f75e3127e0ab8c\": rpc error: code = NotFound desc = could not find container \"7c9d9964eb565369aadd8f82f742c027549c2e60e9827e68d4f75e3127e0ab8c\": container with ID starting with 7c9d9964eb565369aadd8f82f742c027549c2e60e9827e68d4f75e3127e0ab8c not found: ID does not exist" Feb 03 13:26:55 crc kubenswrapper[4820]: I0203 13:26:55.509995 4820 scope.go:117] "RemoveContainer" containerID="cf9c63a7cbdddb129868bc066559bafc6f559a5213c356988543655fb12c4aeb" Feb 03 13:26:55 crc kubenswrapper[4820]: E0203 13:26:55.510561 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cf9c63a7cbdddb129868bc066559bafc6f559a5213c356988543655fb12c4aeb\": container with ID starting with cf9c63a7cbdddb129868bc066559bafc6f559a5213c356988543655fb12c4aeb not found: ID does not exist" containerID="cf9c63a7cbdddb129868bc066559bafc6f559a5213c356988543655fb12c4aeb" Feb 03 13:26:55 crc kubenswrapper[4820]: I0203 13:26:55.510622 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cf9c63a7cbdddb129868bc066559bafc6f559a5213c356988543655fb12c4aeb"} err="failed to get container status \"cf9c63a7cbdddb129868bc066559bafc6f559a5213c356988543655fb12c4aeb\": rpc error: code = NotFound desc = could not find container \"cf9c63a7cbdddb129868bc066559bafc6f559a5213c356988543655fb12c4aeb\": container with ID starting with cf9c63a7cbdddb129868bc066559bafc6f559a5213c356988543655fb12c4aeb not found: ID does not exist" Feb 03 13:26:55 crc kubenswrapper[4820]: I0203 13:26:55.510662 4820 scope.go:117] "RemoveContainer" containerID="a6d04e1bb0f8886e2cd264ed1c16679c20bc68aa2c5e3ebb6d041ca25964d8cc" Feb 03 13:26:55 crc kubenswrapper[4820]: E0203 13:26:55.511137 4820 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"a6d04e1bb0f8886e2cd264ed1c16679c20bc68aa2c5e3ebb6d041ca25964d8cc\": container with ID starting with a6d04e1bb0f8886e2cd264ed1c16679c20bc68aa2c5e3ebb6d041ca25964d8cc not found: ID does not exist" containerID="a6d04e1bb0f8886e2cd264ed1c16679c20bc68aa2c5e3ebb6d041ca25964d8cc" Feb 03 13:26:55 crc kubenswrapper[4820]: I0203 13:26:55.511161 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a6d04e1bb0f8886e2cd264ed1c16679c20bc68aa2c5e3ebb6d041ca25964d8cc"} err="failed to get container status \"a6d04e1bb0f8886e2cd264ed1c16679c20bc68aa2c5e3ebb6d041ca25964d8cc\": rpc error: code = NotFound desc = could not find container \"a6d04e1bb0f8886e2cd264ed1c16679c20bc68aa2c5e3ebb6d041ca25964d8cc\": container with ID starting with a6d04e1bb0f8886e2cd264ed1c16679c20bc68aa2c5e3ebb6d041ca25964d8cc not found: ID does not exist" Feb 03 13:26:57 crc kubenswrapper[4820]: I0203 13:26:57.158668 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ad97d10d-71b4-42bb-974f-16643101d61c" path="/var/lib/kubelet/pods/ad97d10d-71b4-42bb-974f-16643101d61c/volumes" Feb 03 13:28:21 crc kubenswrapper[4820]: I0203 13:28:21.193053 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-8fvdj"] Feb 03 13:28:21 crc kubenswrapper[4820]: E0203 13:28:21.194668 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad97d10d-71b4-42bb-974f-16643101d61c" containerName="extract-content" Feb 03 13:28:21 crc kubenswrapper[4820]: I0203 13:28:21.194715 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad97d10d-71b4-42bb-974f-16643101d61c" containerName="extract-content" Feb 03 13:28:21 crc kubenswrapper[4820]: E0203 13:28:21.194764 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad97d10d-71b4-42bb-974f-16643101d61c" containerName="registry-server" Feb 03 13:28:21 crc kubenswrapper[4820]: I0203 13:28:21.194776 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad97d10d-71b4-42bb-974f-16643101d61c" containerName="registry-server" Feb 03 13:28:21 crc kubenswrapper[4820]: E0203 13:28:21.194840 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad97d10d-71b4-42bb-974f-16643101d61c" containerName="extract-utilities" Feb 03 13:28:21 crc kubenswrapper[4820]: I0203 13:28:21.194856 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad97d10d-71b4-42bb-974f-16643101d61c" containerName="extract-utilities" Feb 03 13:28:21 crc kubenswrapper[4820]: I0203 13:28:21.195346 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad97d10d-71b4-42bb-974f-16643101d61c" containerName="registry-server" Feb 03 13:28:21 crc kubenswrapper[4820]: I0203 13:28:21.198297 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8fvdj" Feb 03 13:28:21 crc kubenswrapper[4820]: I0203 13:28:21.206840 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8fvdj"] Feb 03 13:28:21 crc kubenswrapper[4820]: I0203 13:28:21.242438 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mrg6b\" (UniqueName: \"kubernetes.io/projected/2d2f8d61-4857-4932-8d9e-52fab99eff24-kube-api-access-mrg6b\") pod \"redhat-marketplace-8fvdj\" (UID: \"2d2f8d61-4857-4932-8d9e-52fab99eff24\") " pod="openshift-marketplace/redhat-marketplace-8fvdj" Feb 03 13:28:21 crc kubenswrapper[4820]: I0203 13:28:21.242527 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d2f8d61-4857-4932-8d9e-52fab99eff24-utilities\") pod \"redhat-marketplace-8fvdj\" (UID: \"2d2f8d61-4857-4932-8d9e-52fab99eff24\") " pod="openshift-marketplace/redhat-marketplace-8fvdj" Feb 03 13:28:21 crc kubenswrapper[4820]: I0203 13:28:21.242583 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d2f8d61-4857-4932-8d9e-52fab99eff24-catalog-content\") pod \"redhat-marketplace-8fvdj\" (UID: \"2d2f8d61-4857-4932-8d9e-52fab99eff24\") " pod="openshift-marketplace/redhat-marketplace-8fvdj" Feb 03 13:28:21 crc kubenswrapper[4820]: I0203 13:28:21.345427 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mrg6b\" (UniqueName: \"kubernetes.io/projected/2d2f8d61-4857-4932-8d9e-52fab99eff24-kube-api-access-mrg6b\") pod \"redhat-marketplace-8fvdj\" (UID: \"2d2f8d61-4857-4932-8d9e-52fab99eff24\") " pod="openshift-marketplace/redhat-marketplace-8fvdj" Feb 03 13:28:21 crc kubenswrapper[4820]: I0203 13:28:21.345512 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d2f8d61-4857-4932-8d9e-52fab99eff24-utilities\") pod \"redhat-marketplace-8fvdj\" (UID: \"2d2f8d61-4857-4932-8d9e-52fab99eff24\") " pod="openshift-marketplace/redhat-marketplace-8fvdj" Feb 03 13:28:21 crc kubenswrapper[4820]: I0203 13:28:21.345551 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d2f8d61-4857-4932-8d9e-52fab99eff24-catalog-content\") pod \"redhat-marketplace-8fvdj\" (UID: \"2d2f8d61-4857-4932-8d9e-52fab99eff24\") " pod="openshift-marketplace/redhat-marketplace-8fvdj" Feb 03 13:28:21 crc kubenswrapper[4820]: I0203 13:28:21.346089 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d2f8d61-4857-4932-8d9e-52fab99eff24-utilities\") pod \"redhat-marketplace-8fvdj\" (UID: \"2d2f8d61-4857-4932-8d9e-52fab99eff24\") " pod="openshift-marketplace/redhat-marketplace-8fvdj" Feb 03 13:28:21 crc kubenswrapper[4820]: I0203 13:28:21.346121 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d2f8d61-4857-4932-8d9e-52fab99eff24-catalog-content\") pod \"redhat-marketplace-8fvdj\" (UID: \"2d2f8d61-4857-4932-8d9e-52fab99eff24\") " pod="openshift-marketplace/redhat-marketplace-8fvdj" Feb 03 13:28:21 crc kubenswrapper[4820]: I0203 13:28:21.368217 4820 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-mrg6b\" (UniqueName: \"kubernetes.io/projected/2d2f8d61-4857-4932-8d9e-52fab99eff24-kube-api-access-mrg6b\") pod \"redhat-marketplace-8fvdj\" (UID: \"2d2f8d61-4857-4932-8d9e-52fab99eff24\") " pod="openshift-marketplace/redhat-marketplace-8fvdj" Feb 03 13:28:21 crc kubenswrapper[4820]: I0203 13:28:21.527084 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8fvdj" Feb 03 13:28:22 crc kubenswrapper[4820]: I0203 13:28:22.034541 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8fvdj"] Feb 03 13:28:23 crc kubenswrapper[4820]: I0203 13:28:23.297719 4820 generic.go:334] "Generic (PLEG): container finished" podID="2d2f8d61-4857-4932-8d9e-52fab99eff24" containerID="7ff9d9117152a22c966420626d5254a37994262972e436bc95c49ca4be7167b0" exitCode=0 Feb 03 13:28:23 crc kubenswrapper[4820]: I0203 13:28:23.297821 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8fvdj" event={"ID":"2d2f8d61-4857-4932-8d9e-52fab99eff24","Type":"ContainerDied","Data":"7ff9d9117152a22c966420626d5254a37994262972e436bc95c49ca4be7167b0"} Feb 03 13:28:23 crc kubenswrapper[4820]: I0203 13:28:23.299291 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8fvdj" event={"ID":"2d2f8d61-4857-4932-8d9e-52fab99eff24","Type":"ContainerStarted","Data":"1cac6b2230e147fbdc7baca29a9c979bec065ce64cb70a02c1274bf6767d83ab"} Feb 03 13:28:23 crc kubenswrapper[4820]: I0203 13:28:23.300125 4820 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 03 13:28:24 crc kubenswrapper[4820]: I0203 13:28:24.309745 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8fvdj" event={"ID":"2d2f8d61-4857-4932-8d9e-52fab99eff24","Type":"ContainerStarted","Data":"f4730bcf6b7fbe81d3fa984a5c7175e2e16850c836ee2172ec84138d9ccdda18"} Feb 03 13:28:25 crc kubenswrapper[4820]: I0203 13:28:25.322794 4820 generic.go:334] "Generic (PLEG): container finished" podID="2d2f8d61-4857-4932-8d9e-52fab99eff24" containerID="f4730bcf6b7fbe81d3fa984a5c7175e2e16850c836ee2172ec84138d9ccdda18" exitCode=0 Feb 03 13:28:25 crc kubenswrapper[4820]: I0203 13:28:25.322847 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8fvdj" event={"ID":"2d2f8d61-4857-4932-8d9e-52fab99eff24","Type":"ContainerDied","Data":"f4730bcf6b7fbe81d3fa984a5c7175e2e16850c836ee2172ec84138d9ccdda18"} Feb 03 13:28:26 crc kubenswrapper[4820]: I0203 13:28:26.353611 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8fvdj" event={"ID":"2d2f8d61-4857-4932-8d9e-52fab99eff24","Type":"ContainerStarted","Data":"af3124c617ff50c26141ad090e3d1af075632f95046f50a7ba981399efb6bc72"} Feb 03 13:28:26 crc kubenswrapper[4820]: I0203 13:28:26.376078 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-8fvdj" podStartSLOduration=2.559318203 podStartE2EDuration="5.376042712s" podCreationTimestamp="2026-02-03 13:28:21 +0000 UTC" firstStartedPulling="2026-02-03 13:28:23.299547238 +0000 UTC m=+5020.822623092" lastFinishedPulling="2026-02-03 13:28:26.116271737 +0000 UTC m=+5023.639347601" observedRunningTime="2026-02-03 13:28:26.375327702 +0000 UTC m=+5023.898403586" watchObservedRunningTime="2026-02-03 13:28:26.376042712 +0000 UTC 
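
The PLEG events above trace the usual marketplace catalog pod sequence: two setup containers each start, exit 0, and die, and only then does registry-server start. A schematic runner under the assumption that each step must exit 0 before the next begins (container names mirror the log; the run bodies are stand-ins):

package main

import "fmt"

type step struct {
	name string
	run  func() int // returns the container's exit code
}

func main() {
	steps := []step{
		{"extract-utilities", func() int { return 0 }},
		{"extract-content", func() int { return 0 }},
	}
	for _, s := range steps {
		fmt.Println("ContainerStarted:", s.name)
		if code := s.run(); code != 0 {
			fmt.Printf("ContainerDied: %s exitCode=%d (pod does not progress)\n", s.name, code)
			return
		}
		fmt.Println("ContainerDied:", s.name, "exitCode=0")
	}
	fmt.Println("ContainerStarted: registry-server") // the long-running container
}
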
m=+5023.899118576" Feb 03 13:28:31 crc kubenswrapper[4820]: I0203 13:28:31.365922 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qj7xr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 13:28:31 crc kubenswrapper[4820]: I0203 13:28:31.366486 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 13:28:31 crc kubenswrapper[4820]: I0203 13:28:31.527877 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-8fvdj" Feb 03 13:28:31 crc kubenswrapper[4820]: I0203 13:28:31.527980 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-8fvdj" Feb 03 13:28:31 crc kubenswrapper[4820]: I0203 13:28:31.580820 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-8fvdj" Feb 03 13:28:32 crc kubenswrapper[4820]: I0203 13:28:32.975657 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-8fvdj" Feb 03 13:28:33 crc kubenswrapper[4820]: I0203 13:28:33.043365 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-6mwff"] Feb 03 13:28:33 crc kubenswrapper[4820]: I0203 13:28:33.045821 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-6mwff" Feb 03 13:28:33 crc kubenswrapper[4820]: I0203 13:28:33.055485 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6mwff"] Feb 03 13:28:33 crc kubenswrapper[4820]: I0203 13:28:33.083685 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7ccb901e-2e35-48c8-9f03-66bff3ac44ea-utilities\") pod \"community-operators-6mwff\" (UID: \"7ccb901e-2e35-48c8-9f03-66bff3ac44ea\") " pod="openshift-marketplace/community-operators-6mwff" Feb 03 13:28:33 crc kubenswrapper[4820]: I0203 13:28:33.083754 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-55jmp\" (UniqueName: \"kubernetes.io/projected/7ccb901e-2e35-48c8-9f03-66bff3ac44ea-kube-api-access-55jmp\") pod \"community-operators-6mwff\" (UID: \"7ccb901e-2e35-48c8-9f03-66bff3ac44ea\") " pod="openshift-marketplace/community-operators-6mwff" Feb 03 13:28:33 crc kubenswrapper[4820]: I0203 13:28:33.083806 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7ccb901e-2e35-48c8-9f03-66bff3ac44ea-catalog-content\") pod \"community-operators-6mwff\" (UID: \"7ccb901e-2e35-48c8-9f03-66bff3ac44ea\") " pod="openshift-marketplace/community-operators-6mwff" Feb 03 13:28:33 crc kubenswrapper[4820]: I0203 13:28:33.185372 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7ccb901e-2e35-48c8-9f03-66bff3ac44ea-utilities\") pod \"community-operators-6mwff\" (UID: \"7ccb901e-2e35-48c8-9f03-66bff3ac44ea\") " pod="openshift-marketplace/community-operators-6mwff" Feb 03 13:28:33 crc kubenswrapper[4820]: I0203 13:28:33.185450 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-55jmp\" (UniqueName: \"kubernetes.io/projected/7ccb901e-2e35-48c8-9f03-66bff3ac44ea-kube-api-access-55jmp\") pod \"community-operators-6mwff\" (UID: \"7ccb901e-2e35-48c8-9f03-66bff3ac44ea\") " pod="openshift-marketplace/community-operators-6mwff" Feb 03 13:28:33 crc kubenswrapper[4820]: I0203 13:28:33.185765 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7ccb901e-2e35-48c8-9f03-66bff3ac44ea-catalog-content\") pod \"community-operators-6mwff\" (UID: \"7ccb901e-2e35-48c8-9f03-66bff3ac44ea\") " pod="openshift-marketplace/community-operators-6mwff" Feb 03 13:28:33 crc kubenswrapper[4820]: I0203 13:28:33.185954 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7ccb901e-2e35-48c8-9f03-66bff3ac44ea-utilities\") pod \"community-operators-6mwff\" (UID: \"7ccb901e-2e35-48c8-9f03-66bff3ac44ea\") " pod="openshift-marketplace/community-operators-6mwff" Feb 03 13:28:33 crc kubenswrapper[4820]: I0203 13:28:33.186376 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7ccb901e-2e35-48c8-9f03-66bff3ac44ea-catalog-content\") pod \"community-operators-6mwff\" (UID: \"7ccb901e-2e35-48c8-9f03-66bff3ac44ea\") " pod="openshift-marketplace/community-operators-6mwff" Feb 03 13:28:33 crc kubenswrapper[4820]: I0203 13:28:33.211526 4820 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-55jmp\" (UniqueName: \"kubernetes.io/projected/7ccb901e-2e35-48c8-9f03-66bff3ac44ea-kube-api-access-55jmp\") pod \"community-operators-6mwff\" (UID: \"7ccb901e-2e35-48c8-9f03-66bff3ac44ea\") " pod="openshift-marketplace/community-operators-6mwff" Feb 03 13:28:33 crc kubenswrapper[4820]: I0203 13:28:33.369418 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6mwff" Feb 03 13:28:33 crc kubenswrapper[4820]: I0203 13:28:33.930496 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6mwff"] Feb 03 13:28:34 crc kubenswrapper[4820]: I0203 13:28:34.457674 4820 generic.go:334] "Generic (PLEG): container finished" podID="7ccb901e-2e35-48c8-9f03-66bff3ac44ea" containerID="b6c1c20a2fc15c90b2cb40ae720ac5ff9be778b723d4c134c5d9c618ae4c2aef" exitCode=0 Feb 03 13:28:34 crc kubenswrapper[4820]: I0203 13:28:34.457767 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6mwff" event={"ID":"7ccb901e-2e35-48c8-9f03-66bff3ac44ea","Type":"ContainerDied","Data":"b6c1c20a2fc15c90b2cb40ae720ac5ff9be778b723d4c134c5d9c618ae4c2aef"} Feb 03 13:28:34 crc kubenswrapper[4820]: I0203 13:28:34.458032 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6mwff" event={"ID":"7ccb901e-2e35-48c8-9f03-66bff3ac44ea","Type":"ContainerStarted","Data":"8d2f7015436aa60cfeca24aba3f8876be45f9e95dfafce34ad74bc033663105e"} Feb 03 13:28:35 crc kubenswrapper[4820]: I0203 13:28:35.480342 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6mwff" event={"ID":"7ccb901e-2e35-48c8-9f03-66bff3ac44ea","Type":"ContainerStarted","Data":"28440d5038f3842e78ef53de892546b6aa9e0edcc8e13e43481d548684fd0574"} Feb 03 13:28:37 crc kubenswrapper[4820]: I0203 13:28:37.002767 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8fvdj"] Feb 03 13:28:37 crc kubenswrapper[4820]: I0203 13:28:37.003405 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-8fvdj" podUID="2d2f8d61-4857-4932-8d9e-52fab99eff24" containerName="registry-server" containerID="cri-o://af3124c617ff50c26141ad090e3d1af075632f95046f50a7ba981399efb6bc72" gracePeriod=2 Feb 03 13:28:37 crc kubenswrapper[4820]: I0203 13:28:37.503308 4820 generic.go:334] "Generic (PLEG): container finished" podID="2d2f8d61-4857-4932-8d9e-52fab99eff24" containerID="af3124c617ff50c26141ad090e3d1af075632f95046f50a7ba981399efb6bc72" exitCode=0 Feb 03 13:28:37 crc kubenswrapper[4820]: I0203 13:28:37.503610 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8fvdj" event={"ID":"2d2f8d61-4857-4932-8d9e-52fab99eff24","Type":"ContainerDied","Data":"af3124c617ff50c26141ad090e3d1af075632f95046f50a7ba981399efb6bc72"} Feb 03 13:28:37 crc kubenswrapper[4820]: I0203 13:28:37.503755 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8fvdj" event={"ID":"2d2f8d61-4857-4932-8d9e-52fab99eff24","Type":"ContainerDied","Data":"1cac6b2230e147fbdc7baca29a9c979bec065ce64cb70a02c1274bf6767d83ab"} Feb 03 13:28:37 crc kubenswrapper[4820]: I0203 13:28:37.503795 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1cac6b2230e147fbdc7baca29a9c979bec065ce64cb70a02c1274bf6767d83ab" 
Feb 03 13:28:37 crc kubenswrapper[4820]: I0203 13:28:37.507311 4820 generic.go:334] "Generic (PLEG): container finished" podID="7ccb901e-2e35-48c8-9f03-66bff3ac44ea" containerID="28440d5038f3842e78ef53de892546b6aa9e0edcc8e13e43481d548684fd0574" exitCode=0 Feb 03 13:28:37 crc kubenswrapper[4820]: I0203 13:28:37.507380 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6mwff" event={"ID":"7ccb901e-2e35-48c8-9f03-66bff3ac44ea","Type":"ContainerDied","Data":"28440d5038f3842e78ef53de892546b6aa9e0edcc8e13e43481d548684fd0574"} Feb 03 13:28:37 crc kubenswrapper[4820]: I0203 13:28:37.541745 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8fvdj" Feb 03 13:28:37 crc kubenswrapper[4820]: I0203 13:28:37.591449 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d2f8d61-4857-4932-8d9e-52fab99eff24-catalog-content\") pod \"2d2f8d61-4857-4932-8d9e-52fab99eff24\" (UID: \"2d2f8d61-4857-4932-8d9e-52fab99eff24\") " Feb 03 13:28:37 crc kubenswrapper[4820]: I0203 13:28:37.591694 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mrg6b\" (UniqueName: \"kubernetes.io/projected/2d2f8d61-4857-4932-8d9e-52fab99eff24-kube-api-access-mrg6b\") pod \"2d2f8d61-4857-4932-8d9e-52fab99eff24\" (UID: \"2d2f8d61-4857-4932-8d9e-52fab99eff24\") " Feb 03 13:28:37 crc kubenswrapper[4820]: I0203 13:28:37.591771 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d2f8d61-4857-4932-8d9e-52fab99eff24-utilities\") pod \"2d2f8d61-4857-4932-8d9e-52fab99eff24\" (UID: \"2d2f8d61-4857-4932-8d9e-52fab99eff24\") " Feb 03 13:28:37 crc kubenswrapper[4820]: I0203 13:28:37.592587 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2d2f8d61-4857-4932-8d9e-52fab99eff24-utilities" (OuterVolumeSpecName: "utilities") pod "2d2f8d61-4857-4932-8d9e-52fab99eff24" (UID: "2d2f8d61-4857-4932-8d9e-52fab99eff24"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 13:28:37 crc kubenswrapper[4820]: I0203 13:28:37.597287 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d2f8d61-4857-4932-8d9e-52fab99eff24-kube-api-access-mrg6b" (OuterVolumeSpecName: "kube-api-access-mrg6b") pod "2d2f8d61-4857-4932-8d9e-52fab99eff24" (UID: "2d2f8d61-4857-4932-8d9e-52fab99eff24"). InnerVolumeSpecName "kube-api-access-mrg6b". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:28:37 crc kubenswrapper[4820]: I0203 13:28:37.615318 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2d2f8d61-4857-4932-8d9e-52fab99eff24-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2d2f8d61-4857-4932-8d9e-52fab99eff24" (UID: "2d2f8d61-4857-4932-8d9e-52fab99eff24"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 13:28:37 crc kubenswrapper[4820]: I0203 13:28:37.695943 4820 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d2f8d61-4857-4932-8d9e-52fab99eff24-utilities\") on node \"crc\" DevicePath \"\"" Feb 03 13:28:37 crc kubenswrapper[4820]: I0203 13:28:37.696025 4820 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d2f8d61-4857-4932-8d9e-52fab99eff24-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 03 13:28:37 crc kubenswrapper[4820]: I0203 13:28:37.696041 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mrg6b\" (UniqueName: \"kubernetes.io/projected/2d2f8d61-4857-4932-8d9e-52fab99eff24-kube-api-access-mrg6b\") on node \"crc\" DevicePath \"\"" Feb 03 13:28:38 crc kubenswrapper[4820]: I0203 13:28:38.520450 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6mwff" event={"ID":"7ccb901e-2e35-48c8-9f03-66bff3ac44ea","Type":"ContainerStarted","Data":"a200867b7d4863d7af58c20cf6f76b2db13fed97d0a61db301b360e661a6fc27"} Feb 03 13:28:38 crc kubenswrapper[4820]: I0203 13:28:38.520488 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8fvdj" Feb 03 13:28:38 crc kubenswrapper[4820]: I0203 13:28:38.547868 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-6mwff" podStartSLOduration=2.903230162 podStartE2EDuration="6.547838992s" podCreationTimestamp="2026-02-03 13:28:32 +0000 UTC" firstStartedPulling="2026-02-03 13:28:34.460630307 +0000 UTC m=+5031.983706171" lastFinishedPulling="2026-02-03 13:28:38.105239137 +0000 UTC m=+5035.628315001" observedRunningTime="2026-02-03 13:28:38.543551357 +0000 UTC m=+5036.066627241" watchObservedRunningTime="2026-02-03 13:28:38.547838992 +0000 UTC m=+5036.070914866" Feb 03 13:28:38 crc kubenswrapper[4820]: I0203 13:28:38.577022 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8fvdj"] Feb 03 13:28:38 crc kubenswrapper[4820]: I0203 13:28:38.586219 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-8fvdj"] Feb 03 13:28:39 crc kubenswrapper[4820]: I0203 13:28:39.158685 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2d2f8d61-4857-4932-8d9e-52fab99eff24" path="/var/lib/kubelet/pods/2d2f8d61-4857-4932-8d9e-52fab99eff24/volumes" Feb 03 13:28:43 crc kubenswrapper[4820]: I0203 13:28:43.371428 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-6mwff" Feb 03 13:28:43 crc kubenswrapper[4820]: I0203 13:28:43.372012 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-6mwff" Feb 03 13:28:43 crc kubenswrapper[4820]: I0203 13:28:43.424460 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-6mwff" Feb 03 13:28:43 crc kubenswrapper[4820]: I0203 13:28:43.630685 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-6mwff" Feb 03 13:28:43 crc kubenswrapper[4820]: I0203 13:28:43.690380 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6mwff"] Feb 
03 13:28:45 crc kubenswrapper[4820]: I0203 13:28:45.594101 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-6mwff" podUID="7ccb901e-2e35-48c8-9f03-66bff3ac44ea" containerName="registry-server" containerID="cri-o://a200867b7d4863d7af58c20cf6f76b2db13fed97d0a61db301b360e661a6fc27" gracePeriod=2 Feb 03 13:28:46 crc kubenswrapper[4820]: I0203 13:28:46.145613 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6mwff" Feb 03 13:28:46 crc kubenswrapper[4820]: I0203 13:28:46.293235 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-55jmp\" (UniqueName: \"kubernetes.io/projected/7ccb901e-2e35-48c8-9f03-66bff3ac44ea-kube-api-access-55jmp\") pod \"7ccb901e-2e35-48c8-9f03-66bff3ac44ea\" (UID: \"7ccb901e-2e35-48c8-9f03-66bff3ac44ea\") " Feb 03 13:28:46 crc kubenswrapper[4820]: I0203 13:28:46.293489 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7ccb901e-2e35-48c8-9f03-66bff3ac44ea-utilities\") pod \"7ccb901e-2e35-48c8-9f03-66bff3ac44ea\" (UID: \"7ccb901e-2e35-48c8-9f03-66bff3ac44ea\") " Feb 03 13:28:46 crc kubenswrapper[4820]: I0203 13:28:46.293980 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7ccb901e-2e35-48c8-9f03-66bff3ac44ea-catalog-content\") pod \"7ccb901e-2e35-48c8-9f03-66bff3ac44ea\" (UID: \"7ccb901e-2e35-48c8-9f03-66bff3ac44ea\") " Feb 03 13:28:46 crc kubenswrapper[4820]: I0203 13:28:46.294485 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7ccb901e-2e35-48c8-9f03-66bff3ac44ea-utilities" (OuterVolumeSpecName: "utilities") pod "7ccb901e-2e35-48c8-9f03-66bff3ac44ea" (UID: "7ccb901e-2e35-48c8-9f03-66bff3ac44ea"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 13:28:46 crc kubenswrapper[4820]: I0203 13:28:46.295084 4820 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7ccb901e-2e35-48c8-9f03-66bff3ac44ea-utilities\") on node \"crc\" DevicePath \"\"" Feb 03 13:28:46 crc kubenswrapper[4820]: I0203 13:28:46.301126 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ccb901e-2e35-48c8-9f03-66bff3ac44ea-kube-api-access-55jmp" (OuterVolumeSpecName: "kube-api-access-55jmp") pod "7ccb901e-2e35-48c8-9f03-66bff3ac44ea" (UID: "7ccb901e-2e35-48c8-9f03-66bff3ac44ea"). InnerVolumeSpecName "kube-api-access-55jmp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:28:46 crc kubenswrapper[4820]: I0203 13:28:46.345419 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7ccb901e-2e35-48c8-9f03-66bff3ac44ea-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7ccb901e-2e35-48c8-9f03-66bff3ac44ea" (UID: "7ccb901e-2e35-48c8-9f03-66bff3ac44ea"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 13:28:46 crc kubenswrapper[4820]: I0203 13:28:46.397500 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-55jmp\" (UniqueName: \"kubernetes.io/projected/7ccb901e-2e35-48c8-9f03-66bff3ac44ea-kube-api-access-55jmp\") on node \"crc\" DevicePath \"\"" Feb 03 13:28:46 crc kubenswrapper[4820]: I0203 13:28:46.397569 4820 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7ccb901e-2e35-48c8-9f03-66bff3ac44ea-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 03 13:28:46 crc kubenswrapper[4820]: I0203 13:28:46.604558 4820 generic.go:334] "Generic (PLEG): container finished" podID="7ccb901e-2e35-48c8-9f03-66bff3ac44ea" containerID="a200867b7d4863d7af58c20cf6f76b2db13fed97d0a61db301b360e661a6fc27" exitCode=0 Feb 03 13:28:46 crc kubenswrapper[4820]: I0203 13:28:46.604618 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6mwff" Feb 03 13:28:46 crc kubenswrapper[4820]: I0203 13:28:46.604612 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6mwff" event={"ID":"7ccb901e-2e35-48c8-9f03-66bff3ac44ea","Type":"ContainerDied","Data":"a200867b7d4863d7af58c20cf6f76b2db13fed97d0a61db301b360e661a6fc27"} Feb 03 13:28:46 crc kubenswrapper[4820]: I0203 13:28:46.604754 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6mwff" event={"ID":"7ccb901e-2e35-48c8-9f03-66bff3ac44ea","Type":"ContainerDied","Data":"8d2f7015436aa60cfeca24aba3f8876be45f9e95dfafce34ad74bc033663105e"} Feb 03 13:28:46 crc kubenswrapper[4820]: I0203 13:28:46.604780 4820 scope.go:117] "RemoveContainer" containerID="a200867b7d4863d7af58c20cf6f76b2db13fed97d0a61db301b360e661a6fc27" Feb 03 13:28:46 crc kubenswrapper[4820]: I0203 13:28:46.624849 4820 scope.go:117] "RemoveContainer" containerID="28440d5038f3842e78ef53de892546b6aa9e0edcc8e13e43481d548684fd0574" Feb 03 13:28:46 crc kubenswrapper[4820]: I0203 13:28:46.640229 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6mwff"] Feb 03 13:28:46 crc kubenswrapper[4820]: I0203 13:28:46.649567 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-6mwff"] Feb 03 13:28:46 crc kubenswrapper[4820]: I0203 13:28:46.675091 4820 scope.go:117] "RemoveContainer" containerID="b6c1c20a2fc15c90b2cb40ae720ac5ff9be778b723d4c134c5d9c618ae4c2aef" Feb 03 13:28:46 crc kubenswrapper[4820]: I0203 13:28:46.714179 4820 scope.go:117] "RemoveContainer" containerID="a200867b7d4863d7af58c20cf6f76b2db13fed97d0a61db301b360e661a6fc27" Feb 03 13:28:46 crc kubenswrapper[4820]: E0203 13:28:46.714586 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a200867b7d4863d7af58c20cf6f76b2db13fed97d0a61db301b360e661a6fc27\": container with ID starting with a200867b7d4863d7af58c20cf6f76b2db13fed97d0a61db301b360e661a6fc27 not found: ID does not exist" containerID="a200867b7d4863d7af58c20cf6f76b2db13fed97d0a61db301b360e661a6fc27" Feb 03 13:28:46 crc kubenswrapper[4820]: I0203 13:28:46.714629 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a200867b7d4863d7af58c20cf6f76b2db13fed97d0a61db301b360e661a6fc27"} err="failed to get container status 
\"a200867b7d4863d7af58c20cf6f76b2db13fed97d0a61db301b360e661a6fc27\": rpc error: code = NotFound desc = could not find container \"a200867b7d4863d7af58c20cf6f76b2db13fed97d0a61db301b360e661a6fc27\": container with ID starting with a200867b7d4863d7af58c20cf6f76b2db13fed97d0a61db301b360e661a6fc27 not found: ID does not exist" Feb 03 13:28:46 crc kubenswrapper[4820]: I0203 13:28:46.714654 4820 scope.go:117] "RemoveContainer" containerID="28440d5038f3842e78ef53de892546b6aa9e0edcc8e13e43481d548684fd0574" Feb 03 13:28:46 crc kubenswrapper[4820]: E0203 13:28:46.714913 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"28440d5038f3842e78ef53de892546b6aa9e0edcc8e13e43481d548684fd0574\": container with ID starting with 28440d5038f3842e78ef53de892546b6aa9e0edcc8e13e43481d548684fd0574 not found: ID does not exist" containerID="28440d5038f3842e78ef53de892546b6aa9e0edcc8e13e43481d548684fd0574" Feb 03 13:28:46 crc kubenswrapper[4820]: I0203 13:28:46.714949 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"28440d5038f3842e78ef53de892546b6aa9e0edcc8e13e43481d548684fd0574"} err="failed to get container status \"28440d5038f3842e78ef53de892546b6aa9e0edcc8e13e43481d548684fd0574\": rpc error: code = NotFound desc = could not find container \"28440d5038f3842e78ef53de892546b6aa9e0edcc8e13e43481d548684fd0574\": container with ID starting with 28440d5038f3842e78ef53de892546b6aa9e0edcc8e13e43481d548684fd0574 not found: ID does not exist" Feb 03 13:28:46 crc kubenswrapper[4820]: I0203 13:28:46.714968 4820 scope.go:117] "RemoveContainer" containerID="b6c1c20a2fc15c90b2cb40ae720ac5ff9be778b723d4c134c5d9c618ae4c2aef" Feb 03 13:28:46 crc kubenswrapper[4820]: E0203 13:28:46.715344 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b6c1c20a2fc15c90b2cb40ae720ac5ff9be778b723d4c134c5d9c618ae4c2aef\": container with ID starting with b6c1c20a2fc15c90b2cb40ae720ac5ff9be778b723d4c134c5d9c618ae4c2aef not found: ID does not exist" containerID="b6c1c20a2fc15c90b2cb40ae720ac5ff9be778b723d4c134c5d9c618ae4c2aef" Feb 03 13:28:46 crc kubenswrapper[4820]: I0203 13:28:46.715412 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b6c1c20a2fc15c90b2cb40ae720ac5ff9be778b723d4c134c5d9c618ae4c2aef"} err="failed to get container status \"b6c1c20a2fc15c90b2cb40ae720ac5ff9be778b723d4c134c5d9c618ae4c2aef\": rpc error: code = NotFound desc = could not find container \"b6c1c20a2fc15c90b2cb40ae720ac5ff9be778b723d4c134c5d9c618ae4c2aef\": container with ID starting with b6c1c20a2fc15c90b2cb40ae720ac5ff9be778b723d4c134c5d9c618ae4c2aef not found: ID does not exist" Feb 03 13:28:47 crc kubenswrapper[4820]: I0203 13:28:47.155143 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7ccb901e-2e35-48c8-9f03-66bff3ac44ea" path="/var/lib/kubelet/pods/7ccb901e-2e35-48c8-9f03-66bff3ac44ea/volumes" Feb 03 13:29:01 crc kubenswrapper[4820]: I0203 13:29:01.365641 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qj7xr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 13:29:01 crc kubenswrapper[4820]: I0203 13:29:01.366278 4820 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 13:29:31 crc kubenswrapper[4820]: I0203 13:29:31.366168 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qj7xr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 13:29:31 crc kubenswrapper[4820]: I0203 13:29:31.366788 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 13:29:31 crc kubenswrapper[4820]: I0203 13:29:31.366903 4820 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" Feb 03 13:29:31 crc kubenswrapper[4820]: I0203 13:29:31.367987 4820 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"29e62cbb021fe96e31a97c5a8fed00638c88a00cae134f18c7cf36231623d398"} pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 03 13:29:31 crc kubenswrapper[4820]: I0203 13:29:31.368060 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" containerName="machine-config-daemon" containerID="cri-o://29e62cbb021fe96e31a97c5a8fed00638c88a00cae134f18c7cf36231623d398" gracePeriod=600 Feb 03 13:29:46 crc kubenswrapper[4820]: I0203 13:29:46.677692 4820 trace.go:236] Trace[1972439183]: "Calculate volume metrics of prometheus-metric-storage-db for pod openstack/prometheus-metric-storage-0" (03-Feb-2026 13:29:27.216) (total time: 19460ms): Feb 03 13:29:46 crc kubenswrapper[4820]: Trace[1972439183]: [19.460948494s] [19.460948494s] END Feb 03 13:29:46 crc kubenswrapper[4820]: E0203 13:29:46.778379 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qj7xr_openshift-machine-config-operator(2c02def6-29f2-448e-80ec-0c8ee290f053)\"" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" Feb 03 13:29:47 crc kubenswrapper[4820]: I0203 13:29:47.391689 4820 generic.go:334] "Generic (PLEG): container finished" podID="2c02def6-29f2-448e-80ec-0c8ee290f053" containerID="29e62cbb021fe96e31a97c5a8fed00638c88a00cae134f18c7cf36231623d398" exitCode=0 Feb 03 13:29:47 crc kubenswrapper[4820]: I0203 13:29:47.391785 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" event={"ID":"2c02def6-29f2-448e-80ec-0c8ee290f053","Type":"ContainerDied","Data":"29e62cbb021fe96e31a97c5a8fed00638c88a00cae134f18c7cf36231623d398"} Feb 03 13:29:47 crc kubenswrapper[4820]: I0203 13:29:47.392059 
4820 scope.go:117] "RemoveContainer" containerID="6136771e846a33033d96d5790c917029ada03c3fd7ec26d72b93b2c6cf3cc1bc" Feb 03 13:29:47 crc kubenswrapper[4820]: I0203 13:29:47.392828 4820 scope.go:117] "RemoveContainer" containerID="29e62cbb021fe96e31a97c5a8fed00638c88a00cae134f18c7cf36231623d398" Feb 03 13:29:47 crc kubenswrapper[4820]: E0203 13:29:47.393127 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qj7xr_openshift-machine-config-operator(2c02def6-29f2-448e-80ec-0c8ee290f053)\"" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" Feb 03 13:30:00 crc kubenswrapper[4820]: I0203 13:30:00.181834 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29502090-rz555"] Feb 03 13:30:00 crc kubenswrapper[4820]: E0203 13:30:00.183223 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d2f8d61-4857-4932-8d9e-52fab99eff24" containerName="extract-utilities" Feb 03 13:30:00 crc kubenswrapper[4820]: I0203 13:30:00.183256 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d2f8d61-4857-4932-8d9e-52fab99eff24" containerName="extract-utilities" Feb 03 13:30:00 crc kubenswrapper[4820]: E0203 13:30:00.183283 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d2f8d61-4857-4932-8d9e-52fab99eff24" containerName="extract-content" Feb 03 13:30:00 crc kubenswrapper[4820]: I0203 13:30:00.183296 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d2f8d61-4857-4932-8d9e-52fab99eff24" containerName="extract-content" Feb 03 13:30:00 crc kubenswrapper[4820]: E0203 13:30:00.183312 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d2f8d61-4857-4932-8d9e-52fab99eff24" containerName="registry-server" Feb 03 13:30:00 crc kubenswrapper[4820]: I0203 13:30:00.183325 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d2f8d61-4857-4932-8d9e-52fab99eff24" containerName="registry-server" Feb 03 13:30:00 crc kubenswrapper[4820]: E0203 13:30:00.183364 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ccb901e-2e35-48c8-9f03-66bff3ac44ea" containerName="extract-content" Feb 03 13:30:00 crc kubenswrapper[4820]: I0203 13:30:00.183376 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ccb901e-2e35-48c8-9f03-66bff3ac44ea" containerName="extract-content" Feb 03 13:30:00 crc kubenswrapper[4820]: E0203 13:30:00.183410 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ccb901e-2e35-48c8-9f03-66bff3ac44ea" containerName="registry-server" Feb 03 13:30:00 crc kubenswrapper[4820]: I0203 13:30:00.183422 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ccb901e-2e35-48c8-9f03-66bff3ac44ea" containerName="registry-server" Feb 03 13:30:00 crc kubenswrapper[4820]: E0203 13:30:00.183463 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ccb901e-2e35-48c8-9f03-66bff3ac44ea" containerName="extract-utilities" Feb 03 13:30:00 crc kubenswrapper[4820]: I0203 13:30:00.183475 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ccb901e-2e35-48c8-9f03-66bff3ac44ea" containerName="extract-utilities" Feb 03 13:30:00 crc kubenswrapper[4820]: I0203 13:30:00.183961 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ccb901e-2e35-48c8-9f03-66bff3ac44ea" 
containerName="registry-server" Feb 03 13:30:00 crc kubenswrapper[4820]: I0203 13:30:00.184005 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d2f8d61-4857-4932-8d9e-52fab99eff24" containerName="registry-server" Feb 03 13:30:00 crc kubenswrapper[4820]: I0203 13:30:00.185336 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29502090-rz555" Feb 03 13:30:00 crc kubenswrapper[4820]: I0203 13:30:00.190504 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 03 13:30:00 crc kubenswrapper[4820]: I0203 13:30:00.190504 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 03 13:30:00 crc kubenswrapper[4820]: I0203 13:30:00.215045 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29502090-rz555"] Feb 03 13:30:00 crc kubenswrapper[4820]: I0203 13:30:00.283873 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fwgdm\" (UniqueName: \"kubernetes.io/projected/a8292f5a-3be9-4419-b320-d7dfb93ae7a4-kube-api-access-fwgdm\") pod \"collect-profiles-29502090-rz555\" (UID: \"a8292f5a-3be9-4419-b320-d7dfb93ae7a4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29502090-rz555" Feb 03 13:30:00 crc kubenswrapper[4820]: I0203 13:30:00.284077 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a8292f5a-3be9-4419-b320-d7dfb93ae7a4-secret-volume\") pod \"collect-profiles-29502090-rz555\" (UID: \"a8292f5a-3be9-4419-b320-d7dfb93ae7a4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29502090-rz555" Feb 03 13:30:00 crc kubenswrapper[4820]: I0203 13:30:00.284124 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a8292f5a-3be9-4419-b320-d7dfb93ae7a4-config-volume\") pod \"collect-profiles-29502090-rz555\" (UID: \"a8292f5a-3be9-4419-b320-d7dfb93ae7a4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29502090-rz555" Feb 03 13:30:00 crc kubenswrapper[4820]: I0203 13:30:00.386338 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fwgdm\" (UniqueName: \"kubernetes.io/projected/a8292f5a-3be9-4419-b320-d7dfb93ae7a4-kube-api-access-fwgdm\") pod \"collect-profiles-29502090-rz555\" (UID: \"a8292f5a-3be9-4419-b320-d7dfb93ae7a4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29502090-rz555" Feb 03 13:30:00 crc kubenswrapper[4820]: I0203 13:30:00.386488 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a8292f5a-3be9-4419-b320-d7dfb93ae7a4-secret-volume\") pod \"collect-profiles-29502090-rz555\" (UID: \"a8292f5a-3be9-4419-b320-d7dfb93ae7a4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29502090-rz555" Feb 03 13:30:00 crc kubenswrapper[4820]: I0203 13:30:00.386519 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a8292f5a-3be9-4419-b320-d7dfb93ae7a4-config-volume\") pod \"collect-profiles-29502090-rz555\" (UID: 
\"a8292f5a-3be9-4419-b320-d7dfb93ae7a4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29502090-rz555" Feb 03 13:30:00 crc kubenswrapper[4820]: I0203 13:30:00.387739 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a8292f5a-3be9-4419-b320-d7dfb93ae7a4-config-volume\") pod \"collect-profiles-29502090-rz555\" (UID: \"a8292f5a-3be9-4419-b320-d7dfb93ae7a4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29502090-rz555" Feb 03 13:30:00 crc kubenswrapper[4820]: I0203 13:30:00.404256 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a8292f5a-3be9-4419-b320-d7dfb93ae7a4-secret-volume\") pod \"collect-profiles-29502090-rz555\" (UID: \"a8292f5a-3be9-4419-b320-d7dfb93ae7a4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29502090-rz555" Feb 03 13:30:00 crc kubenswrapper[4820]: I0203 13:30:00.407034 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fwgdm\" (UniqueName: \"kubernetes.io/projected/a8292f5a-3be9-4419-b320-d7dfb93ae7a4-kube-api-access-fwgdm\") pod \"collect-profiles-29502090-rz555\" (UID: \"a8292f5a-3be9-4419-b320-d7dfb93ae7a4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29502090-rz555" Feb 03 13:30:00 crc kubenswrapper[4820]: I0203 13:30:00.515726 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29502090-rz555" Feb 03 13:30:01 crc kubenswrapper[4820]: I0203 13:30:01.954484 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29502090-rz555"] Feb 03 13:30:02 crc kubenswrapper[4820]: I0203 13:30:02.143103 4820 scope.go:117] "RemoveContainer" containerID="29e62cbb021fe96e31a97c5a8fed00638c88a00cae134f18c7cf36231623d398" Feb 03 13:30:02 crc kubenswrapper[4820]: E0203 13:30:02.143716 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qj7xr_openshift-machine-config-operator(2c02def6-29f2-448e-80ec-0c8ee290f053)\"" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" Feb 03 13:30:02 crc kubenswrapper[4820]: I0203 13:30:02.546198 4820 generic.go:334] "Generic (PLEG): container finished" podID="a8292f5a-3be9-4419-b320-d7dfb93ae7a4" containerID="74ae055d42c53c3f7d57531f255c22b2260d70fea07ea235cbd04be79064b468" exitCode=0 Feb 03 13:30:02 crc kubenswrapper[4820]: I0203 13:30:02.546253 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29502090-rz555" event={"ID":"a8292f5a-3be9-4419-b320-d7dfb93ae7a4","Type":"ContainerDied","Data":"74ae055d42c53c3f7d57531f255c22b2260d70fea07ea235cbd04be79064b468"} Feb 03 13:30:02 crc kubenswrapper[4820]: I0203 13:30:02.546283 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29502090-rz555" event={"ID":"a8292f5a-3be9-4419-b320-d7dfb93ae7a4","Type":"ContainerStarted","Data":"0347983d577b5f86199f700cd3184e997d4098c2f33e816d98ca98bae6c62359"} Feb 03 13:30:04 crc kubenswrapper[4820]: I0203 13:30:04.258711 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29502090-rz555" Feb 03 13:30:04 crc kubenswrapper[4820]: I0203 13:30:04.371591 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a8292f5a-3be9-4419-b320-d7dfb93ae7a4-config-volume\") pod \"a8292f5a-3be9-4419-b320-d7dfb93ae7a4\" (UID: \"a8292f5a-3be9-4419-b320-d7dfb93ae7a4\") " Feb 03 13:30:04 crc kubenswrapper[4820]: I0203 13:30:04.371659 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a8292f5a-3be9-4419-b320-d7dfb93ae7a4-secret-volume\") pod \"a8292f5a-3be9-4419-b320-d7dfb93ae7a4\" (UID: \"a8292f5a-3be9-4419-b320-d7dfb93ae7a4\") " Feb 03 13:30:04 crc kubenswrapper[4820]: I0203 13:30:04.371969 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fwgdm\" (UniqueName: \"kubernetes.io/projected/a8292f5a-3be9-4419-b320-d7dfb93ae7a4-kube-api-access-fwgdm\") pod \"a8292f5a-3be9-4419-b320-d7dfb93ae7a4\" (UID: \"a8292f5a-3be9-4419-b320-d7dfb93ae7a4\") " Feb 03 13:30:04 crc kubenswrapper[4820]: I0203 13:30:04.372668 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a8292f5a-3be9-4419-b320-d7dfb93ae7a4-config-volume" (OuterVolumeSpecName: "config-volume") pod "a8292f5a-3be9-4419-b320-d7dfb93ae7a4" (UID: "a8292f5a-3be9-4419-b320-d7dfb93ae7a4"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 13:30:04 crc kubenswrapper[4820]: I0203 13:30:04.378435 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8292f5a-3be9-4419-b320-d7dfb93ae7a4-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "a8292f5a-3be9-4419-b320-d7dfb93ae7a4" (UID: "a8292f5a-3be9-4419-b320-d7dfb93ae7a4"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:30:04 crc kubenswrapper[4820]: I0203 13:30:04.378500 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8292f5a-3be9-4419-b320-d7dfb93ae7a4-kube-api-access-fwgdm" (OuterVolumeSpecName: "kube-api-access-fwgdm") pod "a8292f5a-3be9-4419-b320-d7dfb93ae7a4" (UID: "a8292f5a-3be9-4419-b320-d7dfb93ae7a4"). InnerVolumeSpecName "kube-api-access-fwgdm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:30:04 crc kubenswrapper[4820]: I0203 13:30:04.475072 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fwgdm\" (UniqueName: \"kubernetes.io/projected/a8292f5a-3be9-4419-b320-d7dfb93ae7a4-kube-api-access-fwgdm\") on node \"crc\" DevicePath \"\"" Feb 03 13:30:04 crc kubenswrapper[4820]: I0203 13:30:04.475109 4820 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a8292f5a-3be9-4419-b320-d7dfb93ae7a4-config-volume\") on node \"crc\" DevicePath \"\"" Feb 03 13:30:04 crc kubenswrapper[4820]: I0203 13:30:04.475119 4820 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a8292f5a-3be9-4419-b320-d7dfb93ae7a4-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 03 13:30:04 crc kubenswrapper[4820]: I0203 13:30:04.566784 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29502090-rz555" event={"ID":"a8292f5a-3be9-4419-b320-d7dfb93ae7a4","Type":"ContainerDied","Data":"0347983d577b5f86199f700cd3184e997d4098c2f33e816d98ca98bae6c62359"} Feb 03 13:30:04 crc kubenswrapper[4820]: I0203 13:30:04.566829 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0347983d577b5f86199f700cd3184e997d4098c2f33e816d98ca98bae6c62359" Feb 03 13:30:04 crc kubenswrapper[4820]: I0203 13:30:04.566859 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29502090-rz555" Feb 03 13:30:05 crc kubenswrapper[4820]: I0203 13:30:05.349595 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29502045-tgglm"] Feb 03 13:30:05 crc kubenswrapper[4820]: I0203 13:30:05.362091 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29502045-tgglm"] Feb 03 13:30:07 crc kubenswrapper[4820]: I0203 13:30:07.168737 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="08459177-65bc-4cf2-850b-3d8db214d191" path="/var/lib/kubelet/pods/08459177-65bc-4cf2-850b-3d8db214d191/volumes" Feb 03 13:30:14 crc kubenswrapper[4820]: I0203 13:30:14.147462 4820 scope.go:117] "RemoveContainer" containerID="29e62cbb021fe96e31a97c5a8fed00638c88a00cae134f18c7cf36231623d398" Feb 03 13:30:14 crc kubenswrapper[4820]: E0203 13:30:14.148747 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qj7xr_openshift-machine-config-operator(2c02def6-29f2-448e-80ec-0c8ee290f053)\"" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" Feb 03 13:30:29 crc kubenswrapper[4820]: I0203 13:30:29.146252 4820 scope.go:117] "RemoveContainer" containerID="29e62cbb021fe96e31a97c5a8fed00638c88a00cae134f18c7cf36231623d398" Feb 03 13:30:29 crc kubenswrapper[4820]: E0203 13:30:29.146936 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qj7xr_openshift-machine-config-operator(2c02def6-29f2-448e-80ec-0c8ee290f053)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" Feb 03 13:30:42 crc kubenswrapper[4820]: I0203 13:30:42.142600 4820 scope.go:117] "RemoveContainer" containerID="29e62cbb021fe96e31a97c5a8fed00638c88a00cae134f18c7cf36231623d398" Feb 03 13:30:42 crc kubenswrapper[4820]: E0203 13:30:42.143409 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qj7xr_openshift-machine-config-operator(2c02def6-29f2-448e-80ec-0c8ee290f053)\"" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" Feb 03 13:30:46 crc kubenswrapper[4820]: I0203 13:30:46.743531 4820 scope.go:117] "RemoveContainer" containerID="c2fad438c4736c8b6f67398598140c6ea893222685c54fe567bc3793d381c751" Feb 03 13:30:54 crc kubenswrapper[4820]: I0203 13:30:54.142387 4820 scope.go:117] "RemoveContainer" containerID="29e62cbb021fe96e31a97c5a8fed00638c88a00cae134f18c7cf36231623d398" Feb 03 13:30:54 crc kubenswrapper[4820]: E0203 13:30:54.143355 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qj7xr_openshift-machine-config-operator(2c02def6-29f2-448e-80ec-0c8ee290f053)\"" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" Feb 03 13:31:09 crc kubenswrapper[4820]: I0203 13:31:09.142984 4820 scope.go:117] "RemoveContainer" containerID="29e62cbb021fe96e31a97c5a8fed00638c88a00cae134f18c7cf36231623d398" Feb 03 13:31:09 crc kubenswrapper[4820]: E0203 13:31:09.143905 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qj7xr_openshift-machine-config-operator(2c02def6-29f2-448e-80ec-0c8ee290f053)\"" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" Feb 03 13:31:23 crc kubenswrapper[4820]: I0203 13:31:23.149734 4820 scope.go:117] "RemoveContainer" containerID="29e62cbb021fe96e31a97c5a8fed00638c88a00cae134f18c7cf36231623d398" Feb 03 13:31:23 crc kubenswrapper[4820]: E0203 13:31:23.150348 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qj7xr_openshift-machine-config-operator(2c02def6-29f2-448e-80ec-0c8ee290f053)\"" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" Feb 03 13:31:35 crc kubenswrapper[4820]: I0203 13:31:35.143785 4820 scope.go:117] "RemoveContainer" containerID="29e62cbb021fe96e31a97c5a8fed00638c88a00cae134f18c7cf36231623d398" Feb 03 13:31:35 crc kubenswrapper[4820]: E0203 13:31:35.144754 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qj7xr_openshift-machine-config-operator(2c02def6-29f2-448e-80ec-0c8ee290f053)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" Feb 03 13:31:47 crc kubenswrapper[4820]: I0203 13:31:47.144282 4820 scope.go:117] "RemoveContainer" containerID="29e62cbb021fe96e31a97c5a8fed00638c88a00cae134f18c7cf36231623d398" Feb 03 13:31:47 crc kubenswrapper[4820]: E0203 13:31:47.145045 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qj7xr_openshift-machine-config-operator(2c02def6-29f2-448e-80ec-0c8ee290f053)\"" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" Feb 03 13:32:02 crc kubenswrapper[4820]: I0203 13:32:02.143405 4820 scope.go:117] "RemoveContainer" containerID="29e62cbb021fe96e31a97c5a8fed00638c88a00cae134f18c7cf36231623d398" Feb 03 13:32:02 crc kubenswrapper[4820]: E0203 13:32:02.144474 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qj7xr_openshift-machine-config-operator(2c02def6-29f2-448e-80ec-0c8ee290f053)\"" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" Feb 03 13:32:16 crc kubenswrapper[4820]: I0203 13:32:16.142832 4820 scope.go:117] "RemoveContainer" containerID="29e62cbb021fe96e31a97c5a8fed00638c88a00cae134f18c7cf36231623d398" Feb 03 13:32:16 crc kubenswrapper[4820]: E0203 13:32:16.143596 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qj7xr_openshift-machine-config-operator(2c02def6-29f2-448e-80ec-0c8ee290f053)\"" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" Feb 03 13:32:29 crc kubenswrapper[4820]: I0203 13:32:29.143671 4820 scope.go:117] "RemoveContainer" containerID="29e62cbb021fe96e31a97c5a8fed00638c88a00cae134f18c7cf36231623d398" Feb 03 13:32:29 crc kubenswrapper[4820]: E0203 13:32:29.144697 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qj7xr_openshift-machine-config-operator(2c02def6-29f2-448e-80ec-0c8ee290f053)\"" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" Feb 03 13:32:42 crc kubenswrapper[4820]: I0203 13:32:42.143211 4820 scope.go:117] "RemoveContainer" containerID="29e62cbb021fe96e31a97c5a8fed00638c88a00cae134f18c7cf36231623d398" Feb 03 13:32:42 crc kubenswrapper[4820]: E0203 13:32:42.145060 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qj7xr_openshift-machine-config-operator(2c02def6-29f2-448e-80ec-0c8ee290f053)\"" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" Feb 03 13:32:57 crc kubenswrapper[4820]: I0203 13:32:57.810311 4820 
prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-manager-855575688d-cl9c5" podUID="ffe7d059-602c-4fbc-bd5e-4c092cc6f3db" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.98:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 03 13:32:57 crc kubenswrapper[4820]: I0203 13:32:57.939547 4820 scope.go:117] "RemoveContainer" containerID="29e62cbb021fe96e31a97c5a8fed00638c88a00cae134f18c7cf36231623d398" Feb 03 13:32:57 crc kubenswrapper[4820]: E0203 13:32:57.941440 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qj7xr_openshift-machine-config-operator(2c02def6-29f2-448e-80ec-0c8ee290f053)\"" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" Feb 03 13:33:13 crc kubenswrapper[4820]: I0203 13:33:13.151832 4820 scope.go:117] "RemoveContainer" containerID="29e62cbb021fe96e31a97c5a8fed00638c88a00cae134f18c7cf36231623d398" Feb 03 13:33:13 crc kubenswrapper[4820]: E0203 13:33:13.152796 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qj7xr_openshift-machine-config-operator(2c02def6-29f2-448e-80ec-0c8ee290f053)\"" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" Feb 03 13:33:23 crc kubenswrapper[4820]: I0203 13:33:23.424773 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-d6dlr"] Feb 03 13:33:23 crc kubenswrapper[4820]: E0203 13:33:23.426972 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8292f5a-3be9-4419-b320-d7dfb93ae7a4" containerName="collect-profiles" Feb 03 13:33:23 crc kubenswrapper[4820]: I0203 13:33:23.427020 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8292f5a-3be9-4419-b320-d7dfb93ae7a4" containerName="collect-profiles" Feb 03 13:33:23 crc kubenswrapper[4820]: I0203 13:33:23.427444 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8292f5a-3be9-4419-b320-d7dfb93ae7a4" containerName="collect-profiles" Feb 03 13:33:23 crc kubenswrapper[4820]: I0203 13:33:23.431388 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-d6dlr" Feb 03 13:33:23 crc kubenswrapper[4820]: I0203 13:33:23.444163 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f02ca55c-cf49-48dd-bebc-3fd02f909b5d-catalog-content\") pod \"redhat-operators-d6dlr\" (UID: \"f02ca55c-cf49-48dd-bebc-3fd02f909b5d\") " pod="openshift-marketplace/redhat-operators-d6dlr" Feb 03 13:33:23 crc kubenswrapper[4820]: I0203 13:33:23.444276 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f02ca55c-cf49-48dd-bebc-3fd02f909b5d-utilities\") pod \"redhat-operators-d6dlr\" (UID: \"f02ca55c-cf49-48dd-bebc-3fd02f909b5d\") " pod="openshift-marketplace/redhat-operators-d6dlr" Feb 03 13:33:23 crc kubenswrapper[4820]: I0203 13:33:23.444376 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gl4mn\" (UniqueName: \"kubernetes.io/projected/f02ca55c-cf49-48dd-bebc-3fd02f909b5d-kube-api-access-gl4mn\") pod \"redhat-operators-d6dlr\" (UID: \"f02ca55c-cf49-48dd-bebc-3fd02f909b5d\") " pod="openshift-marketplace/redhat-operators-d6dlr" Feb 03 13:33:23 crc kubenswrapper[4820]: I0203 13:33:23.447788 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-d6dlr"] Feb 03 13:33:23 crc kubenswrapper[4820]: I0203 13:33:23.546938 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f02ca55c-cf49-48dd-bebc-3fd02f909b5d-utilities\") pod \"redhat-operators-d6dlr\" (UID: \"f02ca55c-cf49-48dd-bebc-3fd02f909b5d\") " pod="openshift-marketplace/redhat-operators-d6dlr" Feb 03 13:33:23 crc kubenswrapper[4820]: I0203 13:33:23.547052 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gl4mn\" (UniqueName: \"kubernetes.io/projected/f02ca55c-cf49-48dd-bebc-3fd02f909b5d-kube-api-access-gl4mn\") pod \"redhat-operators-d6dlr\" (UID: \"f02ca55c-cf49-48dd-bebc-3fd02f909b5d\") " pod="openshift-marketplace/redhat-operators-d6dlr" Feb 03 13:33:23 crc kubenswrapper[4820]: I0203 13:33:23.547231 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f02ca55c-cf49-48dd-bebc-3fd02f909b5d-catalog-content\") pod \"redhat-operators-d6dlr\" (UID: \"f02ca55c-cf49-48dd-bebc-3fd02f909b5d\") " pod="openshift-marketplace/redhat-operators-d6dlr" Feb 03 13:33:23 crc kubenswrapper[4820]: I0203 13:33:23.547759 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f02ca55c-cf49-48dd-bebc-3fd02f909b5d-catalog-content\") pod \"redhat-operators-d6dlr\" (UID: \"f02ca55c-cf49-48dd-bebc-3fd02f909b5d\") " pod="openshift-marketplace/redhat-operators-d6dlr" Feb 03 13:33:23 crc kubenswrapper[4820]: I0203 13:33:23.549419 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f02ca55c-cf49-48dd-bebc-3fd02f909b5d-utilities\") pod \"redhat-operators-d6dlr\" (UID: \"f02ca55c-cf49-48dd-bebc-3fd02f909b5d\") " pod="openshift-marketplace/redhat-operators-d6dlr" Feb 03 13:33:23 crc kubenswrapper[4820]: I0203 13:33:23.572251 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-gl4mn\" (UniqueName: \"kubernetes.io/projected/f02ca55c-cf49-48dd-bebc-3fd02f909b5d-kube-api-access-gl4mn\") pod \"redhat-operators-d6dlr\" (UID: \"f02ca55c-cf49-48dd-bebc-3fd02f909b5d\") " pod="openshift-marketplace/redhat-operators-d6dlr" Feb 03 13:33:23 crc kubenswrapper[4820]: I0203 13:33:23.766189 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-d6dlr" Feb 03 13:33:24 crc kubenswrapper[4820]: I0203 13:33:24.620313 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-d6dlr"] Feb 03 13:33:25 crc kubenswrapper[4820]: I0203 13:33:25.143038 4820 scope.go:117] "RemoveContainer" containerID="29e62cbb021fe96e31a97c5a8fed00638c88a00cae134f18c7cf36231623d398" Feb 03 13:33:25 crc kubenswrapper[4820]: E0203 13:33:25.143750 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qj7xr_openshift-machine-config-operator(2c02def6-29f2-448e-80ec-0c8ee290f053)\"" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" Feb 03 13:33:25 crc kubenswrapper[4820]: I0203 13:33:25.325175 4820 generic.go:334] "Generic (PLEG): container finished" podID="f02ca55c-cf49-48dd-bebc-3fd02f909b5d" containerID="9e1945eb3a9d8843098c6ce0680a5d497bc798d6ca472f894797b2d0bf7df74d" exitCode=0 Feb 03 13:33:25 crc kubenswrapper[4820]: I0203 13:33:25.325225 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d6dlr" event={"ID":"f02ca55c-cf49-48dd-bebc-3fd02f909b5d","Type":"ContainerDied","Data":"9e1945eb3a9d8843098c6ce0680a5d497bc798d6ca472f894797b2d0bf7df74d"} Feb 03 13:33:25 crc kubenswrapper[4820]: I0203 13:33:25.325254 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d6dlr" event={"ID":"f02ca55c-cf49-48dd-bebc-3fd02f909b5d","Type":"ContainerStarted","Data":"349778535a4b1a12b22f9f6a1996930100be08cb45f7984566d961315167cb18"} Feb 03 13:33:25 crc kubenswrapper[4820]: I0203 13:33:25.327680 4820 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 03 13:33:26 crc kubenswrapper[4820]: I0203 13:33:26.339244 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d6dlr" event={"ID":"f02ca55c-cf49-48dd-bebc-3fd02f909b5d","Type":"ContainerStarted","Data":"64783a82f0bed9cb721f53df6055f7e876d759a80a2bc1891f16f8844b80209a"} Feb 03 13:33:28 crc kubenswrapper[4820]: I0203 13:33:28.513734 4820 generic.go:334] "Generic (PLEG): container finished" podID="f02ca55c-cf49-48dd-bebc-3fd02f909b5d" containerID="64783a82f0bed9cb721f53df6055f7e876d759a80a2bc1891f16f8844b80209a" exitCode=0 Feb 03 13:33:28 crc kubenswrapper[4820]: I0203 13:33:28.514364 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d6dlr" event={"ID":"f02ca55c-cf49-48dd-bebc-3fd02f909b5d","Type":"ContainerDied","Data":"64783a82f0bed9cb721f53df6055f7e876d759a80a2bc1891f16f8844b80209a"} Feb 03 13:33:31 crc kubenswrapper[4820]: I0203 13:33:31.556955 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d6dlr" 
event={"ID":"f02ca55c-cf49-48dd-bebc-3fd02f909b5d","Type":"ContainerStarted","Data":"cc37c85f9d176753817ce8e7487cba78912e3e4c308f80cc75e2c6497a6501e6"} Feb 03 13:33:31 crc kubenswrapper[4820]: I0203 13:33:31.593898 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-d6dlr" podStartSLOduration=3.325382682 podStartE2EDuration="8.593858053s" podCreationTimestamp="2026-02-03 13:33:23 +0000 UTC" firstStartedPulling="2026-02-03 13:33:25.327320613 +0000 UTC m=+5322.850396477" lastFinishedPulling="2026-02-03 13:33:30.595795984 +0000 UTC m=+5328.118871848" observedRunningTime="2026-02-03 13:33:31.58330898 +0000 UTC m=+5329.106384864" watchObservedRunningTime="2026-02-03 13:33:31.593858053 +0000 UTC m=+5329.116933917" Feb 03 13:33:33 crc kubenswrapper[4820]: I0203 13:33:33.766864 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-d6dlr" Feb 03 13:33:33 crc kubenswrapper[4820]: I0203 13:33:33.767245 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-d6dlr" Feb 03 13:33:35 crc kubenswrapper[4820]: I0203 13:33:35.076713 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-d6dlr" podUID="f02ca55c-cf49-48dd-bebc-3fd02f909b5d" containerName="registry-server" probeResult="failure" output=< Feb 03 13:33:35 crc kubenswrapper[4820]: timeout: failed to connect service ":50051" within 1s Feb 03 13:33:35 crc kubenswrapper[4820]: > Feb 03 13:33:37 crc kubenswrapper[4820]: I0203 13:33:37.143511 4820 scope.go:117] "RemoveContainer" containerID="29e62cbb021fe96e31a97c5a8fed00638c88a00cae134f18c7cf36231623d398" Feb 03 13:33:37 crc kubenswrapper[4820]: E0203 13:33:37.144480 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qj7xr_openshift-machine-config-operator(2c02def6-29f2-448e-80ec-0c8ee290f053)\"" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" Feb 03 13:33:44 crc kubenswrapper[4820]: I0203 13:33:44.042998 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-d6dlr" Feb 03 13:33:44 crc kubenswrapper[4820]: I0203 13:33:44.096607 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-d6dlr" Feb 03 13:33:44 crc kubenswrapper[4820]: I0203 13:33:44.289432 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-d6dlr"] Feb 03 13:33:45 crc kubenswrapper[4820]: I0203 13:33:45.790473 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-d6dlr" podUID="f02ca55c-cf49-48dd-bebc-3fd02f909b5d" containerName="registry-server" containerID="cri-o://cc37c85f9d176753817ce8e7487cba78912e3e4c308f80cc75e2c6497a6501e6" gracePeriod=2 Feb 03 13:33:46 crc kubenswrapper[4820]: I0203 13:33:46.408236 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-d6dlr" Feb 03 13:33:46 crc kubenswrapper[4820]: I0203 13:33:46.565765 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gl4mn\" (UniqueName: \"kubernetes.io/projected/f02ca55c-cf49-48dd-bebc-3fd02f909b5d-kube-api-access-gl4mn\") pod \"f02ca55c-cf49-48dd-bebc-3fd02f909b5d\" (UID: \"f02ca55c-cf49-48dd-bebc-3fd02f909b5d\") " Feb 03 13:33:46 crc kubenswrapper[4820]: I0203 13:33:46.565993 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f02ca55c-cf49-48dd-bebc-3fd02f909b5d-catalog-content\") pod \"f02ca55c-cf49-48dd-bebc-3fd02f909b5d\" (UID: \"f02ca55c-cf49-48dd-bebc-3fd02f909b5d\") " Feb 03 13:33:46 crc kubenswrapper[4820]: I0203 13:33:46.566031 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f02ca55c-cf49-48dd-bebc-3fd02f909b5d-utilities\") pod \"f02ca55c-cf49-48dd-bebc-3fd02f909b5d\" (UID: \"f02ca55c-cf49-48dd-bebc-3fd02f909b5d\") " Feb 03 13:33:46 crc kubenswrapper[4820]: I0203 13:33:46.567492 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f02ca55c-cf49-48dd-bebc-3fd02f909b5d-utilities" (OuterVolumeSpecName: "utilities") pod "f02ca55c-cf49-48dd-bebc-3fd02f909b5d" (UID: "f02ca55c-cf49-48dd-bebc-3fd02f909b5d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 13:33:46 crc kubenswrapper[4820]: I0203 13:33:46.572812 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f02ca55c-cf49-48dd-bebc-3fd02f909b5d-kube-api-access-gl4mn" (OuterVolumeSpecName: "kube-api-access-gl4mn") pod "f02ca55c-cf49-48dd-bebc-3fd02f909b5d" (UID: "f02ca55c-cf49-48dd-bebc-3fd02f909b5d"). InnerVolumeSpecName "kube-api-access-gl4mn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:33:46 crc kubenswrapper[4820]: I0203 13:33:46.669354 4820 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f02ca55c-cf49-48dd-bebc-3fd02f909b5d-utilities\") on node \"crc\" DevicePath \"\"" Feb 03 13:33:46 crc kubenswrapper[4820]: I0203 13:33:46.669397 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gl4mn\" (UniqueName: \"kubernetes.io/projected/f02ca55c-cf49-48dd-bebc-3fd02f909b5d-kube-api-access-gl4mn\") on node \"crc\" DevicePath \"\"" Feb 03 13:33:46 crc kubenswrapper[4820]: I0203 13:33:46.687346 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f02ca55c-cf49-48dd-bebc-3fd02f909b5d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f02ca55c-cf49-48dd-bebc-3fd02f909b5d" (UID: "f02ca55c-cf49-48dd-bebc-3fd02f909b5d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 13:33:46 crc kubenswrapper[4820]: I0203 13:33:46.775230 4820 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f02ca55c-cf49-48dd-bebc-3fd02f909b5d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 03 13:33:46 crc kubenswrapper[4820]: I0203 13:33:46.804085 4820 generic.go:334] "Generic (PLEG): container finished" podID="f02ca55c-cf49-48dd-bebc-3fd02f909b5d" containerID="cc37c85f9d176753817ce8e7487cba78912e3e4c308f80cc75e2c6497a6501e6" exitCode=0 Feb 03 13:33:46 crc kubenswrapper[4820]: I0203 13:33:46.804162 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-d6dlr" Feb 03 13:33:46 crc kubenswrapper[4820]: I0203 13:33:46.804195 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d6dlr" event={"ID":"f02ca55c-cf49-48dd-bebc-3fd02f909b5d","Type":"ContainerDied","Data":"cc37c85f9d176753817ce8e7487cba78912e3e4c308f80cc75e2c6497a6501e6"} Feb 03 13:33:46 crc kubenswrapper[4820]: I0203 13:33:46.805378 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d6dlr" event={"ID":"f02ca55c-cf49-48dd-bebc-3fd02f909b5d","Type":"ContainerDied","Data":"349778535a4b1a12b22f9f6a1996930100be08cb45f7984566d961315167cb18"} Feb 03 13:33:46 crc kubenswrapper[4820]: I0203 13:33:46.805414 4820 scope.go:117] "RemoveContainer" containerID="cc37c85f9d176753817ce8e7487cba78912e3e4c308f80cc75e2c6497a6501e6" Feb 03 13:33:46 crc kubenswrapper[4820]: I0203 13:33:46.849001 4820 scope.go:117] "RemoveContainer" containerID="64783a82f0bed9cb721f53df6055f7e876d759a80a2bc1891f16f8844b80209a" Feb 03 13:33:46 crc kubenswrapper[4820]: I0203 13:33:46.851306 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-d6dlr"] Feb 03 13:33:46 crc kubenswrapper[4820]: I0203 13:33:46.865492 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-d6dlr"] Feb 03 13:33:46 crc kubenswrapper[4820]: I0203 13:33:46.876499 4820 scope.go:117] "RemoveContainer" containerID="9e1945eb3a9d8843098c6ce0680a5d497bc798d6ca472f894797b2d0bf7df74d" Feb 03 13:33:46 crc kubenswrapper[4820]: I0203 13:33:46.948041 4820 scope.go:117] "RemoveContainer" containerID="cc37c85f9d176753817ce8e7487cba78912e3e4c308f80cc75e2c6497a6501e6" Feb 03 13:33:46 crc kubenswrapper[4820]: E0203 13:33:46.948691 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cc37c85f9d176753817ce8e7487cba78912e3e4c308f80cc75e2c6497a6501e6\": container with ID starting with cc37c85f9d176753817ce8e7487cba78912e3e4c308f80cc75e2c6497a6501e6 not found: ID does not exist" containerID="cc37c85f9d176753817ce8e7487cba78912e3e4c308f80cc75e2c6497a6501e6" Feb 03 13:33:46 crc kubenswrapper[4820]: I0203 13:33:46.948764 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cc37c85f9d176753817ce8e7487cba78912e3e4c308f80cc75e2c6497a6501e6"} err="failed to get container status \"cc37c85f9d176753817ce8e7487cba78912e3e4c308f80cc75e2c6497a6501e6\": rpc error: code = NotFound desc = could not find container \"cc37c85f9d176753817ce8e7487cba78912e3e4c308f80cc75e2c6497a6501e6\": container with ID starting with cc37c85f9d176753817ce8e7487cba78912e3e4c308f80cc75e2c6497a6501e6 not found: ID does not exist" Feb 03 13:33:46 crc 
kubenswrapper[4820]: I0203 13:33:46.948804 4820 scope.go:117] "RemoveContainer" containerID="64783a82f0bed9cb721f53df6055f7e876d759a80a2bc1891f16f8844b80209a" Feb 03 13:33:46 crc kubenswrapper[4820]: E0203 13:33:46.949311 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"64783a82f0bed9cb721f53df6055f7e876d759a80a2bc1891f16f8844b80209a\": container with ID starting with 64783a82f0bed9cb721f53df6055f7e876d759a80a2bc1891f16f8844b80209a not found: ID does not exist" containerID="64783a82f0bed9cb721f53df6055f7e876d759a80a2bc1891f16f8844b80209a" Feb 03 13:33:46 crc kubenswrapper[4820]: I0203 13:33:46.949348 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"64783a82f0bed9cb721f53df6055f7e876d759a80a2bc1891f16f8844b80209a"} err="failed to get container status \"64783a82f0bed9cb721f53df6055f7e876d759a80a2bc1891f16f8844b80209a\": rpc error: code = NotFound desc = could not find container \"64783a82f0bed9cb721f53df6055f7e876d759a80a2bc1891f16f8844b80209a\": container with ID starting with 64783a82f0bed9cb721f53df6055f7e876d759a80a2bc1891f16f8844b80209a not found: ID does not exist" Feb 03 13:33:46 crc kubenswrapper[4820]: I0203 13:33:46.949370 4820 scope.go:117] "RemoveContainer" containerID="9e1945eb3a9d8843098c6ce0680a5d497bc798d6ca472f894797b2d0bf7df74d" Feb 03 13:33:46 crc kubenswrapper[4820]: E0203 13:33:46.949684 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9e1945eb3a9d8843098c6ce0680a5d497bc798d6ca472f894797b2d0bf7df74d\": container with ID starting with 9e1945eb3a9d8843098c6ce0680a5d497bc798d6ca472f894797b2d0bf7df74d not found: ID does not exist" containerID="9e1945eb3a9d8843098c6ce0680a5d497bc798d6ca472f894797b2d0bf7df74d" Feb 03 13:33:46 crc kubenswrapper[4820]: I0203 13:33:46.949747 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9e1945eb3a9d8843098c6ce0680a5d497bc798d6ca472f894797b2d0bf7df74d"} err="failed to get container status \"9e1945eb3a9d8843098c6ce0680a5d497bc798d6ca472f894797b2d0bf7df74d\": rpc error: code = NotFound desc = could not find container \"9e1945eb3a9d8843098c6ce0680a5d497bc798d6ca472f894797b2d0bf7df74d\": container with ID starting with 9e1945eb3a9d8843098c6ce0680a5d497bc798d6ca472f894797b2d0bf7df74d not found: ID does not exist" Feb 03 13:33:47 crc kubenswrapper[4820]: I0203 13:33:47.156107 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f02ca55c-cf49-48dd-bebc-3fd02f909b5d" path="/var/lib/kubelet/pods/f02ca55c-cf49-48dd-bebc-3fd02f909b5d/volumes" Feb 03 13:33:52 crc kubenswrapper[4820]: I0203 13:33:52.143822 4820 scope.go:117] "RemoveContainer" containerID="29e62cbb021fe96e31a97c5a8fed00638c88a00cae134f18c7cf36231623d398" Feb 03 13:33:52 crc kubenswrapper[4820]: E0203 13:33:52.144617 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qj7xr_openshift-machine-config-operator(2c02def6-29f2-448e-80ec-0c8ee290f053)\"" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" Feb 03 13:34:04 crc kubenswrapper[4820]: I0203 13:34:04.142823 4820 scope.go:117] "RemoveContainer" containerID="29e62cbb021fe96e31a97c5a8fed00638c88a00cae134f18c7cf36231623d398" 
Feb 03 13:34:04 crc kubenswrapper[4820]: E0203 13:34:04.143581 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qj7xr_openshift-machine-config-operator(2c02def6-29f2-448e-80ec-0c8ee290f053)\"" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053"
Feb 03 13:34:18 crc kubenswrapper[4820]: I0203 13:34:18.143488 4820 scope.go:117] "RemoveContainer" containerID="29e62cbb021fe96e31a97c5a8fed00638c88a00cae134f18c7cf36231623d398"
Feb 03 13:34:18 crc kubenswrapper[4820]: E0203 13:34:18.144509 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qj7xr_openshift-machine-config-operator(2c02def6-29f2-448e-80ec-0c8ee290f053)\"" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053"
Feb 03 13:34:33 crc kubenswrapper[4820]: I0203 13:34:33.150849 4820 scope.go:117] "RemoveContainer" containerID="29e62cbb021fe96e31a97c5a8fed00638c88a00cae134f18c7cf36231623d398"
Feb 03 13:34:33 crc kubenswrapper[4820]: E0203 13:34:33.163285 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qj7xr_openshift-machine-config-operator(2c02def6-29f2-448e-80ec-0c8ee290f053)\"" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053"
Feb 03 13:34:45 crc kubenswrapper[4820]: I0203 13:34:45.143147 4820 scope.go:117] "RemoveContainer" containerID="29e62cbb021fe96e31a97c5a8fed00638c88a00cae134f18c7cf36231623d398"
Feb 03 13:34:45 crc kubenswrapper[4820]: E0203 13:34:45.144532 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qj7xr_openshift-machine-config-operator(2c02def6-29f2-448e-80ec-0c8ee290f053)\"" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053"
Feb 03 13:34:46 crc kubenswrapper[4820]: I0203 13:34:46.897135 4820 scope.go:117] "RemoveContainer" containerID="f4730bcf6b7fbe81d3fa984a5c7175e2e16850c836ee2172ec84138d9ccdda18"
Feb 03 13:34:46 crc kubenswrapper[4820]: I0203 13:34:46.924469 4820 scope.go:117] "RemoveContainer" containerID="af3124c617ff50c26141ad090e3d1af075632f95046f50a7ba981399efb6bc72"
Feb 03 13:34:46 crc kubenswrapper[4820]: I0203 13:34:46.985810 4820 scope.go:117] "RemoveContainer" containerID="7ff9d9117152a22c966420626d5254a37994262972e436bc95c49ca4be7167b0"
Feb 03 13:34:57 crc kubenswrapper[4820]: I0203 13:34:57.143325 4820 scope.go:117] "RemoveContainer" containerID="29e62cbb021fe96e31a97c5a8fed00638c88a00cae134f18c7cf36231623d398"
Feb 03 13:34:57 crc kubenswrapper[4820]: I0203 13:34:57.868052 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" event={"ID":"2c02def6-29f2-448e-80ec-0c8ee290f053","Type":"ContainerStarted","Data":"96a8f441702c95476ee43f5c9515fcaa27b65c88d6d6e74c633083ade42e9416"}
Feb 03 13:36:42 crc kubenswrapper[4820]: I0203 13:36:42.385274 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-jhbd4"]
Feb 03 13:36:42 crc kubenswrapper[4820]: E0203 13:36:42.386461 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f02ca55c-cf49-48dd-bebc-3fd02f909b5d" containerName="registry-server"
Feb 03 13:36:42 crc kubenswrapper[4820]: I0203 13:36:42.386485 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="f02ca55c-cf49-48dd-bebc-3fd02f909b5d" containerName="registry-server"
Feb 03 13:36:42 crc kubenswrapper[4820]: E0203 13:36:42.386516 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f02ca55c-cf49-48dd-bebc-3fd02f909b5d" containerName="extract-utilities"
Feb 03 13:36:42 crc kubenswrapper[4820]: I0203 13:36:42.386522 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="f02ca55c-cf49-48dd-bebc-3fd02f909b5d" containerName="extract-utilities"
Feb 03 13:36:42 crc kubenswrapper[4820]: E0203 13:36:42.386543 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f02ca55c-cf49-48dd-bebc-3fd02f909b5d" containerName="extract-content"
Feb 03 13:36:42 crc kubenswrapper[4820]: I0203 13:36:42.386549 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="f02ca55c-cf49-48dd-bebc-3fd02f909b5d" containerName="extract-content"
Feb 03 13:36:42 crc kubenswrapper[4820]: I0203 13:36:42.386779 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="f02ca55c-cf49-48dd-bebc-3fd02f909b5d" containerName="registry-server"
Feb 03 13:36:42 crc kubenswrapper[4820]: I0203 13:36:42.390685 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jhbd4"
Feb 03 13:36:42 crc kubenswrapper[4820]: I0203 13:36:42.405988 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jhbd4"]
Feb 03 13:36:42 crc kubenswrapper[4820]: I0203 13:36:42.421597 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bldzv\" (UniqueName: \"kubernetes.io/projected/c97d1952-e7bb-4954-a876-f6f3155c1d8d-kube-api-access-bldzv\") pod \"certified-operators-jhbd4\" (UID: \"c97d1952-e7bb-4954-a876-f6f3155c1d8d\") " pod="openshift-marketplace/certified-operators-jhbd4"
Feb 03 13:36:42 crc kubenswrapper[4820]: I0203 13:36:42.421929 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c97d1952-e7bb-4954-a876-f6f3155c1d8d-utilities\") pod \"certified-operators-jhbd4\" (UID: \"c97d1952-e7bb-4954-a876-f6f3155c1d8d\") " pod="openshift-marketplace/certified-operators-jhbd4"
Feb 03 13:36:42 crc kubenswrapper[4820]: I0203 13:36:42.422007 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c97d1952-e7bb-4954-a876-f6f3155c1d8d-catalog-content\") pod \"certified-operators-jhbd4\" (UID: \"c97d1952-e7bb-4954-a876-f6f3155c1d8d\") " pod="openshift-marketplace/certified-operators-jhbd4"
Feb 03 13:36:42 crc kubenswrapper[4820]: I0203 13:36:42.523917 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c97d1952-e7bb-4954-a876-f6f3155c1d8d-utilities\") pod \"certified-operators-jhbd4\" (UID: \"c97d1952-e7bb-4954-a876-f6f3155c1d8d\") " pod="openshift-marketplace/certified-operators-jhbd4"
Feb 03 13:36:42 crc kubenswrapper[4820]: I0203 13:36:42.523987 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c97d1952-e7bb-4954-a876-f6f3155c1d8d-catalog-content\") pod \"certified-operators-jhbd4\" (UID: \"c97d1952-e7bb-4954-a876-f6f3155c1d8d\") " pod="openshift-marketplace/certified-operators-jhbd4"
Feb 03 13:36:42 crc kubenswrapper[4820]: I0203 13:36:42.524041 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bldzv\" (UniqueName: \"kubernetes.io/projected/c97d1952-e7bb-4954-a876-f6f3155c1d8d-kube-api-access-bldzv\") pod \"certified-operators-jhbd4\" (UID: \"c97d1952-e7bb-4954-a876-f6f3155c1d8d\") " pod="openshift-marketplace/certified-operators-jhbd4"
Feb 03 13:36:42 crc kubenswrapper[4820]: I0203 13:36:42.524474 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c97d1952-e7bb-4954-a876-f6f3155c1d8d-utilities\") pod \"certified-operators-jhbd4\" (UID: \"c97d1952-e7bb-4954-a876-f6f3155c1d8d\") " pod="openshift-marketplace/certified-operators-jhbd4"
Feb 03 13:36:42 crc kubenswrapper[4820]: I0203 13:36:42.524532 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c97d1952-e7bb-4954-a876-f6f3155c1d8d-catalog-content\") pod \"certified-operators-jhbd4\" (UID: \"c97d1952-e7bb-4954-a876-f6f3155c1d8d\") " pod="openshift-marketplace/certified-operators-jhbd4"
Feb 03 13:36:42 crc kubenswrapper[4820]: I0203 13:36:42.545129 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bldzv\" (UniqueName: \"kubernetes.io/projected/c97d1952-e7bb-4954-a876-f6f3155c1d8d-kube-api-access-bldzv\") pod \"certified-operators-jhbd4\" (UID: \"c97d1952-e7bb-4954-a876-f6f3155c1d8d\") " pod="openshift-marketplace/certified-operators-jhbd4"
Feb 03 13:36:42 crc kubenswrapper[4820]: I0203 13:36:42.731488 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jhbd4"
Feb 03 13:36:43 crc kubenswrapper[4820]: I0203 13:36:43.321113 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jhbd4"]
Feb 03 13:36:44 crc kubenswrapper[4820]: I0203 13:36:44.204281 4820 generic.go:334] "Generic (PLEG): container finished" podID="c97d1952-e7bb-4954-a876-f6f3155c1d8d" containerID="9a0650e9acf1fba5e2955d9c02942d39163904a87c859145d46032d8f2295645" exitCode=0
Feb 03 13:36:44 crc kubenswrapper[4820]: I0203 13:36:44.204341 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jhbd4" event={"ID":"c97d1952-e7bb-4954-a876-f6f3155c1d8d","Type":"ContainerDied","Data":"9a0650e9acf1fba5e2955d9c02942d39163904a87c859145d46032d8f2295645"}
Feb 03 13:36:44 crc kubenswrapper[4820]: I0203 13:36:44.204777 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jhbd4" event={"ID":"c97d1952-e7bb-4954-a876-f6f3155c1d8d","Type":"ContainerStarted","Data":"781caff6760e633c3233608f431260bb7157ff02c87679072e718743e98ba40b"}
Feb 03 13:36:45 crc kubenswrapper[4820]: I0203 13:36:45.217844 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jhbd4" event={"ID":"c97d1952-e7bb-4954-a876-f6f3155c1d8d","Type":"ContainerStarted","Data":"b65c38ab57bc65c8c339d7cf39307a44a09fe97c814b088b990a10e95ad90684"}
Feb 03 13:36:47 crc kubenswrapper[4820]: I0203 13:36:47.238512 4820 generic.go:334] "Generic (PLEG): container finished" podID="c97d1952-e7bb-4954-a876-f6f3155c1d8d" containerID="b65c38ab57bc65c8c339d7cf39307a44a09fe97c814b088b990a10e95ad90684" exitCode=0
Feb 03 13:36:47 crc kubenswrapper[4820]: I0203 13:36:47.238820 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jhbd4" event={"ID":"c97d1952-e7bb-4954-a876-f6f3155c1d8d","Type":"ContainerDied","Data":"b65c38ab57bc65c8c339d7cf39307a44a09fe97c814b088b990a10e95ad90684"}
Feb 03 13:36:48 crc kubenswrapper[4820]: I0203 13:36:48.250781 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jhbd4" event={"ID":"c97d1952-e7bb-4954-a876-f6f3155c1d8d","Type":"ContainerStarted","Data":"2b7d3a1d20985399431a09544619b23f1af52a23c9ac4c0834f62e803bc6f662"}
Feb 03 13:36:48 crc kubenswrapper[4820]: I0203 13:36:48.273474 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-jhbd4" podStartSLOduration=2.616458081 podStartE2EDuration="6.273440543s" podCreationTimestamp="2026-02-03 13:36:42 +0000 UTC" firstStartedPulling="2026-02-03 13:36:44.206270325 +0000 UTC m=+5521.729346189" lastFinishedPulling="2026-02-03 13:36:47.863252787 +0000 UTC m=+5525.386328651" observedRunningTime="2026-02-03 13:36:48.271665015 +0000 UTC m=+5525.794740889" watchObservedRunningTime="2026-02-03 13:36:48.273440543 +0000 UTC m=+5525.796516397"
Feb 03 13:36:52 crc kubenswrapper[4820]: I0203 13:36:52.732615 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-jhbd4"
Feb 03 13:36:52 crc kubenswrapper[4820]: I0203 13:36:52.734321 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-jhbd4"
Feb 03 13:36:52 crc kubenswrapper[4820]: I0203 13:36:52.882335 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-jhbd4"
Feb 03 13:36:53 crc kubenswrapper[4820]: I0203 13:36:53.352513 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-jhbd4"
Feb 03 13:36:53 crc kubenswrapper[4820]: I0203 13:36:53.415214 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jhbd4"]
Feb 03 13:36:55 crc kubenswrapper[4820]: I0203 13:36:55.503719 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-jhbd4" podUID="c97d1952-e7bb-4954-a876-f6f3155c1d8d" containerName="registry-server" containerID="cri-o://2b7d3a1d20985399431a09544619b23f1af52a23c9ac4c0834f62e803bc6f662" gracePeriod=2
Feb 03 13:36:56 crc kubenswrapper[4820]: I0203 13:36:56.560006 4820 generic.go:334] "Generic (PLEG): container finished" podID="c97d1952-e7bb-4954-a876-f6f3155c1d8d" containerID="2b7d3a1d20985399431a09544619b23f1af52a23c9ac4c0834f62e803bc6f662" exitCode=0
Feb 03 13:36:56 crc kubenswrapper[4820]: I0203 13:36:56.560071 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jhbd4" event={"ID":"c97d1952-e7bb-4954-a876-f6f3155c1d8d","Type":"ContainerDied","Data":"2b7d3a1d20985399431a09544619b23f1af52a23c9ac4c0834f62e803bc6f662"}
Feb 03 13:36:56 crc kubenswrapper[4820]: I0203 13:36:56.712324 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jhbd4"
Feb 03 13:36:56 crc kubenswrapper[4820]: I0203 13:36:56.838732 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c97d1952-e7bb-4954-a876-f6f3155c1d8d-utilities\") pod \"c97d1952-e7bb-4954-a876-f6f3155c1d8d\" (UID: \"c97d1952-e7bb-4954-a876-f6f3155c1d8d\") "
Feb 03 13:36:56 crc kubenswrapper[4820]: I0203 13:36:56.838827 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c97d1952-e7bb-4954-a876-f6f3155c1d8d-catalog-content\") pod \"c97d1952-e7bb-4954-a876-f6f3155c1d8d\" (UID: \"c97d1952-e7bb-4954-a876-f6f3155c1d8d\") "
Feb 03 13:36:56 crc kubenswrapper[4820]: I0203 13:36:56.838998 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bldzv\" (UniqueName: \"kubernetes.io/projected/c97d1952-e7bb-4954-a876-f6f3155c1d8d-kube-api-access-bldzv\") pod \"c97d1952-e7bb-4954-a876-f6f3155c1d8d\" (UID: \"c97d1952-e7bb-4954-a876-f6f3155c1d8d\") "
Feb 03 13:36:56 crc kubenswrapper[4820]: I0203 13:36:56.839981 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c97d1952-e7bb-4954-a876-f6f3155c1d8d-utilities" (OuterVolumeSpecName: "utilities") pod "c97d1952-e7bb-4954-a876-f6f3155c1d8d" (UID: "c97d1952-e7bb-4954-a876-f6f3155c1d8d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 03 13:36:56 crc kubenswrapper[4820]: I0203 13:36:56.861463 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c97d1952-e7bb-4954-a876-f6f3155c1d8d-kube-api-access-bldzv" (OuterVolumeSpecName: "kube-api-access-bldzv") pod "c97d1952-e7bb-4954-a876-f6f3155c1d8d" (UID: "c97d1952-e7bb-4954-a876-f6f3155c1d8d"). InnerVolumeSpecName "kube-api-access-bldzv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 13:36:56 crc kubenswrapper[4820]: I0203 13:36:56.919395 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c97d1952-e7bb-4954-a876-f6f3155c1d8d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c97d1952-e7bb-4954-a876-f6f3155c1d8d" (UID: "c97d1952-e7bb-4954-a876-f6f3155c1d8d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 03 13:36:56 crc kubenswrapper[4820]: I0203 13:36:56.941618 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bldzv\" (UniqueName: \"kubernetes.io/projected/c97d1952-e7bb-4954-a876-f6f3155c1d8d-kube-api-access-bldzv\") on node \"crc\" DevicePath \"\""
Feb 03 13:36:56 crc kubenswrapper[4820]: I0203 13:36:56.941656 4820 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c97d1952-e7bb-4954-a876-f6f3155c1d8d-utilities\") on node \"crc\" DevicePath \"\""
Feb 03 13:36:56 crc kubenswrapper[4820]: I0203 13:36:56.941666 4820 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c97d1952-e7bb-4954-a876-f6f3155c1d8d-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 03 13:36:57 crc kubenswrapper[4820]: I0203 13:36:57.572443 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jhbd4" event={"ID":"c97d1952-e7bb-4954-a876-f6f3155c1d8d","Type":"ContainerDied","Data":"781caff6760e633c3233608f431260bb7157ff02c87679072e718743e98ba40b"}
Feb 03 13:36:57 crc kubenswrapper[4820]: I0203 13:36:57.572513 4820 scope.go:117] "RemoveContainer" containerID="2b7d3a1d20985399431a09544619b23f1af52a23c9ac4c0834f62e803bc6f662"
Feb 03 13:36:57 crc kubenswrapper[4820]: I0203 13:36:57.572524 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jhbd4"
Feb 03 13:36:57 crc kubenswrapper[4820]: I0203 13:36:57.596788 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jhbd4"]
Feb 03 13:36:57 crc kubenswrapper[4820]: I0203 13:36:57.604953 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-jhbd4"]
Feb 03 13:36:57 crc kubenswrapper[4820]: I0203 13:36:57.605698 4820 scope.go:117] "RemoveContainer" containerID="b65c38ab57bc65c8c339d7cf39307a44a09fe97c814b088b990a10e95ad90684"
Feb 03 13:36:57 crc kubenswrapper[4820]: I0203 13:36:57.630742 4820 scope.go:117] "RemoveContainer" containerID="9a0650e9acf1fba5e2955d9c02942d39163904a87c859145d46032d8f2295645"
Feb 03 13:36:59 crc kubenswrapper[4820]: I0203 13:36:59.168699 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c97d1952-e7bb-4954-a876-f6f3155c1d8d" path="/var/lib/kubelet/pods/c97d1952-e7bb-4954-a876-f6f3155c1d8d/volumes"
Feb 03 13:37:01 crc kubenswrapper[4820]: I0203 13:37:01.365626 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qj7xr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 03 13:37:01 crc kubenswrapper[4820]: I0203 13:37:01.365968 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 03 13:37:31 crc kubenswrapper[4820]: I0203 13:37:31.365996 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qj7xr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 03 13:37:31 crc kubenswrapper[4820]: I0203 13:37:31.366562 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 03 13:38:01 crc kubenswrapper[4820]: I0203 13:38:01.365185 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qj7xr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 03 13:38:01 crc kubenswrapper[4820]: I0203 13:38:01.365783 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 03 13:38:01 crc kubenswrapper[4820]: I0203 13:38:01.365846 4820 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr"
Feb 03 13:38:01 crc kubenswrapper[4820]: I0203 13:38:01.366860 4820 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"96a8f441702c95476ee43f5c9515fcaa27b65c88d6d6e74c633083ade42e9416"} pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 03 13:38:01 crc kubenswrapper[4820]: I0203 13:38:01.366945 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" containerName="machine-config-daemon" containerID="cri-o://96a8f441702c95476ee43f5c9515fcaa27b65c88d6d6e74c633083ade42e9416" gracePeriod=600
Feb 03 13:38:01 crc kubenswrapper[4820]: I0203 13:38:01.570982 4820 generic.go:334] "Generic (PLEG): container finished" podID="2c02def6-29f2-448e-80ec-0c8ee290f053" containerID="96a8f441702c95476ee43f5c9515fcaa27b65c88d6d6e74c633083ade42e9416" exitCode=0
Feb 03 13:38:01 crc kubenswrapper[4820]: I0203 13:38:01.571041 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" event={"ID":"2c02def6-29f2-448e-80ec-0c8ee290f053","Type":"ContainerDied","Data":"96a8f441702c95476ee43f5c9515fcaa27b65c88d6d6e74c633083ade42e9416"}
Feb 03 13:38:01 crc kubenswrapper[4820]: I0203 13:38:01.571099 4820 scope.go:117] "RemoveContainer" containerID="29e62cbb021fe96e31a97c5a8fed00638c88a00cae134f18c7cf36231623d398"
Feb 03 13:38:02 crc kubenswrapper[4820]: I0203 13:38:02.587774 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" event={"ID":"2c02def6-29f2-448e-80ec-0c8ee290f053","Type":"ContainerStarted","Data":"ce5b3f4ddc52fde90b7cdab55ef15a30de9daf09fd19f6b7818c0bdb8e032e6a"}
Feb 03 13:39:45 crc kubenswrapper[4820]: I0203 13:39:45.139062 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-ndfmg"]
Feb 03 13:39:45 crc kubenswrapper[4820]: E0203 13:39:45.141506 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c97d1952-e7bb-4954-a876-f6f3155c1d8d" containerName="extract-utilities"
Feb 03 13:39:45 crc kubenswrapper[4820]: I0203 13:39:45.141551 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="c97d1952-e7bb-4954-a876-f6f3155c1d8d" containerName="extract-utilities"
Feb 03 13:39:45 crc kubenswrapper[4820]: E0203 13:39:45.141575 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c97d1952-e7bb-4954-a876-f6f3155c1d8d" containerName="extract-content"
Feb 03 13:39:45 crc kubenswrapper[4820]: I0203 13:39:45.141583 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="c97d1952-e7bb-4954-a876-f6f3155c1d8d" containerName="extract-content"
Feb 03 13:39:45 crc kubenswrapper[4820]: E0203 13:39:45.141622 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c97d1952-e7bb-4954-a876-f6f3155c1d8d" containerName="registry-server"
Feb 03 13:39:45 crc kubenswrapper[4820]: I0203 13:39:45.141629 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="c97d1952-e7bb-4954-a876-f6f3155c1d8d" containerName="registry-server"
Feb 03 13:39:45 crc kubenswrapper[4820]: I0203 13:39:45.141840 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="c97d1952-e7bb-4954-a876-f6f3155c1d8d" containerName="registry-server"
Feb 03 13:39:45 crc kubenswrapper[4820]: I0203 13:39:45.143630 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ndfmg"
Feb 03 13:39:45 crc kubenswrapper[4820]: I0203 13:39:45.192479 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ndfmg"]
Feb 03 13:39:45 crc kubenswrapper[4820]: I0203 13:39:45.248133 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d1b8fb20-9657-46ba-867f-e39fb6d21be6-utilities\") pod \"community-operators-ndfmg\" (UID: \"d1b8fb20-9657-46ba-867f-e39fb6d21be6\") " pod="openshift-marketplace/community-operators-ndfmg"
Feb 03 13:39:45 crc kubenswrapper[4820]: I0203 13:39:45.248445 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x75n6\" (UniqueName: \"kubernetes.io/projected/d1b8fb20-9657-46ba-867f-e39fb6d21be6-kube-api-access-x75n6\") pod \"community-operators-ndfmg\" (UID: \"d1b8fb20-9657-46ba-867f-e39fb6d21be6\") " pod="openshift-marketplace/community-operators-ndfmg"
Feb 03 13:39:45 crc kubenswrapper[4820]: I0203 13:39:45.248509 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d1b8fb20-9657-46ba-867f-e39fb6d21be6-catalog-content\") pod \"community-operators-ndfmg\" (UID: \"d1b8fb20-9657-46ba-867f-e39fb6d21be6\") " pod="openshift-marketplace/community-operators-ndfmg"
Feb 03 13:39:45 crc kubenswrapper[4820]: I0203 13:39:45.351379 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x75n6\" (UniqueName: \"kubernetes.io/projected/d1b8fb20-9657-46ba-867f-e39fb6d21be6-kube-api-access-x75n6\") pod \"community-operators-ndfmg\" (UID: \"d1b8fb20-9657-46ba-867f-e39fb6d21be6\") " pod="openshift-marketplace/community-operators-ndfmg"
Feb 03 13:39:45 crc kubenswrapper[4820]: I0203 13:39:45.351461 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d1b8fb20-9657-46ba-867f-e39fb6d21be6-catalog-content\") pod \"community-operators-ndfmg\" (UID: \"d1b8fb20-9657-46ba-867f-e39fb6d21be6\") " pod="openshift-marketplace/community-operators-ndfmg"
Feb 03 13:39:45 crc kubenswrapper[4820]: I0203 13:39:45.351586 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d1b8fb20-9657-46ba-867f-e39fb6d21be6-utilities\") pod \"community-operators-ndfmg\" (UID: \"d1b8fb20-9657-46ba-867f-e39fb6d21be6\") " pod="openshift-marketplace/community-operators-ndfmg"
Feb 03 13:39:45 crc kubenswrapper[4820]: I0203 13:39:45.354167 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d1b8fb20-9657-46ba-867f-e39fb6d21be6-utilities\") pod \"community-operators-ndfmg\" (UID: \"d1b8fb20-9657-46ba-867f-e39fb6d21be6\") " pod="openshift-marketplace/community-operators-ndfmg"
Feb 03 13:39:45 crc kubenswrapper[4820]: I0203 13:39:45.354727 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d1b8fb20-9657-46ba-867f-e39fb6d21be6-catalog-content\") pod \"community-operators-ndfmg\" (UID: \"d1b8fb20-9657-46ba-867f-e39fb6d21be6\") " pod="openshift-marketplace/community-operators-ndfmg"
Feb 03 13:39:45 crc kubenswrapper[4820]: I0203 13:39:45.381125 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x75n6\" (UniqueName: \"kubernetes.io/projected/d1b8fb20-9657-46ba-867f-e39fb6d21be6-kube-api-access-x75n6\") pod \"community-operators-ndfmg\" (UID: \"d1b8fb20-9657-46ba-867f-e39fb6d21be6\") " pod="openshift-marketplace/community-operators-ndfmg"
Feb 03 13:39:45 crc kubenswrapper[4820]: I0203 13:39:45.469228 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ndfmg"
Feb 03 13:39:46 crc kubenswrapper[4820]: I0203 13:39:46.087351 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ndfmg"]
Feb 03 13:39:46 crc kubenswrapper[4820]: I0203 13:39:46.986865 4820 generic.go:334] "Generic (PLEG): container finished" podID="d1b8fb20-9657-46ba-867f-e39fb6d21be6" containerID="6436fc9840ce11297834a366edb990b46b421054d370f8841c3e21cce10b3152" exitCode=0
Feb 03 13:39:46 crc kubenswrapper[4820]: I0203 13:39:46.986940 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ndfmg" event={"ID":"d1b8fb20-9657-46ba-867f-e39fb6d21be6","Type":"ContainerDied","Data":"6436fc9840ce11297834a366edb990b46b421054d370f8841c3e21cce10b3152"}
Feb 03 13:39:46 crc kubenswrapper[4820]: I0203 13:39:46.987242 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ndfmg" event={"ID":"d1b8fb20-9657-46ba-867f-e39fb6d21be6","Type":"ContainerStarted","Data":"941d150fc6f9dcf4145beeafe0cb175284e6787db44a657c9943b2b7a515e57c"}
Feb 03 13:39:46 crc kubenswrapper[4820]: I0203 13:39:46.991292 4820 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 03 13:39:49 crc kubenswrapper[4820]: I0203 13:39:49.009442 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ndfmg" event={"ID":"d1b8fb20-9657-46ba-867f-e39fb6d21be6","Type":"ContainerStarted","Data":"92b3106e9f434879d0b06859f8f2d7e22f4049f2f84be45aaf653a64d6000783"}
Feb 03 13:39:50 crc kubenswrapper[4820]: I0203 13:39:50.023949 4820 generic.go:334] "Generic (PLEG): container finished" podID="d1b8fb20-9657-46ba-867f-e39fb6d21be6" containerID="92b3106e9f434879d0b06859f8f2d7e22f4049f2f84be45aaf653a64d6000783" exitCode=0
Feb 03 13:39:50 crc kubenswrapper[4820]: I0203 13:39:50.024054 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ndfmg" event={"ID":"d1b8fb20-9657-46ba-867f-e39fb6d21be6","Type":"ContainerDied","Data":"92b3106e9f434879d0b06859f8f2d7e22f4049f2f84be45aaf653a64d6000783"}
Feb 03 13:39:51 crc kubenswrapper[4820]: I0203 13:39:51.035224 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ndfmg" event={"ID":"d1b8fb20-9657-46ba-867f-e39fb6d21be6","Type":"ContainerStarted","Data":"b1f776b9bb2788dcd1c2025c16b42e25dbdbcda70ec8b1a6bc90b3d22f5ce117"}
Feb 03 13:39:51 crc kubenswrapper[4820]: I0203 13:39:51.059104 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-ndfmg" podStartSLOduration=2.524939098 podStartE2EDuration="6.059071273s" podCreationTimestamp="2026-02-03 13:39:45 +0000 UTC" firstStartedPulling="2026-02-03 13:39:46.990856208 +0000 UTC m=+5704.513932072" lastFinishedPulling="2026-02-03 13:39:50.524988383 +0000 UTC m=+5708.048064247" observedRunningTime="2026-02-03 13:39:51.055007635 +0000 UTC m=+5708.578083499" watchObservedRunningTime="2026-02-03
13:39:51.059071273 +0000 UTC m=+5708.582147147" Feb 03 13:39:55 crc kubenswrapper[4820]: I0203 13:39:55.469562 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-ndfmg" Feb 03 13:39:55 crc kubenswrapper[4820]: I0203 13:39:55.469966 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-ndfmg" Feb 03 13:39:55 crc kubenswrapper[4820]: I0203 13:39:55.561246 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-ndfmg" Feb 03 13:39:56 crc kubenswrapper[4820]: I0203 13:39:56.141262 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-ndfmg" Feb 03 13:39:56 crc kubenswrapper[4820]: I0203 13:39:56.210057 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ndfmg"] Feb 03 13:39:58 crc kubenswrapper[4820]: I0203 13:39:58.115998 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-ndfmg" podUID="d1b8fb20-9657-46ba-867f-e39fb6d21be6" containerName="registry-server" containerID="cri-o://b1f776b9bb2788dcd1c2025c16b42e25dbdbcda70ec8b1a6bc90b3d22f5ce117" gracePeriod=2 Feb 03 13:39:58 crc kubenswrapper[4820]: I0203 13:39:58.669132 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ndfmg" Feb 03 13:39:58 crc kubenswrapper[4820]: I0203 13:39:58.840256 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d1b8fb20-9657-46ba-867f-e39fb6d21be6-utilities\") pod \"d1b8fb20-9657-46ba-867f-e39fb6d21be6\" (UID: \"d1b8fb20-9657-46ba-867f-e39fb6d21be6\") " Feb 03 13:39:58 crc kubenswrapper[4820]: I0203 13:39:58.840398 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d1b8fb20-9657-46ba-867f-e39fb6d21be6-catalog-content\") pod \"d1b8fb20-9657-46ba-867f-e39fb6d21be6\" (UID: \"d1b8fb20-9657-46ba-867f-e39fb6d21be6\") " Feb 03 13:39:58 crc kubenswrapper[4820]: I0203 13:39:58.840489 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x75n6\" (UniqueName: \"kubernetes.io/projected/d1b8fb20-9657-46ba-867f-e39fb6d21be6-kube-api-access-x75n6\") pod \"d1b8fb20-9657-46ba-867f-e39fb6d21be6\" (UID: \"d1b8fb20-9657-46ba-867f-e39fb6d21be6\") " Feb 03 13:39:58 crc kubenswrapper[4820]: I0203 13:39:58.841454 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d1b8fb20-9657-46ba-867f-e39fb6d21be6-utilities" (OuterVolumeSpecName: "utilities") pod "d1b8fb20-9657-46ba-867f-e39fb6d21be6" (UID: "d1b8fb20-9657-46ba-867f-e39fb6d21be6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 13:39:58 crc kubenswrapper[4820]: I0203 13:39:58.847360 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d1b8fb20-9657-46ba-867f-e39fb6d21be6-kube-api-access-x75n6" (OuterVolumeSpecName: "kube-api-access-x75n6") pod "d1b8fb20-9657-46ba-867f-e39fb6d21be6" (UID: "d1b8fb20-9657-46ba-867f-e39fb6d21be6"). InnerVolumeSpecName "kube-api-access-x75n6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:39:58 crc kubenswrapper[4820]: I0203 13:39:58.943553 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x75n6\" (UniqueName: \"kubernetes.io/projected/d1b8fb20-9657-46ba-867f-e39fb6d21be6-kube-api-access-x75n6\") on node \"crc\" DevicePath \"\"" Feb 03 13:39:58 crc kubenswrapper[4820]: I0203 13:39:58.943596 4820 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d1b8fb20-9657-46ba-867f-e39fb6d21be6-utilities\") on node \"crc\" DevicePath \"\"" Feb 03 13:39:59 crc kubenswrapper[4820]: I0203 13:39:59.130227 4820 generic.go:334] "Generic (PLEG): container finished" podID="d1b8fb20-9657-46ba-867f-e39fb6d21be6" containerID="b1f776b9bb2788dcd1c2025c16b42e25dbdbcda70ec8b1a6bc90b3d22f5ce117" exitCode=0 Feb 03 13:39:59 crc kubenswrapper[4820]: I0203 13:39:59.130287 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ndfmg" event={"ID":"d1b8fb20-9657-46ba-867f-e39fb6d21be6","Type":"ContainerDied","Data":"b1f776b9bb2788dcd1c2025c16b42e25dbdbcda70ec8b1a6bc90b3d22f5ce117"} Feb 03 13:39:59 crc kubenswrapper[4820]: I0203 13:39:59.130322 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ndfmg" event={"ID":"d1b8fb20-9657-46ba-867f-e39fb6d21be6","Type":"ContainerDied","Data":"941d150fc6f9dcf4145beeafe0cb175284e6787db44a657c9943b2b7a515e57c"} Feb 03 13:39:59 crc kubenswrapper[4820]: I0203 13:39:59.130342 4820 scope.go:117] "RemoveContainer" containerID="b1f776b9bb2788dcd1c2025c16b42e25dbdbcda70ec8b1a6bc90b3d22f5ce117" Feb 03 13:39:59 crc kubenswrapper[4820]: I0203 13:39:59.131727 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-ndfmg" Feb 03 13:39:59 crc kubenswrapper[4820]: I0203 13:39:59.167762 4820 scope.go:117] "RemoveContainer" containerID="92b3106e9f434879d0b06859f8f2d7e22f4049f2f84be45aaf653a64d6000783" Feb 03 13:39:59 crc kubenswrapper[4820]: I0203 13:39:59.198636 4820 scope.go:117] "RemoveContainer" containerID="6436fc9840ce11297834a366edb990b46b421054d370f8841c3e21cce10b3152" Feb 03 13:39:59 crc kubenswrapper[4820]: I0203 13:39:59.262833 4820 scope.go:117] "RemoveContainer" containerID="b1f776b9bb2788dcd1c2025c16b42e25dbdbcda70ec8b1a6bc90b3d22f5ce117" Feb 03 13:39:59 crc kubenswrapper[4820]: E0203 13:39:59.264991 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b1f776b9bb2788dcd1c2025c16b42e25dbdbcda70ec8b1a6bc90b3d22f5ce117\": container with ID starting with b1f776b9bb2788dcd1c2025c16b42e25dbdbcda70ec8b1a6bc90b3d22f5ce117 not found: ID does not exist" containerID="b1f776b9bb2788dcd1c2025c16b42e25dbdbcda70ec8b1a6bc90b3d22f5ce117" Feb 03 13:39:59 crc kubenswrapper[4820]: I0203 13:39:59.265095 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b1f776b9bb2788dcd1c2025c16b42e25dbdbcda70ec8b1a6bc90b3d22f5ce117"} err="failed to get container status \"b1f776b9bb2788dcd1c2025c16b42e25dbdbcda70ec8b1a6bc90b3d22f5ce117\": rpc error: code = NotFound desc = could not find container \"b1f776b9bb2788dcd1c2025c16b42e25dbdbcda70ec8b1a6bc90b3d22f5ce117\": container with ID starting with b1f776b9bb2788dcd1c2025c16b42e25dbdbcda70ec8b1a6bc90b3d22f5ce117 not found: ID does not exist" Feb 03 13:39:59 crc kubenswrapper[4820]: I0203 13:39:59.265155 4820 scope.go:117] "RemoveContainer" containerID="92b3106e9f434879d0b06859f8f2d7e22f4049f2f84be45aaf653a64d6000783" Feb 03 13:39:59 crc kubenswrapper[4820]: E0203 13:39:59.265798 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"92b3106e9f434879d0b06859f8f2d7e22f4049f2f84be45aaf653a64d6000783\": container with ID starting with 92b3106e9f434879d0b06859f8f2d7e22f4049f2f84be45aaf653a64d6000783 not found: ID does not exist" containerID="92b3106e9f434879d0b06859f8f2d7e22f4049f2f84be45aaf653a64d6000783" Feb 03 13:39:59 crc kubenswrapper[4820]: I0203 13:39:59.265836 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"92b3106e9f434879d0b06859f8f2d7e22f4049f2f84be45aaf653a64d6000783"} err="failed to get container status \"92b3106e9f434879d0b06859f8f2d7e22f4049f2f84be45aaf653a64d6000783\": rpc error: code = NotFound desc = could not find container \"92b3106e9f434879d0b06859f8f2d7e22f4049f2f84be45aaf653a64d6000783\": container with ID starting with 92b3106e9f434879d0b06859f8f2d7e22f4049f2f84be45aaf653a64d6000783 not found: ID does not exist" Feb 03 13:39:59 crc kubenswrapper[4820]: I0203 13:39:59.265859 4820 scope.go:117] "RemoveContainer" containerID="6436fc9840ce11297834a366edb990b46b421054d370f8841c3e21cce10b3152" Feb 03 13:39:59 crc kubenswrapper[4820]: E0203 13:39:59.268760 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6436fc9840ce11297834a366edb990b46b421054d370f8841c3e21cce10b3152\": container with ID starting with 6436fc9840ce11297834a366edb990b46b421054d370f8841c3e21cce10b3152 not found: ID does not exist" containerID="6436fc9840ce11297834a366edb990b46b421054d370f8841c3e21cce10b3152" 
Feb 03 13:39:59 crc kubenswrapper[4820]: I0203 13:39:59.268844 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6436fc9840ce11297834a366edb990b46b421054d370f8841c3e21cce10b3152"} err="failed to get container status \"6436fc9840ce11297834a366edb990b46b421054d370f8841c3e21cce10b3152\": rpc error: code = NotFound desc = could not find container \"6436fc9840ce11297834a366edb990b46b421054d370f8841c3e21cce10b3152\": container with ID starting with 6436fc9840ce11297834a366edb990b46b421054d370f8841c3e21cce10b3152 not found: ID does not exist" Feb 03 13:39:59 crc kubenswrapper[4820]: I0203 13:39:59.643697 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d1b8fb20-9657-46ba-867f-e39fb6d21be6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d1b8fb20-9657-46ba-867f-e39fb6d21be6" (UID: "d1b8fb20-9657-46ba-867f-e39fb6d21be6"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 13:39:59 crc kubenswrapper[4820]: I0203 13:39:59.674878 4820 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d1b8fb20-9657-46ba-867f-e39fb6d21be6-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 03 13:39:59 crc kubenswrapper[4820]: I0203 13:39:59.789487 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ndfmg"] Feb 03 13:39:59 crc kubenswrapper[4820]: I0203 13:39:59.799070 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-ndfmg"] Feb 03 13:40:01 crc kubenswrapper[4820]: I0203 13:40:01.309630 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d1b8fb20-9657-46ba-867f-e39fb6d21be6" path="/var/lib/kubelet/pods/d1b8fb20-9657-46ba-867f-e39fb6d21be6/volumes" Feb 03 13:40:31 crc kubenswrapper[4820]: I0203 13:40:31.365414 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qj7xr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 13:40:31 crc kubenswrapper[4820]: I0203 13:40:31.365960 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 13:40:33 crc kubenswrapper[4820]: I0203 13:40:33.887804 4820 generic.go:334] "Generic (PLEG): container finished" podID="a52d7dcc-1107-47d1-b270-0601e9dc2b1b" containerID="06c43d26a46f211d8df4b5f1113886b401332ce6aa4cc388dd3f4ae0154ab738" exitCode=0 Feb 03 13:40:33 crc kubenswrapper[4820]: I0203 13:40:33.889630 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"a52d7dcc-1107-47d1-b270-0601e9dc2b1b","Type":"ContainerDied","Data":"06c43d26a46f211d8df4b5f1113886b401332ce6aa4cc388dd3f4ae0154ab738"} Feb 03 13:40:35 crc kubenswrapper[4820]: I0203 13:40:35.323600 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 03 13:40:35 crc kubenswrapper[4820]: I0203 13:40:35.329781 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/a52d7dcc-1107-47d1-b270-0601e9dc2b1b-ca-certs\") pod \"a52d7dcc-1107-47d1-b270-0601e9dc2b1b\" (UID: \"a52d7dcc-1107-47d1-b270-0601e9dc2b1b\") " Feb 03 13:40:35 crc kubenswrapper[4820]: I0203 13:40:35.329866 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a52d7dcc-1107-47d1-b270-0601e9dc2b1b-ssh-key\") pod \"a52d7dcc-1107-47d1-b270-0601e9dc2b1b\" (UID: \"a52d7dcc-1107-47d1-b270-0601e9dc2b1b\") " Feb 03 13:40:35 crc kubenswrapper[4820]: I0203 13:40:35.329946 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"a52d7dcc-1107-47d1-b270-0601e9dc2b1b\" (UID: \"a52d7dcc-1107-47d1-b270-0601e9dc2b1b\") " Feb 03 13:40:35 crc kubenswrapper[4820]: I0203 13:40:35.329980 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2f4wq\" (UniqueName: \"kubernetes.io/projected/a52d7dcc-1107-47d1-b270-0601e9dc2b1b-kube-api-access-2f4wq\") pod \"a52d7dcc-1107-47d1-b270-0601e9dc2b1b\" (UID: \"a52d7dcc-1107-47d1-b270-0601e9dc2b1b\") " Feb 03 13:40:35 crc kubenswrapper[4820]: I0203 13:40:35.330015 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/a52d7dcc-1107-47d1-b270-0601e9dc2b1b-openstack-config-secret\") pod \"a52d7dcc-1107-47d1-b270-0601e9dc2b1b\" (UID: \"a52d7dcc-1107-47d1-b270-0601e9dc2b1b\") " Feb 03 13:40:35 crc kubenswrapper[4820]: I0203 13:40:35.330069 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/a52d7dcc-1107-47d1-b270-0601e9dc2b1b-test-operator-ephemeral-workdir\") pod \"a52d7dcc-1107-47d1-b270-0601e9dc2b1b\" (UID: \"a52d7dcc-1107-47d1-b270-0601e9dc2b1b\") " Feb 03 13:40:35 crc kubenswrapper[4820]: I0203 13:40:35.330103 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a52d7dcc-1107-47d1-b270-0601e9dc2b1b-config-data\") pod \"a52d7dcc-1107-47d1-b270-0601e9dc2b1b\" (UID: \"a52d7dcc-1107-47d1-b270-0601e9dc2b1b\") " Feb 03 13:40:35 crc kubenswrapper[4820]: I0203 13:40:35.330156 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/a52d7dcc-1107-47d1-b270-0601e9dc2b1b-test-operator-ephemeral-temporary\") pod \"a52d7dcc-1107-47d1-b270-0601e9dc2b1b\" (UID: \"a52d7dcc-1107-47d1-b270-0601e9dc2b1b\") " Feb 03 13:40:35 crc kubenswrapper[4820]: I0203 13:40:35.330193 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/a52d7dcc-1107-47d1-b270-0601e9dc2b1b-openstack-config\") pod \"a52d7dcc-1107-47d1-b270-0601e9dc2b1b\" (UID: \"a52d7dcc-1107-47d1-b270-0601e9dc2b1b\") " Feb 03 13:40:35 crc kubenswrapper[4820]: I0203 13:40:35.336968 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a52d7dcc-1107-47d1-b270-0601e9dc2b1b-kube-api-access-2f4wq" (OuterVolumeSpecName: 
"kube-api-access-2f4wq") pod "a52d7dcc-1107-47d1-b270-0601e9dc2b1b" (UID: "a52d7dcc-1107-47d1-b270-0601e9dc2b1b"). InnerVolumeSpecName "kube-api-access-2f4wq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:40:35 crc kubenswrapper[4820]: I0203 13:40:35.338860 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a52d7dcc-1107-47d1-b270-0601e9dc2b1b-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "a52d7dcc-1107-47d1-b270-0601e9dc2b1b" (UID: "a52d7dcc-1107-47d1-b270-0601e9dc2b1b"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 13:40:35 crc kubenswrapper[4820]: I0203 13:40:35.339252 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a52d7dcc-1107-47d1-b270-0601e9dc2b1b-config-data" (OuterVolumeSpecName: "config-data") pod "a52d7dcc-1107-47d1-b270-0601e9dc2b1b" (UID: "a52d7dcc-1107-47d1-b270-0601e9dc2b1b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 13:40:35 crc kubenswrapper[4820]: I0203 13:40:35.344070 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "test-operator-logs") pod "a52d7dcc-1107-47d1-b270-0601e9dc2b1b" (UID: "a52d7dcc-1107-47d1-b270-0601e9dc2b1b"). InnerVolumeSpecName "local-storage05-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 03 13:40:35 crc kubenswrapper[4820]: I0203 13:40:35.381597 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a52d7dcc-1107-47d1-b270-0601e9dc2b1b-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "a52d7dcc-1107-47d1-b270-0601e9dc2b1b" (UID: "a52d7dcc-1107-47d1-b270-0601e9dc2b1b"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:40:35 crc kubenswrapper[4820]: I0203 13:40:35.402019 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a52d7dcc-1107-47d1-b270-0601e9dc2b1b-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "a52d7dcc-1107-47d1-b270-0601e9dc2b1b" (UID: "a52d7dcc-1107-47d1-b270-0601e9dc2b1b"). InnerVolumeSpecName "openstack-config-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:40:35 crc kubenswrapper[4820]: I0203 13:40:35.606193 4820 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" " Feb 03 13:40:35 crc kubenswrapper[4820]: I0203 13:40:35.610077 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2f4wq\" (UniqueName: \"kubernetes.io/projected/a52d7dcc-1107-47d1-b270-0601e9dc2b1b-kube-api-access-2f4wq\") on node \"crc\" DevicePath \"\"" Feb 03 13:40:35 crc kubenswrapper[4820]: I0203 13:40:35.610115 4820 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/a52d7dcc-1107-47d1-b270-0601e9dc2b1b-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Feb 03 13:40:35 crc kubenswrapper[4820]: I0203 13:40:35.610126 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a52d7dcc-1107-47d1-b270-0601e9dc2b1b-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 13:40:35 crc kubenswrapper[4820]: I0203 13:40:35.610164 4820 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/a52d7dcc-1107-47d1-b270-0601e9dc2b1b-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Feb 03 13:40:35 crc kubenswrapper[4820]: I0203 13:40:35.610175 4820 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/a52d7dcc-1107-47d1-b270-0601e9dc2b1b-ca-certs\") on node \"crc\" DevicePath \"\"" Feb 03 13:40:35 crc kubenswrapper[4820]: I0203 13:40:35.616509 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a52d7dcc-1107-47d1-b270-0601e9dc2b1b-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "a52d7dcc-1107-47d1-b270-0601e9dc2b1b" (UID: "a52d7dcc-1107-47d1-b270-0601e9dc2b1b"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:40:35 crc kubenswrapper[4820]: I0203 13:40:35.621179 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a52d7dcc-1107-47d1-b270-0601e9dc2b1b-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "a52d7dcc-1107-47d1-b270-0601e9dc2b1b" (UID: "a52d7dcc-1107-47d1-b270-0601e9dc2b1b"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 13:40:35 crc kubenswrapper[4820]: I0203 13:40:35.621201 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a52d7dcc-1107-47d1-b270-0601e9dc2b1b-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "a52d7dcc-1107-47d1-b270-0601e9dc2b1b" (UID: "a52d7dcc-1107-47d1-b270-0601e9dc2b1b"). InnerVolumeSpecName "test-operator-ephemeral-workdir". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 13:40:35 crc kubenswrapper[4820]: I0203 13:40:35.631645 4820 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc" Feb 03 13:40:35 crc kubenswrapper[4820]: I0203 13:40:35.712081 4820 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/a52d7dcc-1107-47d1-b270-0601e9dc2b1b-openstack-config\") on node \"crc\" DevicePath \"\"" Feb 03 13:40:35 crc kubenswrapper[4820]: I0203 13:40:35.712129 4820 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a52d7dcc-1107-47d1-b270-0601e9dc2b1b-ssh-key\") on node \"crc\" DevicePath \"\"" Feb 03 13:40:35 crc kubenswrapper[4820]: I0203 13:40:35.712147 4820 reconciler_common.go:293] "Volume detached for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" DevicePath \"\"" Feb 03 13:40:35 crc kubenswrapper[4820]: I0203 13:40:35.712161 4820 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/a52d7dcc-1107-47d1-b270-0601e9dc2b1b-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Feb 03 13:40:35 crc kubenswrapper[4820]: I0203 13:40:35.912207 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"a52d7dcc-1107-47d1-b270-0601e9dc2b1b","Type":"ContainerDied","Data":"d44e57f0d0a177d690dceb39812d340ad88e959cc7a762ac253bb5230e006d7a"} Feb 03 13:40:35 crc kubenswrapper[4820]: I0203 13:40:35.912259 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d44e57f0d0a177d690dceb39812d340ad88e959cc7a762ac253bb5230e006d7a" Feb 03 13:40:35 crc kubenswrapper[4820]: I0203 13:40:35.912324 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 03 13:40:38 crc kubenswrapper[4820]: I0203 13:40:38.107623 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Feb 03 13:40:38 crc kubenswrapper[4820]: E0203 13:40:38.108765 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1b8fb20-9657-46ba-867f-e39fb6d21be6" containerName="extract-utilities" Feb 03 13:40:38 crc kubenswrapper[4820]: I0203 13:40:38.108785 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1b8fb20-9657-46ba-867f-e39fb6d21be6" containerName="extract-utilities" Feb 03 13:40:38 crc kubenswrapper[4820]: E0203 13:40:38.108812 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1b8fb20-9657-46ba-867f-e39fb6d21be6" containerName="extract-content" Feb 03 13:40:38 crc kubenswrapper[4820]: I0203 13:40:38.108821 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1b8fb20-9657-46ba-867f-e39fb6d21be6" containerName="extract-content" Feb 03 13:40:38 crc kubenswrapper[4820]: E0203 13:40:38.108843 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a52d7dcc-1107-47d1-b270-0601e9dc2b1b" containerName="tempest-tests-tempest-tests-runner" Feb 03 13:40:38 crc kubenswrapper[4820]: I0203 13:40:38.108851 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="a52d7dcc-1107-47d1-b270-0601e9dc2b1b" containerName="tempest-tests-tempest-tests-runner" Feb 03 13:40:38 crc kubenswrapper[4820]: E0203 13:40:38.108877 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1b8fb20-9657-46ba-867f-e39fb6d21be6" containerName="registry-server" Feb 03 13:40:38 crc kubenswrapper[4820]: I0203 13:40:38.108902 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1b8fb20-9657-46ba-867f-e39fb6d21be6" containerName="registry-server" Feb 03 13:40:38 crc kubenswrapper[4820]: I0203 13:40:38.109270 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="a52d7dcc-1107-47d1-b270-0601e9dc2b1b" containerName="tempest-tests-tempest-tests-runner" Feb 03 13:40:38 crc kubenswrapper[4820]: I0203 13:40:38.109304 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="d1b8fb20-9657-46ba-867f-e39fb6d21be6" containerName="registry-server" Feb 03 13:40:38 crc kubenswrapper[4820]: I0203 13:40:38.110479 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 03 13:40:38 crc kubenswrapper[4820]: I0203 13:40:38.115199 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-brtb9" Feb 03 13:40:38 crc kubenswrapper[4820]: I0203 13:40:38.121075 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Feb 03 13:40:38 crc kubenswrapper[4820]: I0203 13:40:38.281952 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"9f3d9ead-7790-4cbb-a70c-51aa29d87eef\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 03 13:40:38 crc kubenswrapper[4820]: I0203 13:40:38.282324 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sbwd7\" (UniqueName: \"kubernetes.io/projected/9f3d9ead-7790-4cbb-a70c-51aa29d87eef-kube-api-access-sbwd7\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"9f3d9ead-7790-4cbb-a70c-51aa29d87eef\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 03 13:40:38 crc kubenswrapper[4820]: I0203 13:40:38.384774 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"9f3d9ead-7790-4cbb-a70c-51aa29d87eef\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 03 13:40:38 crc kubenswrapper[4820]: I0203 13:40:38.385260 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sbwd7\" (UniqueName: \"kubernetes.io/projected/9f3d9ead-7790-4cbb-a70c-51aa29d87eef-kube-api-access-sbwd7\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"9f3d9ead-7790-4cbb-a70c-51aa29d87eef\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 03 13:40:38 crc kubenswrapper[4820]: I0203 13:40:38.402596 4820 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"9f3d9ead-7790-4cbb-a70c-51aa29d87eef\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 03 13:40:38 crc kubenswrapper[4820]: I0203 13:40:38.427574 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sbwd7\" (UniqueName: \"kubernetes.io/projected/9f3d9ead-7790-4cbb-a70c-51aa29d87eef-kube-api-access-sbwd7\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"9f3d9ead-7790-4cbb-a70c-51aa29d87eef\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 03 13:40:38 crc kubenswrapper[4820]: I0203 13:40:38.446592 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"9f3d9ead-7790-4cbb-a70c-51aa29d87eef\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 03 13:40:38 crc 
kubenswrapper[4820]: I0203 13:40:38.736302 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 03 13:40:39 crc kubenswrapper[4820]: I0203 13:40:39.268410 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Feb 03 13:40:39 crc kubenswrapper[4820]: W0203 13:40:39.278318 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9f3d9ead_7790_4cbb_a70c_51aa29d87eef.slice/crio-7a9e846b8be8bc507517b23aefd774237e68b768824f4136de136a058e43bf58 WatchSource:0}: Error finding container 7a9e846b8be8bc507517b23aefd774237e68b768824f4136de136a058e43bf58: Status 404 returned error can't find the container with id 7a9e846b8be8bc507517b23aefd774237e68b768824f4136de136a058e43bf58 Feb 03 13:40:40 crc kubenswrapper[4820]: I0203 13:40:40.013497 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"9f3d9ead-7790-4cbb-a70c-51aa29d87eef","Type":"ContainerStarted","Data":"7a9e846b8be8bc507517b23aefd774237e68b768824f4136de136a058e43bf58"} Feb 03 13:40:41 crc kubenswrapper[4820]: I0203 13:40:41.198808 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"9f3d9ead-7790-4cbb-a70c-51aa29d87eef","Type":"ContainerStarted","Data":"fed0767cfb364a14258980e30881a4cea1bebc0d663ce04215494ab4a5b83861"} Feb 03 13:41:01 crc kubenswrapper[4820]: I0203 13:41:01.365176 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qj7xr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 13:41:01 crc kubenswrapper[4820]: I0203 13:41:01.365824 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 13:41:02 crc kubenswrapper[4820]: I0203 13:41:02.816552 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=23.869154401 podStartE2EDuration="24.816517907s" podCreationTimestamp="2026-02-03 13:40:38 +0000 UTC" firstStartedPulling="2026-02-03 13:40:39.281973866 +0000 UTC m=+5756.805049760" lastFinishedPulling="2026-02-03 13:40:40.229337402 +0000 UTC m=+5757.752413266" observedRunningTime="2026-02-03 13:40:41.209281985 +0000 UTC m=+5758.732357849" watchObservedRunningTime="2026-02-03 13:41:02.816517907 +0000 UTC m=+5780.339593771" Feb 03 13:41:02 crc kubenswrapper[4820]: I0203 13:41:02.818540 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-gr579/must-gather-jbg9l"] Feb 03 13:41:02 crc kubenswrapper[4820]: I0203 13:41:02.820375 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-gr579/must-gather-jbg9l" Feb 03 13:41:02 crc kubenswrapper[4820]: I0203 13:41:02.826537 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-gr579"/"openshift-service-ca.crt" Feb 03 13:41:02 crc kubenswrapper[4820]: I0203 13:41:02.826733 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-gr579"/"kube-root-ca.crt" Feb 03 13:41:02 crc kubenswrapper[4820]: I0203 13:41:02.890319 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-gr579/must-gather-jbg9l"] Feb 03 13:41:03 crc kubenswrapper[4820]: I0203 13:41:03.005662 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nt4q8\" (UniqueName: \"kubernetes.io/projected/f283b2fc-d781-41fe-a1c4-c5292263d7d6-kube-api-access-nt4q8\") pod \"must-gather-jbg9l\" (UID: \"f283b2fc-d781-41fe-a1c4-c5292263d7d6\") " pod="openshift-must-gather-gr579/must-gather-jbg9l" Feb 03 13:41:03 crc kubenswrapper[4820]: I0203 13:41:03.006259 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/f283b2fc-d781-41fe-a1c4-c5292263d7d6-must-gather-output\") pod \"must-gather-jbg9l\" (UID: \"f283b2fc-d781-41fe-a1c4-c5292263d7d6\") " pod="openshift-must-gather-gr579/must-gather-jbg9l" Feb 03 13:41:03 crc kubenswrapper[4820]: I0203 13:41:03.108205 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/f283b2fc-d781-41fe-a1c4-c5292263d7d6-must-gather-output\") pod \"must-gather-jbg9l\" (UID: \"f283b2fc-d781-41fe-a1c4-c5292263d7d6\") " pod="openshift-must-gather-gr579/must-gather-jbg9l" Feb 03 13:41:03 crc kubenswrapper[4820]: I0203 13:41:03.108309 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nt4q8\" (UniqueName: \"kubernetes.io/projected/f283b2fc-d781-41fe-a1c4-c5292263d7d6-kube-api-access-nt4q8\") pod \"must-gather-jbg9l\" (UID: \"f283b2fc-d781-41fe-a1c4-c5292263d7d6\") " pod="openshift-must-gather-gr579/must-gather-jbg9l" Feb 03 13:41:03 crc kubenswrapper[4820]: I0203 13:41:03.108861 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/f283b2fc-d781-41fe-a1c4-c5292263d7d6-must-gather-output\") pod \"must-gather-jbg9l\" (UID: \"f283b2fc-d781-41fe-a1c4-c5292263d7d6\") " pod="openshift-must-gather-gr579/must-gather-jbg9l" Feb 03 13:41:03 crc kubenswrapper[4820]: I0203 13:41:03.130118 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nt4q8\" (UniqueName: \"kubernetes.io/projected/f283b2fc-d781-41fe-a1c4-c5292263d7d6-kube-api-access-nt4q8\") pod \"must-gather-jbg9l\" (UID: \"f283b2fc-d781-41fe-a1c4-c5292263d7d6\") " pod="openshift-must-gather-gr579/must-gather-jbg9l" Feb 03 13:41:03 crc kubenswrapper[4820]: I0203 13:41:03.165137 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-gr579/must-gather-jbg9l" Feb 03 13:41:03 crc kubenswrapper[4820]: I0203 13:41:03.693988 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-gr579/must-gather-jbg9l"] Feb 03 13:41:03 crc kubenswrapper[4820]: W0203 13:41:03.701371 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf283b2fc_d781_41fe_a1c4_c5292263d7d6.slice/crio-c0af3af0f70bab017924bee961c0999a96c6452752bc77f372739abd359b5661 WatchSource:0}: Error finding container c0af3af0f70bab017924bee961c0999a96c6452752bc77f372739abd359b5661: Status 404 returned error can't find the container with id c0af3af0f70bab017924bee961c0999a96c6452752bc77f372739abd359b5661 Feb 03 13:41:04 crc kubenswrapper[4820]: I0203 13:41:04.482867 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-gr579/must-gather-jbg9l" event={"ID":"f283b2fc-d781-41fe-a1c4-c5292263d7d6","Type":"ContainerStarted","Data":"c0af3af0f70bab017924bee961c0999a96c6452752bc77f372739abd359b5661"} Feb 03 13:41:08 crc kubenswrapper[4820]: I0203 13:41:08.536560 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-gr579/must-gather-jbg9l" event={"ID":"f283b2fc-d781-41fe-a1c4-c5292263d7d6","Type":"ContainerStarted","Data":"4a995ac8d34acdc3c75578cb20000ead2643c11a46b2c951310ba3c7ecf412a5"} Feb 03 13:41:09 crc kubenswrapper[4820]: I0203 13:41:09.547416 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-gr579/must-gather-jbg9l" event={"ID":"f283b2fc-d781-41fe-a1c4-c5292263d7d6","Type":"ContainerStarted","Data":"2056709d07e93f5108e412723dafb8578771088e2eae937a46eb87961321fb0e"} Feb 03 13:41:09 crc kubenswrapper[4820]: I0203 13:41:09.566754 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-gr579/must-gather-jbg9l" podStartSLOduration=3.258833659 podStartE2EDuration="7.566737716s" podCreationTimestamp="2026-02-03 13:41:02 +0000 UTC" firstStartedPulling="2026-02-03 13:41:03.704250076 +0000 UTC m=+5781.227325940" lastFinishedPulling="2026-02-03 13:41:08.012154093 +0000 UTC m=+5785.535229997" observedRunningTime="2026-02-03 13:41:09.564360072 +0000 UTC m=+5787.087435946" watchObservedRunningTime="2026-02-03 13:41:09.566737716 +0000 UTC m=+5787.089813570" Feb 03 13:41:11 crc kubenswrapper[4820]: I0203 13:41:11.599705 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-ndckd"] Feb 03 13:41:11 crc kubenswrapper[4820]: I0203 13:41:11.630914 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ndckd"] Feb 03 13:41:11 crc kubenswrapper[4820]: I0203 13:41:11.631038 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ndckd" Feb 03 13:41:11 crc kubenswrapper[4820]: I0203 13:41:11.747392 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f678c4c-076f-4011-860e-98d658b6484f-utilities\") pod \"redhat-marketplace-ndckd\" (UID: \"9f678c4c-076f-4011-860e-98d658b6484f\") " pod="openshift-marketplace/redhat-marketplace-ndckd" Feb 03 13:41:11 crc kubenswrapper[4820]: I0203 13:41:11.747474 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f678c4c-076f-4011-860e-98d658b6484f-catalog-content\") pod \"redhat-marketplace-ndckd\" (UID: \"9f678c4c-076f-4011-860e-98d658b6484f\") " pod="openshift-marketplace/redhat-marketplace-ndckd" Feb 03 13:41:11 crc kubenswrapper[4820]: I0203 13:41:11.747963 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nfmdl\" (UniqueName: \"kubernetes.io/projected/9f678c4c-076f-4011-860e-98d658b6484f-kube-api-access-nfmdl\") pod \"redhat-marketplace-ndckd\" (UID: \"9f678c4c-076f-4011-860e-98d658b6484f\") " pod="openshift-marketplace/redhat-marketplace-ndckd" Feb 03 13:41:11 crc kubenswrapper[4820]: I0203 13:41:11.850523 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nfmdl\" (UniqueName: \"kubernetes.io/projected/9f678c4c-076f-4011-860e-98d658b6484f-kube-api-access-nfmdl\") pod \"redhat-marketplace-ndckd\" (UID: \"9f678c4c-076f-4011-860e-98d658b6484f\") " pod="openshift-marketplace/redhat-marketplace-ndckd" Feb 03 13:41:11 crc kubenswrapper[4820]: I0203 13:41:11.850689 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f678c4c-076f-4011-860e-98d658b6484f-utilities\") pod \"redhat-marketplace-ndckd\" (UID: \"9f678c4c-076f-4011-860e-98d658b6484f\") " pod="openshift-marketplace/redhat-marketplace-ndckd" Feb 03 13:41:11 crc kubenswrapper[4820]: I0203 13:41:11.850725 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f678c4c-076f-4011-860e-98d658b6484f-catalog-content\") pod \"redhat-marketplace-ndckd\" (UID: \"9f678c4c-076f-4011-860e-98d658b6484f\") " pod="openshift-marketplace/redhat-marketplace-ndckd" Feb 03 13:41:11 crc kubenswrapper[4820]: I0203 13:41:11.851802 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f678c4c-076f-4011-860e-98d658b6484f-catalog-content\") pod \"redhat-marketplace-ndckd\" (UID: \"9f678c4c-076f-4011-860e-98d658b6484f\") " pod="openshift-marketplace/redhat-marketplace-ndckd" Feb 03 13:41:11 crc kubenswrapper[4820]: I0203 13:41:11.851820 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f678c4c-076f-4011-860e-98d658b6484f-utilities\") pod \"redhat-marketplace-ndckd\" (UID: \"9f678c4c-076f-4011-860e-98d658b6484f\") " pod="openshift-marketplace/redhat-marketplace-ndckd" Feb 03 13:41:11 crc kubenswrapper[4820]: I0203 13:41:11.874741 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nfmdl\" (UniqueName: \"kubernetes.io/projected/9f678c4c-076f-4011-860e-98d658b6484f-kube-api-access-nfmdl\") pod 
\"redhat-marketplace-ndckd\" (UID: \"9f678c4c-076f-4011-860e-98d658b6484f\") " pod="openshift-marketplace/redhat-marketplace-ndckd" Feb 03 13:41:11 crc kubenswrapper[4820]: I0203 13:41:11.971221 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ndckd" Feb 03 13:41:13 crc kubenswrapper[4820]: I0203 13:41:13.142654 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ndckd"] Feb 03 13:41:13 crc kubenswrapper[4820]: I0203 13:41:13.596407 4820 generic.go:334] "Generic (PLEG): container finished" podID="9f678c4c-076f-4011-860e-98d658b6484f" containerID="2ad1c4601ffa240c29f19b3f556cd606bc6b7f12fb64e58a2365260c62b3ee6d" exitCode=0 Feb 03 13:41:13 crc kubenswrapper[4820]: I0203 13:41:13.596516 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ndckd" event={"ID":"9f678c4c-076f-4011-860e-98d658b6484f","Type":"ContainerDied","Data":"2ad1c4601ffa240c29f19b3f556cd606bc6b7f12fb64e58a2365260c62b3ee6d"} Feb 03 13:41:13 crc kubenswrapper[4820]: I0203 13:41:13.596726 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ndckd" event={"ID":"9f678c4c-076f-4011-860e-98d658b6484f","Type":"ContainerStarted","Data":"9ce2257ad55fbfadbdce66c49561a8db4e7301cae9f6badb2991fed126fa67e8"} Feb 03 13:41:13 crc kubenswrapper[4820]: I0203 13:41:13.892119 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-gr579/crc-debug-rw72n"] Feb 03 13:41:13 crc kubenswrapper[4820]: I0203 13:41:13.893878 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-gr579/crc-debug-rw72n" Feb 03 13:41:13 crc kubenswrapper[4820]: I0203 13:41:13.901809 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-gr579"/"default-dockercfg-bfr7f" Feb 03 13:41:14 crc kubenswrapper[4820]: I0203 13:41:14.010883 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/76eab5e1-d01e-4718-8149-fb1e0150fb82-host\") pod \"crc-debug-rw72n\" (UID: \"76eab5e1-d01e-4718-8149-fb1e0150fb82\") " pod="openshift-must-gather-gr579/crc-debug-rw72n" Feb 03 13:41:14 crc kubenswrapper[4820]: I0203 13:41:14.011572 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zm4w6\" (UniqueName: \"kubernetes.io/projected/76eab5e1-d01e-4718-8149-fb1e0150fb82-kube-api-access-zm4w6\") pod \"crc-debug-rw72n\" (UID: \"76eab5e1-d01e-4718-8149-fb1e0150fb82\") " pod="openshift-must-gather-gr579/crc-debug-rw72n" Feb 03 13:41:14 crc kubenswrapper[4820]: I0203 13:41:14.113982 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zm4w6\" (UniqueName: \"kubernetes.io/projected/76eab5e1-d01e-4718-8149-fb1e0150fb82-kube-api-access-zm4w6\") pod \"crc-debug-rw72n\" (UID: \"76eab5e1-d01e-4718-8149-fb1e0150fb82\") " pod="openshift-must-gather-gr579/crc-debug-rw72n" Feb 03 13:41:14 crc kubenswrapper[4820]: I0203 13:41:14.114052 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/76eab5e1-d01e-4718-8149-fb1e0150fb82-host\") pod \"crc-debug-rw72n\" (UID: \"76eab5e1-d01e-4718-8149-fb1e0150fb82\") " pod="openshift-must-gather-gr579/crc-debug-rw72n" Feb 03 13:41:14 crc kubenswrapper[4820]: I0203 13:41:14.114335 4820 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/76eab5e1-d01e-4718-8149-fb1e0150fb82-host\") pod \"crc-debug-rw72n\" (UID: \"76eab5e1-d01e-4718-8149-fb1e0150fb82\") " pod="openshift-must-gather-gr579/crc-debug-rw72n" Feb 03 13:41:14 crc kubenswrapper[4820]: I0203 13:41:14.155760 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zm4w6\" (UniqueName: \"kubernetes.io/projected/76eab5e1-d01e-4718-8149-fb1e0150fb82-kube-api-access-zm4w6\") pod \"crc-debug-rw72n\" (UID: \"76eab5e1-d01e-4718-8149-fb1e0150fb82\") " pod="openshift-must-gather-gr579/crc-debug-rw72n" Feb 03 13:41:14 crc kubenswrapper[4820]: I0203 13:41:14.219408 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-gr579/crc-debug-rw72n" Feb 03 13:41:14 crc kubenswrapper[4820]: W0203 13:41:14.263411 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod76eab5e1_d01e_4718_8149_fb1e0150fb82.slice/crio-c2a68c91e40c29c296b021aea397e3a74da133ef15b5eb366730dc447f69969e WatchSource:0}: Error finding container c2a68c91e40c29c296b021aea397e3a74da133ef15b5eb366730dc447f69969e: Status 404 returned error can't find the container with id c2a68c91e40c29c296b021aea397e3a74da133ef15b5eb366730dc447f69969e Feb 03 13:41:14 crc kubenswrapper[4820]: I0203 13:41:14.607437 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-gr579/crc-debug-rw72n" event={"ID":"76eab5e1-d01e-4718-8149-fb1e0150fb82","Type":"ContainerStarted","Data":"c2a68c91e40c29c296b021aea397e3a74da133ef15b5eb366730dc447f69969e"} Feb 03 13:41:15 crc kubenswrapper[4820]: I0203 13:41:15.623274 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ndckd" event={"ID":"9f678c4c-076f-4011-860e-98d658b6484f","Type":"ContainerStarted","Data":"33b3bc9948a8e1d01826c4f9be4fd1fced3cdd725276e633ecd3c3475ec67b45"} Feb 03 13:41:16 crc kubenswrapper[4820]: I0203 13:41:16.634558 4820 generic.go:334] "Generic (PLEG): container finished" podID="9f678c4c-076f-4011-860e-98d658b6484f" containerID="33b3bc9948a8e1d01826c4f9be4fd1fced3cdd725276e633ecd3c3475ec67b45" exitCode=0 Feb 03 13:41:16 crc kubenswrapper[4820]: I0203 13:41:16.634998 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ndckd" event={"ID":"9f678c4c-076f-4011-860e-98d658b6484f","Type":"ContainerDied","Data":"33b3bc9948a8e1d01826c4f9be4fd1fced3cdd725276e633ecd3c3475ec67b45"} Feb 03 13:41:17 crc kubenswrapper[4820]: I0203 13:41:17.651228 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ndckd" event={"ID":"9f678c4c-076f-4011-860e-98d658b6484f","Type":"ContainerStarted","Data":"e3b2304a8347f5604d6bcf77b15e5d81b900eb1345257f1c9d0ea02993031a73"} Feb 03 13:41:17 crc kubenswrapper[4820]: I0203 13:41:17.670584 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-ndckd" podStartSLOduration=3.064134447 podStartE2EDuration="6.670559122s" podCreationTimestamp="2026-02-03 13:41:11 +0000 UTC" firstStartedPulling="2026-02-03 13:41:13.598164775 +0000 UTC m=+5791.121240639" lastFinishedPulling="2026-02-03 13:41:17.20458945 +0000 UTC m=+5794.727665314" observedRunningTime="2026-02-03 13:41:17.669244527 +0000 UTC m=+5795.192320401" watchObservedRunningTime="2026-02-03 13:41:17.670559122 +0000 
UTC m=+5795.193634986" Feb 03 13:41:21 crc kubenswrapper[4820]: I0203 13:41:21.972025 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-ndckd" Feb 03 13:41:21 crc kubenswrapper[4820]: I0203 13:41:21.972612 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-ndckd" Feb 03 13:41:23 crc kubenswrapper[4820]: I0203 13:41:23.039600 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-ndckd" podUID="9f678c4c-076f-4011-860e-98d658b6484f" containerName="registry-server" probeResult="failure" output=< Feb 03 13:41:23 crc kubenswrapper[4820]: timeout: failed to connect service ":50051" within 1s Feb 03 13:41:23 crc kubenswrapper[4820]: > Feb 03 13:41:26 crc kubenswrapper[4820]: I0203 13:41:26.740330 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-gr579/crc-debug-rw72n" event={"ID":"76eab5e1-d01e-4718-8149-fb1e0150fb82","Type":"ContainerStarted","Data":"f082e366f8b604cd9b7ebcb23f8ef2298dc3a62405a760c692b8f3f2031a871c"} Feb 03 13:41:26 crc kubenswrapper[4820]: I0203 13:41:26.759780 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-gr579/crc-debug-rw72n" podStartSLOduration=2.03135931 podStartE2EDuration="13.759761949s" podCreationTimestamp="2026-02-03 13:41:13 +0000 UTC" firstStartedPulling="2026-02-03 13:41:14.267474283 +0000 UTC m=+5791.790550147" lastFinishedPulling="2026-02-03 13:41:25.995876912 +0000 UTC m=+5803.518952786" observedRunningTime="2026-02-03 13:41:26.756699587 +0000 UTC m=+5804.279775451" watchObservedRunningTime="2026-02-03 13:41:26.759761949 +0000 UTC m=+5804.282837813" Feb 03 13:41:31 crc kubenswrapper[4820]: I0203 13:41:31.365168 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qj7xr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 13:41:31 crc kubenswrapper[4820]: I0203 13:41:31.365749 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 13:41:31 crc kubenswrapper[4820]: I0203 13:41:31.365811 4820 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" Feb 03 13:41:31 crc kubenswrapper[4820]: I0203 13:41:31.366810 4820 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ce5b3f4ddc52fde90b7cdab55ef15a30de9daf09fd19f6b7818c0bdb8e032e6a"} pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 03 13:41:31 crc kubenswrapper[4820]: I0203 13:41:31.366883 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" containerName="machine-config-daemon" containerID="cri-o://ce5b3f4ddc52fde90b7cdab55ef15a30de9daf09fd19f6b7818c0bdb8e032e6a" gracePeriod=600 Feb 
Feb 03 13:41:31 crc kubenswrapper[4820]: E0203 13:41:31.973739 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qj7xr_openshift-machine-config-operator(2c02def6-29f2-448e-80ec-0c8ee290f053)\"" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053"
Feb 03 13:41:32 crc kubenswrapper[4820]: I0203 13:41:32.294552 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-ndckd"
Feb 03 13:41:32 crc kubenswrapper[4820]: I0203 13:41:32.387273 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-ndckd"
Feb 03 13:41:32 crc kubenswrapper[4820]: I0203 13:41:32.883277 4820 generic.go:334] "Generic (PLEG): container finished" podID="2c02def6-29f2-448e-80ec-0c8ee290f053" containerID="ce5b3f4ddc52fde90b7cdab55ef15a30de9daf09fd19f6b7818c0bdb8e032e6a" exitCode=0
Feb 03 13:41:32 crc kubenswrapper[4820]: I0203 13:41:32.883349 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" event={"ID":"2c02def6-29f2-448e-80ec-0c8ee290f053","Type":"ContainerDied","Data":"ce5b3f4ddc52fde90b7cdab55ef15a30de9daf09fd19f6b7818c0bdb8e032e6a"}
Feb 03 13:41:32 crc kubenswrapper[4820]: I0203 13:41:32.883401 4820 scope.go:117] "RemoveContainer" containerID="96a8f441702c95476ee43f5c9515fcaa27b65c88d6d6e74c633083ade42e9416"
Feb 03 13:41:32 crc kubenswrapper[4820]: I0203 13:41:32.885388 4820 scope.go:117] "RemoveContainer" containerID="ce5b3f4ddc52fde90b7cdab55ef15a30de9daf09fd19f6b7818c0bdb8e032e6a"
Feb 03 13:41:32 crc kubenswrapper[4820]: E0203 13:41:32.885799 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qj7xr_openshift-machine-config-operator(2c02def6-29f2-448e-80ec-0c8ee290f053)\"" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053"
Feb 03 13:41:33 crc kubenswrapper[4820]: I0203 13:41:33.344772 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-ndckd"]
Feb 03 13:41:33 crc kubenswrapper[4820]: I0203 13:41:33.904529 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-ndckd" podUID="9f678c4c-076f-4011-860e-98d658b6484f" containerName="registry-server" containerID="cri-o://e3b2304a8347f5604d6bcf77b15e5d81b900eb1345257f1c9d0ea02993031a73" gracePeriod=2
Feb 03 13:41:34 crc kubenswrapper[4820]: I0203 13:41:34.916847 4820 generic.go:334] "Generic (PLEG): container finished" podID="9f678c4c-076f-4011-860e-98d658b6484f" containerID="e3b2304a8347f5604d6bcf77b15e5d81b900eb1345257f1c9d0ea02993031a73" exitCode=0
Feb 03 13:41:34 crc kubenswrapper[4820]: I0203 13:41:34.916929 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ndckd" event={"ID":"9f678c4c-076f-4011-860e-98d658b6484f","Type":"ContainerDied","Data":"e3b2304a8347f5604d6bcf77b15e5d81b900eb1345257f1c9d0ea02993031a73"}
Feb 03 13:41:40 crc kubenswrapper[4820]: I0203 13:41:40.898149 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ndckd"
Feb 03 13:41:41 crc kubenswrapper[4820]: I0203 13:41:41.009246 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f678c4c-076f-4011-860e-98d658b6484f-catalog-content\") pod \"9f678c4c-076f-4011-860e-98d658b6484f\" (UID: \"9f678c4c-076f-4011-860e-98d658b6484f\") "
Feb 03 13:41:41 crc kubenswrapper[4820]: I0203 13:41:41.009556 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nfmdl\" (UniqueName: \"kubernetes.io/projected/9f678c4c-076f-4011-860e-98d658b6484f-kube-api-access-nfmdl\") pod \"9f678c4c-076f-4011-860e-98d658b6484f\" (UID: \"9f678c4c-076f-4011-860e-98d658b6484f\") "
Feb 03 13:41:41 crc kubenswrapper[4820]: I0203 13:41:41.009596 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f678c4c-076f-4011-860e-98d658b6484f-utilities\") pod \"9f678c4c-076f-4011-860e-98d658b6484f\" (UID: \"9f678c4c-076f-4011-860e-98d658b6484f\") "
Feb 03 13:41:41 crc kubenswrapper[4820]: I0203 13:41:41.011393 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9f678c4c-076f-4011-860e-98d658b6484f-utilities" (OuterVolumeSpecName: "utilities") pod "9f678c4c-076f-4011-860e-98d658b6484f" (UID: "9f678c4c-076f-4011-860e-98d658b6484f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 03 13:41:41 crc kubenswrapper[4820]: I0203 13:41:41.023300 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f678c4c-076f-4011-860e-98d658b6484f-kube-api-access-nfmdl" (OuterVolumeSpecName: "kube-api-access-nfmdl") pod "9f678c4c-076f-4011-860e-98d658b6484f" (UID: "9f678c4c-076f-4011-860e-98d658b6484f"). InnerVolumeSpecName "kube-api-access-nfmdl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 13:41:41 crc kubenswrapper[4820]: I0203 13:41:41.027989 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9f678c4c-076f-4011-860e-98d658b6484f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9f678c4c-076f-4011-860e-98d658b6484f" (UID: "9f678c4c-076f-4011-860e-98d658b6484f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 03 13:41:41 crc kubenswrapper[4820]: I0203 13:41:41.075144 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ndckd" event={"ID":"9f678c4c-076f-4011-860e-98d658b6484f","Type":"ContainerDied","Data":"9ce2257ad55fbfadbdce66c49561a8db4e7301cae9f6badb2991fed126fa67e8"}
Feb 03 13:41:41 crc kubenswrapper[4820]: I0203 13:41:41.075212 4820 scope.go:117] "RemoveContainer" containerID="e3b2304a8347f5604d6bcf77b15e5d81b900eb1345257f1c9d0ea02993031a73"
Feb 03 13:41:41 crc kubenswrapper[4820]: I0203 13:41:41.075414 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ndckd"
Feb 03 13:41:41 crc kubenswrapper[4820]: I0203 13:41:41.106748 4820 scope.go:117] "RemoveContainer" containerID="33b3bc9948a8e1d01826c4f9be4fd1fced3cdd725276e633ecd3c3475ec67b45"
Feb 03 13:41:41 crc kubenswrapper[4820]: I0203 13:41:41.114174 4820 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f678c4c-076f-4011-860e-98d658b6484f-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 03 13:41:41 crc kubenswrapper[4820]: I0203 13:41:41.114205 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nfmdl\" (UniqueName: \"kubernetes.io/projected/9f678c4c-076f-4011-860e-98d658b6484f-kube-api-access-nfmdl\") on node \"crc\" DevicePath \"\""
Feb 03 13:41:41 crc kubenswrapper[4820]: I0203 13:41:41.114221 4820 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f678c4c-076f-4011-860e-98d658b6484f-utilities\") on node \"crc\" DevicePath \"\""
Feb 03 13:41:41 crc kubenswrapper[4820]: I0203 13:41:41.117473 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-ndckd"]
Feb 03 13:41:41 crc kubenswrapper[4820]: I0203 13:41:41.129188 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-ndckd"]
Feb 03 13:41:41 crc kubenswrapper[4820]: I0203 13:41:41.137719 4820 scope.go:117] "RemoveContainer" containerID="2ad1c4601ffa240c29f19b3f556cd606bc6b7f12fb64e58a2365260c62b3ee6d"
Feb 03 13:41:41 crc kubenswrapper[4820]: I0203 13:41:41.163382 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f678c4c-076f-4011-860e-98d658b6484f" path="/var/lib/kubelet/pods/9f678c4c-076f-4011-860e-98d658b6484f/volumes"
Feb 03 13:41:45 crc kubenswrapper[4820]: I0203 13:41:45.143881 4820 scope.go:117] "RemoveContainer" containerID="ce5b3f4ddc52fde90b7cdab55ef15a30de9daf09fd19f6b7818c0bdb8e032e6a"
Feb 03 13:41:45 crc kubenswrapper[4820]: E0203 13:41:45.145179 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qj7xr_openshift-machine-config-operator(2c02def6-29f2-448e-80ec-0c8ee290f053)\"" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053"
Feb 03 13:41:59 crc kubenswrapper[4820]: I0203 13:41:59.144818 4820 scope.go:117] "RemoveContainer" containerID="ce5b3f4ddc52fde90b7cdab55ef15a30de9daf09fd19f6b7818c0bdb8e032e6a"
Feb 03 13:41:59 crc kubenswrapper[4820]: E0203 13:41:59.145492 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qj7xr_openshift-machine-config-operator(2c02def6-29f2-448e-80ec-0c8ee290f053)\"" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053"
Feb 03 13:42:14 crc kubenswrapper[4820]: I0203 13:42:14.142448 4820 scope.go:117] "RemoveContainer" containerID="ce5b3f4ddc52fde90b7cdab55ef15a30de9daf09fd19f6b7818c0bdb8e032e6a"
Feb 03 13:42:14 crc kubenswrapper[4820]: E0203 13:42:14.143254 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qj7xr_openshift-machine-config-operator(2c02def6-29f2-448e-80ec-0c8ee290f053)\"" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053"
Feb 03 13:42:24 crc kubenswrapper[4820]: I0203 13:42:24.552482 4820 generic.go:334] "Generic (PLEG): container finished" podID="76eab5e1-d01e-4718-8149-fb1e0150fb82" containerID="f082e366f8b604cd9b7ebcb23f8ef2298dc3a62405a760c692b8f3f2031a871c" exitCode=0
Feb 03 13:42:24 crc kubenswrapper[4820]: I0203 13:42:24.552669 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-gr579/crc-debug-rw72n" event={"ID":"76eab5e1-d01e-4718-8149-fb1e0150fb82","Type":"ContainerDied","Data":"f082e366f8b604cd9b7ebcb23f8ef2298dc3a62405a760c692b8f3f2031a871c"}
Feb 03 13:42:25 crc kubenswrapper[4820]: I0203 13:42:25.709405 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-gr579/crc-debug-rw72n"
Feb 03 13:42:25 crc kubenswrapper[4820]: I0203 13:42:25.761949 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-gr579/crc-debug-rw72n"]
Feb 03 13:42:25 crc kubenswrapper[4820]: I0203 13:42:25.769025 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-gr579/crc-debug-rw72n"]
Feb 03 13:42:25 crc kubenswrapper[4820]: I0203 13:42:25.857239 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zm4w6\" (UniqueName: \"kubernetes.io/projected/76eab5e1-d01e-4718-8149-fb1e0150fb82-kube-api-access-zm4w6\") pod \"76eab5e1-d01e-4718-8149-fb1e0150fb82\" (UID: \"76eab5e1-d01e-4718-8149-fb1e0150fb82\") "
Feb 03 13:42:25 crc kubenswrapper[4820]: I0203 13:42:25.857359 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/76eab5e1-d01e-4718-8149-fb1e0150fb82-host\") pod \"76eab5e1-d01e-4718-8149-fb1e0150fb82\" (UID: \"76eab5e1-d01e-4718-8149-fb1e0150fb82\") "
Feb 03 13:42:25 crc kubenswrapper[4820]: I0203 13:42:25.857468 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/76eab5e1-d01e-4718-8149-fb1e0150fb82-host" (OuterVolumeSpecName: "host") pod "76eab5e1-d01e-4718-8149-fb1e0150fb82" (UID: "76eab5e1-d01e-4718-8149-fb1e0150fb82"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 03 13:42:25 crc kubenswrapper[4820]: I0203 13:42:25.858094 4820 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/76eab5e1-d01e-4718-8149-fb1e0150fb82-host\") on node \"crc\" DevicePath \"\""
Feb 03 13:42:25 crc kubenswrapper[4820]: I0203 13:42:25.863503 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76eab5e1-d01e-4718-8149-fb1e0150fb82-kube-api-access-zm4w6" (OuterVolumeSpecName: "kube-api-access-zm4w6") pod "76eab5e1-d01e-4718-8149-fb1e0150fb82" (UID: "76eab5e1-d01e-4718-8149-fb1e0150fb82"). InnerVolumeSpecName "kube-api-access-zm4w6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 13:42:25 crc kubenswrapper[4820]: I0203 13:42:25.960542 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zm4w6\" (UniqueName: \"kubernetes.io/projected/76eab5e1-d01e-4718-8149-fb1e0150fb82-kube-api-access-zm4w6\") on node \"crc\" DevicePath \"\""
Feb 03 13:42:26 crc kubenswrapper[4820]: I0203 13:42:26.593175 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c2a68c91e40c29c296b021aea397e3a74da133ef15b5eb366730dc447f69969e"
Feb 03 13:42:26 crc kubenswrapper[4820]: I0203 13:42:26.593312 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-gr579/crc-debug-rw72n"
Feb 03 13:42:27 crc kubenswrapper[4820]: I0203 13:42:27.002881 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-gr579/crc-debug-nf6dt"]
Feb 03 13:42:27 crc kubenswrapper[4820]: E0203 13:42:27.003560 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76eab5e1-d01e-4718-8149-fb1e0150fb82" containerName="container-00"
Feb 03 13:42:27 crc kubenswrapper[4820]: I0203 13:42:27.003578 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="76eab5e1-d01e-4718-8149-fb1e0150fb82" containerName="container-00"
Feb 03 13:42:27 crc kubenswrapper[4820]: E0203 13:42:27.003609 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f678c4c-076f-4011-860e-98d658b6484f" containerName="registry-server"
Feb 03 13:42:27 crc kubenswrapper[4820]: I0203 13:42:27.003620 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f678c4c-076f-4011-860e-98d658b6484f" containerName="registry-server"
Feb 03 13:42:27 crc kubenswrapper[4820]: E0203 13:42:27.003671 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f678c4c-076f-4011-860e-98d658b6484f" containerName="extract-content"
Feb 03 13:42:27 crc kubenswrapper[4820]: I0203 13:42:27.003684 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f678c4c-076f-4011-860e-98d658b6484f" containerName="extract-content"
Feb 03 13:42:27 crc kubenswrapper[4820]: E0203 13:42:27.003711 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f678c4c-076f-4011-860e-98d658b6484f" containerName="extract-utilities"
Feb 03 13:42:27 crc kubenswrapper[4820]: I0203 13:42:27.003721 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f678c4c-076f-4011-860e-98d658b6484f" containerName="extract-utilities"
Feb 03 13:42:27 crc kubenswrapper[4820]: I0203 13:42:27.004043 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="76eab5e1-d01e-4718-8149-fb1e0150fb82" containerName="container-00"
Feb 03 13:42:27 crc kubenswrapper[4820]: I0203 13:42:27.004089 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f678c4c-076f-4011-860e-98d658b6484f" containerName="registry-server"
Feb 03 13:42:27 crc kubenswrapper[4820]: I0203 13:42:27.005181 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-gr579/crc-debug-nf6dt"
Feb 03 13:42:27 crc kubenswrapper[4820]: I0203 13:42:27.008125 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-gr579"/"default-dockercfg-bfr7f"
Feb 03 13:42:27 crc kubenswrapper[4820]: I0203 13:42:27.085136 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hhhlj\" (UniqueName: \"kubernetes.io/projected/2cb6c57b-6275-483f-b449-4338752ccadf-kube-api-access-hhhlj\") pod \"crc-debug-nf6dt\" (UID: \"2cb6c57b-6275-483f-b449-4338752ccadf\") " pod="openshift-must-gather-gr579/crc-debug-nf6dt"
Feb 03 13:42:27 crc kubenswrapper[4820]: I0203 13:42:27.085208 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/2cb6c57b-6275-483f-b449-4338752ccadf-host\") pod \"crc-debug-nf6dt\" (UID: \"2cb6c57b-6275-483f-b449-4338752ccadf\") " pod="openshift-must-gather-gr579/crc-debug-nf6dt"
Feb 03 13:42:27 crc kubenswrapper[4820]: I0203 13:42:27.143743 4820 scope.go:117] "RemoveContainer" containerID="ce5b3f4ddc52fde90b7cdab55ef15a30de9daf09fd19f6b7818c0bdb8e032e6a"
Feb 03 13:42:27 crc kubenswrapper[4820]: E0203 13:42:27.144162 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qj7xr_openshift-machine-config-operator(2c02def6-29f2-448e-80ec-0c8ee290f053)\"" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053"
Feb 03 13:42:27 crc kubenswrapper[4820]: I0203 13:42:27.161879 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="76eab5e1-d01e-4718-8149-fb1e0150fb82" path="/var/lib/kubelet/pods/76eab5e1-d01e-4718-8149-fb1e0150fb82/volumes"
Feb 03 13:42:27 crc kubenswrapper[4820]: I0203 13:42:27.188284 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hhhlj\" (UniqueName: \"kubernetes.io/projected/2cb6c57b-6275-483f-b449-4338752ccadf-kube-api-access-hhhlj\") pod \"crc-debug-nf6dt\" (UID: \"2cb6c57b-6275-483f-b449-4338752ccadf\") " pod="openshift-must-gather-gr579/crc-debug-nf6dt"
Feb 03 13:42:27 crc kubenswrapper[4820]: I0203 13:42:27.188361 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/2cb6c57b-6275-483f-b449-4338752ccadf-host\") pod \"crc-debug-nf6dt\" (UID: \"2cb6c57b-6275-483f-b449-4338752ccadf\") " pod="openshift-must-gather-gr579/crc-debug-nf6dt"
Feb 03 13:42:27 crc kubenswrapper[4820]: I0203 13:42:27.188491 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/2cb6c57b-6275-483f-b449-4338752ccadf-host\") pod \"crc-debug-nf6dt\" (UID: \"2cb6c57b-6275-483f-b449-4338752ccadf\") " pod="openshift-must-gather-gr579/crc-debug-nf6dt"
Feb 03 13:42:27 crc kubenswrapper[4820]: I0203 13:42:27.209222 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hhhlj\" (UniqueName: \"kubernetes.io/projected/2cb6c57b-6275-483f-b449-4338752ccadf-kube-api-access-hhhlj\") pod \"crc-debug-nf6dt\" (UID: \"2cb6c57b-6275-483f-b449-4338752ccadf\") " pod="openshift-must-gather-gr579/crc-debug-nf6dt"
"No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-gr579/crc-debug-nf6dt" Feb 03 13:42:27 crc kubenswrapper[4820]: I0203 13:42:27.605149 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-gr579/crc-debug-nf6dt" event={"ID":"2cb6c57b-6275-483f-b449-4338752ccadf","Type":"ContainerStarted","Data":"129cedeb61d68869adf5bf5a18d651dc44cb0cf1c125cf2def233a2f889db3a7"} Feb 03 13:42:28 crc kubenswrapper[4820]: I0203 13:42:28.622644 4820 generic.go:334] "Generic (PLEG): container finished" podID="2cb6c57b-6275-483f-b449-4338752ccadf" containerID="7f792cc2463c42152b30f3c0ec82a36260e11b3d6099bbf285a22f2710a85928" exitCode=0 Feb 03 13:42:28 crc kubenswrapper[4820]: I0203 13:42:28.623101 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-gr579/crc-debug-nf6dt" event={"ID":"2cb6c57b-6275-483f-b449-4338752ccadf","Type":"ContainerDied","Data":"7f792cc2463c42152b30f3c0ec82a36260e11b3d6099bbf285a22f2710a85928"} Feb 03 13:42:29 crc kubenswrapper[4820]: I0203 13:42:29.733410 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-gr579/crc-debug-nf6dt" Feb 03 13:42:29 crc kubenswrapper[4820]: I0203 13:42:29.841211 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/2cb6c57b-6275-483f-b449-4338752ccadf-host\") pod \"2cb6c57b-6275-483f-b449-4338752ccadf\" (UID: \"2cb6c57b-6275-483f-b449-4338752ccadf\") " Feb 03 13:42:29 crc kubenswrapper[4820]: I0203 13:42:29.841323 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hhhlj\" (UniqueName: \"kubernetes.io/projected/2cb6c57b-6275-483f-b449-4338752ccadf-kube-api-access-hhhlj\") pod \"2cb6c57b-6275-483f-b449-4338752ccadf\" (UID: \"2cb6c57b-6275-483f-b449-4338752ccadf\") " Feb 03 13:42:29 crc kubenswrapper[4820]: I0203 13:42:29.841327 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2cb6c57b-6275-483f-b449-4338752ccadf-host" (OuterVolumeSpecName: "host") pod "2cb6c57b-6275-483f-b449-4338752ccadf" (UID: "2cb6c57b-6275-483f-b449-4338752ccadf"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 03 13:42:29 crc kubenswrapper[4820]: I0203 13:42:29.841875 4820 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/2cb6c57b-6275-483f-b449-4338752ccadf-host\") on node \"crc\" DevicePath \"\"" Feb 03 13:42:29 crc kubenswrapper[4820]: I0203 13:42:29.854340 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2cb6c57b-6275-483f-b449-4338752ccadf-kube-api-access-hhhlj" (OuterVolumeSpecName: "kube-api-access-hhhlj") pod "2cb6c57b-6275-483f-b449-4338752ccadf" (UID: "2cb6c57b-6275-483f-b449-4338752ccadf"). InnerVolumeSpecName "kube-api-access-hhhlj". 
Feb 03 13:42:29 crc kubenswrapper[4820]: I0203 13:42:29.854340 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2cb6c57b-6275-483f-b449-4338752ccadf-kube-api-access-hhhlj" (OuterVolumeSpecName: "kube-api-access-hhhlj") pod "2cb6c57b-6275-483f-b449-4338752ccadf" (UID: "2cb6c57b-6275-483f-b449-4338752ccadf"). InnerVolumeSpecName "kube-api-access-hhhlj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 13:42:29 crc kubenswrapper[4820]: I0203 13:42:29.943584 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hhhlj\" (UniqueName: \"kubernetes.io/projected/2cb6c57b-6275-483f-b449-4338752ccadf-kube-api-access-hhhlj\") on node \"crc\" DevicePath \"\""
Feb 03 13:42:30 crc kubenswrapper[4820]: I0203 13:42:30.650390 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-gr579/crc-debug-nf6dt" event={"ID":"2cb6c57b-6275-483f-b449-4338752ccadf","Type":"ContainerDied","Data":"129cedeb61d68869adf5bf5a18d651dc44cb0cf1c125cf2def233a2f889db3a7"}
Feb 03 13:42:30 crc kubenswrapper[4820]: I0203 13:42:30.650739 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="129cedeb61d68869adf5bf5a18d651dc44cb0cf1c125cf2def233a2f889db3a7"
Feb 03 13:42:30 crc kubenswrapper[4820]: I0203 13:42:30.650439 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-gr579/crc-debug-nf6dt"
Feb 03 13:42:30 crc kubenswrapper[4820]: I0203 13:42:30.818818 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-gr579/crc-debug-nf6dt"]
Feb 03 13:42:30 crc kubenswrapper[4820]: I0203 13:42:30.830011 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-gr579/crc-debug-nf6dt"]
Feb 03 13:42:31 crc kubenswrapper[4820]: I0203 13:42:31.156021 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2cb6c57b-6275-483f-b449-4338752ccadf" path="/var/lib/kubelet/pods/2cb6c57b-6275-483f-b449-4338752ccadf/volumes"
Feb 03 13:42:32 crc kubenswrapper[4820]: I0203 13:42:32.247618 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-gr579/crc-debug-bbxd2"]
Feb 03 13:42:32 crc kubenswrapper[4820]: E0203 13:42:32.249481 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2cb6c57b-6275-483f-b449-4338752ccadf" containerName="container-00"
Feb 03 13:42:32 crc kubenswrapper[4820]: I0203 13:42:32.249594 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="2cb6c57b-6275-483f-b449-4338752ccadf" containerName="container-00"
Feb 03 13:42:32 crc kubenswrapper[4820]: I0203 13:42:32.249910 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="2cb6c57b-6275-483f-b449-4338752ccadf" containerName="container-00"
Feb 03 13:42:32 crc kubenswrapper[4820]: I0203 13:42:32.251307 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-gr579/crc-debug-bbxd2"
Feb 03 13:42:32 crc kubenswrapper[4820]: I0203 13:42:32.253768 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-gr579"/"default-dockercfg-bfr7f"
Feb 03 13:42:32 crc kubenswrapper[4820]: I0203 13:42:32.295035 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1ca13962-a392-46c0-b65d-22f0bb9abb3c-host\") pod \"crc-debug-bbxd2\" (UID: \"1ca13962-a392-46c0-b65d-22f0bb9abb3c\") " pod="openshift-must-gather-gr579/crc-debug-bbxd2"
Feb 03 13:42:32 crc kubenswrapper[4820]: I0203 13:42:32.295125 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lwt5q\" (UniqueName: \"kubernetes.io/projected/1ca13962-a392-46c0-b65d-22f0bb9abb3c-kube-api-access-lwt5q\") pod \"crc-debug-bbxd2\" (UID: \"1ca13962-a392-46c0-b65d-22f0bb9abb3c\") " pod="openshift-must-gather-gr579/crc-debug-bbxd2"
Feb 03 13:42:32 crc kubenswrapper[4820]: I0203 13:42:32.397296 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1ca13962-a392-46c0-b65d-22f0bb9abb3c-host\") pod \"crc-debug-bbxd2\" (UID: \"1ca13962-a392-46c0-b65d-22f0bb9abb3c\") " pod="openshift-must-gather-gr579/crc-debug-bbxd2"
Feb 03 13:42:32 crc kubenswrapper[4820]: I0203 13:42:32.397411 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lwt5q\" (UniqueName: \"kubernetes.io/projected/1ca13962-a392-46c0-b65d-22f0bb9abb3c-kube-api-access-lwt5q\") pod \"crc-debug-bbxd2\" (UID: \"1ca13962-a392-46c0-b65d-22f0bb9abb3c\") " pod="openshift-must-gather-gr579/crc-debug-bbxd2"
Feb 03 13:42:32 crc kubenswrapper[4820]: I0203 13:42:32.398444 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1ca13962-a392-46c0-b65d-22f0bb9abb3c-host\") pod \"crc-debug-bbxd2\" (UID: \"1ca13962-a392-46c0-b65d-22f0bb9abb3c\") " pod="openshift-must-gather-gr579/crc-debug-bbxd2"
Feb 03 13:42:32 crc kubenswrapper[4820]: I0203 13:42:32.419804 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lwt5q\" (UniqueName: \"kubernetes.io/projected/1ca13962-a392-46c0-b65d-22f0bb9abb3c-kube-api-access-lwt5q\") pod \"crc-debug-bbxd2\" (UID: \"1ca13962-a392-46c0-b65d-22f0bb9abb3c\") " pod="openshift-must-gather-gr579/crc-debug-bbxd2"
Feb 03 13:42:32 crc kubenswrapper[4820]: I0203 13:42:32.570501 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-gr579/crc-debug-bbxd2"
Feb 03 13:42:32 crc kubenswrapper[4820]: I0203 13:42:32.680479 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-gr579/crc-debug-bbxd2" event={"ID":"1ca13962-a392-46c0-b65d-22f0bb9abb3c","Type":"ContainerStarted","Data":"71ff43f324d92ab042c04b88d8a2ba53afa2ded34e3c2c0028d9e1332452b677"}
Feb 03 13:42:33 crc kubenswrapper[4820]: I0203 13:42:33.692619 4820 generic.go:334] "Generic (PLEG): container finished" podID="1ca13962-a392-46c0-b65d-22f0bb9abb3c" containerID="cca6c3a0507c8883f1fc1f9fbcc804547b41022e172427db595e9d48178dda54" exitCode=0
Feb 03 13:42:33 crc kubenswrapper[4820]: I0203 13:42:33.692746 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-gr579/crc-debug-bbxd2" event={"ID":"1ca13962-a392-46c0-b65d-22f0bb9abb3c","Type":"ContainerDied","Data":"cca6c3a0507c8883f1fc1f9fbcc804547b41022e172427db595e9d48178dda54"}
Feb 03 13:42:33 crc kubenswrapper[4820]: I0203 13:42:33.735835 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-gr579/crc-debug-bbxd2"]
Feb 03 13:42:33 crc kubenswrapper[4820]: I0203 13:42:33.744280 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-gr579/crc-debug-bbxd2"]
Feb 03 13:42:34 crc kubenswrapper[4820]: I0203 13:42:34.823376 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-gr579/crc-debug-bbxd2"
Feb 03 13:42:34 crc kubenswrapper[4820]: I0203 13:42:34.855607 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1ca13962-a392-46c0-b65d-22f0bb9abb3c-host\") pod \"1ca13962-a392-46c0-b65d-22f0bb9abb3c\" (UID: \"1ca13962-a392-46c0-b65d-22f0bb9abb3c\") "
Feb 03 13:42:34 crc kubenswrapper[4820]: I0203 13:42:34.855755 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1ca13962-a392-46c0-b65d-22f0bb9abb3c-host" (OuterVolumeSpecName: "host") pod "1ca13962-a392-46c0-b65d-22f0bb9abb3c" (UID: "1ca13962-a392-46c0-b65d-22f0bb9abb3c"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 03 13:42:34 crc kubenswrapper[4820]: I0203 13:42:34.855873 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lwt5q\" (UniqueName: \"kubernetes.io/projected/1ca13962-a392-46c0-b65d-22f0bb9abb3c-kube-api-access-lwt5q\") pod \"1ca13962-a392-46c0-b65d-22f0bb9abb3c\" (UID: \"1ca13962-a392-46c0-b65d-22f0bb9abb3c\") "
Feb 03 13:42:34 crc kubenswrapper[4820]: I0203 13:42:34.856388 4820 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1ca13962-a392-46c0-b65d-22f0bb9abb3c-host\") on node \"crc\" DevicePath \"\""
Feb 03 13:42:34 crc kubenswrapper[4820]: I0203 13:42:34.861920 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ca13962-a392-46c0-b65d-22f0bb9abb3c-kube-api-access-lwt5q" (OuterVolumeSpecName: "kube-api-access-lwt5q") pod "1ca13962-a392-46c0-b65d-22f0bb9abb3c" (UID: "1ca13962-a392-46c0-b65d-22f0bb9abb3c"). InnerVolumeSpecName "kube-api-access-lwt5q". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 13:42:34 crc kubenswrapper[4820]: I0203 13:42:34.958985 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lwt5q\" (UniqueName: \"kubernetes.io/projected/1ca13962-a392-46c0-b65d-22f0bb9abb3c-kube-api-access-lwt5q\") on node \"crc\" DevicePath \"\""
Feb 03 13:42:35 crc kubenswrapper[4820]: I0203 13:42:35.154936 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1ca13962-a392-46c0-b65d-22f0bb9abb3c" path="/var/lib/kubelet/pods/1ca13962-a392-46c0-b65d-22f0bb9abb3c/volumes"
Feb 03 13:42:35 crc kubenswrapper[4820]: I0203 13:42:35.714927 4820 scope.go:117] "RemoveContainer" containerID="cca6c3a0507c8883f1fc1f9fbcc804547b41022e172427db595e9d48178dda54"
Feb 03 13:42:35 crc kubenswrapper[4820]: I0203 13:42:35.715000 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-gr579/crc-debug-bbxd2"
Feb 03 13:42:39 crc kubenswrapper[4820]: I0203 13:42:39.142773 4820 scope.go:117] "RemoveContainer" containerID="ce5b3f4ddc52fde90b7cdab55ef15a30de9daf09fd19f6b7818c0bdb8e032e6a"
Feb 03 13:42:39 crc kubenswrapper[4820]: E0203 13:42:39.144770 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qj7xr_openshift-machine-config-operator(2c02def6-29f2-448e-80ec-0c8ee290f053)\"" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053"
Feb 03 13:42:54 crc kubenswrapper[4820]: I0203 13:42:54.143131 4820 scope.go:117] "RemoveContainer" containerID="ce5b3f4ddc52fde90b7cdab55ef15a30de9daf09fd19f6b7818c0bdb8e032e6a"
Feb 03 13:42:54 crc kubenswrapper[4820]: E0203 13:42:54.144011 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qj7xr_openshift-machine-config-operator(2c02def6-29f2-448e-80ec-0c8ee290f053)\"" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053"
Feb 03 13:43:04 crc kubenswrapper[4820]: I0203 13:43:04.879723 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-fdff74856-dfqrf_5229e26a-15af-47fd-bb4a-956968711984/barbican-api/0.log"
Feb 03 13:43:04 crc kubenswrapper[4820]: I0203 13:43:04.883643 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-fdff74856-dfqrf_5229e26a-15af-47fd-bb4a-956968711984/barbican-api-log/0.log"
Feb 03 13:43:05 crc kubenswrapper[4820]: I0203 13:43:05.079847 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-775b8c5454-c9g7t_86a0d38b-74e6-4528-9dae-af9c8400555d/barbican-keystone-listener/0.log"
Feb 03 13:43:05 crc kubenswrapper[4820]: I0203 13:43:05.111312 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-775b8c5454-c9g7t_86a0d38b-74e6-4528-9dae-af9c8400555d/barbican-keystone-listener-log/0.log"
Feb 03 13:43:05 crc kubenswrapper[4820]: I0203 13:43:05.164406 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-659d874887-6h95b_410ba29a-39b4-4468-837d-8b38a94d638d/barbican-worker/0.log"
Feb 03 13:43:05 crc kubenswrapper[4820]: I0203 13:43:05.276470 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-659d874887-6h95b_410ba29a-39b4-4468-837d-8b38a94d638d/barbican-worker-log/0.log"
Feb 03 13:43:05 crc kubenswrapper[4820]: I0203 13:43:05.513565 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-mvssl_24c4a250-4fa9-42c6-a3bd-e626d0adc807/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log"
Feb 03 13:43:05 crc kubenswrapper[4820]: I0203 13:43:05.669419 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_fcf87510-64cf-492b-bd2c-560f6ddc0ee2/ceilometer-central-agent/0.log"
Feb 03 13:43:05 crc kubenswrapper[4820]: I0203 13:43:05.736684 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_fcf87510-64cf-492b-bd2c-560f6ddc0ee2/ceilometer-notification-agent/0.log"
Feb 03 13:43:05 crc kubenswrapper[4820]: I0203 13:43:05.756181 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_fcf87510-64cf-492b-bd2c-560f6ddc0ee2/proxy-httpd/0.log"
Feb 03 13:43:05 crc kubenswrapper[4820]: I0203 13:43:05.803658 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_fcf87510-64cf-492b-bd2c-560f6ddc0ee2/sg-core/0.log"
Feb 03 13:43:05 crc kubenswrapper[4820]: I0203 13:43:05.968865 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_32b101cf-4d79-44f8-a591-dd5c74df5af6/cinder-api/0.log"
Feb 03 13:43:06 crc kubenswrapper[4820]: I0203 13:43:06.012617 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_32b101cf-4d79-44f8-a591-dd5c74df5af6/cinder-api-log/0.log"
Feb 03 13:43:06 crc kubenswrapper[4820]: I0203 13:43:06.203484 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_2de9875d-8142-41a2-80b3-74a66ef53e07/cinder-scheduler/0.log"
Feb 03 13:43:06 crc kubenswrapper[4820]: I0203 13:43:06.228350 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_2de9875d-8142-41a2-80b3-74a66ef53e07/probe/0.log"
Feb 03 13:43:06 crc kubenswrapper[4820]: I0203 13:43:06.279267 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-7c9gw_fc5454df-b4c1-45f5-9021-a70a13b47b37/configure-network-edpm-deployment-openstack-edpm-ipam/0.log"
Feb 03 13:43:06 crc kubenswrapper[4820]: I0203 13:43:06.459271 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-rl9z7_126074cf-7213-48ec-8909-5a8286bb11b6/configure-os-edpm-deployment-openstack-edpm-ipam/0.log"
Feb 03 13:43:06 crc kubenswrapper[4820]: I0203 13:43:06.512838 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6cd9bffc9-kz5f5_a42a2742-e704-482e-ac37-5c948277f576/init/0.log"
Feb 03 13:43:06 crc kubenswrapper[4820]: I0203 13:43:06.671522 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6cd9bffc9-kz5f5_a42a2742-e704-482e-ac37-5c948277f576/init/0.log"
Feb 03 13:43:06 crc kubenswrapper[4820]: I0203 13:43:06.783016 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-hwmx7_c7b75829-d001-4e04-9850-44e986677f48/download-cache-edpm-deployment-openstack-edpm-ipam/0.log"
file" path="/var/log/pods/openstack_dnsmasq-dns-6cd9bffc9-kz5f5_a42a2742-e704-482e-ac37-5c948277f576/dnsmasq-dns/0.log" Feb 03 13:43:07 crc kubenswrapper[4820]: I0203 13:43:07.041261 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_51339dae-75ae-4857-853e-d4d0a0a1aa65/glance-log/0.log" Feb 03 13:43:07 crc kubenswrapper[4820]: I0203 13:43:07.045543 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_51339dae-75ae-4857-853e-d4d0a0a1aa65/glance-httpd/0.log" Feb 03 13:43:07 crc kubenswrapper[4820]: I0203 13:43:07.144531 4820 scope.go:117] "RemoveContainer" containerID="ce5b3f4ddc52fde90b7cdab55ef15a30de9daf09fd19f6b7818c0bdb8e032e6a" Feb 03 13:43:07 crc kubenswrapper[4820]: E0203 13:43:07.145043 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qj7xr_openshift-machine-config-operator(2c02def6-29f2-448e-80ec-0c8ee290f053)\"" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" Feb 03 13:43:07 crc kubenswrapper[4820]: I0203 13:43:07.417433 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_227e62a0-37fd-4e52-ae44-df01b13d4b32/glance-log/0.log" Feb 03 13:43:07 crc kubenswrapper[4820]: I0203 13:43:07.439874 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_227e62a0-37fd-4e52-ae44-df01b13d4b32/glance-httpd/0.log" Feb 03 13:43:07 crc kubenswrapper[4820]: I0203 13:43:07.761316 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-68b4df5bdd-tdb9h_308562dd-6078-4c1c-a4e0-c01a60a2d81d/horizon/3.log" Feb 03 13:43:07 crc kubenswrapper[4820]: I0203 13:43:07.773381 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-68b4df5bdd-tdb9h_308562dd-6078-4c1c-a4e0-c01a60a2d81d/horizon/4.log" Feb 03 13:43:08 crc kubenswrapper[4820]: I0203 13:43:08.303793 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-kgbdv_27a58bb7-ce09-4c16-b190-071c1c506a14/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Feb 03 13:43:08 crc kubenswrapper[4820]: I0203 13:43:08.421483 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-7qz9r_9311424c-1f4a-434d-8e8c-e5383453074c/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 03 13:43:08 crc kubenswrapper[4820]: I0203 13:43:08.560010 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-68b4df5bdd-tdb9h_308562dd-6078-4c1c-a4e0-c01a60a2d81d/horizon-log/0.log" Feb 03 13:43:08 crc kubenswrapper[4820]: I0203 13:43:08.814633 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29502061-76zjl_fe4eea03-b3c4-427a-acc9-7b73142f1723/keystone-cron/0.log" Feb 03 13:43:08 crc kubenswrapper[4820]: I0203 13:43:08.943830 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_eb6e937f-acf9-4ee8-8ee9-c757535b3a53/kube-state-metrics/0.log" Feb 03 13:43:09 crc kubenswrapper[4820]: I0203 13:43:09.233121 4820 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_keystone-6ccd68b7f-9xjs9_c5d266f2-257d-4f06-9237-b34d67b51245/keystone-api/0.log" Feb 03 13:43:09 crc kubenswrapper[4820]: I0203 13:43:09.397729 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-mqwx5_772be0ab-717e-4a25-a481-95a4b1cd0c07/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Feb 03 13:43:09 crc kubenswrapper[4820]: I0203 13:43:09.687028 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-fw8jc_0c9770c6-0c7f-4195-99d7-a9f7074e0236/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Feb 03 13:43:09 crc kubenswrapper[4820]: I0203 13:43:09.725551 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-7f9964d55c-h2clw_aef62020-c58e-4de0-b1b3-10fdd2b8dc8d/neutron-api/0.log" Feb 03 13:43:09 crc kubenswrapper[4820]: I0203 13:43:09.748511 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-7f9964d55c-h2clw_aef62020-c58e-4de0-b1b3-10fdd2b8dc8d/neutron-httpd/0.log" Feb 03 13:43:10 crc kubenswrapper[4820]: I0203 13:43:10.619539 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_d1bc719a-a75c-4bf1-aaae-0e89d1ed34db/nova-cell0-conductor-conductor/0.log" Feb 03 13:43:10 crc kubenswrapper[4820]: I0203 13:43:10.979125 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_c362e3ce-ca7f-443e-ab57-57f34e89e883/nova-cell1-conductor-conductor/0.log" Feb 03 13:43:11 crc kubenswrapper[4820]: I0203 13:43:11.247243 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_33bbf307-c8f9-402f-9b83-50d9d9b034c2/nova-cell1-novncproxy-novncproxy/0.log" Feb 03 13:43:11 crc kubenswrapper[4820]: I0203 13:43:11.312049 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_26398afc-04a6-4c1f-92bf-767a938debad/nova-api-log/0.log" Feb 03 13:43:11 crc kubenswrapper[4820]: I0203 13:43:11.586687 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-6pwfl_b390260e-6a1b-4020-95d5-c4275e4a6c4e/nova-edpm-deployment-openstack-edpm-ipam/0.log" Feb 03 13:43:11 crc kubenswrapper[4820]: I0203 13:43:11.641255 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_b2a1328f-2e2d-47e6-b07c-d0b70643e1aa/nova-metadata-log/0.log" Feb 03 13:43:11 crc kubenswrapper[4820]: I0203 13:43:11.762815 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_26398afc-04a6-4c1f-92bf-767a938debad/nova-api-api/0.log" Feb 03 13:43:12 crc kubenswrapper[4820]: I0203 13:43:12.071189 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_1e865214-494f-4a49-a2e6-2b7316f30a92/mysql-bootstrap/0.log" Feb 03 13:43:12 crc kubenswrapper[4820]: I0203 13:43:12.145519 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_dff15ab3-eace-455f-b413-0acd29aa3cb5/nova-scheduler-scheduler/0.log" Feb 03 13:43:12 crc kubenswrapper[4820]: I0203 13:43:12.278034 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_1e865214-494f-4a49-a2e6-2b7316f30a92/mysql-bootstrap/0.log" Feb 03 13:43:12 crc kubenswrapper[4820]: I0203 13:43:12.291836 4820 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_openstack-cell1-galera-0_1e865214-494f-4a49-a2e6-2b7316f30a92/galera/0.log" Feb 03 13:43:12 crc kubenswrapper[4820]: I0203 13:43:12.556554 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_e8e46f8a-5de0-457f-b8eb-f76e8902e8ab/mysql-bootstrap/0.log" Feb 03 13:43:12 crc kubenswrapper[4820]: I0203 13:43:12.704577 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_e8e46f8a-5de0-457f-b8eb-f76e8902e8ab/galera/0.log" Feb 03 13:43:12 crc kubenswrapper[4820]: I0203 13:43:12.725854 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_e8e46f8a-5de0-457f-b8eb-f76e8902e8ab/mysql-bootstrap/0.log" Feb 03 13:43:13 crc kubenswrapper[4820]: I0203 13:43:13.187504 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_bf76d2b6-6c3a-42e7-b813-63cfcd39bd0e/openstackclient/0.log" Feb 03 13:43:13 crc kubenswrapper[4820]: I0203 13:43:13.273770 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-96p5d_b3b01895-53e1-4391-8d1e-8f2458d4f2e0/ovn-controller/0.log" Feb 03 13:43:13 crc kubenswrapper[4820]: I0203 13:43:13.900677 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-lrcd2_1a16d012-2c9a-452a-9a18-8d016793a7f6/openstack-network-exporter/0.log" Feb 03 13:43:14 crc kubenswrapper[4820]: I0203 13:43:14.025802 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_b2a1328f-2e2d-47e6-b07c-d0b70643e1aa/nova-metadata-metadata/0.log" Feb 03 13:43:14 crc kubenswrapper[4820]: I0203 13:43:14.107266 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-kk5zn_7fd50209-6464-4ba1-a7f9-ff9a38317ff2/ovsdb-server-init/0.log" Feb 03 13:43:14 crc kubenswrapper[4820]: I0203 13:43:14.320812 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-kk5zn_7fd50209-6464-4ba1-a7f9-ff9a38317ff2/ovsdb-server/0.log" Feb 03 13:43:14 crc kubenswrapper[4820]: I0203 13:43:14.337075 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-kk5zn_7fd50209-6464-4ba1-a7f9-ff9a38317ff2/ovs-vswitchd/0.log" Feb 03 13:43:14 crc kubenswrapper[4820]: I0203 13:43:14.346699 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-kk5zn_7fd50209-6464-4ba1-a7f9-ff9a38317ff2/ovsdb-server-init/0.log" Feb 03 13:43:14 crc kubenswrapper[4820]: I0203 13:43:14.523979 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_d248d6d6-d6ff-415a-9ea6-d65cde5ad964/openstack-network-exporter/0.log" Feb 03 13:43:14 crc kubenswrapper[4820]: I0203 13:43:14.629175 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-hx7dn_ffae89cd-1189-4722-8b80-6bf2a67f5dde/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Feb 03 13:43:14 crc kubenswrapper[4820]: I0203 13:43:14.673416 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_d248d6d6-d6ff-415a-9ea6-d65cde5ad964/ovn-northd/0.log" Feb 03 13:43:14 crc kubenswrapper[4820]: I0203 13:43:14.844663 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_f936af63-a86d-4dc6-aa17-59e2e2b69f5b/openstack-network-exporter/0.log" Feb 03 13:43:14 crc kubenswrapper[4820]: I0203 13:43:14.858244 4820 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovsdbserver-nb-0_f936af63-a86d-4dc6-aa17-59e2e2b69f5b/ovsdbserver-nb/0.log" Feb 03 13:43:15 crc kubenswrapper[4820]: I0203 13:43:15.025107 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_c9c7327b-374e-4a6f-a5c7-23136aea36b8/openstack-network-exporter/0.log" Feb 03 13:43:15 crc kubenswrapper[4820]: I0203 13:43:15.105292 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_c9c7327b-374e-4a6f-a5c7-23136aea36b8/ovsdbserver-sb/0.log" Feb 03 13:43:15 crc kubenswrapper[4820]: I0203 13:43:15.562140 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-656b464f74-h7xjt_43ecc5a4-8bd1-435c-8514-de23a493ee45/placement-api/0.log" Feb 03 13:43:16 crc kubenswrapper[4820]: I0203 13:43:16.202866 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_f6a9118b-1d0e-4baf-92ca-c4024a45dd2e/init-config-reloader/0.log" Feb 03 13:43:16 crc kubenswrapper[4820]: I0203 13:43:16.263266 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-656b464f74-h7xjt_43ecc5a4-8bd1-435c-8514-de23a493ee45/placement-log/0.log" Feb 03 13:43:16 crc kubenswrapper[4820]: I0203 13:43:16.443583 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_f6a9118b-1d0e-4baf-92ca-c4024a45dd2e/init-config-reloader/0.log" Feb 03 13:43:16 crc kubenswrapper[4820]: I0203 13:43:16.522470 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_f6a9118b-1d0e-4baf-92ca-c4024a45dd2e/config-reloader/0.log" Feb 03 13:43:16 crc kubenswrapper[4820]: I0203 13:43:16.568291 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_f6a9118b-1d0e-4baf-92ca-c4024a45dd2e/prometheus/0.log" Feb 03 13:43:16 crc kubenswrapper[4820]: I0203 13:43:16.670283 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_f6a9118b-1d0e-4baf-92ca-c4024a45dd2e/thanos-sidecar/0.log" Feb 03 13:43:16 crc kubenswrapper[4820]: I0203 13:43:16.782771 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_c2cfe24f-4614-4f48-867c-722af03baad7/setup-container/0.log" Feb 03 13:43:17 crc kubenswrapper[4820]: I0203 13:43:17.053332 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_c2cfe24f-4614-4f48-867c-722af03baad7/setup-container/0.log" Feb 03 13:43:17 crc kubenswrapper[4820]: I0203 13:43:17.091111 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_c2cfe24f-4614-4f48-867c-722af03baad7/rabbitmq/0.log" Feb 03 13:43:17 crc kubenswrapper[4820]: I0203 13:43:17.106592 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_ed109a9d-a703-4fa2-b7b3-0b96760d52b1/setup-container/0.log" Feb 03 13:43:17 crc kubenswrapper[4820]: I0203 13:43:17.317831 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_ed109a9d-a703-4fa2-b7b3-0b96760d52b1/rabbitmq/0.log" Feb 03 13:43:17 crc kubenswrapper[4820]: I0203 13:43:17.321623 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-2tqvr_02202494-64ad-452c-ad31-b76746e7e746/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 03 13:43:17 crc kubenswrapper[4820]: I0203 13:43:17.335677 4820 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_ed109a9d-a703-4fa2-b7b3-0b96760d52b1/setup-container/0.log" Feb 03 13:43:17 crc kubenswrapper[4820]: I0203 13:43:17.570852 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-5gdkr_a7717d9c-63f8-493f-be01-0fdea46ef053/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Feb 03 13:43:17 crc kubenswrapper[4820]: I0203 13:43:17.619373 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-dkdr4_d8d69bce-1404-4fce-ab56-a8d4c9f46b28/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Feb 03 13:43:18 crc kubenswrapper[4820]: I0203 13:43:18.046153 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-dzskl_fe0dcc37-428f-4efa-a725-e4361affcacd/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 03 13:43:18 crc kubenswrapper[4820]: I0203 13:43:18.143177 4820 scope.go:117] "RemoveContainer" containerID="ce5b3f4ddc52fde90b7cdab55ef15a30de9daf09fd19f6b7818c0bdb8e032e6a" Feb 03 13:43:18 crc kubenswrapper[4820]: E0203 13:43:18.143618 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qj7xr_openshift-machine-config-operator(2c02def6-29f2-448e-80ec-0c8ee290f053)\"" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" Feb 03 13:43:18 crc kubenswrapper[4820]: I0203 13:43:18.145859 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-g9h74_dc7a208f-6c45-4374-ace1-70b2e16c499c/ssh-known-hosts-edpm-deployment/0.log" Feb 03 13:43:18 crc kubenswrapper[4820]: I0203 13:43:18.382378 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-646ccfdf87-kdlkr_e530e04a-6fa7-4cc2-be2a-46a26eec64a5/proxy-server/0.log" Feb 03 13:43:18 crc kubenswrapper[4820]: I0203 13:43:18.585881 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-pslmr_94423319-f57f-47dd-80db-db41374dcb25/swift-ring-rebalance/0.log" Feb 03 13:43:18 crc kubenswrapper[4820]: I0203 13:43:18.625976 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-646ccfdf87-kdlkr_e530e04a-6fa7-4cc2-be2a-46a26eec64a5/proxy-httpd/0.log" Feb 03 13:43:18 crc kubenswrapper[4820]: I0203 13:43:18.741713 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_d4eb10ed-a945-4b23-8fb3-62022a90e09f/account-auditor/0.log" Feb 03 13:43:18 crc kubenswrapper[4820]: I0203 13:43:18.857191 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_d4eb10ed-a945-4b23-8fb3-62022a90e09f/account-reaper/0.log" Feb 03 13:43:18 crc kubenswrapper[4820]: I0203 13:43:18.883266 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_d4eb10ed-a945-4b23-8fb3-62022a90e09f/account-replicator/0.log" Feb 03 13:43:18 crc kubenswrapper[4820]: I0203 13:43:18.970935 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_d4eb10ed-a945-4b23-8fb3-62022a90e09f/account-server/0.log" Feb 03 13:43:18 crc kubenswrapper[4820]: I0203 13:43:18.989874 4820 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_d4eb10ed-a945-4b23-8fb3-62022a90e09f/container-auditor/0.log" Feb 03 13:43:19 crc kubenswrapper[4820]: I0203 13:43:19.120220 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_d4eb10ed-a945-4b23-8fb3-62022a90e09f/container-server/0.log" Feb 03 13:43:19 crc kubenswrapper[4820]: I0203 13:43:19.193395 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_d4eb10ed-a945-4b23-8fb3-62022a90e09f/container-updater/0.log" Feb 03 13:43:19 crc kubenswrapper[4820]: I0203 13:43:19.207106 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_d4eb10ed-a945-4b23-8fb3-62022a90e09f/container-replicator/0.log" Feb 03 13:43:19 crc kubenswrapper[4820]: I0203 13:43:19.255015 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_d4eb10ed-a945-4b23-8fb3-62022a90e09f/object-auditor/0.log" Feb 03 13:43:19 crc kubenswrapper[4820]: I0203 13:43:19.344559 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_d4eb10ed-a945-4b23-8fb3-62022a90e09f/object-expirer/0.log" Feb 03 13:43:19 crc kubenswrapper[4820]: I0203 13:43:19.425060 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_d4eb10ed-a945-4b23-8fb3-62022a90e09f/object-replicator/0.log" Feb 03 13:43:19 crc kubenswrapper[4820]: I0203 13:43:19.464186 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_d4eb10ed-a945-4b23-8fb3-62022a90e09f/object-server/0.log" Feb 03 13:43:19 crc kubenswrapper[4820]: I0203 13:43:19.486007 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_d4eb10ed-a945-4b23-8fb3-62022a90e09f/object-updater/0.log" Feb 03 13:43:19 crc kubenswrapper[4820]: I0203 13:43:19.592880 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_d4eb10ed-a945-4b23-8fb3-62022a90e09f/rsync/0.log" Feb 03 13:43:19 crc kubenswrapper[4820]: I0203 13:43:19.634095 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_d4eb10ed-a945-4b23-8fb3-62022a90e09f/swift-recon-cron/0.log" Feb 03 13:43:19 crc kubenswrapper[4820]: I0203 13:43:19.809526 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-g98lk_9dba6be1-f601-4959-8c1f-791b7fb032b8/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Feb 03 13:43:19 crc kubenswrapper[4820]: I0203 13:43:19.836785 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_a52d7dcc-1107-47d1-b270-0601e9dc2b1b/tempest-tests-tempest-tests-runner/0.log" Feb 03 13:43:20 crc kubenswrapper[4820]: I0203 13:43:20.013442 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_9f3d9ead-7790-4cbb-a70c-51aa29d87eef/test-operator-logs-container/0.log" Feb 03 13:43:20 crc kubenswrapper[4820]: I0203 13:43:20.065954 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-nqb5x_ee96f9e1-369f-4e88-9766-419a9a05abe5/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Feb 03 13:43:20 crc kubenswrapper[4820]: I0203 13:43:20.542328 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_ace9a08e-e106-4d85-ae21-3d7d6ea60dff/memcached/0.log" Feb 03 13:43:20 crc kubenswrapper[4820]: 
I0203 13:43:20.993768 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_watcher-applier-0_6ed16a73-0e39-4ac4-bd01-820e6a7a45b0/watcher-applier/0.log" Feb 03 13:43:21 crc kubenswrapper[4820]: I0203 13:43:21.433379 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_watcher-api-0_7cd4de1e-997d-4df1-9ad5-2049937ab135/watcher-api-log/0.log" Feb 03 13:43:21 crc kubenswrapper[4820]: I0203 13:43:21.890444 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_watcher-decision-engine-0_8a78aa6d-1dad-4a68-8ec3-f455e7d21fbe/watcher-decision-engine/0.log" Feb 03 13:43:23 crc kubenswrapper[4820]: I0203 13:43:23.923693 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_watcher-api-0_7cd4de1e-997d-4df1-9ad5-2049937ab135/watcher-api/0.log" Feb 03 13:43:33 crc kubenswrapper[4820]: I0203 13:43:33.151416 4820 scope.go:117] "RemoveContainer" containerID="ce5b3f4ddc52fde90b7cdab55ef15a30de9daf09fd19f6b7818c0bdb8e032e6a" Feb 03 13:43:33 crc kubenswrapper[4820]: E0203 13:43:33.152354 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qj7xr_openshift-machine-config-operator(2c02def6-29f2-448e-80ec-0c8ee290f053)\"" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" Feb 03 13:43:44 crc kubenswrapper[4820]: I0203 13:43:44.142616 4820 scope.go:117] "RemoveContainer" containerID="ce5b3f4ddc52fde90b7cdab55ef15a30de9daf09fd19f6b7818c0bdb8e032e6a" Feb 03 13:43:44 crc kubenswrapper[4820]: E0203 13:43:44.144047 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qj7xr_openshift-machine-config-operator(2c02def6-29f2-448e-80ec-0c8ee290f053)\"" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" Feb 03 13:43:52 crc kubenswrapper[4820]: I0203 13:43:52.273029 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_06b2e162f0c49a09bca642c96fba7f5ff7d8911993e14c444810042e45lq6nl_7a0d0284-7ac0-4e09-ba63-1fa33dbbb574/util/0.log" Feb 03 13:43:52 crc kubenswrapper[4820]: I0203 13:43:52.460142 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_06b2e162f0c49a09bca642c96fba7f5ff7d8911993e14c444810042e45lq6nl_7a0d0284-7ac0-4e09-ba63-1fa33dbbb574/util/0.log" Feb 03 13:43:52 crc kubenswrapper[4820]: I0203 13:43:52.480055 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_06b2e162f0c49a09bca642c96fba7f5ff7d8911993e14c444810042e45lq6nl_7a0d0284-7ac0-4e09-ba63-1fa33dbbb574/pull/0.log" Feb 03 13:43:52 crc kubenswrapper[4820]: I0203 13:43:52.488106 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_06b2e162f0c49a09bca642c96fba7f5ff7d8911993e14c444810042e45lq6nl_7a0d0284-7ac0-4e09-ba63-1fa33dbbb574/pull/0.log" Feb 03 13:43:52 crc kubenswrapper[4820]: I0203 13:43:52.654310 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_06b2e162f0c49a09bca642c96fba7f5ff7d8911993e14c444810042e45lq6nl_7a0d0284-7ac0-4e09-ba63-1fa33dbbb574/extract/0.log" Feb 03 13:43:52 crc kubenswrapper[4820]: I0203 13:43:52.654679 4820 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack-operators_06b2e162f0c49a09bca642c96fba7f5ff7d8911993e14c444810042e45lq6nl_7a0d0284-7ac0-4e09-ba63-1fa33dbbb574/util/0.log" Feb 03 13:43:52 crc kubenswrapper[4820]: I0203 13:43:52.666190 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_06b2e162f0c49a09bca642c96fba7f5ff7d8911993e14c444810042e45lq6nl_7a0d0284-7ac0-4e09-ba63-1fa33dbbb574/pull/0.log" Feb 03 13:43:52 crc kubenswrapper[4820]: I0203 13:43:52.932394 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-8d874c8fc-z8jk7_4d0ea57e-5eb3-4624-a299-b9a7ad6f2bb0/manager/0.log" Feb 03 13:43:52 crc kubenswrapper[4820]: I0203 13:43:52.951297 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7b6c4d8c5f-qnh2k_cde1eaee-12a0-47f7-b88a-b1b97d0ed74b/manager/0.log" Feb 03 13:43:53 crc kubenswrapper[4820]: I0203 13:43:53.065834 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-6d9697b7f4-wsb7r_51c967b2-8f1a-4d0d-a3f9-745e72863b84/manager/0.log" Feb 03 13:43:53 crc kubenswrapper[4820]: I0203 13:43:53.224359 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-8886f4c47-5dmwb_88eb8fcd-4721-45c2-bb00-23b1dc962283/manager/0.log" Feb 03 13:43:53 crc kubenswrapper[4820]: I0203 13:43:53.248805 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-69d6db494d-6fw2d_7f5efd7c-09f4-42b0-ba17-7a7dc609d914/manager/0.log" Feb 03 13:43:53 crc kubenswrapper[4820]: I0203 13:43:53.387183 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5fb775575f-t5mj4_101ca31b-ff08-4a49-9cc1-f48fd8679116/manager/0.log" Feb 03 13:43:53 crc kubenswrapper[4820]: I0203 13:43:53.652886 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-5f4b8bd54d-xkj2j_29dd9257-532e-48a4-9500-adfc5584ebe0/manager/0.log" Feb 03 13:43:53 crc kubenswrapper[4820]: I0203 13:43:53.762577 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-79955696d6-22gr9_7ad36bba-9140-4660-b4ed-e873264c9e22/manager/0.log" Feb 03 13:43:53 crc kubenswrapper[4820]: I0203 13:43:53.932020 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-7dd968899f-rdbrk_4ebad58b-3e3b-4bcb-9a80-dedd97e940d0/manager/0.log" Feb 03 13:43:53 crc kubenswrapper[4820]: I0203 13:43:53.951227 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-84f48565d4-9rprq_614c5412-875d-40b1-ad5f-445a941285af/manager/0.log" Feb 03 13:43:54 crc kubenswrapper[4820]: I0203 13:43:54.182385 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-67bf948998-qnmp6_40fd2238-8148-4aa3-8f4e-54ffc1de0805/manager/0.log" Feb 03 13:43:54 crc kubenswrapper[4820]: I0203 13:43:54.433544 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-585dbc889-4tmqm_81450158-204d-45f5-a1bc-de63e889445d/manager/0.log" Feb 03 13:43:54 crc kubenswrapper[4820]: I0203 13:43:54.637558 4820 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-55bff696bd-4fgnl_7a515408-dc44-4fba-bbe9-8b5f36fbc1d0/manager/0.log" Feb 03 13:43:54 crc kubenswrapper[4820]: I0203 13:43:54.686283 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-6687f8d877-brrn4_56b6c45e-8879-4a30-8b7c-d6c7df8ac6ae/manager/0.log" Feb 03 13:43:54 crc kubenswrapper[4820]: I0203 13:43:54.755305 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-59c4b45c4dqz6p9_591b67aa-03c7-4cf7-8918-17e2f7a428b0/manager/0.log" Feb 03 13:43:54 crc kubenswrapper[4820]: I0203 13:43:54.959863 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-8c5c9674b-tdfgs_a781fb7c-cb52-4076-aa3c-5792d8ab7e42/operator/0.log" Feb 03 13:43:55 crc kubenswrapper[4820]: I0203 13:43:55.281465 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-bpd2f_5432ffce-4da9-4a8a-9738-4e9dd0ee9a6a/registry-server/0.log" Feb 03 13:43:55 crc kubenswrapper[4820]: I0203 13:43:55.457931 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-788c46999f-5lxrd_b12dfc88-bdbd-4874-b397-9273a669e57f/manager/0.log" Feb 03 13:43:55 crc kubenswrapper[4820]: I0203 13:43:55.593505 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5b964cf4cd-x567r_851ed64f-f147-45d0-a33b-eea29903ec0a/manager/0.log" Feb 03 13:43:55 crc kubenswrapper[4820]: I0203 13:43:55.818355 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-8l49q_8560a157-03d5-4135-a5e1-32acc68b6e4e/operator/0.log" Feb 03 13:43:56 crc kubenswrapper[4820]: I0203 13:43:56.062985 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-68fc8c869-dr7hd_21f3efdd-0c83-42cb-8b54-b0554534bfb7/manager/0.log" Feb 03 13:43:56 crc kubenswrapper[4820]: I0203 13:43:56.212245 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-855575688d-cl9c5_ffe7d059-602c-4fbc-bd5e-4c092cc6f3db/manager/0.log" Feb 03 13:43:56 crc kubenswrapper[4820]: I0203 13:43:56.406152 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-64b5b76f97-xw4mq_96838cc3-1b9b-41b3-b20e-476319c65436/manager/0.log" Feb 03 13:43:56 crc kubenswrapper[4820]: I0203 13:43:56.409551 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-56f8bfcd9f-22hg8_1058185d-f11d-4a87-9fe6-005f60186329/manager/0.log" Feb 03 13:43:56 crc kubenswrapper[4820]: I0203 13:43:56.895377 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-6d49495bcf-pflss_18a84695-492b-42ae-9d72-6e582316ce55/manager/0.log" Feb 03 13:43:59 crc kubenswrapper[4820]: I0203 13:43:59.244666 4820 scope.go:117] "RemoveContainer" containerID="ce5b3f4ddc52fde90b7cdab55ef15a30de9daf09fd19f6b7818c0bdb8e032e6a" Feb 03 13:43:59 crc kubenswrapper[4820]: E0203 13:43:59.245923 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qj7xr_openshift-machine-config-operator(2c02def6-29f2-448e-80ec-0c8ee290f053)\"" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" Feb 03 13:44:12 crc kubenswrapper[4820]: I0203 13:44:12.144188 4820 scope.go:117] "RemoveContainer" containerID="ce5b3f4ddc52fde90b7cdab55ef15a30de9daf09fd19f6b7818c0bdb8e032e6a" Feb 03 13:44:12 crc kubenswrapper[4820]: E0203 13:44:12.145024 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qj7xr_openshift-machine-config-operator(2c02def6-29f2-448e-80ec-0c8ee290f053)\"" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" Feb 03 13:44:21 crc kubenswrapper[4820]: I0203 13:44:21.724874 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-gqwld_8237a118-001c-483c-8810-d051f33d35eb/control-plane-machine-set-operator/0.log" Feb 03 13:44:21 crc kubenswrapper[4820]: I0203 13:44:21.934382 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-hxjbf_6b522a8e-f795-4cf1-adbb-899674a5e359/kube-rbac-proxy/0.log" Feb 03 13:44:21 crc kubenswrapper[4820]: I0203 13:44:21.955299 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-hxjbf_6b522a8e-f795-4cf1-adbb-899674a5e359/machine-api-operator/0.log" Feb 03 13:44:25 crc kubenswrapper[4820]: I0203 13:44:25.147255 4820 scope.go:117] "RemoveContainer" containerID="ce5b3f4ddc52fde90b7cdab55ef15a30de9daf09fd19f6b7818c0bdb8e032e6a" Feb 03 13:44:25 crc kubenswrapper[4820]: E0203 13:44:25.147981 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qj7xr_openshift-machine-config-operator(2c02def6-29f2-448e-80ec-0c8ee290f053)\"" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" Feb 03 13:44:37 crc kubenswrapper[4820]: I0203 13:44:37.143291 4820 scope.go:117] "RemoveContainer" containerID="ce5b3f4ddc52fde90b7cdab55ef15a30de9daf09fd19f6b7818c0bdb8e032e6a" Feb 03 13:44:37 crc kubenswrapper[4820]: E0203 13:44:37.144507 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qj7xr_openshift-machine-config-operator(2c02def6-29f2-448e-80ec-0c8ee290f053)\"" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" Feb 03 13:44:37 crc kubenswrapper[4820]: I0203 13:44:37.481483 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-vdqjl_3853758e-3847-4715-8b8a-85022e708c75/cert-manager-cainjector/0.log" Feb 03 13:44:37 crc kubenswrapper[4820]: I0203 13:44:37.514779 4820 log.go:25] "Finished parsing log file" 
path="/var/log/pods/cert-manager_cert-manager-858654f9db-29s2s_e98f8274-774b-446d-ae13-e7e7d4697463/cert-manager-controller/0.log" Feb 03 13:44:37 crc kubenswrapper[4820]: I0203 13:44:37.651529 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-lb2pj_f84b05bb-fe6d-4dcb-9501-375683557250/cert-manager-webhook/0.log" Feb 03 13:44:49 crc kubenswrapper[4820]: I0203 13:44:49.011530 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-wzrgn"] Feb 03 13:44:49 crc kubenswrapper[4820]: E0203 13:44:49.012779 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ca13962-a392-46c0-b65d-22f0bb9abb3c" containerName="container-00" Feb 03 13:44:49 crc kubenswrapper[4820]: I0203 13:44:49.012818 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ca13962-a392-46c0-b65d-22f0bb9abb3c" containerName="container-00" Feb 03 13:44:49 crc kubenswrapper[4820]: I0203 13:44:49.013096 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ca13962-a392-46c0-b65d-22f0bb9abb3c" containerName="container-00" Feb 03 13:44:49 crc kubenswrapper[4820]: I0203 13:44:49.015255 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wzrgn" Feb 03 13:44:49 crc kubenswrapper[4820]: I0203 13:44:49.043541 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wzrgn"] Feb 03 13:44:49 crc kubenswrapper[4820]: I0203 13:44:49.198386 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5902d12-ec3b-456b-abb8-0155be3e1619-catalog-content\") pod \"redhat-operators-wzrgn\" (UID: \"f5902d12-ec3b-456b-abb8-0155be3e1619\") " pod="openshift-marketplace/redhat-operators-wzrgn" Feb 03 13:44:49 crc kubenswrapper[4820]: I0203 13:44:49.199053 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jd25w\" (UniqueName: \"kubernetes.io/projected/f5902d12-ec3b-456b-abb8-0155be3e1619-kube-api-access-jd25w\") pod \"redhat-operators-wzrgn\" (UID: \"f5902d12-ec3b-456b-abb8-0155be3e1619\") " pod="openshift-marketplace/redhat-operators-wzrgn" Feb 03 13:44:49 crc kubenswrapper[4820]: I0203 13:44:49.199637 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5902d12-ec3b-456b-abb8-0155be3e1619-utilities\") pod \"redhat-operators-wzrgn\" (UID: \"f5902d12-ec3b-456b-abb8-0155be3e1619\") " pod="openshift-marketplace/redhat-operators-wzrgn" Feb 03 13:44:49 crc kubenswrapper[4820]: I0203 13:44:49.302069 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5902d12-ec3b-456b-abb8-0155be3e1619-utilities\") pod \"redhat-operators-wzrgn\" (UID: \"f5902d12-ec3b-456b-abb8-0155be3e1619\") " pod="openshift-marketplace/redhat-operators-wzrgn" Feb 03 13:44:49 crc kubenswrapper[4820]: I0203 13:44:49.302165 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5902d12-ec3b-456b-abb8-0155be3e1619-catalog-content\") pod \"redhat-operators-wzrgn\" (UID: \"f5902d12-ec3b-456b-abb8-0155be3e1619\") " pod="openshift-marketplace/redhat-operators-wzrgn" Feb 03 13:44:49 crc kubenswrapper[4820]: I0203 13:44:49.302246 
4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jd25w\" (UniqueName: \"kubernetes.io/projected/f5902d12-ec3b-456b-abb8-0155be3e1619-kube-api-access-jd25w\") pod \"redhat-operators-wzrgn\" (UID: \"f5902d12-ec3b-456b-abb8-0155be3e1619\") " pod="openshift-marketplace/redhat-operators-wzrgn" Feb 03 13:44:49 crc kubenswrapper[4820]: I0203 13:44:49.302712 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5902d12-ec3b-456b-abb8-0155be3e1619-catalog-content\") pod \"redhat-operators-wzrgn\" (UID: \"f5902d12-ec3b-456b-abb8-0155be3e1619\") " pod="openshift-marketplace/redhat-operators-wzrgn" Feb 03 13:44:49 crc kubenswrapper[4820]: I0203 13:44:49.303123 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5902d12-ec3b-456b-abb8-0155be3e1619-utilities\") pod \"redhat-operators-wzrgn\" (UID: \"f5902d12-ec3b-456b-abb8-0155be3e1619\") " pod="openshift-marketplace/redhat-operators-wzrgn" Feb 03 13:44:49 crc kubenswrapper[4820]: I0203 13:44:49.323021 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jd25w\" (UniqueName: \"kubernetes.io/projected/f5902d12-ec3b-456b-abb8-0155be3e1619-kube-api-access-jd25w\") pod \"redhat-operators-wzrgn\" (UID: \"f5902d12-ec3b-456b-abb8-0155be3e1619\") " pod="openshift-marketplace/redhat-operators-wzrgn" Feb 03 13:44:49 crc kubenswrapper[4820]: I0203 13:44:49.369338 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wzrgn" Feb 03 13:44:49 crc kubenswrapper[4820]: I0203 13:44:49.996912 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wzrgn"] Feb 03 13:44:50 crc kubenswrapper[4820]: I0203 13:44:50.410037 4820 generic.go:334] "Generic (PLEG): container finished" podID="f5902d12-ec3b-456b-abb8-0155be3e1619" containerID="79c35a7887fe7a6d2fc33b6795e6dd258509ff630fcad1a6a7868b4e87a89580" exitCode=0 Feb 03 13:44:50 crc kubenswrapper[4820]: I0203 13:44:50.410346 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wzrgn" event={"ID":"f5902d12-ec3b-456b-abb8-0155be3e1619","Type":"ContainerDied","Data":"79c35a7887fe7a6d2fc33b6795e6dd258509ff630fcad1a6a7868b4e87a89580"} Feb 03 13:44:50 crc kubenswrapper[4820]: I0203 13:44:50.410375 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wzrgn" event={"ID":"f5902d12-ec3b-456b-abb8-0155be3e1619","Type":"ContainerStarted","Data":"e536bf7ceb39dcd87bbca62394f4da128ae61729dc3ba14216ea830f53ec41ce"} Feb 03 13:44:50 crc kubenswrapper[4820]: I0203 13:44:50.412554 4820 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 03 13:44:51 crc kubenswrapper[4820]: I0203 13:44:51.143591 4820 scope.go:117] "RemoveContainer" containerID="ce5b3f4ddc52fde90b7cdab55ef15a30de9daf09fd19f6b7818c0bdb8e032e6a" Feb 03 13:44:51 crc kubenswrapper[4820]: E0203 13:44:51.144221 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qj7xr_openshift-machine-config-operator(2c02def6-29f2-448e-80ec-0c8ee290f053)\"" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" 
podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" Feb 03 13:44:52 crc kubenswrapper[4820]: I0203 13:44:52.438202 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wzrgn" event={"ID":"f5902d12-ec3b-456b-abb8-0155be3e1619","Type":"ContainerStarted","Data":"bad666814c4f97b3f37ebe9d843be7a8ead4666728d79bf46f66eda80edf9187"} Feb 03 13:44:52 crc kubenswrapper[4820]: I0203 13:44:52.550842 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-hcd62_3f652654-b0e0-47f3-b1db-9930c6b681c6/nmstate-console-plugin/0.log" Feb 03 13:44:52 crc kubenswrapper[4820]: I0203 13:44:52.996321 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-sbsh5_afaac7f6-f06f-4eb6-b2a5-85c9c2f927d3/nmstate-handler/0.log" Feb 03 13:44:53 crc kubenswrapper[4820]: I0203 13:44:53.039361 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-vprcr_25a587ed-7ff6-4ffd-b2ad-5a88a81c7867/kube-rbac-proxy/0.log" Feb 03 13:44:53 crc kubenswrapper[4820]: I0203 13:44:53.138491 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-vprcr_25a587ed-7ff6-4ffd-b2ad-5a88a81c7867/nmstate-metrics/0.log" Feb 03 13:44:53 crc kubenswrapper[4820]: I0203 13:44:53.239308 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-gcnrh_3cc69a01-8e9a-4d98-9568-841c499eb0f0/nmstate-operator/0.log" Feb 03 13:44:53 crc kubenswrapper[4820]: I0203 13:44:53.345166 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-2tnxr_23a0cc00-e454-4afc-82bb-0d79c0b76324/nmstate-webhook/0.log" Feb 03 13:44:57 crc kubenswrapper[4820]: I0203 13:44:57.572848 4820 generic.go:334] "Generic (PLEG): container finished" podID="f5902d12-ec3b-456b-abb8-0155be3e1619" containerID="bad666814c4f97b3f37ebe9d843be7a8ead4666728d79bf46f66eda80edf9187" exitCode=0 Feb 03 13:44:57 crc kubenswrapper[4820]: I0203 13:44:57.572934 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wzrgn" event={"ID":"f5902d12-ec3b-456b-abb8-0155be3e1619","Type":"ContainerDied","Data":"bad666814c4f97b3f37ebe9d843be7a8ead4666728d79bf46f66eda80edf9187"} Feb 03 13:44:59 crc kubenswrapper[4820]: I0203 13:44:59.612800 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wzrgn" event={"ID":"f5902d12-ec3b-456b-abb8-0155be3e1619","Type":"ContainerStarted","Data":"0e1d176c78cb22ac5aa2925cf0d040d1b86dcda301d978cba6073227894a9994"} Feb 03 13:44:59 crc kubenswrapper[4820]: I0203 13:44:59.636950 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-wzrgn" podStartSLOduration=3.531084878 podStartE2EDuration="11.636918948s" podCreationTimestamp="2026-02-03 13:44:48 +0000 UTC" firstStartedPulling="2026-02-03 13:44:50.412248788 +0000 UTC m=+6007.935324652" lastFinishedPulling="2026-02-03 13:44:58.518082858 +0000 UTC m=+6016.041158722" observedRunningTime="2026-02-03 13:44:59.633859786 +0000 UTC m=+6017.156935660" watchObservedRunningTime="2026-02-03 13:44:59.636918948 +0000 UTC m=+6017.159994812" Feb 03 13:45:00 crc kubenswrapper[4820]: I0203 13:45:00.162976 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29502105-cmtcx"] Feb 03 13:45:00 crc 
kubenswrapper[4820]: I0203 13:45:00.164849 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29502105-cmtcx" Feb 03 13:45:00 crc kubenswrapper[4820]: I0203 13:45:00.167522 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 03 13:45:00 crc kubenswrapper[4820]: I0203 13:45:00.169048 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 03 13:45:00 crc kubenswrapper[4820]: I0203 13:45:00.171937 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29502105-cmtcx"] Feb 03 13:45:00 crc kubenswrapper[4820]: I0203 13:45:00.269955 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v5nv9\" (UniqueName: \"kubernetes.io/projected/85f5570a-59ee-439b-92a8-18730d2edfc5-kube-api-access-v5nv9\") pod \"collect-profiles-29502105-cmtcx\" (UID: \"85f5570a-59ee-439b-92a8-18730d2edfc5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29502105-cmtcx" Feb 03 13:45:00 crc kubenswrapper[4820]: I0203 13:45:00.270154 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/85f5570a-59ee-439b-92a8-18730d2edfc5-secret-volume\") pod \"collect-profiles-29502105-cmtcx\" (UID: \"85f5570a-59ee-439b-92a8-18730d2edfc5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29502105-cmtcx" Feb 03 13:45:00 crc kubenswrapper[4820]: I0203 13:45:00.270541 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/85f5570a-59ee-439b-92a8-18730d2edfc5-config-volume\") pod \"collect-profiles-29502105-cmtcx\" (UID: \"85f5570a-59ee-439b-92a8-18730d2edfc5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29502105-cmtcx" Feb 03 13:45:00 crc kubenswrapper[4820]: I0203 13:45:00.372566 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v5nv9\" (UniqueName: \"kubernetes.io/projected/85f5570a-59ee-439b-92a8-18730d2edfc5-kube-api-access-v5nv9\") pod \"collect-profiles-29502105-cmtcx\" (UID: \"85f5570a-59ee-439b-92a8-18730d2edfc5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29502105-cmtcx" Feb 03 13:45:00 crc kubenswrapper[4820]: I0203 13:45:00.372683 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/85f5570a-59ee-439b-92a8-18730d2edfc5-secret-volume\") pod \"collect-profiles-29502105-cmtcx\" (UID: \"85f5570a-59ee-439b-92a8-18730d2edfc5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29502105-cmtcx" Feb 03 13:45:00 crc kubenswrapper[4820]: I0203 13:45:00.372742 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/85f5570a-59ee-439b-92a8-18730d2edfc5-config-volume\") pod \"collect-profiles-29502105-cmtcx\" (UID: \"85f5570a-59ee-439b-92a8-18730d2edfc5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29502105-cmtcx" Feb 03 13:45:00 crc kubenswrapper[4820]: I0203 13:45:00.373741 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" 
(UniqueName: \"kubernetes.io/configmap/85f5570a-59ee-439b-92a8-18730d2edfc5-config-volume\") pod \"collect-profiles-29502105-cmtcx\" (UID: \"85f5570a-59ee-439b-92a8-18730d2edfc5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29502105-cmtcx" Feb 03 13:45:00 crc kubenswrapper[4820]: I0203 13:45:00.378806 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/85f5570a-59ee-439b-92a8-18730d2edfc5-secret-volume\") pod \"collect-profiles-29502105-cmtcx\" (UID: \"85f5570a-59ee-439b-92a8-18730d2edfc5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29502105-cmtcx" Feb 03 13:45:00 crc kubenswrapper[4820]: I0203 13:45:00.397622 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v5nv9\" (UniqueName: \"kubernetes.io/projected/85f5570a-59ee-439b-92a8-18730d2edfc5-kube-api-access-v5nv9\") pod \"collect-profiles-29502105-cmtcx\" (UID: \"85f5570a-59ee-439b-92a8-18730d2edfc5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29502105-cmtcx" Feb 03 13:45:00 crc kubenswrapper[4820]: I0203 13:45:00.718271 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29502105-cmtcx" Feb 03 13:45:01 crc kubenswrapper[4820]: I0203 13:45:01.299910 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29502105-cmtcx"] Feb 03 13:45:01 crc kubenswrapper[4820]: W0203 13:45:01.303071 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod85f5570a_59ee_439b_92a8_18730d2edfc5.slice/crio-5aefe136bbb448c7f9bf59748c52e5bcc1391ef2d3125af3fc404c6d2d55da9d WatchSource:0}: Error finding container 5aefe136bbb448c7f9bf59748c52e5bcc1391ef2d3125af3fc404c6d2d55da9d: Status 404 returned error can't find the container with id 5aefe136bbb448c7f9bf59748c52e5bcc1391ef2d3125af3fc404c6d2d55da9d Feb 03 13:45:01 crc kubenswrapper[4820]: I0203 13:45:01.720586 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29502105-cmtcx" event={"ID":"85f5570a-59ee-439b-92a8-18730d2edfc5","Type":"ContainerStarted","Data":"49e3ae4151cee1cf91b0029178b891b05556527db0c5d67cb0fcc4748226fcce"} Feb 03 13:45:01 crc kubenswrapper[4820]: I0203 13:45:01.720997 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29502105-cmtcx" event={"ID":"85f5570a-59ee-439b-92a8-18730d2edfc5","Type":"ContainerStarted","Data":"5aefe136bbb448c7f9bf59748c52e5bcc1391ef2d3125af3fc404c6d2d55da9d"} Feb 03 13:45:01 crc kubenswrapper[4820]: I0203 13:45:01.743759 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29502105-cmtcx" podStartSLOduration=1.743730727 podStartE2EDuration="1.743730727s" podCreationTimestamp="2026-02-03 13:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 13:45:01.738813326 +0000 UTC m=+6019.261889190" watchObservedRunningTime="2026-02-03 13:45:01.743730727 +0000 UTC m=+6019.266806611" Feb 03 13:45:02 crc kubenswrapper[4820]: I0203 13:45:02.741587 4820 generic.go:334] "Generic (PLEG): container finished" podID="85f5570a-59ee-439b-92a8-18730d2edfc5" 
containerID="49e3ae4151cee1cf91b0029178b891b05556527db0c5d67cb0fcc4748226fcce" exitCode=0 Feb 03 13:45:02 crc kubenswrapper[4820]: I0203 13:45:02.741766 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29502105-cmtcx" event={"ID":"85f5570a-59ee-439b-92a8-18730d2edfc5","Type":"ContainerDied","Data":"49e3ae4151cee1cf91b0029178b891b05556527db0c5d67cb0fcc4748226fcce"} Feb 03 13:45:04 crc kubenswrapper[4820]: I0203 13:45:04.548638 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29502105-cmtcx" Feb 03 13:45:04 crc kubenswrapper[4820]: I0203 13:45:04.600970 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/85f5570a-59ee-439b-92a8-18730d2edfc5-secret-volume\") pod \"85f5570a-59ee-439b-92a8-18730d2edfc5\" (UID: \"85f5570a-59ee-439b-92a8-18730d2edfc5\") " Feb 03 13:45:04 crc kubenswrapper[4820]: I0203 13:45:04.601231 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/85f5570a-59ee-439b-92a8-18730d2edfc5-config-volume\") pod \"85f5570a-59ee-439b-92a8-18730d2edfc5\" (UID: \"85f5570a-59ee-439b-92a8-18730d2edfc5\") " Feb 03 13:45:04 crc kubenswrapper[4820]: I0203 13:45:04.601327 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v5nv9\" (UniqueName: \"kubernetes.io/projected/85f5570a-59ee-439b-92a8-18730d2edfc5-kube-api-access-v5nv9\") pod \"85f5570a-59ee-439b-92a8-18730d2edfc5\" (UID: \"85f5570a-59ee-439b-92a8-18730d2edfc5\") " Feb 03 13:45:04 crc kubenswrapper[4820]: I0203 13:45:04.602247 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/85f5570a-59ee-439b-92a8-18730d2edfc5-config-volume" (OuterVolumeSpecName: "config-volume") pod "85f5570a-59ee-439b-92a8-18730d2edfc5" (UID: "85f5570a-59ee-439b-92a8-18730d2edfc5"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 13:45:04 crc kubenswrapper[4820]: I0203 13:45:04.606681 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/85f5570a-59ee-439b-92a8-18730d2edfc5-kube-api-access-v5nv9" (OuterVolumeSpecName: "kube-api-access-v5nv9") pod "85f5570a-59ee-439b-92a8-18730d2edfc5" (UID: "85f5570a-59ee-439b-92a8-18730d2edfc5"). InnerVolumeSpecName "kube-api-access-v5nv9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:45:04 crc kubenswrapper[4820]: I0203 13:45:04.613051 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85f5570a-59ee-439b-92a8-18730d2edfc5-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "85f5570a-59ee-439b-92a8-18730d2edfc5" (UID: "85f5570a-59ee-439b-92a8-18730d2edfc5"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:45:04 crc kubenswrapper[4820]: I0203 13:45:04.704046 4820 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/85f5570a-59ee-439b-92a8-18730d2edfc5-config-volume\") on node \"crc\" DevicePath \"\"" Feb 03 13:45:04 crc kubenswrapper[4820]: I0203 13:45:04.704096 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v5nv9\" (UniqueName: \"kubernetes.io/projected/85f5570a-59ee-439b-92a8-18730d2edfc5-kube-api-access-v5nv9\") on node \"crc\" DevicePath \"\"" Feb 03 13:45:04 crc kubenswrapper[4820]: I0203 13:45:04.704111 4820 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/85f5570a-59ee-439b-92a8-18730d2edfc5-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 03 13:45:04 crc kubenswrapper[4820]: I0203 13:45:04.762966 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29502105-cmtcx" event={"ID":"85f5570a-59ee-439b-92a8-18730d2edfc5","Type":"ContainerDied","Data":"5aefe136bbb448c7f9bf59748c52e5bcc1391ef2d3125af3fc404c6d2d55da9d"} Feb 03 13:45:04 crc kubenswrapper[4820]: I0203 13:45:04.763009 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5aefe136bbb448c7f9bf59748c52e5bcc1391ef2d3125af3fc404c6d2d55da9d" Feb 03 13:45:04 crc kubenswrapper[4820]: I0203 13:45:04.763027 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29502105-cmtcx" Feb 03 13:45:05 crc kubenswrapper[4820]: I0203 13:45:05.662136 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29502060-4cv5q"] Feb 03 13:45:05 crc kubenswrapper[4820]: I0203 13:45:05.670951 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29502060-4cv5q"] Feb 03 13:45:06 crc kubenswrapper[4820]: I0203 13:45:06.142346 4820 scope.go:117] "RemoveContainer" containerID="ce5b3f4ddc52fde90b7cdab55ef15a30de9daf09fd19f6b7818c0bdb8e032e6a" Feb 03 13:45:06 crc kubenswrapper[4820]: E0203 13:45:06.142718 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qj7xr_openshift-machine-config-operator(2c02def6-29f2-448e-80ec-0c8ee290f053)\"" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" Feb 03 13:45:07 crc kubenswrapper[4820]: I0203 13:45:07.162748 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="21f14082-d158-4532-810b-ac2fa83e4455" path="/var/lib/kubelet/pods/21f14082-d158-4532-810b-ac2fa83e4455/volumes" Feb 03 13:45:09 crc kubenswrapper[4820]: I0203 13:45:09.370518 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-wzrgn" Feb 03 13:45:09 crc kubenswrapper[4820]: I0203 13:45:09.370873 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-wzrgn" Feb 03 13:45:09 crc kubenswrapper[4820]: I0203 13:45:09.428541 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-wzrgn" Feb 03 13:45:09 crc 
kubenswrapper[4820]: I0203 13:45:09.868068 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-wzrgn" Feb 03 13:45:09 crc kubenswrapper[4820]: I0203 13:45:09.930828 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-wzrgn"] Feb 03 13:45:11 crc kubenswrapper[4820]: I0203 13:45:11.668779 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-9jzgf_c1ad6c2d-5ab9-4904-9426-00ebf486a90d/prometheus-operator/0.log" Feb 03 13:45:11 crc kubenswrapper[4820]: I0203 13:45:11.832770 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-wzrgn" podUID="f5902d12-ec3b-456b-abb8-0155be3e1619" containerName="registry-server" containerID="cri-o://0e1d176c78cb22ac5aa2925cf0d040d1b86dcda301d978cba6073227894a9994" gracePeriod=2 Feb 03 13:45:11 crc kubenswrapper[4820]: I0203 13:45:11.833264 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-64dd7f56f9-s8llv_3202dd82-6cc2-478c-9eb1-7810a23ce4bb/prometheus-operator-admission-webhook/0.log" Feb 03 13:45:11 crc kubenswrapper[4820]: I0203 13:45:11.834425 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-64dd7f56f9-p4b6m_67c9fe0e-5cc6-469b-90a0-11adfac994cc/prometheus-operator-admission-webhook/0.log" Feb 03 13:45:12 crc kubenswrapper[4820]: I0203 13:45:12.128830 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-lshn6_c22a4473-b3ac-4b33-9a20-320b76c330ab/operator/0.log" Feb 03 13:45:12 crc kubenswrapper[4820]: I0203 13:45:12.225976 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-gx6fv_4f0df377-6a2b-4270-974f-3d178cdc47d9/perses-operator/0.log" Feb 03 13:45:12 crc kubenswrapper[4820]: I0203 13:45:12.252426 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-wzrgn" Feb 03 13:45:12 crc kubenswrapper[4820]: I0203 13:45:12.441843 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5902d12-ec3b-456b-abb8-0155be3e1619-utilities\") pod \"f5902d12-ec3b-456b-abb8-0155be3e1619\" (UID: \"f5902d12-ec3b-456b-abb8-0155be3e1619\") " Feb 03 13:45:12 crc kubenswrapper[4820]: I0203 13:45:12.441931 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jd25w\" (UniqueName: \"kubernetes.io/projected/f5902d12-ec3b-456b-abb8-0155be3e1619-kube-api-access-jd25w\") pod \"f5902d12-ec3b-456b-abb8-0155be3e1619\" (UID: \"f5902d12-ec3b-456b-abb8-0155be3e1619\") " Feb 03 13:45:12 crc kubenswrapper[4820]: I0203 13:45:12.441974 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5902d12-ec3b-456b-abb8-0155be3e1619-catalog-content\") pod \"f5902d12-ec3b-456b-abb8-0155be3e1619\" (UID: \"f5902d12-ec3b-456b-abb8-0155be3e1619\") " Feb 03 13:45:12 crc kubenswrapper[4820]: I0203 13:45:12.442564 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f5902d12-ec3b-456b-abb8-0155be3e1619-utilities" (OuterVolumeSpecName: "utilities") pod "f5902d12-ec3b-456b-abb8-0155be3e1619" (UID: "f5902d12-ec3b-456b-abb8-0155be3e1619"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 13:45:12 crc kubenswrapper[4820]: I0203 13:45:12.442913 4820 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5902d12-ec3b-456b-abb8-0155be3e1619-utilities\") on node \"crc\" DevicePath \"\"" Feb 03 13:45:12 crc kubenswrapper[4820]: I0203 13:45:12.448997 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f5902d12-ec3b-456b-abb8-0155be3e1619-kube-api-access-jd25w" (OuterVolumeSpecName: "kube-api-access-jd25w") pod "f5902d12-ec3b-456b-abb8-0155be3e1619" (UID: "f5902d12-ec3b-456b-abb8-0155be3e1619"). InnerVolumeSpecName "kube-api-access-jd25w". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:45:12 crc kubenswrapper[4820]: I0203 13:45:12.544966 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jd25w\" (UniqueName: \"kubernetes.io/projected/f5902d12-ec3b-456b-abb8-0155be3e1619-kube-api-access-jd25w\") on node \"crc\" DevicePath \"\"" Feb 03 13:45:12 crc kubenswrapper[4820]: I0203 13:45:12.575914 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f5902d12-ec3b-456b-abb8-0155be3e1619-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f5902d12-ec3b-456b-abb8-0155be3e1619" (UID: "f5902d12-ec3b-456b-abb8-0155be3e1619"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 13:45:12 crc kubenswrapper[4820]: I0203 13:45:12.648467 4820 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5902d12-ec3b-456b-abb8-0155be3e1619-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 03 13:45:12 crc kubenswrapper[4820]: I0203 13:45:12.851224 4820 generic.go:334] "Generic (PLEG): container finished" podID="f5902d12-ec3b-456b-abb8-0155be3e1619" containerID="0e1d176c78cb22ac5aa2925cf0d040d1b86dcda301d978cba6073227894a9994" exitCode=0 Feb 03 13:45:12 crc kubenswrapper[4820]: I0203 13:45:12.851309 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wzrgn" event={"ID":"f5902d12-ec3b-456b-abb8-0155be3e1619","Type":"ContainerDied","Data":"0e1d176c78cb22ac5aa2925cf0d040d1b86dcda301d978cba6073227894a9994"} Feb 03 13:45:12 crc kubenswrapper[4820]: I0203 13:45:12.851365 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wzrgn" Feb 03 13:45:12 crc kubenswrapper[4820]: I0203 13:45:12.851549 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wzrgn" event={"ID":"f5902d12-ec3b-456b-abb8-0155be3e1619","Type":"ContainerDied","Data":"e536bf7ceb39dcd87bbca62394f4da128ae61729dc3ba14216ea830f53ec41ce"} Feb 03 13:45:12 crc kubenswrapper[4820]: I0203 13:45:12.851585 4820 scope.go:117] "RemoveContainer" containerID="0e1d176c78cb22ac5aa2925cf0d040d1b86dcda301d978cba6073227894a9994" Feb 03 13:45:12 crc kubenswrapper[4820]: I0203 13:45:12.901598 4820 scope.go:117] "RemoveContainer" containerID="bad666814c4f97b3f37ebe9d843be7a8ead4666728d79bf46f66eda80edf9187" Feb 03 13:45:12 crc kubenswrapper[4820]: I0203 13:45:12.904427 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-wzrgn"] Feb 03 13:45:12 crc kubenswrapper[4820]: I0203 13:45:12.921683 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-wzrgn"] Feb 03 13:45:12 crc kubenswrapper[4820]: I0203 13:45:12.934194 4820 scope.go:117] "RemoveContainer" containerID="79c35a7887fe7a6d2fc33b6795e6dd258509ff630fcad1a6a7868b4e87a89580" Feb 03 13:45:12 crc kubenswrapper[4820]: I0203 13:45:12.980663 4820 scope.go:117] "RemoveContainer" containerID="0e1d176c78cb22ac5aa2925cf0d040d1b86dcda301d978cba6073227894a9994" Feb 03 13:45:12 crc kubenswrapper[4820]: E0203 13:45:12.981234 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0e1d176c78cb22ac5aa2925cf0d040d1b86dcda301d978cba6073227894a9994\": container with ID starting with 0e1d176c78cb22ac5aa2925cf0d040d1b86dcda301d978cba6073227894a9994 not found: ID does not exist" containerID="0e1d176c78cb22ac5aa2925cf0d040d1b86dcda301d978cba6073227894a9994" Feb 03 13:45:12 crc kubenswrapper[4820]: I0203 13:45:12.981286 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0e1d176c78cb22ac5aa2925cf0d040d1b86dcda301d978cba6073227894a9994"} err="failed to get container status \"0e1d176c78cb22ac5aa2925cf0d040d1b86dcda301d978cba6073227894a9994\": rpc error: code = NotFound desc = could not find container \"0e1d176c78cb22ac5aa2925cf0d040d1b86dcda301d978cba6073227894a9994\": container with ID starting with 0e1d176c78cb22ac5aa2925cf0d040d1b86dcda301d978cba6073227894a9994 not found: ID does not exist" Feb 03 13:45:12 crc 
kubenswrapper[4820]: I0203 13:45:12.981319 4820 scope.go:117] "RemoveContainer" containerID="bad666814c4f97b3f37ebe9d843be7a8ead4666728d79bf46f66eda80edf9187" Feb 03 13:45:12 crc kubenswrapper[4820]: E0203 13:45:12.981781 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bad666814c4f97b3f37ebe9d843be7a8ead4666728d79bf46f66eda80edf9187\": container with ID starting with bad666814c4f97b3f37ebe9d843be7a8ead4666728d79bf46f66eda80edf9187 not found: ID does not exist" containerID="bad666814c4f97b3f37ebe9d843be7a8ead4666728d79bf46f66eda80edf9187" Feb 03 13:45:12 crc kubenswrapper[4820]: I0203 13:45:12.981809 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bad666814c4f97b3f37ebe9d843be7a8ead4666728d79bf46f66eda80edf9187"} err="failed to get container status \"bad666814c4f97b3f37ebe9d843be7a8ead4666728d79bf46f66eda80edf9187\": rpc error: code = NotFound desc = could not find container \"bad666814c4f97b3f37ebe9d843be7a8ead4666728d79bf46f66eda80edf9187\": container with ID starting with bad666814c4f97b3f37ebe9d843be7a8ead4666728d79bf46f66eda80edf9187 not found: ID does not exist" Feb 03 13:45:12 crc kubenswrapper[4820]: I0203 13:45:12.981833 4820 scope.go:117] "RemoveContainer" containerID="79c35a7887fe7a6d2fc33b6795e6dd258509ff630fcad1a6a7868b4e87a89580" Feb 03 13:45:12 crc kubenswrapper[4820]: E0203 13:45:12.982227 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"79c35a7887fe7a6d2fc33b6795e6dd258509ff630fcad1a6a7868b4e87a89580\": container with ID starting with 79c35a7887fe7a6d2fc33b6795e6dd258509ff630fcad1a6a7868b4e87a89580 not found: ID does not exist" containerID="79c35a7887fe7a6d2fc33b6795e6dd258509ff630fcad1a6a7868b4e87a89580" Feb 03 13:45:12 crc kubenswrapper[4820]: I0203 13:45:12.982256 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"79c35a7887fe7a6d2fc33b6795e6dd258509ff630fcad1a6a7868b4e87a89580"} err="failed to get container status \"79c35a7887fe7a6d2fc33b6795e6dd258509ff630fcad1a6a7868b4e87a89580\": rpc error: code = NotFound desc = could not find container \"79c35a7887fe7a6d2fc33b6795e6dd258509ff630fcad1a6a7868b4e87a89580\": container with ID starting with 79c35a7887fe7a6d2fc33b6795e6dd258509ff630fcad1a6a7868b4e87a89580 not found: ID does not exist" Feb 03 13:45:13 crc kubenswrapper[4820]: I0203 13:45:13.161279 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f5902d12-ec3b-456b-abb8-0155be3e1619" path="/var/lib/kubelet/pods/f5902d12-ec3b-456b-abb8-0155be3e1619/volumes" Feb 03 13:45:19 crc kubenswrapper[4820]: I0203 13:45:19.144477 4820 scope.go:117] "RemoveContainer" containerID="ce5b3f4ddc52fde90b7cdab55ef15a30de9daf09fd19f6b7818c0bdb8e032e6a" Feb 03 13:45:19 crc kubenswrapper[4820]: E0203 13:45:19.145528 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qj7xr_openshift-machine-config-operator(2c02def6-29f2-448e-80ec-0c8ee290f053)\"" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" Feb 03 13:45:26 crc kubenswrapper[4820]: I0203 13:45:26.680067 4820 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_controller-6968d8fdc4-bl7d9_a8856687-50aa-469b-acca-0c2e83d3a95a/kube-rbac-proxy/0.log" Feb 03 13:45:26 crc kubenswrapper[4820]: I0203 13:45:26.808179 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-bl7d9_a8856687-50aa-469b-acca-0c2e83d3a95a/controller/0.log" Feb 03 13:45:26 crc kubenswrapper[4820]: I0203 13:45:26.935050 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-48tvq_21bb4d95-ca5f-4d31-b7e6-45e04a4e84f0/cp-frr-files/0.log" Feb 03 13:45:27 crc kubenswrapper[4820]: I0203 13:45:27.130504 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-48tvq_21bb4d95-ca5f-4d31-b7e6-45e04a4e84f0/cp-reloader/0.log" Feb 03 13:45:27 crc kubenswrapper[4820]: I0203 13:45:27.131666 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-48tvq_21bb4d95-ca5f-4d31-b7e6-45e04a4e84f0/cp-metrics/0.log" Feb 03 13:45:27 crc kubenswrapper[4820]: I0203 13:45:27.141579 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-48tvq_21bb4d95-ca5f-4d31-b7e6-45e04a4e84f0/cp-frr-files/0.log" Feb 03 13:45:27 crc kubenswrapper[4820]: I0203 13:45:27.148628 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-48tvq_21bb4d95-ca5f-4d31-b7e6-45e04a4e84f0/cp-reloader/0.log" Feb 03 13:45:27 crc kubenswrapper[4820]: I0203 13:45:27.328338 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-48tvq_21bb4d95-ca5f-4d31-b7e6-45e04a4e84f0/cp-metrics/0.log" Feb 03 13:45:27 crc kubenswrapper[4820]: I0203 13:45:27.340573 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-48tvq_21bb4d95-ca5f-4d31-b7e6-45e04a4e84f0/cp-reloader/0.log" Feb 03 13:45:27 crc kubenswrapper[4820]: I0203 13:45:27.340671 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-48tvq_21bb4d95-ca5f-4d31-b7e6-45e04a4e84f0/cp-metrics/0.log" Feb 03 13:45:27 crc kubenswrapper[4820]: I0203 13:45:27.341169 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-48tvq_21bb4d95-ca5f-4d31-b7e6-45e04a4e84f0/cp-frr-files/0.log" Feb 03 13:45:27 crc kubenswrapper[4820]: I0203 13:45:27.568952 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-48tvq_21bb4d95-ca5f-4d31-b7e6-45e04a4e84f0/cp-frr-files/0.log" Feb 03 13:45:27 crc kubenswrapper[4820]: I0203 13:45:27.573281 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-48tvq_21bb4d95-ca5f-4d31-b7e6-45e04a4e84f0/cp-metrics/0.log" Feb 03 13:45:27 crc kubenswrapper[4820]: I0203 13:45:27.573914 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-48tvq_21bb4d95-ca5f-4d31-b7e6-45e04a4e84f0/cp-reloader/0.log" Feb 03 13:45:27 crc kubenswrapper[4820]: I0203 13:45:27.592848 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-48tvq_21bb4d95-ca5f-4d31-b7e6-45e04a4e84f0/controller/0.log" Feb 03 13:45:27 crc kubenswrapper[4820]: I0203 13:45:27.766785 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-48tvq_21bb4d95-ca5f-4d31-b7e6-45e04a4e84f0/frr-metrics/0.log" Feb 03 13:45:27 crc kubenswrapper[4820]: I0203 13:45:27.779343 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-48tvq_21bb4d95-ca5f-4d31-b7e6-45e04a4e84f0/kube-rbac-proxy/0.log" Feb 03 13:45:27 crc 
kubenswrapper[4820]: I0203 13:45:27.827273 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-48tvq_21bb4d95-ca5f-4d31-b7e6-45e04a4e84f0/kube-rbac-proxy-frr/0.log" Feb 03 13:45:28 crc kubenswrapper[4820]: I0203 13:45:28.028827 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-48tvq_21bb4d95-ca5f-4d31-b7e6-45e04a4e84f0/reloader/0.log" Feb 03 13:45:28 crc kubenswrapper[4820]: I0203 13:45:28.068004 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-d9c5m_11969ac0-96d5-4195-bfe8-f619e11db963/frr-k8s-webhook-server/0.log" Feb 03 13:45:28 crc kubenswrapper[4820]: I0203 13:45:28.275987 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-7cbbb967bd-w5q2v_15d57aea-1890-4499-9c6b-ab4af2e3715c/manager/0.log" Feb 03 13:45:28 crc kubenswrapper[4820]: I0203 13:45:28.459616 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-84b6f7d797-4wm8w_50906228-b0d7-4552-916a-b4a010b7b346/webhook-server/0.log" Feb 03 13:45:28 crc kubenswrapper[4820]: I0203 13:45:28.537093 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-scj8c_8bc51efb-561f-4e59-960c-99f18a5ef7d8/kube-rbac-proxy/0.log" Feb 03 13:45:29 crc kubenswrapper[4820]: I0203 13:45:29.578968 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-scj8c_8bc51efb-561f-4e59-960c-99f18a5ef7d8/speaker/0.log" Feb 03 13:45:29 crc kubenswrapper[4820]: I0203 13:45:29.796199 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-48tvq_21bb4d95-ca5f-4d31-b7e6-45e04a4e84f0/frr/0.log" Feb 03 13:45:34 crc kubenswrapper[4820]: I0203 13:45:34.143342 4820 scope.go:117] "RemoveContainer" containerID="ce5b3f4ddc52fde90b7cdab55ef15a30de9daf09fd19f6b7818c0bdb8e032e6a" Feb 03 13:45:34 crc kubenswrapper[4820]: E0203 13:45:34.144747 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qj7xr_openshift-machine-config-operator(2c02def6-29f2-448e-80ec-0c8ee290f053)\"" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" Feb 03 13:45:43 crc kubenswrapper[4820]: I0203 13:45:43.070209 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcjjbzx_da1615c1-bd74-4ac2-91ca-4a00a31366e6/util/0.log" Feb 03 13:45:43 crc kubenswrapper[4820]: I0203 13:45:43.298344 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcjjbzx_da1615c1-bd74-4ac2-91ca-4a00a31366e6/pull/0.log" Feb 03 13:45:43 crc kubenswrapper[4820]: I0203 13:45:43.299168 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcjjbzx_da1615c1-bd74-4ac2-91ca-4a00a31366e6/util/0.log" Feb 03 13:45:43 crc kubenswrapper[4820]: I0203 13:45:43.342807 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcjjbzx_da1615c1-bd74-4ac2-91ca-4a00a31366e6/pull/0.log" Feb 03 13:45:43 crc kubenswrapper[4820]: I0203 
Feb 03 13:45:43 crc kubenswrapper[4820]: I0203 13:45:43.538440 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcjjbzx_da1615c1-bd74-4ac2-91ca-4a00a31366e6/extract/0.log"
Feb 03 13:45:43 crc kubenswrapper[4820]: I0203 13:45:43.547597 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcjjbzx_da1615c1-bd74-4ac2-91ca-4a00a31366e6/pull/0.log"
Feb 03 13:45:43 crc kubenswrapper[4820]: I0203 13:45:43.764655 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713dz8hh_dd3be9c9-9970-4055-b150-fb5ad093ef1e/util/0.log"
Feb 03 13:45:43 crc kubenswrapper[4820]: I0203 13:45:43.921243 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713dz8hh_dd3be9c9-9970-4055-b150-fb5ad093ef1e/pull/0.log"
Feb 03 13:45:43 crc kubenswrapper[4820]: I0203 13:45:43.926198 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713dz8hh_dd3be9c9-9970-4055-b150-fb5ad093ef1e/util/0.log"
Feb 03 13:45:43 crc kubenswrapper[4820]: I0203 13:45:43.929059 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713dz8hh_dd3be9c9-9970-4055-b150-fb5ad093ef1e/pull/0.log"
Feb 03 13:45:44 crc kubenswrapper[4820]: I0203 13:45:44.130862 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713dz8hh_dd3be9c9-9970-4055-b150-fb5ad093ef1e/util/0.log"
Feb 03 13:45:44 crc kubenswrapper[4820]: I0203 13:45:44.149698 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713dz8hh_dd3be9c9-9970-4055-b150-fb5ad093ef1e/pull/0.log"
Feb 03 13:45:44 crc kubenswrapper[4820]: I0203 13:45:44.192240 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713dz8hh_dd3be9c9-9970-4055-b150-fb5ad093ef1e/extract/0.log"
Feb 03 13:45:44 crc kubenswrapper[4820]: I0203 13:45:44.311969 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xt4wt_73a0ef2f-bdcb-4042-813c-597bd2694e20/util/0.log"
Feb 03 13:45:44 crc kubenswrapper[4820]: I0203 13:45:44.513043 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xt4wt_73a0ef2f-bdcb-4042-813c-597bd2694e20/pull/0.log"
Feb 03 13:45:44 crc kubenswrapper[4820]: I0203 13:45:44.517550 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xt4wt_73a0ef2f-bdcb-4042-813c-597bd2694e20/util/0.log"
Feb 03 13:45:44 crc kubenswrapper[4820]: I0203 13:45:44.522023 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xt4wt_73a0ef2f-bdcb-4042-813c-597bd2694e20/pull/0.log"
Feb 03 13:45:44 crc kubenswrapper[4820]: I0203 13:45:44.712494 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xt4wt_73a0ef2f-bdcb-4042-813c-597bd2694e20/extract/0.log"
Feb 03 13:45:44 crc kubenswrapper[4820]: I0203 13:45:44.725134 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xt4wt_73a0ef2f-bdcb-4042-813c-597bd2694e20/pull/0.log"
Feb 03 13:45:44 crc kubenswrapper[4820]: I0203 13:45:44.750799 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xt4wt_73a0ef2f-bdcb-4042-813c-597bd2694e20/util/0.log"
Feb 03 13:45:44 crc kubenswrapper[4820]: I0203 13:45:44.932379 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-wngqc_4bd3b782-6780-4d50-9e3c-391f1930b50a/extract-utilities/0.log"
Feb 03 13:45:45 crc kubenswrapper[4820]: I0203 13:45:45.123204 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-wngqc_4bd3b782-6780-4d50-9e3c-391f1930b50a/extract-content/0.log"
Feb 03 13:45:45 crc kubenswrapper[4820]: I0203 13:45:45.313070 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-wngqc_4bd3b782-6780-4d50-9e3c-391f1930b50a/extract-utilities/0.log"
Feb 03 13:45:45 crc kubenswrapper[4820]: I0203 13:45:45.350612 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-wngqc_4bd3b782-6780-4d50-9e3c-391f1930b50a/extract-content/0.log"
Feb 03 13:45:45 crc kubenswrapper[4820]: I0203 13:45:45.573234 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-wngqc_4bd3b782-6780-4d50-9e3c-391f1930b50a/extract-content/0.log"
Feb 03 13:45:45 crc kubenswrapper[4820]: I0203 13:45:45.596720 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-wngqc_4bd3b782-6780-4d50-9e3c-391f1930b50a/extract-utilities/0.log"
Feb 03 13:45:45 crc kubenswrapper[4820]: I0203 13:45:45.921403 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-qt59j_aa46ef09-da2f-4b32-8091-4d745eff0174/extract-utilities/0.log"
Feb 03 13:45:46 crc kubenswrapper[4820]: I0203 13:45:46.193271 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-qt59j_aa46ef09-da2f-4b32-8091-4d745eff0174/extract-content/0.log"
Feb 03 13:45:46 crc kubenswrapper[4820]: I0203 13:45:46.255542 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-qt59j_aa46ef09-da2f-4b32-8091-4d745eff0174/extract-content/0.log"
Feb 03 13:45:46 crc kubenswrapper[4820]: I0203 13:45:46.368403 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-wngqc_4bd3b782-6780-4d50-9e3c-391f1930b50a/registry-server/0.log"
Feb 03 13:45:46 crc kubenswrapper[4820]: I0203 13:45:46.428864 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-qt59j_aa46ef09-da2f-4b32-8091-4d745eff0174/extract-utilities/0.log"
Feb 03 13:45:46 crc kubenswrapper[4820]: I0203 13:45:46.663389 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-qt59j_aa46ef09-da2f-4b32-8091-4d745eff0174/extract-utilities/0.log"
Feb 03 13:45:46 crc kubenswrapper[4820]: I0203 13:45:46.691139 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-qt59j_aa46ef09-da2f-4b32-8091-4d745eff0174/extract-content/0.log"
Feb 03 13:45:46 crc kubenswrapper[4820]: I0203 13:45:46.966563 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-qr29p_5bd7dcaf-6cbd-4e0a-a9f4-9e3cc9a2a738/marketplace-operator/0.log"
Feb 03 13:45:47 crc kubenswrapper[4820]: I0203 13:45:47.016049 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-6kzpp_d2036cb3-d406-4eea-8eac-3fda178af56a/extract-utilities/0.log"
Feb 03 13:45:47 crc kubenswrapper[4820]: I0203 13:45:47.245120 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-6kzpp_d2036cb3-d406-4eea-8eac-3fda178af56a/extract-utilities/0.log"
Feb 03 13:45:47 crc kubenswrapper[4820]: I0203 13:45:47.313473 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-6kzpp_d2036cb3-d406-4eea-8eac-3fda178af56a/extract-content/0.log"
Feb 03 13:45:47 crc kubenswrapper[4820]: I0203 13:45:47.324449 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-6kzpp_d2036cb3-d406-4eea-8eac-3fda178af56a/extract-content/0.log"
Feb 03 13:45:47 crc kubenswrapper[4820]: I0203 13:45:47.512263 4820 scope.go:117] "RemoveContainer" containerID="fa8d40271f6afa031e706b87372fd2dec63b7292f4ec3ce299c5dd85b8f9af81"
Feb 03 13:45:47 crc kubenswrapper[4820]: I0203 13:45:47.605862 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-6kzpp_d2036cb3-d406-4eea-8eac-3fda178af56a/extract-content/0.log"
Feb 03 13:45:47 crc kubenswrapper[4820]: I0203 13:45:47.610813 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-6kzpp_d2036cb3-d406-4eea-8eac-3fda178af56a/extract-utilities/0.log"
Feb 03 13:45:48 crc kubenswrapper[4820]: I0203 13:45:48.000837 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-qt59j_aa46ef09-da2f-4b32-8091-4d745eff0174/registry-server/0.log"
Feb 03 13:45:48 crc kubenswrapper[4820]: I0203 13:45:48.042614 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-btszd_977ea1bf-f3ac-40c3-8061-bbf78da368c1/extract-utilities/0.log"
Feb 03 13:45:48 crc kubenswrapper[4820]: I0203 13:45:48.091406 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-6kzpp_d2036cb3-d406-4eea-8eac-3fda178af56a/registry-server/0.log"
Feb 03 13:45:48 crc kubenswrapper[4820]: I0203 13:45:48.142346 4820 scope.go:117] "RemoveContainer" containerID="ce5b3f4ddc52fde90b7cdab55ef15a30de9daf09fd19f6b7818c0bdb8e032e6a"
Feb 03 13:45:48 crc kubenswrapper[4820]: E0203 13:45:48.142826 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qj7xr_openshift-machine-config-operator(2c02def6-29f2-448e-80ec-0c8ee290f053)\"" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053"
Feb 03 13:45:48 crc kubenswrapper[4820]: I0203 13:45:48.253197 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-btszd_977ea1bf-f3ac-40c3-8061-bbf78da368c1/extract-utilities/0.log"
Feb 03 13:45:48 crc kubenswrapper[4820]: I0203 13:45:48.286591 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-btszd_977ea1bf-f3ac-40c3-8061-bbf78da368c1/extract-content/0.log"
Feb 03 13:45:48 crc kubenswrapper[4820]: I0203 13:45:48.287445 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-btszd_977ea1bf-f3ac-40c3-8061-bbf78da368c1/extract-content/0.log"
Feb 03 13:45:48 crc kubenswrapper[4820]: I0203 13:45:48.507654 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-btszd_977ea1bf-f3ac-40c3-8061-bbf78da368c1/extract-content/0.log"
Feb 03 13:45:48 crc kubenswrapper[4820]: I0203 13:45:48.508302 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-btszd_977ea1bf-f3ac-40c3-8061-bbf78da368c1/extract-utilities/0.log"
Feb 03 13:45:48 crc kubenswrapper[4820]: I0203 13:45:48.767974 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-btszd_977ea1bf-f3ac-40c3-8061-bbf78da368c1/registry-server/0.log"
Feb 03 13:46:00 crc kubenswrapper[4820]: I0203 13:46:00.229424 4820 scope.go:117] "RemoveContainer" containerID="ce5b3f4ddc52fde90b7cdab55ef15a30de9daf09fd19f6b7818c0bdb8e032e6a"
Feb 03 13:46:00 crc kubenswrapper[4820]: E0203 13:46:00.230456 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qj7xr_openshift-machine-config-operator(2c02def6-29f2-448e-80ec-0c8ee290f053)\"" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053"
Feb 03 13:46:04 crc kubenswrapper[4820]: I0203 13:46:04.936721 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-64dd7f56f9-p4b6m_67c9fe0e-5cc6-469b-90a0-11adfac994cc/prometheus-operator-admission-webhook/0.log"
Feb 03 13:46:04 crc kubenswrapper[4820]: I0203 13:46:04.939597 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-9jzgf_c1ad6c2d-5ab9-4904-9426-00ebf486a90d/prometheus-operator/0.log"
Feb 03 13:46:04 crc kubenswrapper[4820]: I0203 13:46:04.945216 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-64dd7f56f9-s8llv_3202dd82-6cc2-478c-9eb1-7810a23ce4bb/prometheus-operator-admission-webhook/0.log"
Feb 03 13:46:05 crc kubenswrapper[4820]: I0203 13:46:05.136116 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-gx6fv_4f0df377-6a2b-4270-974f-3d178cdc47d9/perses-operator/0.log"
Feb 03 13:46:05 crc kubenswrapper[4820]: I0203 13:46:05.142737 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-lshn6_c22a4473-b3ac-4b33-9a20-320b76c330ab/operator/0.log"
path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-lshn6_c22a4473-b3ac-4b33-9a20-320b76c330ab/operator/0.log" Feb 03 13:46:15 crc kubenswrapper[4820]: I0203 13:46:15.142868 4820 scope.go:117] "RemoveContainer" containerID="ce5b3f4ddc52fde90b7cdab55ef15a30de9daf09fd19f6b7818c0bdb8e032e6a" Feb 03 13:46:15 crc kubenswrapper[4820]: E0203 13:46:15.143722 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qj7xr_openshift-machine-config-operator(2c02def6-29f2-448e-80ec-0c8ee290f053)\"" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" Feb 03 13:46:27 crc kubenswrapper[4820]: I0203 13:46:27.145507 4820 scope.go:117] "RemoveContainer" containerID="ce5b3f4ddc52fde90b7cdab55ef15a30de9daf09fd19f6b7818c0bdb8e032e6a" Feb 03 13:46:27 crc kubenswrapper[4820]: E0203 13:46:27.146574 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qj7xr_openshift-machine-config-operator(2c02def6-29f2-448e-80ec-0c8ee290f053)\"" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" Feb 03 13:46:41 crc kubenswrapper[4820]: I0203 13:46:41.144902 4820 scope.go:117] "RemoveContainer" containerID="ce5b3f4ddc52fde90b7cdab55ef15a30de9daf09fd19f6b7818c0bdb8e032e6a" Feb 03 13:46:41 crc kubenswrapper[4820]: I0203 13:46:41.484166 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" event={"ID":"2c02def6-29f2-448e-80ec-0c8ee290f053","Type":"ContainerStarted","Data":"ef180b5ad3083e7765ecf2a351c6b4db38618425b9e53c6f4487f37e71237324"} Feb 03 13:47:19 crc kubenswrapper[4820]: I0203 13:47:19.435061 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-mp5zv"] Feb 03 13:47:19 crc kubenswrapper[4820]: E0203 13:47:19.436057 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5902d12-ec3b-456b-abb8-0155be3e1619" containerName="extract-utilities" Feb 03 13:47:19 crc kubenswrapper[4820]: I0203 13:47:19.436077 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5902d12-ec3b-456b-abb8-0155be3e1619" containerName="extract-utilities" Feb 03 13:47:19 crc kubenswrapper[4820]: E0203 13:47:19.436099 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5902d12-ec3b-456b-abb8-0155be3e1619" containerName="registry-server" Feb 03 13:47:19 crc kubenswrapper[4820]: I0203 13:47:19.436105 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5902d12-ec3b-456b-abb8-0155be3e1619" containerName="registry-server" Feb 03 13:47:19 crc kubenswrapper[4820]: E0203 13:47:19.436129 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85f5570a-59ee-439b-92a8-18730d2edfc5" containerName="collect-profiles" Feb 03 13:47:19 crc kubenswrapper[4820]: I0203 13:47:19.436134 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="85f5570a-59ee-439b-92a8-18730d2edfc5" containerName="collect-profiles" Feb 03 13:47:19 crc kubenswrapper[4820]: E0203 13:47:19.436150 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5902d12-ec3b-456b-abb8-0155be3e1619" 
containerName="extract-content" Feb 03 13:47:19 crc kubenswrapper[4820]: I0203 13:47:19.436156 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5902d12-ec3b-456b-abb8-0155be3e1619" containerName="extract-content" Feb 03 13:47:19 crc kubenswrapper[4820]: I0203 13:47:19.436432 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="85f5570a-59ee-439b-92a8-18730d2edfc5" containerName="collect-profiles" Feb 03 13:47:19 crc kubenswrapper[4820]: I0203 13:47:19.436458 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="f5902d12-ec3b-456b-abb8-0155be3e1619" containerName="registry-server" Feb 03 13:47:19 crc kubenswrapper[4820]: I0203 13:47:19.438062 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mp5zv" Feb 03 13:47:19 crc kubenswrapper[4820]: I0203 13:47:19.453926 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mp5zv"] Feb 03 13:47:19 crc kubenswrapper[4820]: I0203 13:47:19.606564 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/79e2c09a-eecf-4631-a14c-9e0cc8ef4cbd-catalog-content\") pod \"certified-operators-mp5zv\" (UID: \"79e2c09a-eecf-4631-a14c-9e0cc8ef4cbd\") " pod="openshift-marketplace/certified-operators-mp5zv" Feb 03 13:47:19 crc kubenswrapper[4820]: I0203 13:47:19.606837 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hxrpk\" (UniqueName: \"kubernetes.io/projected/79e2c09a-eecf-4631-a14c-9e0cc8ef4cbd-kube-api-access-hxrpk\") pod \"certified-operators-mp5zv\" (UID: \"79e2c09a-eecf-4631-a14c-9e0cc8ef4cbd\") " pod="openshift-marketplace/certified-operators-mp5zv" Feb 03 13:47:19 crc kubenswrapper[4820]: I0203 13:47:19.607027 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/79e2c09a-eecf-4631-a14c-9e0cc8ef4cbd-utilities\") pod \"certified-operators-mp5zv\" (UID: \"79e2c09a-eecf-4631-a14c-9e0cc8ef4cbd\") " pod="openshift-marketplace/certified-operators-mp5zv" Feb 03 13:47:19 crc kubenswrapper[4820]: I0203 13:47:19.709320 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/79e2c09a-eecf-4631-a14c-9e0cc8ef4cbd-catalog-content\") pod \"certified-operators-mp5zv\" (UID: \"79e2c09a-eecf-4631-a14c-9e0cc8ef4cbd\") " pod="openshift-marketplace/certified-operators-mp5zv" Feb 03 13:47:19 crc kubenswrapper[4820]: I0203 13:47:19.709510 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hxrpk\" (UniqueName: \"kubernetes.io/projected/79e2c09a-eecf-4631-a14c-9e0cc8ef4cbd-kube-api-access-hxrpk\") pod \"certified-operators-mp5zv\" (UID: \"79e2c09a-eecf-4631-a14c-9e0cc8ef4cbd\") " pod="openshift-marketplace/certified-operators-mp5zv" Feb 03 13:47:19 crc kubenswrapper[4820]: I0203 13:47:19.709546 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/79e2c09a-eecf-4631-a14c-9e0cc8ef4cbd-utilities\") pod \"certified-operators-mp5zv\" (UID: \"79e2c09a-eecf-4631-a14c-9e0cc8ef4cbd\") " pod="openshift-marketplace/certified-operators-mp5zv" Feb 03 13:47:19 crc kubenswrapper[4820]: I0203 13:47:19.709946 4820 operation_generator.go:637] "MountVolume.SetUp 
Feb 03 13:47:19 crc kubenswrapper[4820]: I0203 13:47:19.709990 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/79e2c09a-eecf-4631-a14c-9e0cc8ef4cbd-utilities\") pod \"certified-operators-mp5zv\" (UID: \"79e2c09a-eecf-4631-a14c-9e0cc8ef4cbd\") " pod="openshift-marketplace/certified-operators-mp5zv"
Feb 03 13:47:19 crc kubenswrapper[4820]: I0203 13:47:19.747406 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hxrpk\" (UniqueName: \"kubernetes.io/projected/79e2c09a-eecf-4631-a14c-9e0cc8ef4cbd-kube-api-access-hxrpk\") pod \"certified-operators-mp5zv\" (UID: \"79e2c09a-eecf-4631-a14c-9e0cc8ef4cbd\") " pod="openshift-marketplace/certified-operators-mp5zv"
Feb 03 13:47:19 crc kubenswrapper[4820]: I0203 13:47:19.781786 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mp5zv"
Feb 03 13:47:20 crc kubenswrapper[4820]: I0203 13:47:20.429153 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mp5zv"]
Feb 03 13:47:20 crc kubenswrapper[4820]: W0203 13:47:20.430755 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod79e2c09a_eecf_4631_a14c_9e0cc8ef4cbd.slice/crio-f770a9aa553603b6dd50fb0281ba0b83b7e92302b19d0e400519ad26ee8f5cb2 WatchSource:0}: Error finding container f770a9aa553603b6dd50fb0281ba0b83b7e92302b19d0e400519ad26ee8f5cb2: Status 404 returned error can't find the container with id f770a9aa553603b6dd50fb0281ba0b83b7e92302b19d0e400519ad26ee8f5cb2
Feb 03 13:47:21 crc kubenswrapper[4820]: I0203 13:47:21.414174 4820 generic.go:334] "Generic (PLEG): container finished" podID="79e2c09a-eecf-4631-a14c-9e0cc8ef4cbd" containerID="f739770919acc36d6ca0bdf6fbbc6da561ec9014b9ed9d20dd7f84e4ad0864fd" exitCode=0
Feb 03 13:47:21 crc kubenswrapper[4820]: I0203 13:47:21.414236 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mp5zv" event={"ID":"79e2c09a-eecf-4631-a14c-9e0cc8ef4cbd","Type":"ContainerDied","Data":"f739770919acc36d6ca0bdf6fbbc6da561ec9014b9ed9d20dd7f84e4ad0864fd"}
Feb 03 13:47:21 crc kubenswrapper[4820]: I0203 13:47:21.414283 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mp5zv" event={"ID":"79e2c09a-eecf-4631-a14c-9e0cc8ef4cbd","Type":"ContainerStarted","Data":"f770a9aa553603b6dd50fb0281ba0b83b7e92302b19d0e400519ad26ee8f5cb2"}
Feb 03 13:47:22 crc kubenswrapper[4820]: I0203 13:47:22.425300 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mp5zv" event={"ID":"79e2c09a-eecf-4631-a14c-9e0cc8ef4cbd","Type":"ContainerStarted","Data":"0543385ed1959aeb20d3e3ef36b22d07256a5cccffb131227367210e0891bae5"}
Feb 03 13:47:24 crc kubenswrapper[4820]: I0203 13:47:24.456494 4820 generic.go:334] "Generic (PLEG): container finished" podID="79e2c09a-eecf-4631-a14c-9e0cc8ef4cbd" containerID="0543385ed1959aeb20d3e3ef36b22d07256a5cccffb131227367210e0891bae5" exitCode=0
Feb 03 13:47:24 crc kubenswrapper[4820]: I0203 13:47:24.456915 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mp5zv" event={"ID":"79e2c09a-eecf-4631-a14c-9e0cc8ef4cbd","Type":"ContainerDied","Data":"0543385ed1959aeb20d3e3ef36b22d07256a5cccffb131227367210e0891bae5"}
Feb 03 13:47:25 crc kubenswrapper[4820]: I0203 13:47:25.480820 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mp5zv" event={"ID":"79e2c09a-eecf-4631-a14c-9e0cc8ef4cbd","Type":"ContainerStarted","Data":"1e4c9cbf85c00f02afd0b73f36e7168e58e8b53afb614994f3b26830f04972f2"}
Feb 03 13:47:25 crc kubenswrapper[4820]: I0203 13:47:25.508427 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-mp5zv" podStartSLOduration=2.937871326 podStartE2EDuration="6.50836566s" podCreationTimestamp="2026-02-03 13:47:19 +0000 UTC" firstStartedPulling="2026-02-03 13:47:21.416536073 +0000 UTC m=+6158.939611957" lastFinishedPulling="2026-02-03 13:47:24.987030427 +0000 UTC m=+6162.510106291" observedRunningTime="2026-02-03 13:47:25.497232559 +0000 UTC m=+6163.020308443" watchObservedRunningTime="2026-02-03 13:47:25.50836566 +0000 UTC m=+6163.031441534"
Feb 03 13:47:29 crc kubenswrapper[4820]: I0203 13:47:29.782736 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-mp5zv"
Feb 03 13:47:29 crc kubenswrapper[4820]: I0203 13:47:29.785074 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-mp5zv"
Feb 03 13:47:29 crc kubenswrapper[4820]: I0203 13:47:29.833267 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-mp5zv"
Feb 03 13:47:30 crc kubenswrapper[4820]: I0203 13:47:30.591175 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-mp5zv"
Feb 03 13:47:30 crc kubenswrapper[4820]: I0203 13:47:30.650807 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mp5zv"]
Feb 03 13:47:32 crc kubenswrapper[4820]: I0203 13:47:32.541446 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-mp5zv" podUID="79e2c09a-eecf-4631-a14c-9e0cc8ef4cbd" containerName="registry-server" containerID="cri-o://1e4c9cbf85c00f02afd0b73f36e7168e58e8b53afb614994f3b26830f04972f2" gracePeriod=2
Feb 03 13:47:33 crc kubenswrapper[4820]: I0203 13:47:33.046791 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mp5zv"
Need to start a new one" pod="openshift-marketplace/certified-operators-mp5zv" Feb 03 13:47:33 crc kubenswrapper[4820]: I0203 13:47:33.131080 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hxrpk\" (UniqueName: \"kubernetes.io/projected/79e2c09a-eecf-4631-a14c-9e0cc8ef4cbd-kube-api-access-hxrpk\") pod \"79e2c09a-eecf-4631-a14c-9e0cc8ef4cbd\" (UID: \"79e2c09a-eecf-4631-a14c-9e0cc8ef4cbd\") " Feb 03 13:47:33 crc kubenswrapper[4820]: I0203 13:47:33.131328 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/79e2c09a-eecf-4631-a14c-9e0cc8ef4cbd-catalog-content\") pod \"79e2c09a-eecf-4631-a14c-9e0cc8ef4cbd\" (UID: \"79e2c09a-eecf-4631-a14c-9e0cc8ef4cbd\") " Feb 03 13:47:33 crc kubenswrapper[4820]: I0203 13:47:33.131463 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/79e2c09a-eecf-4631-a14c-9e0cc8ef4cbd-utilities\") pod \"79e2c09a-eecf-4631-a14c-9e0cc8ef4cbd\" (UID: \"79e2c09a-eecf-4631-a14c-9e0cc8ef4cbd\") " Feb 03 13:47:33 crc kubenswrapper[4820]: I0203 13:47:33.133288 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/79e2c09a-eecf-4631-a14c-9e0cc8ef4cbd-utilities" (OuterVolumeSpecName: "utilities") pod "79e2c09a-eecf-4631-a14c-9e0cc8ef4cbd" (UID: "79e2c09a-eecf-4631-a14c-9e0cc8ef4cbd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 13:47:33 crc kubenswrapper[4820]: I0203 13:47:33.136907 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/79e2c09a-eecf-4631-a14c-9e0cc8ef4cbd-kube-api-access-hxrpk" (OuterVolumeSpecName: "kube-api-access-hxrpk") pod "79e2c09a-eecf-4631-a14c-9e0cc8ef4cbd" (UID: "79e2c09a-eecf-4631-a14c-9e0cc8ef4cbd"). InnerVolumeSpecName "kube-api-access-hxrpk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:47:33 crc kubenswrapper[4820]: I0203 13:47:33.235492 4820 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/79e2c09a-eecf-4631-a14c-9e0cc8ef4cbd-utilities\") on node \"crc\" DevicePath \"\"" Feb 03 13:47:33 crc kubenswrapper[4820]: I0203 13:47:33.235586 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hxrpk\" (UniqueName: \"kubernetes.io/projected/79e2c09a-eecf-4631-a14c-9e0cc8ef4cbd-kube-api-access-hxrpk\") on node \"crc\" DevicePath \"\"" Feb 03 13:47:33 crc kubenswrapper[4820]: I0203 13:47:33.523655 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/79e2c09a-eecf-4631-a14c-9e0cc8ef4cbd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "79e2c09a-eecf-4631-a14c-9e0cc8ef4cbd" (UID: "79e2c09a-eecf-4631-a14c-9e0cc8ef4cbd"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 13:47:33 crc kubenswrapper[4820]: I0203 13:47:33.542770 4820 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/79e2c09a-eecf-4631-a14c-9e0cc8ef4cbd-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 03 13:47:33 crc kubenswrapper[4820]: I0203 13:47:33.555996 4820 generic.go:334] "Generic (PLEG): container finished" podID="79e2c09a-eecf-4631-a14c-9e0cc8ef4cbd" containerID="1e4c9cbf85c00f02afd0b73f36e7168e58e8b53afb614994f3b26830f04972f2" exitCode=0 Feb 03 13:47:33 crc kubenswrapper[4820]: I0203 13:47:33.556125 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mp5zv" event={"ID":"79e2c09a-eecf-4631-a14c-9e0cc8ef4cbd","Type":"ContainerDied","Data":"1e4c9cbf85c00f02afd0b73f36e7168e58e8b53afb614994f3b26830f04972f2"} Feb 03 13:47:33 crc kubenswrapper[4820]: I0203 13:47:33.556162 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mp5zv" event={"ID":"79e2c09a-eecf-4631-a14c-9e0cc8ef4cbd","Type":"ContainerDied","Data":"f770a9aa553603b6dd50fb0281ba0b83b7e92302b19d0e400519ad26ee8f5cb2"} Feb 03 13:47:33 crc kubenswrapper[4820]: I0203 13:47:33.556200 4820 scope.go:117] "RemoveContainer" containerID="1e4c9cbf85c00f02afd0b73f36e7168e58e8b53afb614994f3b26830f04972f2" Feb 03 13:47:33 crc kubenswrapper[4820]: I0203 13:47:33.556398 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mp5zv" Feb 03 13:47:33 crc kubenswrapper[4820]: I0203 13:47:33.598037 4820 scope.go:117] "RemoveContainer" containerID="0543385ed1959aeb20d3e3ef36b22d07256a5cccffb131227367210e0891bae5" Feb 03 13:47:33 crc kubenswrapper[4820]: I0203 13:47:33.603223 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mp5zv"] Feb 03 13:47:33 crc kubenswrapper[4820]: I0203 13:47:33.613600 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-mp5zv"] Feb 03 13:47:33 crc kubenswrapper[4820]: I0203 13:47:33.631568 4820 scope.go:117] "RemoveContainer" containerID="f739770919acc36d6ca0bdf6fbbc6da561ec9014b9ed9d20dd7f84e4ad0864fd" Feb 03 13:47:33 crc kubenswrapper[4820]: I0203 13:47:33.676116 4820 scope.go:117] "RemoveContainer" containerID="1e4c9cbf85c00f02afd0b73f36e7168e58e8b53afb614994f3b26830f04972f2" Feb 03 13:47:33 crc kubenswrapper[4820]: E0203 13:47:33.676827 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1e4c9cbf85c00f02afd0b73f36e7168e58e8b53afb614994f3b26830f04972f2\": container with ID starting with 1e4c9cbf85c00f02afd0b73f36e7168e58e8b53afb614994f3b26830f04972f2 not found: ID does not exist" containerID="1e4c9cbf85c00f02afd0b73f36e7168e58e8b53afb614994f3b26830f04972f2" Feb 03 13:47:33 crc kubenswrapper[4820]: I0203 13:47:33.676898 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e4c9cbf85c00f02afd0b73f36e7168e58e8b53afb614994f3b26830f04972f2"} err="failed to get container status \"1e4c9cbf85c00f02afd0b73f36e7168e58e8b53afb614994f3b26830f04972f2\": rpc error: code = NotFound desc = could not find container \"1e4c9cbf85c00f02afd0b73f36e7168e58e8b53afb614994f3b26830f04972f2\": container with ID starting with 1e4c9cbf85c00f02afd0b73f36e7168e58e8b53afb614994f3b26830f04972f2 not found: ID does not exist" Feb 03 
Feb 03 13:47:33 crc kubenswrapper[4820]: E0203 13:47:33.677352 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0543385ed1959aeb20d3e3ef36b22d07256a5cccffb131227367210e0891bae5\": container with ID starting with 0543385ed1959aeb20d3e3ef36b22d07256a5cccffb131227367210e0891bae5 not found: ID does not exist" containerID="0543385ed1959aeb20d3e3ef36b22d07256a5cccffb131227367210e0891bae5"
Feb 03 13:47:33 crc kubenswrapper[4820]: I0203 13:47:33.677400 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0543385ed1959aeb20d3e3ef36b22d07256a5cccffb131227367210e0891bae5"} err="failed to get container status \"0543385ed1959aeb20d3e3ef36b22d07256a5cccffb131227367210e0891bae5\": rpc error: code = NotFound desc = could not find container \"0543385ed1959aeb20d3e3ef36b22d07256a5cccffb131227367210e0891bae5\": container with ID starting with 0543385ed1959aeb20d3e3ef36b22d07256a5cccffb131227367210e0891bae5 not found: ID does not exist"
Feb 03 13:47:33 crc kubenswrapper[4820]: I0203 13:47:33.677437 4820 scope.go:117] "RemoveContainer" containerID="f739770919acc36d6ca0bdf6fbbc6da561ec9014b9ed9d20dd7f84e4ad0864fd"
Feb 03 13:47:33 crc kubenswrapper[4820]: E0203 13:47:33.678100 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f739770919acc36d6ca0bdf6fbbc6da561ec9014b9ed9d20dd7f84e4ad0864fd\": container with ID starting with f739770919acc36d6ca0bdf6fbbc6da561ec9014b9ed9d20dd7f84e4ad0864fd not found: ID does not exist" containerID="f739770919acc36d6ca0bdf6fbbc6da561ec9014b9ed9d20dd7f84e4ad0864fd"
Feb 03 13:47:33 crc kubenswrapper[4820]: I0203 13:47:33.678135 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f739770919acc36d6ca0bdf6fbbc6da561ec9014b9ed9d20dd7f84e4ad0864fd"} err="failed to get container status \"f739770919acc36d6ca0bdf6fbbc6da561ec9014b9ed9d20dd7f84e4ad0864fd\": rpc error: code = NotFound desc = could not find container \"f739770919acc36d6ca0bdf6fbbc6da561ec9014b9ed9d20dd7f84e4ad0864fd\": container with ID starting with f739770919acc36d6ca0bdf6fbbc6da561ec9014b9ed9d20dd7f84e4ad0864fd not found: ID does not exist"
Feb 03 13:47:35 crc kubenswrapper[4820]: I0203 13:47:35.166864 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="79e2c09a-eecf-4631-a14c-9e0cc8ef4cbd" path="/var/lib/kubelet/pods/79e2c09a-eecf-4631-a14c-9e0cc8ef4cbd/volumes"
Feb 03 13:47:47 crc kubenswrapper[4820]: I0203 13:47:47.653899 4820 scope.go:117] "RemoveContainer" containerID="f082e366f8b604cd9b7ebcb23f8ef2298dc3a62405a760c692b8f3f2031a871c"
Feb 03 13:48:22 crc kubenswrapper[4820]: I0203 13:48:22.321074 4820 generic.go:334] "Generic (PLEG): container finished" podID="f283b2fc-d781-41fe-a1c4-c5292263d7d6" containerID="4a995ac8d34acdc3c75578cb20000ead2643c11a46b2c951310ba3c7ecf412a5" exitCode=0
Feb 03 13:48:22 crc kubenswrapper[4820]: I0203 13:48:22.321196 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-gr579/must-gather-jbg9l" event={"ID":"f283b2fc-d781-41fe-a1c4-c5292263d7d6","Type":"ContainerDied","Data":"4a995ac8d34acdc3c75578cb20000ead2643c11a46b2c951310ba3c7ecf412a5"}
Feb 03 13:48:22 crc kubenswrapper[4820]: I0203 13:48:22.323038 4820 scope.go:117] "RemoveContainer" containerID="4a995ac8d34acdc3c75578cb20000ead2643c11a46b2c951310ba3c7ecf412a5"
Feb 03 13:48:23 crc kubenswrapper[4820]: I0203 13:48:23.211679 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-gr579_must-gather-jbg9l_f283b2fc-d781-41fe-a1c4-c5292263d7d6/gather/0.log"
Feb 03 13:48:26 crc kubenswrapper[4820]: E0203 13:48:26.083835 4820 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.147:47360->38.102.83.147:42379: write tcp 38.102.83.147:47360->38.102.83.147:42379: write: broken pipe
Feb 03 13:48:32 crc kubenswrapper[4820]: I0203 13:48:32.182669 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-gr579/must-gather-jbg9l"]
Feb 03 13:48:32 crc kubenswrapper[4820]: I0203 13:48:32.183598 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-gr579/must-gather-jbg9l" podUID="f283b2fc-d781-41fe-a1c4-c5292263d7d6" containerName="copy" containerID="cri-o://2056709d07e93f5108e412723dafb8578771088e2eae937a46eb87961321fb0e" gracePeriod=2
Feb 03 13:48:32 crc kubenswrapper[4820]: I0203 13:48:32.190533 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-gr579/must-gather-jbg9l"]
Feb 03 13:48:32 crc kubenswrapper[4820]: I0203 13:48:32.454272 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-gr579_must-gather-jbg9l_f283b2fc-d781-41fe-a1c4-c5292263d7d6/copy/0.log"
Feb 03 13:48:32 crc kubenswrapper[4820]: I0203 13:48:32.458289 4820 generic.go:334] "Generic (PLEG): container finished" podID="f283b2fc-d781-41fe-a1c4-c5292263d7d6" containerID="2056709d07e93f5108e412723dafb8578771088e2eae937a46eb87961321fb0e" exitCode=143
Feb 03 13:48:32 crc kubenswrapper[4820]: I0203 13:48:32.667138 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-gr579_must-gather-jbg9l_f283b2fc-d781-41fe-a1c4-c5292263d7d6/copy/0.log"
Feb 03 13:48:32 crc kubenswrapper[4820]: I0203 13:48:32.667930 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-gr579/must-gather-jbg9l"
Feb 03 13:48:32 crc kubenswrapper[4820]: I0203 13:48:32.864531 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/f283b2fc-d781-41fe-a1c4-c5292263d7d6-must-gather-output\") pod \"f283b2fc-d781-41fe-a1c4-c5292263d7d6\" (UID: \"f283b2fc-d781-41fe-a1c4-c5292263d7d6\") "
Feb 03 13:48:32 crc kubenswrapper[4820]: I0203 13:48:32.864657 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nt4q8\" (UniqueName: \"kubernetes.io/projected/f283b2fc-d781-41fe-a1c4-c5292263d7d6-kube-api-access-nt4q8\") pod \"f283b2fc-d781-41fe-a1c4-c5292263d7d6\" (UID: \"f283b2fc-d781-41fe-a1c4-c5292263d7d6\") "
Feb 03 13:48:32 crc kubenswrapper[4820]: I0203 13:48:32.872324 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f283b2fc-d781-41fe-a1c4-c5292263d7d6-kube-api-access-nt4q8" (OuterVolumeSpecName: "kube-api-access-nt4q8") pod "f283b2fc-d781-41fe-a1c4-c5292263d7d6" (UID: "f283b2fc-d781-41fe-a1c4-c5292263d7d6"). InnerVolumeSpecName "kube-api-access-nt4q8". PluginName "kubernetes.io/projected", VolumeGidValue ""
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:48:32 crc kubenswrapper[4820]: I0203 13:48:32.967435 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nt4q8\" (UniqueName: \"kubernetes.io/projected/f283b2fc-d781-41fe-a1c4-c5292263d7d6-kube-api-access-nt4q8\") on node \"crc\" DevicePath \"\"" Feb 03 13:48:33 crc kubenswrapper[4820]: I0203 13:48:33.075309 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f283b2fc-d781-41fe-a1c4-c5292263d7d6-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "f283b2fc-d781-41fe-a1c4-c5292263d7d6" (UID: "f283b2fc-d781-41fe-a1c4-c5292263d7d6"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 13:48:33 crc kubenswrapper[4820]: I0203 13:48:33.159438 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f283b2fc-d781-41fe-a1c4-c5292263d7d6" path="/var/lib/kubelet/pods/f283b2fc-d781-41fe-a1c4-c5292263d7d6/volumes" Feb 03 13:48:33 crc kubenswrapper[4820]: I0203 13:48:33.171018 4820 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/f283b2fc-d781-41fe-a1c4-c5292263d7d6-must-gather-output\") on node \"crc\" DevicePath \"\"" Feb 03 13:48:33 crc kubenswrapper[4820]: I0203 13:48:33.477018 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-gr579_must-gather-jbg9l_f283b2fc-d781-41fe-a1c4-c5292263d7d6/copy/0.log" Feb 03 13:48:33 crc kubenswrapper[4820]: I0203 13:48:33.477697 4820 scope.go:117] "RemoveContainer" containerID="2056709d07e93f5108e412723dafb8578771088e2eae937a46eb87961321fb0e" Feb 03 13:48:33 crc kubenswrapper[4820]: I0203 13:48:33.477864 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-gr579/must-gather-jbg9l" Feb 03 13:48:33 crc kubenswrapper[4820]: I0203 13:48:33.509647 4820 scope.go:117] "RemoveContainer" containerID="4a995ac8d34acdc3c75578cb20000ead2643c11a46b2c951310ba3c7ecf412a5" Feb 03 13:48:47 crc kubenswrapper[4820]: I0203 13:48:47.747839 4820 scope.go:117] "RemoveContainer" containerID="7f792cc2463c42152b30f3c0ec82a36260e11b3d6099bbf285a22f2710a85928" Feb 03 13:49:01 crc kubenswrapper[4820]: I0203 13:49:01.366014 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qj7xr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 13:49:01 crc kubenswrapper[4820]: I0203 13:49:01.366497 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 13:49:31 crc kubenswrapper[4820]: I0203 13:49:31.365648 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qj7xr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 13:49:31 crc kubenswrapper[4820]: I0203 13:49:31.366123 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 13:50:01 crc kubenswrapper[4820]: I0203 13:50:01.365423 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qj7xr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 13:50:01 crc kubenswrapper[4820]: I0203 13:50:01.367249 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 13:50:01 crc kubenswrapper[4820]: I0203 13:50:01.367384 4820 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" Feb 03 13:50:01 crc kubenswrapper[4820]: I0203 13:50:01.368788 4820 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ef180b5ad3083e7765ecf2a351c6b4db38618425b9e53c6f4487f37e71237324"} pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 03 13:50:01 crc kubenswrapper[4820]: I0203 13:50:01.368966 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" 
podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" containerName="machine-config-daemon" containerID="cri-o://ef180b5ad3083e7765ecf2a351c6b4db38618425b9e53c6f4487f37e71237324" gracePeriod=600 Feb 03 13:50:01 crc kubenswrapper[4820]: I0203 13:50:01.522739 4820 generic.go:334] "Generic (PLEG): container finished" podID="2c02def6-29f2-448e-80ec-0c8ee290f053" containerID="ef180b5ad3083e7765ecf2a351c6b4db38618425b9e53c6f4487f37e71237324" exitCode=0 Feb 03 13:50:01 crc kubenswrapper[4820]: I0203 13:50:01.523069 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" event={"ID":"2c02def6-29f2-448e-80ec-0c8ee290f053","Type":"ContainerDied","Data":"ef180b5ad3083e7765ecf2a351c6b4db38618425b9e53c6f4487f37e71237324"} Feb 03 13:50:01 crc kubenswrapper[4820]: I0203 13:50:01.523172 4820 scope.go:117] "RemoveContainer" containerID="ce5b3f4ddc52fde90b7cdab55ef15a30de9daf09fd19f6b7818c0bdb8e032e6a" Feb 03 13:50:01 crc kubenswrapper[4820]: E0203 13:50:01.559271 4820 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2c02def6_29f2_448e_80ec_0c8ee290f053.slice/crio-ef180b5ad3083e7765ecf2a351c6b4db38618425b9e53c6f4487f37e71237324.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2c02def6_29f2_448e_80ec_0c8ee290f053.slice/crio-conmon-ef180b5ad3083e7765ecf2a351c6b4db38618425b9e53c6f4487f37e71237324.scope\": RecentStats: unable to find data in memory cache]" Feb 03 13:50:02 crc kubenswrapper[4820]: I0203 13:50:02.536213 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" event={"ID":"2c02def6-29f2-448e-80ec-0c8ee290f053","Type":"ContainerStarted","Data":"00fe136b4d1378c44d37871bea8a6e5ad65e410eca507a4e17eba65954b38a9e"} Feb 03 13:50:16 crc kubenswrapper[4820]: I0203 13:50:16.021831 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-v75kg"] Feb 03 13:50:16 crc kubenswrapper[4820]: E0203 13:50:16.023048 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79e2c09a-eecf-4631-a14c-9e0cc8ef4cbd" containerName="extract-content" Feb 03 13:50:16 crc kubenswrapper[4820]: I0203 13:50:16.023069 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="79e2c09a-eecf-4631-a14c-9e0cc8ef4cbd" containerName="extract-content" Feb 03 13:50:16 crc kubenswrapper[4820]: E0203 13:50:16.023090 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79e2c09a-eecf-4631-a14c-9e0cc8ef4cbd" containerName="extract-utilities" Feb 03 13:50:16 crc kubenswrapper[4820]: I0203 13:50:16.023097 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="79e2c09a-eecf-4631-a14c-9e0cc8ef4cbd" containerName="extract-utilities" Feb 03 13:50:16 crc kubenswrapper[4820]: E0203 13:50:16.023129 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f283b2fc-d781-41fe-a1c4-c5292263d7d6" containerName="copy" Feb 03 13:50:16 crc kubenswrapper[4820]: I0203 13:50:16.023135 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="f283b2fc-d781-41fe-a1c4-c5292263d7d6" containerName="copy" Feb 03 13:50:16 crc kubenswrapper[4820]: E0203 13:50:16.023144 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79e2c09a-eecf-4631-a14c-9e0cc8ef4cbd" containerName="registry-server" Feb 03 13:50:16 crc kubenswrapper[4820]: I0203 
13:50:16.023150 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="79e2c09a-eecf-4631-a14c-9e0cc8ef4cbd" containerName="registry-server" Feb 03 13:50:16 crc kubenswrapper[4820]: E0203 13:50:16.023162 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f283b2fc-d781-41fe-a1c4-c5292263d7d6" containerName="gather" Feb 03 13:50:16 crc kubenswrapper[4820]: I0203 13:50:16.023168 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="f283b2fc-d781-41fe-a1c4-c5292263d7d6" containerName="gather" Feb 03 13:50:16 crc kubenswrapper[4820]: I0203 13:50:16.023393 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="f283b2fc-d781-41fe-a1c4-c5292263d7d6" containerName="copy" Feb 03 13:50:16 crc kubenswrapper[4820]: I0203 13:50:16.023410 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="79e2c09a-eecf-4631-a14c-9e0cc8ef4cbd" containerName="registry-server" Feb 03 13:50:16 crc kubenswrapper[4820]: I0203 13:50:16.023435 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="f283b2fc-d781-41fe-a1c4-c5292263d7d6" containerName="gather" Feb 03 13:50:16 crc kubenswrapper[4820]: I0203 13:50:16.026934 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-v75kg" Feb 03 13:50:16 crc kubenswrapper[4820]: I0203 13:50:16.032525 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-v75kg"] Feb 03 13:50:16 crc kubenswrapper[4820]: I0203 13:50:16.149009 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rm7vm\" (UniqueName: \"kubernetes.io/projected/98ad2822-7248-4f75-ad16-a41b2063efa4-kube-api-access-rm7vm\") pod \"community-operators-v75kg\" (UID: \"98ad2822-7248-4f75-ad16-a41b2063efa4\") " pod="openshift-marketplace/community-operators-v75kg" Feb 03 13:50:16 crc kubenswrapper[4820]: I0203 13:50:16.159586 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/98ad2822-7248-4f75-ad16-a41b2063efa4-utilities\") pod \"community-operators-v75kg\" (UID: \"98ad2822-7248-4f75-ad16-a41b2063efa4\") " pod="openshift-marketplace/community-operators-v75kg" Feb 03 13:50:16 crc kubenswrapper[4820]: I0203 13:50:16.159762 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/98ad2822-7248-4f75-ad16-a41b2063efa4-catalog-content\") pod \"community-operators-v75kg\" (UID: \"98ad2822-7248-4f75-ad16-a41b2063efa4\") " pod="openshift-marketplace/community-operators-v75kg" Feb 03 13:50:16 crc kubenswrapper[4820]: I0203 13:50:16.261760 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rm7vm\" (UniqueName: \"kubernetes.io/projected/98ad2822-7248-4f75-ad16-a41b2063efa4-kube-api-access-rm7vm\") pod \"community-operators-v75kg\" (UID: \"98ad2822-7248-4f75-ad16-a41b2063efa4\") " pod="openshift-marketplace/community-operators-v75kg" Feb 03 13:50:16 crc kubenswrapper[4820]: I0203 13:50:16.261823 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/98ad2822-7248-4f75-ad16-a41b2063efa4-utilities\") pod \"community-operators-v75kg\" (UID: \"98ad2822-7248-4f75-ad16-a41b2063efa4\") " pod="openshift-marketplace/community-operators-v75kg" Feb 03 13:50:16 crc 
Feb 03 13:50:16 crc kubenswrapper[4820]: I0203 13:50:16.263707 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/98ad2822-7248-4f75-ad16-a41b2063efa4-utilities\") pod \"community-operators-v75kg\" (UID: \"98ad2822-7248-4f75-ad16-a41b2063efa4\") " pod="openshift-marketplace/community-operators-v75kg"
Feb 03 13:50:16 crc kubenswrapper[4820]: I0203 13:50:16.264044 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/98ad2822-7248-4f75-ad16-a41b2063efa4-catalog-content\") pod \"community-operators-v75kg\" (UID: \"98ad2822-7248-4f75-ad16-a41b2063efa4\") " pod="openshift-marketplace/community-operators-v75kg"
Feb 03 13:50:16 crc kubenswrapper[4820]: I0203 13:50:16.284330 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rm7vm\" (UniqueName: \"kubernetes.io/projected/98ad2822-7248-4f75-ad16-a41b2063efa4-kube-api-access-rm7vm\") pod \"community-operators-v75kg\" (UID: \"98ad2822-7248-4f75-ad16-a41b2063efa4\") " pod="openshift-marketplace/community-operators-v75kg"
Feb 03 13:50:16 crc kubenswrapper[4820]: I0203 13:50:16.362421 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-v75kg"
Feb 03 13:50:16 crc kubenswrapper[4820]: I0203 13:50:16.943832 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-v75kg"]
Feb 03 13:50:16 crc kubenswrapper[4820]: W0203 13:50:16.952408 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod98ad2822_7248_4f75_ad16_a41b2063efa4.slice/crio-cf1ac7959a9b24fe51b0e589476e66bd132aa660e5159af903bd986fe87bfaac WatchSource:0}: Error finding container cf1ac7959a9b24fe51b0e589476e66bd132aa660e5159af903bd986fe87bfaac: Status 404 returned error can't find the container with id cf1ac7959a9b24fe51b0e589476e66bd132aa660e5159af903bd986fe87bfaac
Feb 03 13:50:17 crc kubenswrapper[4820]: I0203 13:50:17.718201 4820 generic.go:334] "Generic (PLEG): container finished" podID="98ad2822-7248-4f75-ad16-a41b2063efa4" containerID="4e8045eaf8edbf46d18feb463d19b9e4d0fcee1ba9c2f74c6f5acad438347927" exitCode=0
Feb 03 13:50:17 crc kubenswrapper[4820]: I0203 13:50:17.718312 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v75kg" event={"ID":"98ad2822-7248-4f75-ad16-a41b2063efa4","Type":"ContainerDied","Data":"4e8045eaf8edbf46d18feb463d19b9e4d0fcee1ba9c2f74c6f5acad438347927"}
Feb 03 13:50:17 crc kubenswrapper[4820]: I0203 13:50:17.718531 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v75kg" event={"ID":"98ad2822-7248-4f75-ad16-a41b2063efa4","Type":"ContainerStarted","Data":"cf1ac7959a9b24fe51b0e589476e66bd132aa660e5159af903bd986fe87bfaac"}
Feb 03 13:50:17 crc kubenswrapper[4820]: I0203 13:50:17.721239 4820 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 03 13:50:18 crc kubenswrapper[4820]: I0203 13:50:18.731525 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v75kg" event={"ID":"98ad2822-7248-4f75-ad16-a41b2063efa4","Type":"ContainerStarted","Data":"2f53e8e131ed5ae549c5266643e55a645f862117ca0a2870c183881b6381019b"}
Feb 03 13:50:20 crc kubenswrapper[4820]: I0203 13:50:20.755061 4820 generic.go:334] "Generic (PLEG): container finished" podID="98ad2822-7248-4f75-ad16-a41b2063efa4" containerID="2f53e8e131ed5ae549c5266643e55a645f862117ca0a2870c183881b6381019b" exitCode=0
Feb 03 13:50:20 crc kubenswrapper[4820]: I0203 13:50:20.755134 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v75kg" event={"ID":"98ad2822-7248-4f75-ad16-a41b2063efa4","Type":"ContainerDied","Data":"2f53e8e131ed5ae549c5266643e55a645f862117ca0a2870c183881b6381019b"}
Feb 03 13:50:22 crc kubenswrapper[4820]: I0203 13:50:22.780057 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v75kg" event={"ID":"98ad2822-7248-4f75-ad16-a41b2063efa4","Type":"ContainerStarted","Data":"836909a7b24b21181b9f0588d54332024fe5526eafea81cf354f23c6fa144fc8"}
Feb 03 13:50:22 crc kubenswrapper[4820]: I0203 13:50:22.800774 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-v75kg" podStartSLOduration=3.840592266 podStartE2EDuration="7.800740109s" podCreationTimestamp="2026-02-03 13:50:15 +0000 UTC" firstStartedPulling="2026-02-03 13:50:17.720013582 +0000 UTC m=+6335.243089446" lastFinishedPulling="2026-02-03 13:50:21.680161425 +0000 UTC m=+6339.203237289" observedRunningTime="2026-02-03 13:50:22.799396183 +0000 UTC m=+6340.322472127" watchObservedRunningTime="2026-02-03 13:50:22.800740109 +0000 UTC m=+6340.323816033"
Feb 03 13:50:26 crc kubenswrapper[4820]: I0203 13:50:26.363219 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-v75kg"
Feb 03 13:50:26 crc kubenswrapper[4820]: I0203 13:50:26.363542 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-v75kg"
Feb 03 13:50:26 crc kubenswrapper[4820]: I0203 13:50:26.414723 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-v75kg"
Feb 03 13:50:26 crc kubenswrapper[4820]: I0203 13:50:26.883165 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-v75kg"
Feb 03 13:50:26 crc kubenswrapper[4820]: I0203 13:50:26.937556 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-v75kg"]
Feb 03 13:50:28 crc kubenswrapper[4820]: I0203 13:50:28.852629 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-v75kg" podUID="98ad2822-7248-4f75-ad16-a41b2063efa4" containerName="registry-server" containerID="cri-o://836909a7b24b21181b9f0588d54332024fe5526eafea81cf354f23c6fa144fc8" gracePeriod=2
Feb 03 13:50:29 crc kubenswrapper[4820]: I0203 13:50:29.415520 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-v75kg"
Need to start a new one" pod="openshift-marketplace/community-operators-v75kg" Feb 03 13:50:29 crc kubenswrapper[4820]: I0203 13:50:29.585651 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rm7vm\" (UniqueName: \"kubernetes.io/projected/98ad2822-7248-4f75-ad16-a41b2063efa4-kube-api-access-rm7vm\") pod \"98ad2822-7248-4f75-ad16-a41b2063efa4\" (UID: \"98ad2822-7248-4f75-ad16-a41b2063efa4\") " Feb 03 13:50:29 crc kubenswrapper[4820]: I0203 13:50:29.586310 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/98ad2822-7248-4f75-ad16-a41b2063efa4-catalog-content\") pod \"98ad2822-7248-4f75-ad16-a41b2063efa4\" (UID: \"98ad2822-7248-4f75-ad16-a41b2063efa4\") " Feb 03 13:50:29 crc kubenswrapper[4820]: I0203 13:50:29.586716 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/98ad2822-7248-4f75-ad16-a41b2063efa4-utilities\") pod \"98ad2822-7248-4f75-ad16-a41b2063efa4\" (UID: \"98ad2822-7248-4f75-ad16-a41b2063efa4\") " Feb 03 13:50:29 crc kubenswrapper[4820]: I0203 13:50:29.588514 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/98ad2822-7248-4f75-ad16-a41b2063efa4-utilities" (OuterVolumeSpecName: "utilities") pod "98ad2822-7248-4f75-ad16-a41b2063efa4" (UID: "98ad2822-7248-4f75-ad16-a41b2063efa4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 13:50:29 crc kubenswrapper[4820]: I0203 13:50:29.595111 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/98ad2822-7248-4f75-ad16-a41b2063efa4-kube-api-access-rm7vm" (OuterVolumeSpecName: "kube-api-access-rm7vm") pod "98ad2822-7248-4f75-ad16-a41b2063efa4" (UID: "98ad2822-7248-4f75-ad16-a41b2063efa4"). InnerVolumeSpecName "kube-api-access-rm7vm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:50:29 crc kubenswrapper[4820]: I0203 13:50:29.689536 4820 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/98ad2822-7248-4f75-ad16-a41b2063efa4-utilities\") on node \"crc\" DevicePath \"\"" Feb 03 13:50:29 crc kubenswrapper[4820]: I0203 13:50:29.689583 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rm7vm\" (UniqueName: \"kubernetes.io/projected/98ad2822-7248-4f75-ad16-a41b2063efa4-kube-api-access-rm7vm\") on node \"crc\" DevicePath \"\"" Feb 03 13:50:29 crc kubenswrapper[4820]: I0203 13:50:29.865339 4820 generic.go:334] "Generic (PLEG): container finished" podID="98ad2822-7248-4f75-ad16-a41b2063efa4" containerID="836909a7b24b21181b9f0588d54332024fe5526eafea81cf354f23c6fa144fc8" exitCode=0 Feb 03 13:50:29 crc kubenswrapper[4820]: I0203 13:50:29.865380 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v75kg" event={"ID":"98ad2822-7248-4f75-ad16-a41b2063efa4","Type":"ContainerDied","Data":"836909a7b24b21181b9f0588d54332024fe5526eafea81cf354f23c6fa144fc8"} Feb 03 13:50:29 crc kubenswrapper[4820]: I0203 13:50:29.865406 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v75kg" event={"ID":"98ad2822-7248-4f75-ad16-a41b2063efa4","Type":"ContainerDied","Data":"cf1ac7959a9b24fe51b0e589476e66bd132aa660e5159af903bd986fe87bfaac"} Feb 03 13:50:29 crc kubenswrapper[4820]: I0203 13:50:29.865425 4820 scope.go:117] "RemoveContainer" containerID="836909a7b24b21181b9f0588d54332024fe5526eafea81cf354f23c6fa144fc8" Feb 03 13:50:29 crc kubenswrapper[4820]: I0203 13:50:29.865505 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-v75kg" Feb 03 13:50:29 crc kubenswrapper[4820]: I0203 13:50:29.891347 4820 scope.go:117] "RemoveContainer" containerID="2f53e8e131ed5ae549c5266643e55a645f862117ca0a2870c183881b6381019b" Feb 03 13:50:29 crc kubenswrapper[4820]: I0203 13:50:29.912481 4820 scope.go:117] "RemoveContainer" containerID="4e8045eaf8edbf46d18feb463d19b9e4d0fcee1ba9c2f74c6f5acad438347927" Feb 03 13:50:29 crc kubenswrapper[4820]: I0203 13:50:29.968665 4820 scope.go:117] "RemoveContainer" containerID="836909a7b24b21181b9f0588d54332024fe5526eafea81cf354f23c6fa144fc8" Feb 03 13:50:29 crc kubenswrapper[4820]: E0203 13:50:29.969191 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"836909a7b24b21181b9f0588d54332024fe5526eafea81cf354f23c6fa144fc8\": container with ID starting with 836909a7b24b21181b9f0588d54332024fe5526eafea81cf354f23c6fa144fc8 not found: ID does not exist" containerID="836909a7b24b21181b9f0588d54332024fe5526eafea81cf354f23c6fa144fc8" Feb 03 13:50:29 crc kubenswrapper[4820]: I0203 13:50:29.969240 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"836909a7b24b21181b9f0588d54332024fe5526eafea81cf354f23c6fa144fc8"} err="failed to get container status \"836909a7b24b21181b9f0588d54332024fe5526eafea81cf354f23c6fa144fc8\": rpc error: code = NotFound desc = could not find container \"836909a7b24b21181b9f0588d54332024fe5526eafea81cf354f23c6fa144fc8\": container with ID starting with 836909a7b24b21181b9f0588d54332024fe5526eafea81cf354f23c6fa144fc8 not found: ID does not exist" Feb 03 13:50:29 crc kubenswrapper[4820]: I0203 13:50:29.969271 4820 scope.go:117] "RemoveContainer" containerID="2f53e8e131ed5ae549c5266643e55a645f862117ca0a2870c183881b6381019b" Feb 03 13:50:29 crc kubenswrapper[4820]: E0203 13:50:29.969822 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2f53e8e131ed5ae549c5266643e55a645f862117ca0a2870c183881b6381019b\": container with ID starting with 2f53e8e131ed5ae549c5266643e55a645f862117ca0a2870c183881b6381019b not found: ID does not exist" containerID="2f53e8e131ed5ae549c5266643e55a645f862117ca0a2870c183881b6381019b" Feb 03 13:50:29 crc kubenswrapper[4820]: I0203 13:50:29.969848 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2f53e8e131ed5ae549c5266643e55a645f862117ca0a2870c183881b6381019b"} err="failed to get container status \"2f53e8e131ed5ae549c5266643e55a645f862117ca0a2870c183881b6381019b\": rpc error: code = NotFound desc = could not find container \"2f53e8e131ed5ae549c5266643e55a645f862117ca0a2870c183881b6381019b\": container with ID starting with 2f53e8e131ed5ae549c5266643e55a645f862117ca0a2870c183881b6381019b not found: ID does not exist" Feb 03 13:50:29 crc kubenswrapper[4820]: I0203 13:50:29.969860 4820 scope.go:117] "RemoveContainer" containerID="4e8045eaf8edbf46d18feb463d19b9e4d0fcee1ba9c2f74c6f5acad438347927" Feb 03 13:50:29 crc kubenswrapper[4820]: E0203 13:50:29.970273 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4e8045eaf8edbf46d18feb463d19b9e4d0fcee1ba9c2f74c6f5acad438347927\": container with ID starting with 4e8045eaf8edbf46d18feb463d19b9e4d0fcee1ba9c2f74c6f5acad438347927 not found: ID does not exist" containerID="4e8045eaf8edbf46d18feb463d19b9e4d0fcee1ba9c2f74c6f5acad438347927" 
Feb 03 13:50:29 crc kubenswrapper[4820]: I0203 13:50:29.970342 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4e8045eaf8edbf46d18feb463d19b9e4d0fcee1ba9c2f74c6f5acad438347927"} err="failed to get container status \"4e8045eaf8edbf46d18feb463d19b9e4d0fcee1ba9c2f74c6f5acad438347927\": rpc error: code = NotFound desc = could not find container \"4e8045eaf8edbf46d18feb463d19b9e4d0fcee1ba9c2f74c6f5acad438347927\": container with ID starting with 4e8045eaf8edbf46d18feb463d19b9e4d0fcee1ba9c2f74c6f5acad438347927 not found: ID does not exist" Feb 03 13:50:30 crc kubenswrapper[4820]: I0203 13:50:30.271331 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/98ad2822-7248-4f75-ad16-a41b2063efa4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "98ad2822-7248-4f75-ad16-a41b2063efa4" (UID: "98ad2822-7248-4f75-ad16-a41b2063efa4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 13:50:30 crc kubenswrapper[4820]: I0203 13:50:30.302309 4820 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/98ad2822-7248-4f75-ad16-a41b2063efa4-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 03 13:50:30 crc kubenswrapper[4820]: I0203 13:50:30.515317 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-v75kg"] Feb 03 13:50:30 crc kubenswrapper[4820]: I0203 13:50:30.534986 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-v75kg"] Feb 03 13:50:31 crc kubenswrapper[4820]: I0203 13:50:31.154369 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="98ad2822-7248-4f75-ad16-a41b2063efa4" path="/var/lib/kubelet/pods/98ad2822-7248-4f75-ad16-a41b2063efa4/volumes" Feb 03 13:51:15 crc kubenswrapper[4820]: I0203 13:51:15.184038 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-hqcpm"] Feb 03 13:51:15 crc kubenswrapper[4820]: E0203 13:51:15.185481 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98ad2822-7248-4f75-ad16-a41b2063efa4" containerName="extract-content" Feb 03 13:51:15 crc kubenswrapper[4820]: I0203 13:51:15.185525 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="98ad2822-7248-4f75-ad16-a41b2063efa4" containerName="extract-content" Feb 03 13:51:15 crc kubenswrapper[4820]: E0203 13:51:15.185563 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98ad2822-7248-4f75-ad16-a41b2063efa4" containerName="registry-server" Feb 03 13:51:15 crc kubenswrapper[4820]: I0203 13:51:15.185572 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="98ad2822-7248-4f75-ad16-a41b2063efa4" containerName="registry-server" Feb 03 13:51:15 crc kubenswrapper[4820]: E0203 13:51:15.185610 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98ad2822-7248-4f75-ad16-a41b2063efa4" containerName="extract-utilities" Feb 03 13:51:15 crc kubenswrapper[4820]: I0203 13:51:15.185619 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="98ad2822-7248-4f75-ad16-a41b2063efa4" containerName="extract-utilities" Feb 03 13:51:15 crc kubenswrapper[4820]: I0203 13:51:15.186000 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="98ad2822-7248-4f75-ad16-a41b2063efa4" containerName="registry-server" Feb 03 13:51:15 crc kubenswrapper[4820]: I0203 13:51:15.188258 4820 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hqcpm" Feb 03 13:51:15 crc kubenswrapper[4820]: I0203 13:51:15.197259 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hqcpm"] Feb 03 13:51:15 crc kubenswrapper[4820]: I0203 13:51:15.445516 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mtmlc\" (UniqueName: \"kubernetes.io/projected/b89c4015-df51-4565-8c25-e9b7a34d1b4a-kube-api-access-mtmlc\") pod \"redhat-marketplace-hqcpm\" (UID: \"b89c4015-df51-4565-8c25-e9b7a34d1b4a\") " pod="openshift-marketplace/redhat-marketplace-hqcpm" Feb 03 13:51:15 crc kubenswrapper[4820]: I0203 13:51:15.446036 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b89c4015-df51-4565-8c25-e9b7a34d1b4a-utilities\") pod \"redhat-marketplace-hqcpm\" (UID: \"b89c4015-df51-4565-8c25-e9b7a34d1b4a\") " pod="openshift-marketplace/redhat-marketplace-hqcpm" Feb 03 13:51:15 crc kubenswrapper[4820]: I0203 13:51:15.446285 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b89c4015-df51-4565-8c25-e9b7a34d1b4a-catalog-content\") pod \"redhat-marketplace-hqcpm\" (UID: \"b89c4015-df51-4565-8c25-e9b7a34d1b4a\") " pod="openshift-marketplace/redhat-marketplace-hqcpm" Feb 03 13:51:15 crc kubenswrapper[4820]: I0203 13:51:15.550289 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b89c4015-df51-4565-8c25-e9b7a34d1b4a-utilities\") pod \"redhat-marketplace-hqcpm\" (UID: \"b89c4015-df51-4565-8c25-e9b7a34d1b4a\") " pod="openshift-marketplace/redhat-marketplace-hqcpm" Feb 03 13:51:15 crc kubenswrapper[4820]: I0203 13:51:15.550436 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b89c4015-df51-4565-8c25-e9b7a34d1b4a-catalog-content\") pod \"redhat-marketplace-hqcpm\" (UID: \"b89c4015-df51-4565-8c25-e9b7a34d1b4a\") " pod="openshift-marketplace/redhat-marketplace-hqcpm" Feb 03 13:51:15 crc kubenswrapper[4820]: I0203 13:51:15.550497 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mtmlc\" (UniqueName: \"kubernetes.io/projected/b89c4015-df51-4565-8c25-e9b7a34d1b4a-kube-api-access-mtmlc\") pod \"redhat-marketplace-hqcpm\" (UID: \"b89c4015-df51-4565-8c25-e9b7a34d1b4a\") " pod="openshift-marketplace/redhat-marketplace-hqcpm" Feb 03 13:51:15 crc kubenswrapper[4820]: I0203 13:51:15.550913 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b89c4015-df51-4565-8c25-e9b7a34d1b4a-utilities\") pod \"redhat-marketplace-hqcpm\" (UID: \"b89c4015-df51-4565-8c25-e9b7a34d1b4a\") " pod="openshift-marketplace/redhat-marketplace-hqcpm" Feb 03 13:51:15 crc kubenswrapper[4820]: I0203 13:51:15.550948 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b89c4015-df51-4565-8c25-e9b7a34d1b4a-catalog-content\") pod \"redhat-marketplace-hqcpm\" (UID: \"b89c4015-df51-4565-8c25-e9b7a34d1b4a\") " pod="openshift-marketplace/redhat-marketplace-hqcpm" Feb 03 13:51:15 crc kubenswrapper[4820]: I0203 13:51:15.571930 4820 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mtmlc\" (UniqueName: \"kubernetes.io/projected/b89c4015-df51-4565-8c25-e9b7a34d1b4a-kube-api-access-mtmlc\") pod \"redhat-marketplace-hqcpm\" (UID: \"b89c4015-df51-4565-8c25-e9b7a34d1b4a\") " pod="openshift-marketplace/redhat-marketplace-hqcpm" Feb 03 13:51:15 crc kubenswrapper[4820]: I0203 13:51:15.826086 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hqcpm" Feb 03 13:51:16 crc kubenswrapper[4820]: I0203 13:51:16.388111 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hqcpm"] Feb 03 13:51:16 crc kubenswrapper[4820]: W0203 13:51:16.404498 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb89c4015_df51_4565_8c25_e9b7a34d1b4a.slice/crio-5c3ef280754d0ec9bf20c674f10a79555d6cd623f317352ce715cdb1c20202a1 WatchSource:0}: Error finding container 5c3ef280754d0ec9bf20c674f10a79555d6cd623f317352ce715cdb1c20202a1: Status 404 returned error can't find the container with id 5c3ef280754d0ec9bf20c674f10a79555d6cd623f317352ce715cdb1c20202a1 Feb 03 13:51:16 crc kubenswrapper[4820]: I0203 13:51:16.934460 4820 generic.go:334] "Generic (PLEG): container finished" podID="b89c4015-df51-4565-8c25-e9b7a34d1b4a" containerID="4e270cf05f07582a1dc0ff518f6623716adc9ff012f36728f75f8e4f9e16dcba" exitCode=0 Feb 03 13:51:16 crc kubenswrapper[4820]: I0203 13:51:16.934527 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hqcpm" event={"ID":"b89c4015-df51-4565-8c25-e9b7a34d1b4a","Type":"ContainerDied","Data":"4e270cf05f07582a1dc0ff518f6623716adc9ff012f36728f75f8e4f9e16dcba"} Feb 03 13:51:16 crc kubenswrapper[4820]: I0203 13:51:16.934767 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hqcpm" event={"ID":"b89c4015-df51-4565-8c25-e9b7a34d1b4a","Type":"ContainerStarted","Data":"5c3ef280754d0ec9bf20c674f10a79555d6cd623f317352ce715cdb1c20202a1"} Feb 03 13:51:18 crc kubenswrapper[4820]: I0203 13:51:18.958076 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hqcpm" event={"ID":"b89c4015-df51-4565-8c25-e9b7a34d1b4a","Type":"ContainerStarted","Data":"5f232e8be0cfd7d88ec98ea589c642b5c6eb74b3cb7dbe9d93013d95c0469658"} Feb 03 13:51:20 crc kubenswrapper[4820]: I0203 13:51:20.979880 4820 generic.go:334] "Generic (PLEG): container finished" podID="b89c4015-df51-4565-8c25-e9b7a34d1b4a" containerID="5f232e8be0cfd7d88ec98ea589c642b5c6eb74b3cb7dbe9d93013d95c0469658" exitCode=0 Feb 03 13:51:20 crc kubenswrapper[4820]: I0203 13:51:20.979961 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hqcpm" event={"ID":"b89c4015-df51-4565-8c25-e9b7a34d1b4a","Type":"ContainerDied","Data":"5f232e8be0cfd7d88ec98ea589c642b5c6eb74b3cb7dbe9d93013d95c0469658"} Feb 03 13:51:21 crc kubenswrapper[4820]: I0203 13:51:21.992250 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hqcpm" event={"ID":"b89c4015-df51-4565-8c25-e9b7a34d1b4a","Type":"ContainerStarted","Data":"596d3edc3a3351d7e0aac3afe80fb2c0ef4ab2d2ff15d4150070a30d7175d0e0"} Feb 03 13:51:22 crc kubenswrapper[4820]: I0203 13:51:22.012863 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-hqcpm" 
podStartSLOduration=2.565555043 podStartE2EDuration="7.012841723s" podCreationTimestamp="2026-02-03 13:51:15 +0000 UTC" firstStartedPulling="2026-02-03 13:51:16.936330021 +0000 UTC m=+6394.459405885" lastFinishedPulling="2026-02-03 13:51:21.383616701 +0000 UTC m=+6398.906692565" observedRunningTime="2026-02-03 13:51:22.009629707 +0000 UTC m=+6399.532705581" watchObservedRunningTime="2026-02-03 13:51:22.012841723 +0000 UTC m=+6399.535917587" Feb 03 13:51:25 crc kubenswrapper[4820]: I0203 13:51:25.827522 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-hqcpm" Feb 03 13:51:25 crc kubenswrapper[4820]: I0203 13:51:25.830458 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-hqcpm" Feb 03 13:51:25 crc kubenswrapper[4820]: I0203 13:51:25.902540 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-hqcpm" Feb 03 13:51:26 crc kubenswrapper[4820]: I0203 13:51:26.304705 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-hqcpm" Feb 03 13:51:26 crc kubenswrapper[4820]: I0203 13:51:26.359326 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hqcpm"] Feb 03 13:51:28 crc kubenswrapper[4820]: I0203 13:51:28.278117 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-hqcpm" podUID="b89c4015-df51-4565-8c25-e9b7a34d1b4a" containerName="registry-server" containerID="cri-o://596d3edc3a3351d7e0aac3afe80fb2c0ef4ab2d2ff15d4150070a30d7175d0e0" gracePeriod=2 Feb 03 13:51:28 crc kubenswrapper[4820]: I0203 13:51:28.734405 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hqcpm" Feb 03 13:51:28 crc kubenswrapper[4820]: I0203 13:51:28.835744 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mtmlc\" (UniqueName: \"kubernetes.io/projected/b89c4015-df51-4565-8c25-e9b7a34d1b4a-kube-api-access-mtmlc\") pod \"b89c4015-df51-4565-8c25-e9b7a34d1b4a\" (UID: \"b89c4015-df51-4565-8c25-e9b7a34d1b4a\") " Feb 03 13:51:28 crc kubenswrapper[4820]: I0203 13:51:28.835939 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b89c4015-df51-4565-8c25-e9b7a34d1b4a-catalog-content\") pod \"b89c4015-df51-4565-8c25-e9b7a34d1b4a\" (UID: \"b89c4015-df51-4565-8c25-e9b7a34d1b4a\") " Feb 03 13:51:28 crc kubenswrapper[4820]: I0203 13:51:28.837066 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b89c4015-df51-4565-8c25-e9b7a34d1b4a-utilities\") pod \"b89c4015-df51-4565-8c25-e9b7a34d1b4a\" (UID: \"b89c4015-df51-4565-8c25-e9b7a34d1b4a\") " Feb 03 13:51:28 crc kubenswrapper[4820]: I0203 13:51:28.838335 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b89c4015-df51-4565-8c25-e9b7a34d1b4a-utilities" (OuterVolumeSpecName: "utilities") pod "b89c4015-df51-4565-8c25-e9b7a34d1b4a" (UID: "b89c4015-df51-4565-8c25-e9b7a34d1b4a"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 13:51:28 crc kubenswrapper[4820]: I0203 13:51:28.841194 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b89c4015-df51-4565-8c25-e9b7a34d1b4a-kube-api-access-mtmlc" (OuterVolumeSpecName: "kube-api-access-mtmlc") pod "b89c4015-df51-4565-8c25-e9b7a34d1b4a" (UID: "b89c4015-df51-4565-8c25-e9b7a34d1b4a"). InnerVolumeSpecName "kube-api-access-mtmlc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:51:28 crc kubenswrapper[4820]: I0203 13:51:28.866029 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b89c4015-df51-4565-8c25-e9b7a34d1b4a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b89c4015-df51-4565-8c25-e9b7a34d1b4a" (UID: "b89c4015-df51-4565-8c25-e9b7a34d1b4a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 13:51:28 crc kubenswrapper[4820]: I0203 13:51:28.941289 4820 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b89c4015-df51-4565-8c25-e9b7a34d1b4a-utilities\") on node \"crc\" DevicePath \"\"" Feb 03 13:51:28 crc kubenswrapper[4820]: I0203 13:51:28.941729 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mtmlc\" (UniqueName: \"kubernetes.io/projected/b89c4015-df51-4565-8c25-e9b7a34d1b4a-kube-api-access-mtmlc\") on node \"crc\" DevicePath \"\"" Feb 03 13:51:28 crc kubenswrapper[4820]: I0203 13:51:28.941780 4820 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b89c4015-df51-4565-8c25-e9b7a34d1b4a-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 03 13:51:29 crc kubenswrapper[4820]: I0203 13:51:29.288993 4820 generic.go:334] "Generic (PLEG): container finished" podID="b89c4015-df51-4565-8c25-e9b7a34d1b4a" containerID="596d3edc3a3351d7e0aac3afe80fb2c0ef4ab2d2ff15d4150070a30d7175d0e0" exitCode=0 Feb 03 13:51:29 crc kubenswrapper[4820]: I0203 13:51:29.289055 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hqcpm" event={"ID":"b89c4015-df51-4565-8c25-e9b7a34d1b4a","Type":"ContainerDied","Data":"596d3edc3a3351d7e0aac3afe80fb2c0ef4ab2d2ff15d4150070a30d7175d0e0"} Feb 03 13:51:29 crc kubenswrapper[4820]: I0203 13:51:29.289091 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hqcpm" event={"ID":"b89c4015-df51-4565-8c25-e9b7a34d1b4a","Type":"ContainerDied","Data":"5c3ef280754d0ec9bf20c674f10a79555d6cd623f317352ce715cdb1c20202a1"} Feb 03 13:51:29 crc kubenswrapper[4820]: I0203 13:51:29.289100 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hqcpm" Feb 03 13:51:29 crc kubenswrapper[4820]: I0203 13:51:29.289110 4820 scope.go:117] "RemoveContainer" containerID="596d3edc3a3351d7e0aac3afe80fb2c0ef4ab2d2ff15d4150070a30d7175d0e0" Feb 03 13:51:29 crc kubenswrapper[4820]: I0203 13:51:29.337174 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hqcpm"] Feb 03 13:51:29 crc kubenswrapper[4820]: I0203 13:51:29.338425 4820 scope.go:117] "RemoveContainer" containerID="5f232e8be0cfd7d88ec98ea589c642b5c6eb74b3cb7dbe9d93013d95c0469658" Feb 03 13:51:29 crc kubenswrapper[4820]: I0203 13:51:29.358875 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-hqcpm"] Feb 03 13:51:29 crc kubenswrapper[4820]: I0203 13:51:29.361270 4820 scope.go:117] "RemoveContainer" containerID="4e270cf05f07582a1dc0ff518f6623716adc9ff012f36728f75f8e4f9e16dcba" Feb 03 13:51:29 crc kubenswrapper[4820]: I0203 13:51:29.425148 4820 scope.go:117] "RemoveContainer" containerID="596d3edc3a3351d7e0aac3afe80fb2c0ef4ab2d2ff15d4150070a30d7175d0e0" Feb 03 13:51:29 crc kubenswrapper[4820]: E0203 13:51:29.430174 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"596d3edc3a3351d7e0aac3afe80fb2c0ef4ab2d2ff15d4150070a30d7175d0e0\": container with ID starting with 596d3edc3a3351d7e0aac3afe80fb2c0ef4ab2d2ff15d4150070a30d7175d0e0 not found: ID does not exist" containerID="596d3edc3a3351d7e0aac3afe80fb2c0ef4ab2d2ff15d4150070a30d7175d0e0" Feb 03 13:51:29 crc kubenswrapper[4820]: I0203 13:51:29.430234 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"596d3edc3a3351d7e0aac3afe80fb2c0ef4ab2d2ff15d4150070a30d7175d0e0"} err="failed to get container status \"596d3edc3a3351d7e0aac3afe80fb2c0ef4ab2d2ff15d4150070a30d7175d0e0\": rpc error: code = NotFound desc = could not find container \"596d3edc3a3351d7e0aac3afe80fb2c0ef4ab2d2ff15d4150070a30d7175d0e0\": container with ID starting with 596d3edc3a3351d7e0aac3afe80fb2c0ef4ab2d2ff15d4150070a30d7175d0e0 not found: ID does not exist" Feb 03 13:51:29 crc kubenswrapper[4820]: I0203 13:51:29.430272 4820 scope.go:117] "RemoveContainer" containerID="5f232e8be0cfd7d88ec98ea589c642b5c6eb74b3cb7dbe9d93013d95c0469658" Feb 03 13:51:29 crc kubenswrapper[4820]: E0203 13:51:29.431333 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5f232e8be0cfd7d88ec98ea589c642b5c6eb74b3cb7dbe9d93013d95c0469658\": container with ID starting with 5f232e8be0cfd7d88ec98ea589c642b5c6eb74b3cb7dbe9d93013d95c0469658 not found: ID does not exist" containerID="5f232e8be0cfd7d88ec98ea589c642b5c6eb74b3cb7dbe9d93013d95c0469658" Feb 03 13:51:29 crc kubenswrapper[4820]: I0203 13:51:29.431360 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f232e8be0cfd7d88ec98ea589c642b5c6eb74b3cb7dbe9d93013d95c0469658"} err="failed to get container status \"5f232e8be0cfd7d88ec98ea589c642b5c6eb74b3cb7dbe9d93013d95c0469658\": rpc error: code = NotFound desc = could not find container \"5f232e8be0cfd7d88ec98ea589c642b5c6eb74b3cb7dbe9d93013d95c0469658\": container with ID starting with 5f232e8be0cfd7d88ec98ea589c642b5c6eb74b3cb7dbe9d93013d95c0469658 not found: ID does not exist" Feb 03 13:51:29 crc kubenswrapper[4820]: I0203 13:51:29.431380 4820 scope.go:117] "RemoveContainer" 
containerID="4e270cf05f07582a1dc0ff518f6623716adc9ff012f36728f75f8e4f9e16dcba" Feb 03 13:51:29 crc kubenswrapper[4820]: E0203 13:51:29.431732 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4e270cf05f07582a1dc0ff518f6623716adc9ff012f36728f75f8e4f9e16dcba\": container with ID starting with 4e270cf05f07582a1dc0ff518f6623716adc9ff012f36728f75f8e4f9e16dcba not found: ID does not exist" containerID="4e270cf05f07582a1dc0ff518f6623716adc9ff012f36728f75f8e4f9e16dcba" Feb 03 13:51:29 crc kubenswrapper[4820]: I0203 13:51:29.431754 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4e270cf05f07582a1dc0ff518f6623716adc9ff012f36728f75f8e4f9e16dcba"} err="failed to get container status \"4e270cf05f07582a1dc0ff518f6623716adc9ff012f36728f75f8e4f9e16dcba\": rpc error: code = NotFound desc = could not find container \"4e270cf05f07582a1dc0ff518f6623716adc9ff012f36728f75f8e4f9e16dcba\": container with ID starting with 4e270cf05f07582a1dc0ff518f6623716adc9ff012f36728f75f8e4f9e16dcba not found: ID does not exist" Feb 03 13:51:31 crc kubenswrapper[4820]: I0203 13:51:31.156958 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b89c4015-df51-4565-8c25-e9b7a34d1b4a" path="/var/lib/kubelet/pods/b89c4015-df51-4565-8c25-e9b7a34d1b4a/volumes" Feb 03 13:51:45 crc kubenswrapper[4820]: I0203 13:51:45.474735 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-vgdnv/must-gather-s56gm"] Feb 03 13:51:45 crc kubenswrapper[4820]: E0203 13:51:45.476054 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b89c4015-df51-4565-8c25-e9b7a34d1b4a" containerName="extract-content" Feb 03 13:51:45 crc kubenswrapper[4820]: I0203 13:51:45.476081 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="b89c4015-df51-4565-8c25-e9b7a34d1b4a" containerName="extract-content" Feb 03 13:51:45 crc kubenswrapper[4820]: E0203 13:51:45.476110 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b89c4015-df51-4565-8c25-e9b7a34d1b4a" containerName="registry-server" Feb 03 13:51:45 crc kubenswrapper[4820]: I0203 13:51:45.476121 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="b89c4015-df51-4565-8c25-e9b7a34d1b4a" containerName="registry-server" Feb 03 13:51:45 crc kubenswrapper[4820]: E0203 13:51:45.476159 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b89c4015-df51-4565-8c25-e9b7a34d1b4a" containerName="extract-utilities" Feb 03 13:51:45 crc kubenswrapper[4820]: I0203 13:51:45.476171 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="b89c4015-df51-4565-8c25-e9b7a34d1b4a" containerName="extract-utilities" Feb 03 13:51:45 crc kubenswrapper[4820]: I0203 13:51:45.476525 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="b89c4015-df51-4565-8c25-e9b7a34d1b4a" containerName="registry-server" Feb 03 13:51:45 crc kubenswrapper[4820]: I0203 13:51:45.478315 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-vgdnv/must-gather-s56gm" Feb 03 13:51:45 crc kubenswrapper[4820]: I0203 13:51:45.487800 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-vgdnv"/"default-dockercfg-qtrsc" Feb 03 13:51:45 crc kubenswrapper[4820]: I0203 13:51:45.488170 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-vgdnv"/"openshift-service-ca.crt" Feb 03 13:51:45 crc kubenswrapper[4820]: I0203 13:51:45.490829 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-vgdnv/must-gather-s56gm"] Feb 03 13:51:45 crc kubenswrapper[4820]: I0203 13:51:45.492510 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-vgdnv"/"kube-root-ca.crt" Feb 03 13:51:45 crc kubenswrapper[4820]: I0203 13:51:45.623227 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c4kbt\" (UniqueName: \"kubernetes.io/projected/677c2b79-9984-4d3d-9aab-c3e3ff13315c-kube-api-access-c4kbt\") pod \"must-gather-s56gm\" (UID: \"677c2b79-9984-4d3d-9aab-c3e3ff13315c\") " pod="openshift-must-gather-vgdnv/must-gather-s56gm" Feb 03 13:51:45 crc kubenswrapper[4820]: I0203 13:51:45.623572 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/677c2b79-9984-4d3d-9aab-c3e3ff13315c-must-gather-output\") pod \"must-gather-s56gm\" (UID: \"677c2b79-9984-4d3d-9aab-c3e3ff13315c\") " pod="openshift-must-gather-vgdnv/must-gather-s56gm" Feb 03 13:51:45 crc kubenswrapper[4820]: I0203 13:51:45.725279 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c4kbt\" (UniqueName: \"kubernetes.io/projected/677c2b79-9984-4d3d-9aab-c3e3ff13315c-kube-api-access-c4kbt\") pod \"must-gather-s56gm\" (UID: \"677c2b79-9984-4d3d-9aab-c3e3ff13315c\") " pod="openshift-must-gather-vgdnv/must-gather-s56gm" Feb 03 13:51:45 crc kubenswrapper[4820]: I0203 13:51:45.725390 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/677c2b79-9984-4d3d-9aab-c3e3ff13315c-must-gather-output\") pod \"must-gather-s56gm\" (UID: \"677c2b79-9984-4d3d-9aab-c3e3ff13315c\") " pod="openshift-must-gather-vgdnv/must-gather-s56gm" Feb 03 13:51:45 crc kubenswrapper[4820]: I0203 13:51:45.725867 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/677c2b79-9984-4d3d-9aab-c3e3ff13315c-must-gather-output\") pod \"must-gather-s56gm\" (UID: \"677c2b79-9984-4d3d-9aab-c3e3ff13315c\") " pod="openshift-must-gather-vgdnv/must-gather-s56gm" Feb 03 13:51:45 crc kubenswrapper[4820]: I0203 13:51:45.753585 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c4kbt\" (UniqueName: \"kubernetes.io/projected/677c2b79-9984-4d3d-9aab-c3e3ff13315c-kube-api-access-c4kbt\") pod \"must-gather-s56gm\" (UID: \"677c2b79-9984-4d3d-9aab-c3e3ff13315c\") " pod="openshift-must-gather-vgdnv/must-gather-s56gm" Feb 03 13:51:45 crc kubenswrapper[4820]: I0203 13:51:45.805223 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-vgdnv/must-gather-s56gm" Feb 03 13:51:46 crc kubenswrapper[4820]: I0203 13:51:46.674818 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-vgdnv/must-gather-s56gm"] Feb 03 13:51:47 crc kubenswrapper[4820]: I0203 13:51:47.699216 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vgdnv/must-gather-s56gm" event={"ID":"677c2b79-9984-4d3d-9aab-c3e3ff13315c","Type":"ContainerStarted","Data":"70c59f88be143430e508a1485dcb979f5d71d43facd6b79068076742020866c6"} Feb 03 13:51:47 crc kubenswrapper[4820]: I0203 13:51:47.699481 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vgdnv/must-gather-s56gm" event={"ID":"677c2b79-9984-4d3d-9aab-c3e3ff13315c","Type":"ContainerStarted","Data":"76d44b5a441b5e6ad1372b85b4d76e535308015895ca9c0db156f39b6e498902"} Feb 03 13:51:47 crc kubenswrapper[4820]: I0203 13:51:47.699494 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vgdnv/must-gather-s56gm" event={"ID":"677c2b79-9984-4d3d-9aab-c3e3ff13315c","Type":"ContainerStarted","Data":"d0c4149891df5bb3f3cf9edc85a96a1277dcdb5decee52dcd6c4d80113efc05c"} Feb 03 13:51:47 crc kubenswrapper[4820]: I0203 13:51:47.723841 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-vgdnv/must-gather-s56gm" podStartSLOduration=2.7238240559999998 podStartE2EDuration="2.723824056s" podCreationTimestamp="2026-02-03 13:51:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 13:51:47.723455797 +0000 UTC m=+6425.246531671" watchObservedRunningTime="2026-02-03 13:51:47.723824056 +0000 UTC m=+6425.246899920" Feb 03 13:51:51 crc kubenswrapper[4820]: I0203 13:51:51.361505 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-vgdnv/crc-debug-9l4ps"] Feb 03 13:51:51 crc kubenswrapper[4820]: I0203 13:51:51.363470 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-vgdnv/crc-debug-9l4ps" Feb 03 13:51:51 crc kubenswrapper[4820]: I0203 13:51:51.397310 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2q45r\" (UniqueName: \"kubernetes.io/projected/e23efd79-ed6d-434d-8927-8350aadc17b9-kube-api-access-2q45r\") pod \"crc-debug-9l4ps\" (UID: \"e23efd79-ed6d-434d-8927-8350aadc17b9\") " pod="openshift-must-gather-vgdnv/crc-debug-9l4ps" Feb 03 13:51:51 crc kubenswrapper[4820]: I0203 13:51:51.397477 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e23efd79-ed6d-434d-8927-8350aadc17b9-host\") pod \"crc-debug-9l4ps\" (UID: \"e23efd79-ed6d-434d-8927-8350aadc17b9\") " pod="openshift-must-gather-vgdnv/crc-debug-9l4ps" Feb 03 13:51:51 crc kubenswrapper[4820]: I0203 13:51:51.771598 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2q45r\" (UniqueName: \"kubernetes.io/projected/e23efd79-ed6d-434d-8927-8350aadc17b9-kube-api-access-2q45r\") pod \"crc-debug-9l4ps\" (UID: \"e23efd79-ed6d-434d-8927-8350aadc17b9\") " pod="openshift-must-gather-vgdnv/crc-debug-9l4ps" Feb 03 13:51:51 crc kubenswrapper[4820]: I0203 13:51:51.771750 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e23efd79-ed6d-434d-8927-8350aadc17b9-host\") pod \"crc-debug-9l4ps\" (UID: \"e23efd79-ed6d-434d-8927-8350aadc17b9\") " pod="openshift-must-gather-vgdnv/crc-debug-9l4ps" Feb 03 13:51:51 crc kubenswrapper[4820]: I0203 13:51:51.772131 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e23efd79-ed6d-434d-8927-8350aadc17b9-host\") pod \"crc-debug-9l4ps\" (UID: \"e23efd79-ed6d-434d-8927-8350aadc17b9\") " pod="openshift-must-gather-vgdnv/crc-debug-9l4ps" Feb 03 13:51:51 crc kubenswrapper[4820]: I0203 13:51:51.816689 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2q45r\" (UniqueName: \"kubernetes.io/projected/e23efd79-ed6d-434d-8927-8350aadc17b9-kube-api-access-2q45r\") pod \"crc-debug-9l4ps\" (UID: \"e23efd79-ed6d-434d-8927-8350aadc17b9\") " pod="openshift-must-gather-vgdnv/crc-debug-9l4ps" Feb 03 13:51:51 crc kubenswrapper[4820]: I0203 13:51:51.979605 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-vgdnv/crc-debug-9l4ps" Feb 03 13:51:52 crc kubenswrapper[4820]: W0203 13:51:52.016029 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode23efd79_ed6d_434d_8927_8350aadc17b9.slice/crio-9ff09b1862bfda9141ae3394f9e6a976e45c6dd8857c0fedb17270526708b7e8 WatchSource:0}: Error finding container 9ff09b1862bfda9141ae3394f9e6a976e45c6dd8857c0fedb17270526708b7e8: Status 404 returned error can't find the container with id 9ff09b1862bfda9141ae3394f9e6a976e45c6dd8857c0fedb17270526708b7e8 Feb 03 13:51:52 crc kubenswrapper[4820]: I0203 13:51:52.818990 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vgdnv/crc-debug-9l4ps" event={"ID":"e23efd79-ed6d-434d-8927-8350aadc17b9","Type":"ContainerStarted","Data":"011fa2bfd31d817d0919efde45c02a41cf142a5e68dd7149c416b53658b739f0"} Feb 03 13:51:52 crc kubenswrapper[4820]: I0203 13:51:52.819302 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vgdnv/crc-debug-9l4ps" event={"ID":"e23efd79-ed6d-434d-8927-8350aadc17b9","Type":"ContainerStarted","Data":"9ff09b1862bfda9141ae3394f9e6a976e45c6dd8857c0fedb17270526708b7e8"} Feb 03 13:51:52 crc kubenswrapper[4820]: I0203 13:51:52.850721 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-vgdnv/crc-debug-9l4ps" podStartSLOduration=1.850689092 podStartE2EDuration="1.850689092s" podCreationTimestamp="2026-02-03 13:51:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 13:51:52.841234576 +0000 UTC m=+6430.364310450" watchObservedRunningTime="2026-02-03 13:51:52.850689092 +0000 UTC m=+6430.373764956" Feb 03 13:52:01 crc kubenswrapper[4820]: I0203 13:52:01.365427 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qj7xr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 13:52:01 crc kubenswrapper[4820]: I0203 13:52:01.366054 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 13:52:31 crc kubenswrapper[4820]: I0203 13:52:31.365854 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qj7xr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 13:52:31 crc kubenswrapper[4820]: I0203 13:52:31.366320 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 13:52:41 crc kubenswrapper[4820]: I0203 13:52:41.749464 4820 generic.go:334] "Generic (PLEG): container finished" podID="e23efd79-ed6d-434d-8927-8350aadc17b9" 
containerID="011fa2bfd31d817d0919efde45c02a41cf142a5e68dd7149c416b53658b739f0" exitCode=0 Feb 03 13:52:41 crc kubenswrapper[4820]: I0203 13:52:41.749546 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vgdnv/crc-debug-9l4ps" event={"ID":"e23efd79-ed6d-434d-8927-8350aadc17b9","Type":"ContainerDied","Data":"011fa2bfd31d817d0919efde45c02a41cf142a5e68dd7149c416b53658b739f0"} Feb 03 13:52:42 crc kubenswrapper[4820]: I0203 13:52:42.899158 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-vgdnv/crc-debug-9l4ps" Feb 03 13:52:43 crc kubenswrapper[4820]: I0203 13:52:43.013978 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-vgdnv/crc-debug-9l4ps"] Feb 03 13:52:43 crc kubenswrapper[4820]: I0203 13:52:43.045236 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-vgdnv/crc-debug-9l4ps"] Feb 03 13:52:43 crc kubenswrapper[4820]: I0203 13:52:43.045865 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2q45r\" (UniqueName: \"kubernetes.io/projected/e23efd79-ed6d-434d-8927-8350aadc17b9-kube-api-access-2q45r\") pod \"e23efd79-ed6d-434d-8927-8350aadc17b9\" (UID: \"e23efd79-ed6d-434d-8927-8350aadc17b9\") " Feb 03 13:52:43 crc kubenswrapper[4820]: I0203 13:52:43.046084 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e23efd79-ed6d-434d-8927-8350aadc17b9-host\") pod \"e23efd79-ed6d-434d-8927-8350aadc17b9\" (UID: \"e23efd79-ed6d-434d-8927-8350aadc17b9\") " Feb 03 13:52:43 crc kubenswrapper[4820]: I0203 13:52:43.046554 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e23efd79-ed6d-434d-8927-8350aadc17b9-host" (OuterVolumeSpecName: "host") pod "e23efd79-ed6d-434d-8927-8350aadc17b9" (UID: "e23efd79-ed6d-434d-8927-8350aadc17b9"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 03 13:52:43 crc kubenswrapper[4820]: I0203 13:52:43.056414 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e23efd79-ed6d-434d-8927-8350aadc17b9-kube-api-access-2q45r" (OuterVolumeSpecName: "kube-api-access-2q45r") pod "e23efd79-ed6d-434d-8927-8350aadc17b9" (UID: "e23efd79-ed6d-434d-8927-8350aadc17b9"). InnerVolumeSpecName "kube-api-access-2q45r". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:52:43 crc kubenswrapper[4820]: I0203 13:52:43.153734 4820 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e23efd79-ed6d-434d-8927-8350aadc17b9-host\") on node \"crc\" DevicePath \"\"" Feb 03 13:52:43 crc kubenswrapper[4820]: I0203 13:52:43.153790 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2q45r\" (UniqueName: \"kubernetes.io/projected/e23efd79-ed6d-434d-8927-8350aadc17b9-kube-api-access-2q45r\") on node \"crc\" DevicePath \"\"" Feb 03 13:52:43 crc kubenswrapper[4820]: I0203 13:52:43.193383 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e23efd79-ed6d-434d-8927-8350aadc17b9" path="/var/lib/kubelet/pods/e23efd79-ed6d-434d-8927-8350aadc17b9/volumes" Feb 03 13:52:43 crc kubenswrapper[4820]: I0203 13:52:43.773416 4820 scope.go:117] "RemoveContainer" containerID="011fa2bfd31d817d0919efde45c02a41cf142a5e68dd7149c416b53658b739f0" Feb 03 13:52:43 crc kubenswrapper[4820]: I0203 13:52:43.773446 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-vgdnv/crc-debug-9l4ps" Feb 03 13:52:44 crc kubenswrapper[4820]: I0203 13:52:44.434299 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-vgdnv/crc-debug-cdc4j"] Feb 03 13:52:44 crc kubenswrapper[4820]: E0203 13:52:44.436677 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e23efd79-ed6d-434d-8927-8350aadc17b9" containerName="container-00" Feb 03 13:52:44 crc kubenswrapper[4820]: I0203 13:52:44.436720 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="e23efd79-ed6d-434d-8927-8350aadc17b9" containerName="container-00" Feb 03 13:52:44 crc kubenswrapper[4820]: I0203 13:52:44.436990 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="e23efd79-ed6d-434d-8927-8350aadc17b9" containerName="container-00" Feb 03 13:52:44 crc kubenswrapper[4820]: I0203 13:52:44.437830 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-vgdnv/crc-debug-cdc4j" Feb 03 13:52:44 crc kubenswrapper[4820]: I0203 13:52:44.485109 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8pvjc\" (UniqueName: \"kubernetes.io/projected/78625105-a94b-443c-92ab-54051758239c-kube-api-access-8pvjc\") pod \"crc-debug-cdc4j\" (UID: \"78625105-a94b-443c-92ab-54051758239c\") " pod="openshift-must-gather-vgdnv/crc-debug-cdc4j" Feb 03 13:52:44 crc kubenswrapper[4820]: I0203 13:52:44.485169 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/78625105-a94b-443c-92ab-54051758239c-host\") pod \"crc-debug-cdc4j\" (UID: \"78625105-a94b-443c-92ab-54051758239c\") " pod="openshift-must-gather-vgdnv/crc-debug-cdc4j" Feb 03 13:52:44 crc kubenswrapper[4820]: I0203 13:52:44.587176 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8pvjc\" (UniqueName: \"kubernetes.io/projected/78625105-a94b-443c-92ab-54051758239c-kube-api-access-8pvjc\") pod \"crc-debug-cdc4j\" (UID: \"78625105-a94b-443c-92ab-54051758239c\") " pod="openshift-must-gather-vgdnv/crc-debug-cdc4j" Feb 03 13:52:44 crc kubenswrapper[4820]: I0203 13:52:44.587260 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/78625105-a94b-443c-92ab-54051758239c-host\") pod \"crc-debug-cdc4j\" (UID: \"78625105-a94b-443c-92ab-54051758239c\") " pod="openshift-must-gather-vgdnv/crc-debug-cdc4j" Feb 03 13:52:44 crc kubenswrapper[4820]: I0203 13:52:44.587448 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/78625105-a94b-443c-92ab-54051758239c-host\") pod \"crc-debug-cdc4j\" (UID: \"78625105-a94b-443c-92ab-54051758239c\") " pod="openshift-must-gather-vgdnv/crc-debug-cdc4j" Feb 03 13:52:44 crc kubenswrapper[4820]: I0203 13:52:44.628323 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8pvjc\" (UniqueName: \"kubernetes.io/projected/78625105-a94b-443c-92ab-54051758239c-kube-api-access-8pvjc\") pod \"crc-debug-cdc4j\" (UID: \"78625105-a94b-443c-92ab-54051758239c\") " pod="openshift-must-gather-vgdnv/crc-debug-cdc4j" Feb 03 13:52:44 crc kubenswrapper[4820]: I0203 13:52:44.760628 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-vgdnv/crc-debug-cdc4j" Feb 03 13:52:44 crc kubenswrapper[4820]: W0203 13:52:44.807173 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod78625105_a94b_443c_92ab_54051758239c.slice/crio-bad6b1b113cf93b76232e0177167f1ee5928341b8299695ceae9b25589af7b52 WatchSource:0}: Error finding container bad6b1b113cf93b76232e0177167f1ee5928341b8299695ceae9b25589af7b52: Status 404 returned error can't find the container with id bad6b1b113cf93b76232e0177167f1ee5928341b8299695ceae9b25589af7b52 Feb 03 13:52:45 crc kubenswrapper[4820]: I0203 13:52:45.808869 4820 generic.go:334] "Generic (PLEG): container finished" podID="78625105-a94b-443c-92ab-54051758239c" containerID="e09604d34b209087d3fe0514d90a91f331d4bfeb0c16f58ada6c3aa67602c553" exitCode=0 Feb 03 13:52:45 crc kubenswrapper[4820]: I0203 13:52:45.808937 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vgdnv/crc-debug-cdc4j" event={"ID":"78625105-a94b-443c-92ab-54051758239c","Type":"ContainerDied","Data":"e09604d34b209087d3fe0514d90a91f331d4bfeb0c16f58ada6c3aa67602c553"} Feb 03 13:52:45 crc kubenswrapper[4820]: I0203 13:52:45.809531 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vgdnv/crc-debug-cdc4j" event={"ID":"78625105-a94b-443c-92ab-54051758239c","Type":"ContainerStarted","Data":"bad6b1b113cf93b76232e0177167f1ee5928341b8299695ceae9b25589af7b52"} Feb 03 13:52:46 crc kubenswrapper[4820]: I0203 13:52:46.947268 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-vgdnv/crc-debug-cdc4j" Feb 03 13:52:47 crc kubenswrapper[4820]: I0203 13:52:47.103984 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/78625105-a94b-443c-92ab-54051758239c-host\") pod \"78625105-a94b-443c-92ab-54051758239c\" (UID: \"78625105-a94b-443c-92ab-54051758239c\") " Feb 03 13:52:47 crc kubenswrapper[4820]: I0203 13:52:47.104357 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8pvjc\" (UniqueName: \"kubernetes.io/projected/78625105-a94b-443c-92ab-54051758239c-kube-api-access-8pvjc\") pod \"78625105-a94b-443c-92ab-54051758239c\" (UID: \"78625105-a94b-443c-92ab-54051758239c\") " Feb 03 13:52:47 crc kubenswrapper[4820]: I0203 13:52:47.104771 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/78625105-a94b-443c-92ab-54051758239c-host" (OuterVolumeSpecName: "host") pod "78625105-a94b-443c-92ab-54051758239c" (UID: "78625105-a94b-443c-92ab-54051758239c"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 03 13:52:47 crc kubenswrapper[4820]: I0203 13:52:47.105079 4820 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/78625105-a94b-443c-92ab-54051758239c-host\") on node \"crc\" DevicePath \"\"" Feb 03 13:52:47 crc kubenswrapper[4820]: I0203 13:52:47.113213 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/78625105-a94b-443c-92ab-54051758239c-kube-api-access-8pvjc" (OuterVolumeSpecName: "kube-api-access-8pvjc") pod "78625105-a94b-443c-92ab-54051758239c" (UID: "78625105-a94b-443c-92ab-54051758239c"). InnerVolumeSpecName "kube-api-access-8pvjc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:52:47 crc kubenswrapper[4820]: I0203 13:52:47.207934 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8pvjc\" (UniqueName: \"kubernetes.io/projected/78625105-a94b-443c-92ab-54051758239c-kube-api-access-8pvjc\") on node \"crc\" DevicePath \"\"" Feb 03 13:52:47 crc kubenswrapper[4820]: I0203 13:52:47.843518 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-vgdnv/crc-debug-cdc4j" Feb 03 13:52:47 crc kubenswrapper[4820]: I0203 13:52:47.843826 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vgdnv/crc-debug-cdc4j" event={"ID":"78625105-a94b-443c-92ab-54051758239c","Type":"ContainerDied","Data":"bad6b1b113cf93b76232e0177167f1ee5928341b8299695ceae9b25589af7b52"} Feb 03 13:52:47 crc kubenswrapper[4820]: I0203 13:52:47.843905 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bad6b1b113cf93b76232e0177167f1ee5928341b8299695ceae9b25589af7b52" Feb 03 13:52:48 crc kubenswrapper[4820]: I0203 13:52:48.237627 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-vgdnv/crc-debug-cdc4j"] Feb 03 13:52:48 crc kubenswrapper[4820]: I0203 13:52:48.250667 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-vgdnv/crc-debug-cdc4j"] Feb 03 13:52:49 crc kubenswrapper[4820]: I0203 13:52:49.157199 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="78625105-a94b-443c-92ab-54051758239c" path="/var/lib/kubelet/pods/78625105-a94b-443c-92ab-54051758239c/volumes" Feb 03 13:52:49 crc kubenswrapper[4820]: I0203 13:52:49.473331 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-vgdnv/crc-debug-ck49p"] Feb 03 13:52:49 crc kubenswrapper[4820]: E0203 13:52:49.476177 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78625105-a94b-443c-92ab-54051758239c" containerName="container-00" Feb 03 13:52:49 crc kubenswrapper[4820]: I0203 13:52:49.476232 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="78625105-a94b-443c-92ab-54051758239c" containerName="container-00" Feb 03 13:52:49 crc kubenswrapper[4820]: I0203 13:52:49.477356 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="78625105-a94b-443c-92ab-54051758239c" containerName="container-00" Feb 03 13:52:49 crc kubenswrapper[4820]: I0203 13:52:49.480828 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-vgdnv/crc-debug-ck49p" Feb 03 13:52:49 crc kubenswrapper[4820]: I0203 13:52:49.607360 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/742a64da-fbfa-4ef8-9c68-6407a2f8d43c-host\") pod \"crc-debug-ck49p\" (UID: \"742a64da-fbfa-4ef8-9c68-6407a2f8d43c\") " pod="openshift-must-gather-vgdnv/crc-debug-ck49p" Feb 03 13:52:49 crc kubenswrapper[4820]: I0203 13:52:49.607435 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-smcqd\" (UniqueName: \"kubernetes.io/projected/742a64da-fbfa-4ef8-9c68-6407a2f8d43c-kube-api-access-smcqd\") pod \"crc-debug-ck49p\" (UID: \"742a64da-fbfa-4ef8-9c68-6407a2f8d43c\") " pod="openshift-must-gather-vgdnv/crc-debug-ck49p" Feb 03 13:52:49 crc kubenswrapper[4820]: I0203 13:52:49.709880 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/742a64da-fbfa-4ef8-9c68-6407a2f8d43c-host\") pod \"crc-debug-ck49p\" (UID: \"742a64da-fbfa-4ef8-9c68-6407a2f8d43c\") " pod="openshift-must-gather-vgdnv/crc-debug-ck49p" Feb 03 13:52:49 crc kubenswrapper[4820]: I0203 13:52:49.709981 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-smcqd\" (UniqueName: \"kubernetes.io/projected/742a64da-fbfa-4ef8-9c68-6407a2f8d43c-kube-api-access-smcqd\") pod \"crc-debug-ck49p\" (UID: \"742a64da-fbfa-4ef8-9c68-6407a2f8d43c\") " pod="openshift-must-gather-vgdnv/crc-debug-ck49p" Feb 03 13:52:49 crc kubenswrapper[4820]: I0203 13:52:49.710055 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/742a64da-fbfa-4ef8-9c68-6407a2f8d43c-host\") pod \"crc-debug-ck49p\" (UID: \"742a64da-fbfa-4ef8-9c68-6407a2f8d43c\") " pod="openshift-must-gather-vgdnv/crc-debug-ck49p" Feb 03 13:52:49 crc kubenswrapper[4820]: I0203 13:52:49.741730 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-smcqd\" (UniqueName: \"kubernetes.io/projected/742a64da-fbfa-4ef8-9c68-6407a2f8d43c-kube-api-access-smcqd\") pod \"crc-debug-ck49p\" (UID: \"742a64da-fbfa-4ef8-9c68-6407a2f8d43c\") " pod="openshift-must-gather-vgdnv/crc-debug-ck49p" Feb 03 13:52:49 crc kubenswrapper[4820]: I0203 13:52:49.924601 4820 util.go:30] "No sandbox for pod can be found. 
Feb 03 13:52:50 crc kubenswrapper[4820]: I0203 13:52:50.983177 4820 generic.go:334] "Generic (PLEG): container finished" podID="742a64da-fbfa-4ef8-9c68-6407a2f8d43c" containerID="7ac391b432e9efa37676525c73d22df71d2e1f152296baf3c184829ef12ff629" exitCode=0
Feb 03 13:52:50 crc kubenswrapper[4820]: I0203 13:52:50.983517 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vgdnv/crc-debug-ck49p" event={"ID":"742a64da-fbfa-4ef8-9c68-6407a2f8d43c","Type":"ContainerDied","Data":"7ac391b432e9efa37676525c73d22df71d2e1f152296baf3c184829ef12ff629"}
Feb 03 13:52:50 crc kubenswrapper[4820]: I0203 13:52:50.983552 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vgdnv/crc-debug-ck49p" event={"ID":"742a64da-fbfa-4ef8-9c68-6407a2f8d43c","Type":"ContainerStarted","Data":"a6c9c7c49afe4974f2246e707e0d009e9b3d92ec79680c1b93ab370ddfecffcc"}
Feb 03 13:52:51 crc kubenswrapper[4820]: I0203 13:52:51.087233 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-vgdnv/crc-debug-ck49p"]
Feb 03 13:52:51 crc kubenswrapper[4820]: I0203 13:52:51.099383 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-vgdnv/crc-debug-ck49p"]
Feb 03 13:52:52 crc kubenswrapper[4820]: I0203 13:52:52.125568 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-vgdnv/crc-debug-ck49p"
Feb 03 13:52:52 crc kubenswrapper[4820]: I0203 13:52:52.275806 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/742a64da-fbfa-4ef8-9c68-6407a2f8d43c-host\") pod \"742a64da-fbfa-4ef8-9c68-6407a2f8d43c\" (UID: \"742a64da-fbfa-4ef8-9c68-6407a2f8d43c\") "
Feb 03 13:52:52 crc kubenswrapper[4820]: I0203 13:52:52.275906 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/742a64da-fbfa-4ef8-9c68-6407a2f8d43c-host" (OuterVolumeSpecName: "host") pod "742a64da-fbfa-4ef8-9c68-6407a2f8d43c" (UID: "742a64da-fbfa-4ef8-9c68-6407a2f8d43c"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 03 13:52:52 crc kubenswrapper[4820]: I0203 13:52:52.276062 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-smcqd\" (UniqueName: \"kubernetes.io/projected/742a64da-fbfa-4ef8-9c68-6407a2f8d43c-kube-api-access-smcqd\") pod \"742a64da-fbfa-4ef8-9c68-6407a2f8d43c\" (UID: \"742a64da-fbfa-4ef8-9c68-6407a2f8d43c\") "
Feb 03 13:52:52 crc kubenswrapper[4820]: I0203 13:52:52.276803 4820 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/742a64da-fbfa-4ef8-9c68-6407a2f8d43c-host\") on node \"crc\" DevicePath \"\""
Feb 03 13:52:52 crc kubenswrapper[4820]: I0203 13:52:52.284263 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/742a64da-fbfa-4ef8-9c68-6407a2f8d43c-kube-api-access-smcqd" (OuterVolumeSpecName: "kube-api-access-smcqd") pod "742a64da-fbfa-4ef8-9c68-6407a2f8d43c" (UID: "742a64da-fbfa-4ef8-9c68-6407a2f8d43c"). InnerVolumeSpecName "kube-api-access-smcqd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 13:52:52 crc kubenswrapper[4820]: I0203 13:52:52.575577 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-smcqd\" (UniqueName: \"kubernetes.io/projected/742a64da-fbfa-4ef8-9c68-6407a2f8d43c-kube-api-access-smcqd\") on node \"crc\" DevicePath \"\""
Feb 03 13:52:53 crc kubenswrapper[4820]: I0203 13:52:53.005233 4820 scope.go:117] "RemoveContainer" containerID="7ac391b432e9efa37676525c73d22df71d2e1f152296baf3c184829ef12ff629"
Feb 03 13:52:53 crc kubenswrapper[4820]: I0203 13:52:53.005262 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-vgdnv/crc-debug-ck49p"
Feb 03 13:52:53 crc kubenswrapper[4820]: I0203 13:52:53.162158 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="742a64da-fbfa-4ef8-9c68-6407a2f8d43c" path="/var/lib/kubelet/pods/742a64da-fbfa-4ef8-9c68-6407a2f8d43c/volumes"
Feb 03 13:53:01 crc kubenswrapper[4820]: I0203 13:53:01.365332 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qj7xr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 03 13:53:01 crc kubenswrapper[4820]: I0203 13:53:01.365915 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 03 13:53:01 crc kubenswrapper[4820]: I0203 13:53:01.365981 4820 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr"
Feb 03 13:53:01 crc kubenswrapper[4820]: I0203 13:53:01.367243 4820 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"00fe136b4d1378c44d37871bea8a6e5ad65e410eca507a4e17eba65954b38a9e"} pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 03 13:53:01 crc kubenswrapper[4820]: I0203 13:53:01.367321 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" containerName="machine-config-daemon" containerID="cri-o://00fe136b4d1378c44d37871bea8a6e5ad65e410eca507a4e17eba65954b38a9e" gracePeriod=600
Feb 03 13:53:01 crc kubenswrapper[4820]: E0203 13:53:01.531744 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qj7xr_openshift-machine-config-operator(2c02def6-29f2-448e-80ec-0c8ee290f053)\"" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053"
Feb 03 13:53:02 crc kubenswrapper[4820]: I0203 13:53:02.355798 4820 generic.go:334] "Generic (PLEG): container finished" podID="2c02def6-29f2-448e-80ec-0c8ee290f053" containerID="00fe136b4d1378c44d37871bea8a6e5ad65e410eca507a4e17eba65954b38a9e" exitCode=0
Feb 03 13:53:02 crc kubenswrapper[4820]: I0203 13:53:02.355875 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" event={"ID":"2c02def6-29f2-448e-80ec-0c8ee290f053","Type":"ContainerDied","Data":"00fe136b4d1378c44d37871bea8a6e5ad65e410eca507a4e17eba65954b38a9e"}
Feb 03 13:53:02 crc kubenswrapper[4820]: I0203 13:53:02.355973 4820 scope.go:117] "RemoveContainer" containerID="ef180b5ad3083e7765ecf2a351c6b4db38618425b9e53c6f4487f37e71237324"
Feb 03 13:53:02 crc kubenswrapper[4820]: I0203 13:53:02.356832 4820 scope.go:117] "RemoveContainer" containerID="00fe136b4d1378c44d37871bea8a6e5ad65e410eca507a4e17eba65954b38a9e"
Feb 03 13:53:02 crc kubenswrapper[4820]: E0203 13:53:02.357275 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qj7xr_openshift-machine-config-operator(2c02def6-29f2-448e-80ec-0c8ee290f053)\"" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053"
Feb 03 13:53:14 crc kubenswrapper[4820]: I0203 13:53:14.151771 4820 scope.go:117] "RemoveContainer" containerID="00fe136b4d1378c44d37871bea8a6e5ad65e410eca507a4e17eba65954b38a9e"
Feb 03 13:53:14 crc kubenswrapper[4820]: E0203 13:53:14.152523 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qj7xr_openshift-machine-config-operator(2c02def6-29f2-448e-80ec-0c8ee290f053)\"" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053"
Feb 03 13:53:29 crc kubenswrapper[4820]: I0203 13:53:29.143315 4820 scope.go:117] "RemoveContainer" containerID="00fe136b4d1378c44d37871bea8a6e5ad65e410eca507a4e17eba65954b38a9e"
Feb 03 13:53:29 crc kubenswrapper[4820]: E0203 13:53:29.144315 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qj7xr_openshift-machine-config-operator(2c02def6-29f2-448e-80ec-0c8ee290f053)\"" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053"
Feb 03 13:53:43 crc kubenswrapper[4820]: I0203 13:53:43.150562 4820 scope.go:117] "RemoveContainer" containerID="00fe136b4d1378c44d37871bea8a6e5ad65e410eca507a4e17eba65954b38a9e"
Feb 03 13:53:43 crc kubenswrapper[4820]: E0203 13:53:43.151310 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qj7xr_openshift-machine-config-operator(2c02def6-29f2-448e-80ec-0c8ee290f053)\"" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053"
Feb 03 13:53:47 crc kubenswrapper[4820]: I0203 13:53:47.503515 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-fdff74856-dfqrf_5229e26a-15af-47fd-bb4a-956968711984/barbican-api/0.log"
Feb 03 13:53:47 crc kubenswrapper[4820]: I0203 13:53:47.684061 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-fdff74856-dfqrf_5229e26a-15af-47fd-bb4a-956968711984/barbican-api-log/0.log"
Feb 03 13:53:47 crc kubenswrapper[4820]: I0203 13:53:47.709325 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-775b8c5454-c9g7t_86a0d38b-74e6-4528-9dae-af9c8400555d/barbican-keystone-listener/0.log"
Feb 03 13:53:47 crc kubenswrapper[4820]: I0203 13:53:47.874858 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-775b8c5454-c9g7t_86a0d38b-74e6-4528-9dae-af9c8400555d/barbican-keystone-listener-log/0.log"
Feb 03 13:53:47 crc kubenswrapper[4820]: I0203 13:53:47.958298 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-659d874887-6h95b_410ba29a-39b4-4468-837d-8b38a94d638d/barbican-worker/0.log"
Feb 03 13:53:48 crc kubenswrapper[4820]: I0203 13:53:48.075026 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-659d874887-6h95b_410ba29a-39b4-4468-837d-8b38a94d638d/barbican-worker-log/0.log"
Feb 03 13:53:48 crc kubenswrapper[4820]: I0203 13:53:48.216877 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-mvssl_24c4a250-4fa9-42c6-a3bd-e626d0adc807/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log"
Feb 03 13:53:48 crc kubenswrapper[4820]: I0203 13:53:48.373463 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_fcf87510-64cf-492b-bd2c-560f6ddc0ee2/ceilometer-central-agent/0.log"
Feb 03 13:53:48 crc kubenswrapper[4820]: I0203 13:53:48.447172 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_fcf87510-64cf-492b-bd2c-560f6ddc0ee2/ceilometer-notification-agent/0.log"
Feb 03 13:53:48 crc kubenswrapper[4820]: I0203 13:53:48.515904 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_fcf87510-64cf-492b-bd2c-560f6ddc0ee2/proxy-httpd/0.log"
Feb 03 13:53:48 crc kubenswrapper[4820]: I0203 13:53:48.593110 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_fcf87510-64cf-492b-bd2c-560f6ddc0ee2/sg-core/0.log"
Feb 03 13:53:48 crc kubenswrapper[4820]: I0203 13:53:48.734815 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_32b101cf-4d79-44f8-a591-dd5c74df5af6/cinder-api/0.log"
Feb 03 13:53:48 crc kubenswrapper[4820]: I0203 13:53:48.819236 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_32b101cf-4d79-44f8-a591-dd5c74df5af6/cinder-api-log/0.log"
Feb 03 13:53:49 crc kubenswrapper[4820]: I0203 13:53:49.137377 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_2de9875d-8142-41a2-80b3-74a66ef53e07/probe/0.log"
Feb 03 13:53:49 crc kubenswrapper[4820]: I0203 13:53:49.140082 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_2de9875d-8142-41a2-80b3-74a66ef53e07/cinder-scheduler/0.log"
Feb 03 13:53:49 crc kubenswrapper[4820]: I0203 13:53:49.152161 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-7c9gw_fc5454df-b4c1-45f5-9021-a70a13b47b37/configure-network-edpm-deployment-openstack-edpm-ipam/0.log"
Feb 03 13:53:49 crc kubenswrapper[4820]: I0203 13:53:49.369115 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-rl9z7_126074cf-7213-48ec-8909-5a8286bb11b6/configure-os-edpm-deployment-openstack-edpm-ipam/0.log"
Feb 03 13:53:49 crc kubenswrapper[4820]: I0203 13:53:49.413165 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6cd9bffc9-kz5f5_a42a2742-e704-482e-ac37-5c948277f576/init/0.log"
Feb 03 13:53:49 crc kubenswrapper[4820]: I0203 13:53:49.570553 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6cd9bffc9-kz5f5_a42a2742-e704-482e-ac37-5c948277f576/init/0.log"
Feb 03 13:53:49 crc kubenswrapper[4820]: I0203 13:53:49.763341 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6cd9bffc9-kz5f5_a42a2742-e704-482e-ac37-5c948277f576/dnsmasq-dns/0.log"
Feb 03 13:53:49 crc kubenswrapper[4820]: I0203 13:53:49.776652 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-hwmx7_c7b75829-d001-4e04-9850-44e986677f48/download-cache-edpm-deployment-openstack-edpm-ipam/0.log"
Feb 03 13:53:49 crc kubenswrapper[4820]: I0203 13:53:49.993937 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_51339dae-75ae-4857-853e-d4d0a0a1aa65/glance-httpd/0.log"
Feb 03 13:53:50 crc kubenswrapper[4820]: I0203 13:53:50.032067 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_51339dae-75ae-4857-853e-d4d0a0a1aa65/glance-log/0.log"
Feb 03 13:53:50 crc kubenswrapper[4820]: I0203 13:53:50.163697 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_227e62a0-37fd-4e52-ae44-df01b13d4b32/glance-httpd/0.log"
Feb 03 13:53:50 crc kubenswrapper[4820]: I0203 13:53:50.189418 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_227e62a0-37fd-4e52-ae44-df01b13d4b32/glance-log/0.log"
Feb 03 13:53:50 crc kubenswrapper[4820]: I0203 13:53:50.567460 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-68b4df5bdd-tdb9h_308562dd-6078-4c1c-a4e0-c01a60a2d81d/horizon/4.log"
Feb 03 13:53:50 crc kubenswrapper[4820]: I0203 13:53:50.701038 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-68b4df5bdd-tdb9h_308562dd-6078-4c1c-a4e0-c01a60a2d81d/horizon/3.log"
Feb 03 13:53:50 crc kubenswrapper[4820]: I0203 13:53:50.864028 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-kgbdv_27a58bb7-ce09-4c16-b190-071c1c506a14/install-certs-edpm-deployment-openstack-edpm-ipam/0.log"
Feb 03 13:53:51 crc kubenswrapper[4820]: I0203 13:53:51.123235 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-7qz9r_9311424c-1f4a-434d-8e8c-e5383453074c/install-os-edpm-deployment-openstack-edpm-ipam/0.log"
Feb 03 13:53:51 crc kubenswrapper[4820]: I0203 13:53:51.282241 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-68b4df5bdd-tdb9h_308562dd-6078-4c1c-a4e0-c01a60a2d81d/horizon-log/0.log"
Feb 03 13:53:51 crc kubenswrapper[4820]: I0203 13:53:51.464526 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29502061-76zjl_fe4eea03-b3c4-427a-acc9-7b73142f1723/keystone-cron/0.log"
Feb 03 13:53:51 crc kubenswrapper[4820]: I0203 13:53:51.725524 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_eb6e937f-acf9-4ee8-8ee9-c757535b3a53/kube-state-metrics/0.log"
Feb 03 13:53:51 crc kubenswrapper[4820]: I0203 13:53:51.727829 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-6ccd68b7f-9xjs9_c5d266f2-257d-4f06-9237-b34d67b51245/keystone-api/0.log"
Feb 03 13:53:51 crc kubenswrapper[4820]: I0203 13:53:51.886373 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-mqwx5_772be0ab-717e-4a25-a481-95a4b1cd0c07/libvirt-edpm-deployment-openstack-edpm-ipam/0.log"
Feb 03 13:53:52 crc kubenswrapper[4820]: I0203 13:53:52.300315 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-fw8jc_0c9770c6-0c7f-4195-99d7-a9f7074e0236/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log"
Feb 03 13:53:52 crc kubenswrapper[4820]: I0203 13:53:52.369753 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-7f9964d55c-h2clw_aef62020-c58e-4de0-b1b3-10fdd2b8dc8d/neutron-httpd/0.log"
Feb 03 13:53:52 crc kubenswrapper[4820]: I0203 13:53:52.405225 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-7f9964d55c-h2clw_aef62020-c58e-4de0-b1b3-10fdd2b8dc8d/neutron-api/0.log"
Feb 03 13:53:53 crc kubenswrapper[4820]: I0203 13:53:53.133837 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_d1bc719a-a75c-4bf1-aaae-0e89d1ed34db/nova-cell0-conductor-conductor/0.log"
Feb 03 13:53:53 crc kubenswrapper[4820]: I0203 13:53:53.345330 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_c362e3ce-ca7f-443e-ab57-57f34e89e883/nova-cell1-conductor-conductor/0.log"
Feb 03 13:53:53 crc kubenswrapper[4820]: I0203 13:53:53.751017 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_26398afc-04a6-4c1f-92bf-767a938debad/nova-api-log/0.log"
Feb 03 13:53:53 crc kubenswrapper[4820]: I0203 13:53:53.783192 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_33bbf307-c8f9-402f-9b83-50d9d9b034c2/nova-cell1-novncproxy-novncproxy/0.log"
Feb 03 13:53:54 crc kubenswrapper[4820]: I0203 13:53:54.277335 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-6pwfl_b390260e-6a1b-4020-95d5-c4275e4a6c4e/nova-edpm-deployment-openstack-edpm-ipam/0.log"
Feb 03 13:53:54 crc kubenswrapper[4820]: I0203 13:53:54.322516 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_b2a1328f-2e2d-47e6-b07c-d0b70643e1aa/nova-metadata-log/0.log"
Feb 03 13:53:54 crc kubenswrapper[4820]: I0203 13:53:54.565651 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_26398afc-04a6-4c1f-92bf-767a938debad/nova-api-api/0.log"
Feb 03 13:53:54 crc kubenswrapper[4820]: I0203 13:53:54.848620 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_1e865214-494f-4a49-a2e6-2b7316f30a92/mysql-bootstrap/0.log"
Feb 03 13:53:55 crc kubenswrapper[4820]: I0203 13:53:55.014661 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_dff15ab3-eace-455f-b413-0acd29aa3cb5/nova-scheduler-scheduler/0.log"
Feb 03 13:53:55 crc kubenswrapper[4820]: I0203 13:53:55.085228 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_1e865214-494f-4a49-a2e6-2b7316f30a92/mysql-bootstrap/0.log"
Feb 03 13:53:55 crc kubenswrapper[4820]: I0203 13:53:55.165048 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_1e865214-494f-4a49-a2e6-2b7316f30a92/galera/0.log"
Feb 03 13:53:55 crc kubenswrapper[4820]: I0203 13:53:55.367601 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_e8e46f8a-5de0-457f-b8eb-f76e8902e8ab/mysql-bootstrap/0.log"
Feb 03 13:53:55 crc kubenswrapper[4820]: I0203 13:53:55.565019 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_e8e46f8a-5de0-457f-b8eb-f76e8902e8ab/mysql-bootstrap/0.log"
Feb 03 13:53:55 crc kubenswrapper[4820]: I0203 13:53:55.579953 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_e8e46f8a-5de0-457f-b8eb-f76e8902e8ab/galera/0.log"
Feb 03 13:53:55 crc kubenswrapper[4820]: I0203 13:53:55.828278 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_bf76d2b6-6c3a-42e7-b813-63cfcd39bd0e/openstackclient/0.log"
Feb 03 13:53:56 crc kubenswrapper[4820]: I0203 13:53:56.140807 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-96p5d_b3b01895-53e1-4391-8d1e-8f2458d4f2e0/ovn-controller/0.log"
Feb 03 13:53:56 crc kubenswrapper[4820]: I0203 13:53:56.353123 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-lrcd2_1a16d012-2c9a-452a-9a18-8d016793a7f6/openstack-network-exporter/0.log"
Feb 03 13:53:56 crc kubenswrapper[4820]: I0203 13:53:56.546295 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-kk5zn_7fd50209-6464-4ba1-a7f9-ff9a38317ff2/ovsdb-server-init/0.log"
Feb 03 13:53:56 crc kubenswrapper[4820]: I0203 13:53:56.761226 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-kk5zn_7fd50209-6464-4ba1-a7f9-ff9a38317ff2/ovs-vswitchd/0.log"
Feb 03 13:53:56 crc kubenswrapper[4820]: I0203 13:53:56.798015 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-kk5zn_7fd50209-6464-4ba1-a7f9-ff9a38317ff2/ovsdb-server/0.log"
Feb 03 13:53:58 crc kubenswrapper[4820]: I0203 13:53:58.142464 4820 scope.go:117] "RemoveContainer" containerID="00fe136b4d1378c44d37871bea8a6e5ad65e410eca507a4e17eba65954b38a9e"
Feb 03 13:53:58 crc kubenswrapper[4820]: E0203 13:53:58.143058 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qj7xr_openshift-machine-config-operator(2c02def6-29f2-448e-80ec-0c8ee290f053)\"" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053"
Feb 03 13:53:58 crc kubenswrapper[4820]: I0203 13:53:58.422315 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-kk5zn_7fd50209-6464-4ba1-a7f9-ff9a38317ff2/ovsdb-server-init/0.log"
Feb 03 13:53:58 crc kubenswrapper[4820]: I0203 13:53:58.540512 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-hx7dn_ffae89cd-1189-4722-8b80-6bf2a67f5dde/ovn-edpm-deployment-openstack-edpm-ipam/0.log"
Feb 03 13:53:58 crc kubenswrapper[4820]: I0203 13:53:58.712099 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_b2a1328f-2e2d-47e6-b07c-d0b70643e1aa/nova-metadata-metadata/0.log"
Feb 03 13:53:58 crc kubenswrapper[4820]: I0203 13:53:58.726086 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_d248d6d6-d6ff-415a-9ea6-d65cde5ad964/openstack-network-exporter/0.log"
Feb 03 13:53:58 crc kubenswrapper[4820]: I0203 13:53:58.821674 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_d248d6d6-d6ff-415a-9ea6-d65cde5ad964/ovn-northd/0.log"
Feb 03 13:53:58 crc kubenswrapper[4820]: I0203 13:53:58.988397 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_f936af63-a86d-4dc6-aa17-59e2e2b69f5b/openstack-network-exporter/0.log"
Feb 03 13:53:59 crc kubenswrapper[4820]: I0203 13:53:59.044550 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_f936af63-a86d-4dc6-aa17-59e2e2b69f5b/ovsdbserver-nb/0.log"
Feb 03 13:53:59 crc kubenswrapper[4820]: I0203 13:53:59.268791 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_c9c7327b-374e-4a6f-a5c7-23136aea36b8/ovsdbserver-sb/0.log"
Feb 03 13:53:59 crc kubenswrapper[4820]: I0203 13:53:59.328292 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_c9c7327b-374e-4a6f-a5c7-23136aea36b8/openstack-network-exporter/0.log"
Feb 03 13:53:59 crc kubenswrapper[4820]: I0203 13:53:59.589088 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-656b464f74-h7xjt_43ecc5a4-8bd1-435c-8514-de23a493ee45/placement-api/0.log"
Feb 03 13:54:00 crc kubenswrapper[4820]: I0203 13:54:00.101759 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_f6a9118b-1d0e-4baf-92ca-c4024a45dd2e/init-config-reloader/0.log"
Feb 03 13:54:00 crc kubenswrapper[4820]: I0203 13:54:00.268767 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-656b464f74-h7xjt_43ecc5a4-8bd1-435c-8514-de23a493ee45/placement-log/0.log"
Feb 03 13:54:00 crc kubenswrapper[4820]: I0203 13:54:00.275318 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_f6a9118b-1d0e-4baf-92ca-c4024a45dd2e/init-config-reloader/0.log"
Feb 03 13:54:00 crc kubenswrapper[4820]: I0203 13:54:00.390103 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_f6a9118b-1d0e-4baf-92ca-c4024a45dd2e/config-reloader/0.log"
Feb 03 13:54:00 crc kubenswrapper[4820]: I0203 13:54:00.419864 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_f6a9118b-1d0e-4baf-92ca-c4024a45dd2e/prometheus/0.log"
Feb 03 13:54:00 crc kubenswrapper[4820]: I0203 13:54:00.523348 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_f6a9118b-1d0e-4baf-92ca-c4024a45dd2e/thanos-sidecar/0.log"
Feb 03 13:54:00 crc kubenswrapper[4820]: I0203 13:54:00.677419 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_c2cfe24f-4614-4f48-867c-722af03baad7/setup-container/0.log"
Feb 03 13:54:00 crc kubenswrapper[4820]: I0203 13:54:00.964256 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_ed109a9d-a703-4fa2-b7b3-0b96760d52b1/setup-container/0.log"
Feb 03 13:54:01 crc kubenswrapper[4820]: I0203 13:54:01.001072 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_c2cfe24f-4614-4f48-867c-722af03baad7/setup-container/0.log"
Feb 03 13:54:01 crc kubenswrapper[4820]: I0203 13:54:01.005690 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_c2cfe24f-4614-4f48-867c-722af03baad7/rabbitmq/0.log"
Feb 03 13:54:01 crc kubenswrapper[4820]: I0203 13:54:01.202327 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_ed109a9d-a703-4fa2-b7b3-0b96760d52b1/setup-container/0.log"
Feb 03 13:54:01 crc kubenswrapper[4820]: I0203 13:54:01.254597 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-2tqvr_02202494-64ad-452c-ad31-b76746e7e746/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log"
Feb 03 13:54:01 crc kubenswrapper[4820]: I0203 13:54:01.320054 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_ed109a9d-a703-4fa2-b7b3-0b96760d52b1/rabbitmq/0.log"
Feb 03 13:54:01 crc kubenswrapper[4820]: I0203 13:54:01.462019 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-5gdkr_a7717d9c-63f8-493f-be01-0fdea46ef053/redhat-edpm-deployment-openstack-edpm-ipam/0.log"
Feb 03 13:54:01 crc kubenswrapper[4820]: I0203 13:54:01.573524 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-dkdr4_d8d69bce-1404-4fce-ab56-a8d4c9f46b28/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log"
Feb 03 13:54:01 crc kubenswrapper[4820]: I0203 13:54:01.724390 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-dzskl_fe0dcc37-428f-4efa-a725-e4361affcacd/run-os-edpm-deployment-openstack-edpm-ipam/0.log"
Feb 03 13:54:01 crc kubenswrapper[4820]: I0203 13:54:01.852654 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-g9h74_dc7a208f-6c45-4374-ace1-70b2e16c499c/ssh-known-hosts-edpm-deployment/0.log"
Feb 03 13:54:02 crc kubenswrapper[4820]: I0203 13:54:02.077798 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-646ccfdf87-kdlkr_e530e04a-6fa7-4cc2-be2a-46a26eec64a5/proxy-server/0.log"
Feb 03 13:54:02 crc kubenswrapper[4820]: I0203 13:54:02.229803 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-pslmr_94423319-f57f-47dd-80db-db41374dcb25/swift-ring-rebalance/0.log"
Feb 03 13:54:02 crc kubenswrapper[4820]: I0203 13:54:02.251093 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-646ccfdf87-kdlkr_e530e04a-6fa7-4cc2-be2a-46a26eec64a5/proxy-httpd/0.log"
Feb 03 13:54:02 crc kubenswrapper[4820]: I0203 13:54:02.389798 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_d4eb10ed-a945-4b23-8fb3-62022a90e09f/account-auditor/0.log"
Feb 03 13:54:02 crc kubenswrapper[4820]: I0203 13:54:02.512041 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_d4eb10ed-a945-4b23-8fb3-62022a90e09f/account-reaper/0.log"
Feb 03 13:54:02 crc kubenswrapper[4820]: I0203 13:54:02.564616 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_d4eb10ed-a945-4b23-8fb3-62022a90e09f/account-replicator/0.log"
Feb 03 13:54:02 crc kubenswrapper[4820]: I0203 13:54:02.610614 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_d4eb10ed-a945-4b23-8fb3-62022a90e09f/container-auditor/0.log"
Feb 03 13:54:02 crc kubenswrapper[4820]: I0203 13:54:02.634420 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_d4eb10ed-a945-4b23-8fb3-62022a90e09f/account-server/0.log"
Feb 03 13:54:02 crc kubenswrapper[4820]: I0203 13:54:02.950989 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_d4eb10ed-a945-4b23-8fb3-62022a90e09f/container-server/0.log"
Feb 03 13:54:02 crc kubenswrapper[4820]: I0203 13:54:02.991192 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_d4eb10ed-a945-4b23-8fb3-62022a90e09f/container-replicator/0.log"
Feb 03 13:54:03 crc kubenswrapper[4820]: I0203 13:54:03.033213 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_d4eb10ed-a945-4b23-8fb3-62022a90e09f/container-updater/0.log"
Feb 03 13:54:03 crc kubenswrapper[4820]: I0203 13:54:03.058565 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_d4eb10ed-a945-4b23-8fb3-62022a90e09f/object-auditor/0.log"
Feb 03 13:54:03 crc kubenswrapper[4820]: I0203 13:54:03.170817 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_d4eb10ed-a945-4b23-8fb3-62022a90e09f/object-expirer/0.log"
Feb 03 13:54:03 crc kubenswrapper[4820]: I0203 13:54:03.236632 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_d4eb10ed-a945-4b23-8fb3-62022a90e09f/object-replicator/0.log"
Feb 03 13:54:03 crc kubenswrapper[4820]: I0203 13:54:03.267926 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_d4eb10ed-a945-4b23-8fb3-62022a90e09f/object-server/0.log"
Feb 03 13:54:03 crc kubenswrapper[4820]: I0203 13:54:03.311636 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_d4eb10ed-a945-4b23-8fb3-62022a90e09f/object-updater/0.log"
Feb 03 13:54:03 crc kubenswrapper[4820]: I0203 13:54:03.392700 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_d4eb10ed-a945-4b23-8fb3-62022a90e09f/rsync/0.log"
Feb 03 13:54:03 crc kubenswrapper[4820]: I0203 13:54:03.528731 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_d4eb10ed-a945-4b23-8fb3-62022a90e09f/swift-recon-cron/0.log"
Feb 03 13:54:03 crc kubenswrapper[4820]: I0203 13:54:03.587765 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-g98lk_9dba6be1-f601-4959-8c1f-791b7fb032b8/telemetry-edpm-deployment-openstack-edpm-ipam/0.log"
Feb 03 13:54:03 crc kubenswrapper[4820]: I0203 13:54:03.802039 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_a52d7dcc-1107-47d1-b270-0601e9dc2b1b/tempest-tests-tempest-tests-runner/0.log"
Feb 03 13:54:04 crc kubenswrapper[4820]: I0203 13:54:04.091578 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_9f3d9ead-7790-4cbb-a70c-51aa29d87eef/test-operator-logs-container/0.log"
Feb 03 13:54:04 crc kubenswrapper[4820]: I0203 13:54:04.247872 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-nqb5x_ee96f9e1-369f-4e88-9766-419a9a05abe5/validate-network-edpm-deployment-openstack-edpm-ipam/0.log"
Feb 03 13:54:05 crc kubenswrapper[4820]: I0203 13:54:05.087174 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_watcher-applier-0_6ed16a73-0e39-4ac4-bd01-820e6a7a45b0/watcher-applier/0.log"
Feb 03 13:54:05 crc kubenswrapper[4820]: I0203 13:54:05.719603 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_watcher-api-0_7cd4de1e-997d-4df1-9ad5-2049937ab135/watcher-api-log/0.log"
Feb 03 13:54:06 crc kubenswrapper[4820]: I0203 13:54:06.651630 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_watcher-decision-engine-0_8a78aa6d-1dad-4a68-8ec3-f455e7d21fbe/watcher-decision-engine/0.log"
Feb 03 13:54:07 crc kubenswrapper[4820]: I0203 13:54:07.099763 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_ace9a08e-e106-4d85-ae21-3d7d6ea60dff/memcached/0.log"
Feb 03 13:54:08 crc kubenswrapper[4820]: I0203 13:54:08.943308 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_watcher-api-0_7cd4de1e-997d-4df1-9ad5-2049937ab135/watcher-api/0.log"
Feb 03 13:54:10 crc kubenswrapper[4820]: I0203 13:54:10.142824 4820 scope.go:117] "RemoveContainer" containerID="00fe136b4d1378c44d37871bea8a6e5ad65e410eca507a4e17eba65954b38a9e"
Feb 03 13:54:10 crc kubenswrapper[4820]: E0203 13:54:10.143432 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qj7xr_openshift-machine-config-operator(2c02def6-29f2-448e-80ec-0c8ee290f053)\"" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053"
Feb 03 13:54:23 crc kubenswrapper[4820]: I0203 13:54:23.152613 4820 scope.go:117] "RemoveContainer" containerID="00fe136b4d1378c44d37871bea8a6e5ad65e410eca507a4e17eba65954b38a9e"
Feb 03 13:54:23 crc kubenswrapper[4820]: E0203 13:54:23.153585 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qj7xr_openshift-machine-config-operator(2c02def6-29f2-448e-80ec-0c8ee290f053)\"" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053"
Feb 03 13:54:35 crc kubenswrapper[4820]: I0203 13:54:35.469404 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_06b2e162f0c49a09bca642c96fba7f5ff7d8911993e14c444810042e45lq6nl_7a0d0284-7ac0-4e09-ba63-1fa33dbbb574/util/0.log"
Feb 03 13:54:35 crc kubenswrapper[4820]: I0203 13:54:35.673130 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_06b2e162f0c49a09bca642c96fba7f5ff7d8911993e14c444810042e45lq6nl_7a0d0284-7ac0-4e09-ba63-1fa33dbbb574/pull/0.log"
Feb 03 13:54:35 crc kubenswrapper[4820]: I0203 13:54:35.688542 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_06b2e162f0c49a09bca642c96fba7f5ff7d8911993e14c444810042e45lq6nl_7a0d0284-7ac0-4e09-ba63-1fa33dbbb574/util/0.log"
Feb 03 13:54:35 crc kubenswrapper[4820]: I0203 13:54:35.761666 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_06b2e162f0c49a09bca642c96fba7f5ff7d8911993e14c444810042e45lq6nl_7a0d0284-7ac0-4e09-ba63-1fa33dbbb574/pull/0.log"
Feb 03 13:54:35 crc kubenswrapper[4820]: I0203 13:54:35.931948 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_06b2e162f0c49a09bca642c96fba7f5ff7d8911993e14c444810042e45lq6nl_7a0d0284-7ac0-4e09-ba63-1fa33dbbb574/extract/0.log"
Feb 03 13:54:35 crc kubenswrapper[4820]: I0203 13:54:35.958056 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_06b2e162f0c49a09bca642c96fba7f5ff7d8911993e14c444810042e45lq6nl_7a0d0284-7ac0-4e09-ba63-1fa33dbbb574/util/0.log"
Feb 03 13:54:35 crc kubenswrapper[4820]: I0203 13:54:35.988657 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_06b2e162f0c49a09bca642c96fba7f5ff7d8911993e14c444810042e45lq6nl_7a0d0284-7ac0-4e09-ba63-1fa33dbbb574/pull/0.log"
Feb 03 13:54:36 crc kubenswrapper[4820]: I0203 13:54:36.203157 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7b6c4d8c5f-qnh2k_cde1eaee-12a0-47f7-b88a-b1b97d0ed74b/manager/0.log"
Feb 03 13:54:36 crc kubenswrapper[4820]: I0203 13:54:36.222989 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-8d874c8fc-z8jk7_4d0ea57e-5eb3-4624-a299-b9a7ad6f2bb0/manager/0.log"
Feb 03 13:54:36 crc kubenswrapper[4820]: I0203 13:54:36.605783 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-6d9697b7f4-wsb7r_51c967b2-8f1a-4d0d-a3f9-745e72863b84/manager/0.log"
Feb 03 13:54:36 crc kubenswrapper[4820]: I0203 13:54:36.663913 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-8886f4c47-5dmwb_88eb8fcd-4721-45c2-bb00-23b1dc962283/manager/0.log"
Feb 03 13:54:36 crc kubenswrapper[4820]: I0203 13:54:36.815744 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-69d6db494d-6fw2d_7f5efd7c-09f4-42b0-ba17-7a7dc609d914/manager/0.log"
Feb 03 13:54:36 crc kubenswrapper[4820]: I0203 13:54:36.883592 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5fb775575f-t5mj4_101ca31b-ff08-4a49-9cc1-f48fd8679116/manager/0.log"
Feb 03 13:54:37 crc kubenswrapper[4820]: I0203 13:54:37.104789 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-5f4b8bd54d-xkj2j_29dd9257-532e-48a4-9500-adfc5584ebe0/manager/0.log"
Feb 03 13:54:37 crc kubenswrapper[4820]: I0203 13:54:37.312118 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-79955696d6-22gr9_7ad36bba-9140-4660-b4ed-e873264c9e22/manager/0.log"
Feb 03 13:54:37 crc kubenswrapper[4820]: I0203 13:54:37.423229 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-84f48565d4-9rprq_614c5412-875d-40b1-ad5f-445a941285af/manager/0.log"
Feb 03 13:54:37 crc kubenswrapper[4820]: I0203 13:54:37.452514 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-7dd968899f-rdbrk_4ebad58b-3e3b-4bcb-9a80-dedd97e940d0/manager/0.log"
Feb 03 13:54:37 crc kubenswrapper[4820]: I0203 13:54:37.616016 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-67bf948998-qnmp6_40fd2238-8148-4aa3-8f4e-54ffc1de0805/manager/0.log"
Feb 03 13:54:37 crc kubenswrapper[4820]: I0203 13:54:37.744258 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-585dbc889-4tmqm_81450158-204d-45f5-a1bc-de63e889445d/manager/0.log"
Feb 03 13:54:37 crc kubenswrapper[4820]: I0203 13:54:37.882056 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-55bff696bd-4fgnl_7a515408-dc44-4fba-bbe9-8b5f36fbc1d0/manager/0.log"
Feb 03 13:54:37 crc kubenswrapper[4820]: I0203 13:54:37.935333 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-6687f8d877-brrn4_56b6c45e-8879-4a30-8b7c-d6c7df8ac6ae/manager/0.log"
Feb 03 13:54:38 crc kubenswrapper[4820]: I0203 13:54:38.083187 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-59c4b45c4dqz6p9_591b67aa-03c7-4cf7-8918-17e2f7a428b0/manager/0.log"
Feb 03 13:54:38 crc kubenswrapper[4820]: I0203 13:54:38.142527 4820 scope.go:117] "RemoveContainer" containerID="00fe136b4d1378c44d37871bea8a6e5ad65e410eca507a4e17eba65954b38a9e"
Feb 03 13:54:38 crc kubenswrapper[4820]: E0203 13:54:38.142882 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qj7xr_openshift-machine-config-operator(2c02def6-29f2-448e-80ec-0c8ee290f053)\"" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053"
Feb 03 13:54:38 crc kubenswrapper[4820]: I0203 13:54:38.282261 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-8c5c9674b-tdfgs_a781fb7c-cb52-4076-aa3c-5792d8ab7e42/operator/0.log"
Feb 03 13:54:38 crc kubenswrapper[4820]: I0203 13:54:38.556162 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-bpd2f_5432ffce-4da9-4a8a-9738-4e9dd0ee9a6a/registry-server/0.log"
Feb 03 13:54:38 crc kubenswrapper[4820]: I0203 13:54:38.815064 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-788c46999f-5lxrd_b12dfc88-bdbd-4874-b397-9273a669e57f/manager/0.log"
Feb 03 13:54:38 crc kubenswrapper[4820]: I0203 13:54:38.956557 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5b964cf4cd-x567r_851ed64f-f147-45d0-a33b-eea29903ec0a/manager/0.log"
Feb 03 13:54:39 crc kubenswrapper[4820]: I0203 13:54:39.216659 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-8l49q_8560a157-03d5-4135-a5e1-32acc68b6e4e/operator/0.log"
Feb 03 13:54:39 crc kubenswrapper[4820]: I0203 13:54:39.500972 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-68fc8c869-dr7hd_21f3efdd-0c83-42cb-8b54-b0554534bfb7/manager/0.log"
Feb 03 13:54:39 crc kubenswrapper[4820]: I0203 13:54:39.736569 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-56f8bfcd9f-22hg8_1058185d-f11d-4a87-9fe6-005f60186329/manager/0.log"
Feb 03 13:54:39 crc kubenswrapper[4820]: I0203 13:54:39.788759 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-855575688d-cl9c5_ffe7d059-602c-4fbc-bd5e-4c092cc6f3db/manager/0.log"
Feb 03 13:54:39 crc kubenswrapper[4820]: I0203 13:54:39.810129 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-64b5b76f97-xw4mq_96838cc3-1b9b-41b3-b20e-476319c65436/manager/0.log"
Feb 03 13:54:39 crc kubenswrapper[4820]: I0203 13:54:39.983490 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-6d49495bcf-pflss_18a84695-492b-42ae-9d72-6e582316ce55/manager/0.log"
Feb 03 13:54:50 crc kubenswrapper[4820]: I0203 13:54:50.142492 4820 scope.go:117] "RemoveContainer" containerID="00fe136b4d1378c44d37871bea8a6e5ad65e410eca507a4e17eba65954b38a9e"
Feb 03 13:54:50 crc kubenswrapper[4820]: E0203 13:54:50.143341 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qj7xr_openshift-machine-config-operator(2c02def6-29f2-448e-80ec-0c8ee290f053)\"" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053"
Feb 03 13:54:53 crc kubenswrapper[4820]: I0203 13:54:53.645364 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-s4jmq"]
Feb 03 13:54:53 crc kubenswrapper[4820]: E0203 13:54:53.648042 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="742a64da-fbfa-4ef8-9c68-6407a2f8d43c" containerName="container-00"
Feb 03 13:54:53 crc kubenswrapper[4820]: I0203 13:54:53.648078 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="742a64da-fbfa-4ef8-9c68-6407a2f8d43c" containerName="container-00"
Feb 03 13:54:53 crc kubenswrapper[4820]: I0203 13:54:53.648371 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="742a64da-fbfa-4ef8-9c68-6407a2f8d43c" containerName="container-00"
Feb 03 13:54:53 crc kubenswrapper[4820]: I0203 13:54:53.650198 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-s4jmq"
Feb 03 13:54:53 crc kubenswrapper[4820]: I0203 13:54:53.674058 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-s4jmq"]
Feb 03 13:54:53 crc kubenswrapper[4820]: I0203 13:54:53.696477 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h98zb\" (UniqueName: \"kubernetes.io/projected/df272ff2-4518-42db-b8ed-387750dc77e1-kube-api-access-h98zb\") pod \"redhat-operators-s4jmq\" (UID: \"df272ff2-4518-42db-b8ed-387750dc77e1\") " pod="openshift-marketplace/redhat-operators-s4jmq"
Feb 03 13:54:53 crc kubenswrapper[4820]: I0203 13:54:53.696613 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/df272ff2-4518-42db-b8ed-387750dc77e1-utilities\") pod \"redhat-operators-s4jmq\" (UID: \"df272ff2-4518-42db-b8ed-387750dc77e1\") " pod="openshift-marketplace/redhat-operators-s4jmq"
Feb 03 13:54:53 crc kubenswrapper[4820]: I0203 13:54:53.696747 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/df272ff2-4518-42db-b8ed-387750dc77e1-catalog-content\") pod \"redhat-operators-s4jmq\" (UID: \"df272ff2-4518-42db-b8ed-387750dc77e1\") " pod="openshift-marketplace/redhat-operators-s4jmq"
Feb 03 13:54:53 crc kubenswrapper[4820]: I0203 13:54:53.799062 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/df272ff2-4518-42db-b8ed-387750dc77e1-catalog-content\") pod \"redhat-operators-s4jmq\" (UID: \"df272ff2-4518-42db-b8ed-387750dc77e1\") " pod="openshift-marketplace/redhat-operators-s4jmq"
Feb 03 13:54:53 crc kubenswrapper[4820]: I0203 13:54:53.799223 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h98zb\" (UniqueName: \"kubernetes.io/projected/df272ff2-4518-42db-b8ed-387750dc77e1-kube-api-access-h98zb\") pod \"redhat-operators-s4jmq\" (UID: \"df272ff2-4518-42db-b8ed-387750dc77e1\") " pod="openshift-marketplace/redhat-operators-s4jmq"
Feb 03 13:54:53 crc kubenswrapper[4820]: I0203 13:54:53.799276 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/df272ff2-4518-42db-b8ed-387750dc77e1-utilities\") pod \"redhat-operators-s4jmq\" (UID: \"df272ff2-4518-42db-b8ed-387750dc77e1\") " pod="openshift-marketplace/redhat-operators-s4jmq"
Feb 03 13:54:53 crc kubenswrapper[4820]: I0203 13:54:53.799620 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/df272ff2-4518-42db-b8ed-387750dc77e1-catalog-content\") pod \"redhat-operators-s4jmq\" (UID: \"df272ff2-4518-42db-b8ed-387750dc77e1\") " pod="openshift-marketplace/redhat-operators-s4jmq"
Feb 03 13:54:53 crc kubenswrapper[4820]: I0203 13:54:53.799709 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/df272ff2-4518-42db-b8ed-387750dc77e1-utilities\") pod \"redhat-operators-s4jmq\" (UID: \"df272ff2-4518-42db-b8ed-387750dc77e1\") " pod="openshift-marketplace/redhat-operators-s4jmq"
Feb 03 13:54:53 crc kubenswrapper[4820]: I0203 13:54:53.821985 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h98zb\" (UniqueName: \"kubernetes.io/projected/df272ff2-4518-42db-b8ed-387750dc77e1-kube-api-access-h98zb\") pod \"redhat-operators-s4jmq\" (UID: \"df272ff2-4518-42db-b8ed-387750dc77e1\") " pod="openshift-marketplace/redhat-operators-s4jmq"
Feb 03 13:54:54 crc kubenswrapper[4820]: I0203 13:54:54.024545 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-s4jmq"
Feb 03 13:54:54 crc kubenswrapper[4820]: I0203 13:54:54.563214 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-s4jmq"]
Feb 03 13:54:54 crc kubenswrapper[4820]: W0203 13:54:54.566810 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddf272ff2_4518_42db_b8ed_387750dc77e1.slice/crio-e5c32262cc9acd6f593bd1655275978a46023bbaa0ff6ca667ab9710242fea3a WatchSource:0}: Error finding container e5c32262cc9acd6f593bd1655275978a46023bbaa0ff6ca667ab9710242fea3a: Status 404 returned error can't find the container with id e5c32262cc9acd6f593bd1655275978a46023bbaa0ff6ca667ab9710242fea3a
Feb 03 13:54:55 crc kubenswrapper[4820]: I0203 13:54:55.527832 4820 generic.go:334] "Generic (PLEG): container finished" podID="df272ff2-4518-42db-b8ed-387750dc77e1" containerID="9e382ab6327b4341a2ab0557a7b4dc00e3a38dfcf94824416218acff5ad8f639" exitCode=0
Feb 03 13:54:55 crc kubenswrapper[4820]: I0203 13:54:55.527943 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s4jmq" event={"ID":"df272ff2-4518-42db-b8ed-387750dc77e1","Type":"ContainerDied","Data":"9e382ab6327b4341a2ab0557a7b4dc00e3a38dfcf94824416218acff5ad8f639"}
Feb 03 13:54:55 crc kubenswrapper[4820]: I0203 13:54:55.528389 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s4jmq" event={"ID":"df272ff2-4518-42db-b8ed-387750dc77e1","Type":"ContainerStarted","Data":"e5c32262cc9acd6f593bd1655275978a46023bbaa0ff6ca667ab9710242fea3a"}
Feb 03 13:54:57 crc kubenswrapper[4820]: I0203 13:54:57.575568 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s4jmq" event={"ID":"df272ff2-4518-42db-b8ed-387750dc77e1","Type":"ContainerStarted","Data":"f29c6497f7be3b5d4417bd0e671cef53935434876a9273605128aafb7198aa7e"}
Feb 03 13:55:01 crc kubenswrapper[4820]: I0203 13:55:01.143563 4820 scope.go:117] "RemoveContainer" containerID="00fe136b4d1378c44d37871bea8a6e5ad65e410eca507a4e17eba65954b38a9e"
Feb 03 13:55:01 crc kubenswrapper[4820]: E0203 13:55:01.144603 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qj7xr_openshift-machine-config-operator(2c02def6-29f2-448e-80ec-0c8ee290f053)\"" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053"
Feb 03 13:55:01 crc kubenswrapper[4820]: I0203 13:55:01.268290 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-gqwld_8237a118-001c-483c-8810-d051f33d35eb/control-plane-machine-set-operator/0.log"
Feb 03 13:55:01 crc kubenswrapper[4820]: I0203 13:55:01.511914 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-hxjbf_6b522a8e-f795-4cf1-adbb-899674a5e359/kube-rbac-proxy/0.log"
path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-hxjbf_6b522a8e-f795-4cf1-adbb-899674a5e359/kube-rbac-proxy/0.log" Feb 03 13:55:01 crc kubenswrapper[4820]: I0203 13:55:01.518328 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-hxjbf_6b522a8e-f795-4cf1-adbb-899674a5e359/machine-api-operator/0.log" Feb 03 13:55:06 crc kubenswrapper[4820]: I0203 13:55:06.665182 4820 generic.go:334] "Generic (PLEG): container finished" podID="df272ff2-4518-42db-b8ed-387750dc77e1" containerID="f29c6497f7be3b5d4417bd0e671cef53935434876a9273605128aafb7198aa7e" exitCode=0 Feb 03 13:55:06 crc kubenswrapper[4820]: I0203 13:55:06.665480 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s4jmq" event={"ID":"df272ff2-4518-42db-b8ed-387750dc77e1","Type":"ContainerDied","Data":"f29c6497f7be3b5d4417bd0e671cef53935434876a9273605128aafb7198aa7e"} Feb 03 13:55:07 crc kubenswrapper[4820]: I0203 13:55:07.679318 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s4jmq" event={"ID":"df272ff2-4518-42db-b8ed-387750dc77e1","Type":"ContainerStarted","Data":"598fe84ee81707015f47e1366a78740b5eac1b8e7ebc1cc6e49972e1f39ff24d"} Feb 03 13:55:07 crc kubenswrapper[4820]: I0203 13:55:07.709477 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-s4jmq" podStartSLOduration=2.996202905 podStartE2EDuration="14.709434225s" podCreationTimestamp="2026-02-03 13:54:53 +0000 UTC" firstStartedPulling="2026-02-03 13:54:55.529976514 +0000 UTC m=+6613.053052378" lastFinishedPulling="2026-02-03 13:55:07.243207834 +0000 UTC m=+6624.766283698" observedRunningTime="2026-02-03 13:55:07.70627989 +0000 UTC m=+6625.229355754" watchObservedRunningTime="2026-02-03 13:55:07.709434225 +0000 UTC m=+6625.232510089" Feb 03 13:55:13 crc kubenswrapper[4820]: I0203 13:55:13.151407 4820 scope.go:117] "RemoveContainer" containerID="00fe136b4d1378c44d37871bea8a6e5ad65e410eca507a4e17eba65954b38a9e" Feb 03 13:55:13 crc kubenswrapper[4820]: E0203 13:55:13.152382 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qj7xr_openshift-machine-config-operator(2c02def6-29f2-448e-80ec-0c8ee290f053)\"" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" Feb 03 13:55:14 crc kubenswrapper[4820]: I0203 13:55:14.025452 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-s4jmq" Feb 03 13:55:14 crc kubenswrapper[4820]: I0203 13:55:14.025523 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-s4jmq" Feb 03 13:55:15 crc kubenswrapper[4820]: I0203 13:55:15.077143 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-s4jmq" podUID="df272ff2-4518-42db-b8ed-387750dc77e1" containerName="registry-server" probeResult="failure" output=< Feb 03 13:55:15 crc kubenswrapper[4820]: timeout: failed to connect service ":50051" within 1s Feb 03 13:55:15 crc kubenswrapper[4820]: > Feb 03 13:55:15 crc kubenswrapper[4820]: I0203 13:55:15.721098 4820 log.go:25] "Finished parsing log file" 
path="/var/log/pods/cert-manager_cert-manager-858654f9db-29s2s_e98f8274-774b-446d-ae13-e7e7d4697463/cert-manager-controller/0.log" Feb 03 13:55:16 crc kubenswrapper[4820]: I0203 13:55:16.067579 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-vdqjl_3853758e-3847-4715-8b8a-85022e708c75/cert-manager-cainjector/0.log" Feb 03 13:55:16 crc kubenswrapper[4820]: I0203 13:55:16.121751 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-lb2pj_f84b05bb-fe6d-4dcb-9501-375683557250/cert-manager-webhook/0.log" Feb 03 13:55:25 crc kubenswrapper[4820]: I0203 13:55:25.076093 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-s4jmq" podUID="df272ff2-4518-42db-b8ed-387750dc77e1" containerName="registry-server" probeResult="failure" output=< Feb 03 13:55:25 crc kubenswrapper[4820]: timeout: failed to connect service ":50051" within 1s Feb 03 13:55:25 crc kubenswrapper[4820]: > Feb 03 13:55:25 crc kubenswrapper[4820]: I0203 13:55:25.143019 4820 scope.go:117] "RemoveContainer" containerID="00fe136b4d1378c44d37871bea8a6e5ad65e410eca507a4e17eba65954b38a9e" Feb 03 13:55:25 crc kubenswrapper[4820]: E0203 13:55:25.143377 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qj7xr_openshift-machine-config-operator(2c02def6-29f2-448e-80ec-0c8ee290f053)\"" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" Feb 03 13:55:30 crc kubenswrapper[4820]: I0203 13:55:30.919869 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-hcd62_3f652654-b0e0-47f3-b1db-9930c6b681c6/nmstate-console-plugin/0.log" Feb 03 13:55:31 crc kubenswrapper[4820]: I0203 13:55:31.157685 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-sbsh5_afaac7f6-f06f-4eb6-b2a5-85c9c2f927d3/nmstate-handler/0.log" Feb 03 13:55:31 crc kubenswrapper[4820]: I0203 13:55:31.284093 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-vprcr_25a587ed-7ff6-4ffd-b2ad-5a88a81c7867/kube-rbac-proxy/0.log" Feb 03 13:55:31 crc kubenswrapper[4820]: I0203 13:55:31.409714 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-vprcr_25a587ed-7ff6-4ffd-b2ad-5a88a81c7867/nmstate-metrics/0.log" Feb 03 13:55:31 crc kubenswrapper[4820]: I0203 13:55:31.477481 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-gcnrh_3cc69a01-8e9a-4d98-9568-841c499eb0f0/nmstate-operator/0.log" Feb 03 13:55:31 crc kubenswrapper[4820]: I0203 13:55:31.633899 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-2tnxr_23a0cc00-e454-4afc-82bb-0d79c0b76324/nmstate-webhook/0.log" Feb 03 13:55:35 crc kubenswrapper[4820]: I0203 13:55:35.070667 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-s4jmq" podUID="df272ff2-4518-42db-b8ed-387750dc77e1" containerName="registry-server" probeResult="failure" output=< Feb 03 13:55:35 crc kubenswrapper[4820]: timeout: failed to connect service ":50051" within 1s Feb 03 13:55:35 crc 
kubenswrapper[4820]: > Feb 03 13:55:40 crc kubenswrapper[4820]: I0203 13:55:40.142790 4820 scope.go:117] "RemoveContainer" containerID="00fe136b4d1378c44d37871bea8a6e5ad65e410eca507a4e17eba65954b38a9e" Feb 03 13:55:40 crc kubenswrapper[4820]: E0203 13:55:40.144739 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qj7xr_openshift-machine-config-operator(2c02def6-29f2-448e-80ec-0c8ee290f053)\"" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" Feb 03 13:55:44 crc kubenswrapper[4820]: I0203 13:55:44.074218 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-s4jmq" Feb 03 13:55:44 crc kubenswrapper[4820]: I0203 13:55:44.138516 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-s4jmq" Feb 03 13:55:44 crc kubenswrapper[4820]: I0203 13:55:44.316031 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-s4jmq"] Feb 03 13:55:46 crc kubenswrapper[4820]: I0203 13:55:46.079398 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-s4jmq" podUID="df272ff2-4518-42db-b8ed-387750dc77e1" containerName="registry-server" containerID="cri-o://598fe84ee81707015f47e1366a78740b5eac1b8e7ebc1cc6e49972e1f39ff24d" gracePeriod=2 Feb 03 13:55:46 crc kubenswrapper[4820]: I0203 13:55:46.597256 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-s4jmq" Feb 03 13:55:46 crc kubenswrapper[4820]: I0203 13:55:46.725748 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/df272ff2-4518-42db-b8ed-387750dc77e1-catalog-content\") pod \"df272ff2-4518-42db-b8ed-387750dc77e1\" (UID: \"df272ff2-4518-42db-b8ed-387750dc77e1\") " Feb 03 13:55:46 crc kubenswrapper[4820]: I0203 13:55:46.725834 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/df272ff2-4518-42db-b8ed-387750dc77e1-utilities\") pod \"df272ff2-4518-42db-b8ed-387750dc77e1\" (UID: \"df272ff2-4518-42db-b8ed-387750dc77e1\") " Feb 03 13:55:46 crc kubenswrapper[4820]: I0203 13:55:46.725911 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h98zb\" (UniqueName: \"kubernetes.io/projected/df272ff2-4518-42db-b8ed-387750dc77e1-kube-api-access-h98zb\") pod \"df272ff2-4518-42db-b8ed-387750dc77e1\" (UID: \"df272ff2-4518-42db-b8ed-387750dc77e1\") " Feb 03 13:55:46 crc kubenswrapper[4820]: I0203 13:55:46.726964 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/df272ff2-4518-42db-b8ed-387750dc77e1-utilities" (OuterVolumeSpecName: "utilities") pod "df272ff2-4518-42db-b8ed-387750dc77e1" (UID: "df272ff2-4518-42db-b8ed-387750dc77e1"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 13:55:46 crc kubenswrapper[4820]: I0203 13:55:46.740912 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df272ff2-4518-42db-b8ed-387750dc77e1-kube-api-access-h98zb" (OuterVolumeSpecName: "kube-api-access-h98zb") pod "df272ff2-4518-42db-b8ed-387750dc77e1" (UID: "df272ff2-4518-42db-b8ed-387750dc77e1"). InnerVolumeSpecName "kube-api-access-h98zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:55:46 crc kubenswrapper[4820]: I0203 13:55:46.829654 4820 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/df272ff2-4518-42db-b8ed-387750dc77e1-utilities\") on node \"crc\" DevicePath \"\"" Feb 03 13:55:46 crc kubenswrapper[4820]: I0203 13:55:46.830158 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h98zb\" (UniqueName: \"kubernetes.io/projected/df272ff2-4518-42db-b8ed-387750dc77e1-kube-api-access-h98zb\") on node \"crc\" DevicePath \"\"" Feb 03 13:55:46 crc kubenswrapper[4820]: I0203 13:55:46.879752 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/df272ff2-4518-42db-b8ed-387750dc77e1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "df272ff2-4518-42db-b8ed-387750dc77e1" (UID: "df272ff2-4518-42db-b8ed-387750dc77e1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 13:55:46 crc kubenswrapper[4820]: I0203 13:55:46.932645 4820 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/df272ff2-4518-42db-b8ed-387750dc77e1-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 03 13:55:47 crc kubenswrapper[4820]: I0203 13:55:47.091258 4820 generic.go:334] "Generic (PLEG): container finished" podID="df272ff2-4518-42db-b8ed-387750dc77e1" containerID="598fe84ee81707015f47e1366a78740b5eac1b8e7ebc1cc6e49972e1f39ff24d" exitCode=0 Feb 03 13:55:47 crc kubenswrapper[4820]: I0203 13:55:47.091379 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s4jmq" event={"ID":"df272ff2-4518-42db-b8ed-387750dc77e1","Type":"ContainerDied","Data":"598fe84ee81707015f47e1366a78740b5eac1b8e7ebc1cc6e49972e1f39ff24d"} Feb 03 13:55:47 crc kubenswrapper[4820]: I0203 13:55:47.092442 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s4jmq" event={"ID":"df272ff2-4518-42db-b8ed-387750dc77e1","Type":"ContainerDied","Data":"e5c32262cc9acd6f593bd1655275978a46023bbaa0ff6ca667ab9710242fea3a"} Feb 03 13:55:47 crc kubenswrapper[4820]: I0203 13:55:47.091417 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-s4jmq" Feb 03 13:55:47 crc kubenswrapper[4820]: I0203 13:55:47.092504 4820 scope.go:117] "RemoveContainer" containerID="598fe84ee81707015f47e1366a78740b5eac1b8e7ebc1cc6e49972e1f39ff24d" Feb 03 13:55:47 crc kubenswrapper[4820]: I0203 13:55:47.116853 4820 scope.go:117] "RemoveContainer" containerID="f29c6497f7be3b5d4417bd0e671cef53935434876a9273605128aafb7198aa7e" Feb 03 13:55:47 crc kubenswrapper[4820]: I0203 13:55:47.139138 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-s4jmq"] Feb 03 13:55:47 crc kubenswrapper[4820]: I0203 13:55:47.159439 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-s4jmq"] Feb 03 13:55:47 crc kubenswrapper[4820]: I0203 13:55:47.159612 4820 scope.go:117] "RemoveContainer" containerID="9e382ab6327b4341a2ab0557a7b4dc00e3a38dfcf94824416218acff5ad8f639" Feb 03 13:55:47 crc kubenswrapper[4820]: I0203 13:55:47.207792 4820 scope.go:117] "RemoveContainer" containerID="598fe84ee81707015f47e1366a78740b5eac1b8e7ebc1cc6e49972e1f39ff24d" Feb 03 13:55:47 crc kubenswrapper[4820]: E0203 13:55:47.208477 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"598fe84ee81707015f47e1366a78740b5eac1b8e7ebc1cc6e49972e1f39ff24d\": container with ID starting with 598fe84ee81707015f47e1366a78740b5eac1b8e7ebc1cc6e49972e1f39ff24d not found: ID does not exist" containerID="598fe84ee81707015f47e1366a78740b5eac1b8e7ebc1cc6e49972e1f39ff24d" Feb 03 13:55:47 crc kubenswrapper[4820]: I0203 13:55:47.208534 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"598fe84ee81707015f47e1366a78740b5eac1b8e7ebc1cc6e49972e1f39ff24d"} err="failed to get container status \"598fe84ee81707015f47e1366a78740b5eac1b8e7ebc1cc6e49972e1f39ff24d\": rpc error: code = NotFound desc = could not find container \"598fe84ee81707015f47e1366a78740b5eac1b8e7ebc1cc6e49972e1f39ff24d\": container with ID starting with 598fe84ee81707015f47e1366a78740b5eac1b8e7ebc1cc6e49972e1f39ff24d not found: ID does not exist" Feb 03 13:55:47 crc kubenswrapper[4820]: I0203 13:55:47.208562 4820 scope.go:117] "RemoveContainer" containerID="f29c6497f7be3b5d4417bd0e671cef53935434876a9273605128aafb7198aa7e" Feb 03 13:55:47 crc kubenswrapper[4820]: E0203 13:55:47.209052 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f29c6497f7be3b5d4417bd0e671cef53935434876a9273605128aafb7198aa7e\": container with ID starting with f29c6497f7be3b5d4417bd0e671cef53935434876a9273605128aafb7198aa7e not found: ID does not exist" containerID="f29c6497f7be3b5d4417bd0e671cef53935434876a9273605128aafb7198aa7e" Feb 03 13:55:47 crc kubenswrapper[4820]: I0203 13:55:47.209101 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f29c6497f7be3b5d4417bd0e671cef53935434876a9273605128aafb7198aa7e"} err="failed to get container status \"f29c6497f7be3b5d4417bd0e671cef53935434876a9273605128aafb7198aa7e\": rpc error: code = NotFound desc = could not find container \"f29c6497f7be3b5d4417bd0e671cef53935434876a9273605128aafb7198aa7e\": container with ID starting with f29c6497f7be3b5d4417bd0e671cef53935434876a9273605128aafb7198aa7e not found: ID does not exist" Feb 03 13:55:47 crc kubenswrapper[4820]: I0203 13:55:47.209135 4820 scope.go:117] "RemoveContainer" 
containerID="9e382ab6327b4341a2ab0557a7b4dc00e3a38dfcf94824416218acff5ad8f639" Feb 03 13:55:47 crc kubenswrapper[4820]: E0203 13:55:47.209481 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9e382ab6327b4341a2ab0557a7b4dc00e3a38dfcf94824416218acff5ad8f639\": container with ID starting with 9e382ab6327b4341a2ab0557a7b4dc00e3a38dfcf94824416218acff5ad8f639 not found: ID does not exist" containerID="9e382ab6327b4341a2ab0557a7b4dc00e3a38dfcf94824416218acff5ad8f639" Feb 03 13:55:47 crc kubenswrapper[4820]: I0203 13:55:47.209511 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9e382ab6327b4341a2ab0557a7b4dc00e3a38dfcf94824416218acff5ad8f639"} err="failed to get container status \"9e382ab6327b4341a2ab0557a7b4dc00e3a38dfcf94824416218acff5ad8f639\": rpc error: code = NotFound desc = could not find container \"9e382ab6327b4341a2ab0557a7b4dc00e3a38dfcf94824416218acff5ad8f639\": container with ID starting with 9e382ab6327b4341a2ab0557a7b4dc00e3a38dfcf94824416218acff5ad8f639 not found: ID does not exist" Feb 03 13:55:47 crc kubenswrapper[4820]: I0203 13:55:47.718073 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-9jzgf_c1ad6c2d-5ab9-4904-9426-00ebf486a90d/prometheus-operator/0.log" Feb 03 13:55:47 crc kubenswrapper[4820]: I0203 13:55:47.898153 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-64dd7f56f9-p4b6m_67c9fe0e-5cc6-469b-90a0-11adfac994cc/prometheus-operator-admission-webhook/0.log" Feb 03 13:55:47 crc kubenswrapper[4820]: I0203 13:55:47.994667 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-64dd7f56f9-s8llv_3202dd82-6cc2-478c-9eb1-7810a23ce4bb/prometheus-operator-admission-webhook/0.log" Feb 03 13:55:48 crc kubenswrapper[4820]: I0203 13:55:48.188996 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-lshn6_c22a4473-b3ac-4b33-9a20-320b76c330ab/operator/0.log" Feb 03 13:55:48 crc kubenswrapper[4820]: I0203 13:55:48.248279 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-gx6fv_4f0df377-6a2b-4270-974f-3d178cdc47d9/perses-operator/0.log" Feb 03 13:55:49 crc kubenswrapper[4820]: I0203 13:55:49.157419 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="df272ff2-4518-42db-b8ed-387750dc77e1" path="/var/lib/kubelet/pods/df272ff2-4518-42db-b8ed-387750dc77e1/volumes" Feb 03 13:55:51 crc kubenswrapper[4820]: I0203 13:55:51.143184 4820 scope.go:117] "RemoveContainer" containerID="00fe136b4d1378c44d37871bea8a6e5ad65e410eca507a4e17eba65954b38a9e" Feb 03 13:55:51 crc kubenswrapper[4820]: E0203 13:55:51.143774 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qj7xr_openshift-machine-config-operator(2c02def6-29f2-448e-80ec-0c8ee290f053)\"" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" Feb 03 13:56:03 crc kubenswrapper[4820]: I0203 13:56:03.772795 4820 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_controller-6968d8fdc4-bl7d9_a8856687-50aa-469b-acca-0c2e83d3a95a/kube-rbac-proxy/0.log" Feb 03 13:56:03 crc kubenswrapper[4820]: I0203 13:56:03.924949 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-bl7d9_a8856687-50aa-469b-acca-0c2e83d3a95a/controller/0.log" Feb 03 13:56:04 crc kubenswrapper[4820]: I0203 13:56:04.065090 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-48tvq_21bb4d95-ca5f-4d31-b7e6-45e04a4e84f0/cp-frr-files/0.log" Feb 03 13:56:04 crc kubenswrapper[4820]: I0203 13:56:04.253704 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-48tvq_21bb4d95-ca5f-4d31-b7e6-45e04a4e84f0/cp-frr-files/0.log" Feb 03 13:56:04 crc kubenswrapper[4820]: I0203 13:56:04.253881 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-48tvq_21bb4d95-ca5f-4d31-b7e6-45e04a4e84f0/cp-reloader/0.log" Feb 03 13:56:04 crc kubenswrapper[4820]: I0203 13:56:04.274122 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-48tvq_21bb4d95-ca5f-4d31-b7e6-45e04a4e84f0/cp-metrics/0.log" Feb 03 13:56:04 crc kubenswrapper[4820]: I0203 13:56:04.307997 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-48tvq_21bb4d95-ca5f-4d31-b7e6-45e04a4e84f0/cp-reloader/0.log" Feb 03 13:56:04 crc kubenswrapper[4820]: I0203 13:56:04.497468 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-48tvq_21bb4d95-ca5f-4d31-b7e6-45e04a4e84f0/cp-frr-files/0.log" Feb 03 13:56:04 crc kubenswrapper[4820]: I0203 13:56:04.531104 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-48tvq_21bb4d95-ca5f-4d31-b7e6-45e04a4e84f0/cp-reloader/0.log" Feb 03 13:56:04 crc kubenswrapper[4820]: I0203 13:56:04.560278 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-48tvq_21bb4d95-ca5f-4d31-b7e6-45e04a4e84f0/cp-metrics/0.log" Feb 03 13:56:04 crc kubenswrapper[4820]: I0203 13:56:04.564215 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-48tvq_21bb4d95-ca5f-4d31-b7e6-45e04a4e84f0/cp-metrics/0.log" Feb 03 13:56:04 crc kubenswrapper[4820]: I0203 13:56:04.777601 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-48tvq_21bb4d95-ca5f-4d31-b7e6-45e04a4e84f0/cp-reloader/0.log" Feb 03 13:56:04 crc kubenswrapper[4820]: I0203 13:56:04.844506 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-48tvq_21bb4d95-ca5f-4d31-b7e6-45e04a4e84f0/cp-frr-files/0.log" Feb 03 13:56:04 crc kubenswrapper[4820]: I0203 13:56:04.854958 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-48tvq_21bb4d95-ca5f-4d31-b7e6-45e04a4e84f0/cp-metrics/0.log" Feb 03 13:56:04 crc kubenswrapper[4820]: I0203 13:56:04.881110 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-48tvq_21bb4d95-ca5f-4d31-b7e6-45e04a4e84f0/controller/0.log" Feb 03 13:56:05 crc kubenswrapper[4820]: I0203 13:56:05.060540 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-48tvq_21bb4d95-ca5f-4d31-b7e6-45e04a4e84f0/frr-metrics/0.log" Feb 03 13:56:05 crc kubenswrapper[4820]: I0203 13:56:05.098230 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-48tvq_21bb4d95-ca5f-4d31-b7e6-45e04a4e84f0/kube-rbac-proxy-frr/0.log" Feb 03 13:56:05 crc 
kubenswrapper[4820]: I0203 13:56:05.131941 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-48tvq_21bb4d95-ca5f-4d31-b7e6-45e04a4e84f0/kube-rbac-proxy/0.log" Feb 03 13:56:05 crc kubenswrapper[4820]: I0203 13:56:05.328852 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-48tvq_21bb4d95-ca5f-4d31-b7e6-45e04a4e84f0/reloader/0.log" Feb 03 13:56:05 crc kubenswrapper[4820]: I0203 13:56:05.417364 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-d9c5m_11969ac0-96d5-4195-bfe8-f619e11db963/frr-k8s-webhook-server/0.log" Feb 03 13:56:05 crc kubenswrapper[4820]: I0203 13:56:05.615160 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-7cbbb967bd-w5q2v_15d57aea-1890-4499-9c6b-ab4af2e3715c/manager/0.log" Feb 03 13:56:05 crc kubenswrapper[4820]: I0203 13:56:05.827596 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-84b6f7d797-4wm8w_50906228-b0d7-4552-916a-b4a010b7b346/webhook-server/0.log" Feb 03 13:56:05 crc kubenswrapper[4820]: I0203 13:56:05.943434 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-scj8c_8bc51efb-561f-4e59-960c-99f18a5ef7d8/kube-rbac-proxy/0.log" Feb 03 13:56:06 crc kubenswrapper[4820]: I0203 13:56:06.148584 4820 scope.go:117] "RemoveContainer" containerID="00fe136b4d1378c44d37871bea8a6e5ad65e410eca507a4e17eba65954b38a9e" Feb 03 13:56:06 crc kubenswrapper[4820]: E0203 13:56:06.149192 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qj7xr_openshift-machine-config-operator(2c02def6-29f2-448e-80ec-0c8ee290f053)\"" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" Feb 03 13:56:06 crc kubenswrapper[4820]: I0203 13:56:06.596462 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-scj8c_8bc51efb-561f-4e59-960c-99f18a5ef7d8/speaker/0.log" Feb 03 13:56:06 crc kubenswrapper[4820]: I0203 13:56:06.919805 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-48tvq_21bb4d95-ca5f-4d31-b7e6-45e04a4e84f0/frr/0.log" Feb 03 13:56:19 crc kubenswrapper[4820]: I0203 13:56:19.567572 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcjjbzx_da1615c1-bd74-4ac2-91ca-4a00a31366e6/util/0.log" Feb 03 13:56:19 crc kubenswrapper[4820]: I0203 13:56:19.742731 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcjjbzx_da1615c1-bd74-4ac2-91ca-4a00a31366e6/pull/0.log" Feb 03 13:56:19 crc kubenswrapper[4820]: I0203 13:56:19.791369 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcjjbzx_da1615c1-bd74-4ac2-91ca-4a00a31366e6/util/0.log" Feb 03 13:56:19 crc kubenswrapper[4820]: I0203 13:56:19.836244 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcjjbzx_da1615c1-bd74-4ac2-91ca-4a00a31366e6/pull/0.log" Feb 03 13:56:19 crc kubenswrapper[4820]: I0203 
13:56:19.999082 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcjjbzx_da1615c1-bd74-4ac2-91ca-4a00a31366e6/extract/0.log" Feb 03 13:56:20 crc kubenswrapper[4820]: I0203 13:56:20.021741 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcjjbzx_da1615c1-bd74-4ac2-91ca-4a00a31366e6/util/0.log" Feb 03 13:56:20 crc kubenswrapper[4820]: I0203 13:56:20.027745 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcjjbzx_da1615c1-bd74-4ac2-91ca-4a00a31366e6/pull/0.log" Feb 03 13:56:20 crc kubenswrapper[4820]: I0203 13:56:20.142828 4820 scope.go:117] "RemoveContainer" containerID="00fe136b4d1378c44d37871bea8a6e5ad65e410eca507a4e17eba65954b38a9e" Feb 03 13:56:20 crc kubenswrapper[4820]: E0203 13:56:20.143574 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qj7xr_openshift-machine-config-operator(2c02def6-29f2-448e-80ec-0c8ee290f053)\"" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" Feb 03 13:56:20 crc kubenswrapper[4820]: I0203 13:56:20.182566 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713dz8hh_dd3be9c9-9970-4055-b150-fb5ad093ef1e/util/0.log" Feb 03 13:56:20 crc kubenswrapper[4820]: I0203 13:56:20.376661 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713dz8hh_dd3be9c9-9970-4055-b150-fb5ad093ef1e/util/0.log" Feb 03 13:56:20 crc kubenswrapper[4820]: I0203 13:56:20.377770 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713dz8hh_dd3be9c9-9970-4055-b150-fb5ad093ef1e/pull/0.log" Feb 03 13:56:20 crc kubenswrapper[4820]: I0203 13:56:20.415005 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713dz8hh_dd3be9c9-9970-4055-b150-fb5ad093ef1e/pull/0.log" Feb 03 13:56:20 crc kubenswrapper[4820]: I0203 13:56:20.558311 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713dz8hh_dd3be9c9-9970-4055-b150-fb5ad093ef1e/util/0.log" Feb 03 13:56:20 crc kubenswrapper[4820]: I0203 13:56:20.600322 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713dz8hh_dd3be9c9-9970-4055-b150-fb5ad093ef1e/pull/0.log" Feb 03 13:56:20 crc kubenswrapper[4820]: I0203 13:56:20.608401 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713dz8hh_dd3be9c9-9970-4055-b150-fb5ad093ef1e/extract/0.log" Feb 03 13:56:20 crc kubenswrapper[4820]: I0203 13:56:20.780326 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xt4wt_73a0ef2f-bdcb-4042-813c-597bd2694e20/util/0.log" Feb 03 13:56:20 crc kubenswrapper[4820]: 
I0203 13:56:20.918147 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xt4wt_73a0ef2f-bdcb-4042-813c-597bd2694e20/util/0.log" Feb 03 13:56:20 crc kubenswrapper[4820]: I0203 13:56:20.953529 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xt4wt_73a0ef2f-bdcb-4042-813c-597bd2694e20/pull/0.log" Feb 03 13:56:20 crc kubenswrapper[4820]: I0203 13:56:20.971081 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xt4wt_73a0ef2f-bdcb-4042-813c-597bd2694e20/pull/0.log" Feb 03 13:56:21 crc kubenswrapper[4820]: I0203 13:56:21.225420 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xt4wt_73a0ef2f-bdcb-4042-813c-597bd2694e20/pull/0.log" Feb 03 13:56:21 crc kubenswrapper[4820]: I0203 13:56:21.225533 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xt4wt_73a0ef2f-bdcb-4042-813c-597bd2694e20/util/0.log" Feb 03 13:56:21 crc kubenswrapper[4820]: I0203 13:56:21.249133 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xt4wt_73a0ef2f-bdcb-4042-813c-597bd2694e20/extract/0.log" Feb 03 13:56:21 crc kubenswrapper[4820]: I0203 13:56:21.417752 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-wngqc_4bd3b782-6780-4d50-9e3c-391f1930b50a/extract-utilities/0.log" Feb 03 13:56:21 crc kubenswrapper[4820]: I0203 13:56:21.579786 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-wngqc_4bd3b782-6780-4d50-9e3c-391f1930b50a/extract-content/0.log" Feb 03 13:56:21 crc kubenswrapper[4820]: I0203 13:56:21.609271 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-wngqc_4bd3b782-6780-4d50-9e3c-391f1930b50a/extract-utilities/0.log" Feb 03 13:56:21 crc kubenswrapper[4820]: I0203 13:56:21.613026 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-wngqc_4bd3b782-6780-4d50-9e3c-391f1930b50a/extract-content/0.log" Feb 03 13:56:21 crc kubenswrapper[4820]: I0203 13:56:21.855602 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-wngqc_4bd3b782-6780-4d50-9e3c-391f1930b50a/extract-content/0.log" Feb 03 13:56:21 crc kubenswrapper[4820]: I0203 13:56:21.904413 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-wngqc_4bd3b782-6780-4d50-9e3c-391f1930b50a/extract-utilities/0.log" Feb 03 13:56:22 crc kubenswrapper[4820]: I0203 13:56:22.099564 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-qt59j_aa46ef09-da2f-4b32-8091-4d745eff0174/extract-utilities/0.log" Feb 03 13:56:22 crc kubenswrapper[4820]: I0203 13:56:22.308502 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-qt59j_aa46ef09-da2f-4b32-8091-4d745eff0174/extract-content/0.log" Feb 03 13:56:22 crc kubenswrapper[4820]: I0203 13:56:22.408856 4820 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-qt59j_aa46ef09-da2f-4b32-8091-4d745eff0174/extract-content/0.log" Feb 03 13:56:22 crc kubenswrapper[4820]: I0203 13:56:22.457504 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-qt59j_aa46ef09-da2f-4b32-8091-4d745eff0174/extract-utilities/0.log" Feb 03 13:56:22 crc kubenswrapper[4820]: I0203 13:56:22.571544 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-qt59j_aa46ef09-da2f-4b32-8091-4d745eff0174/extract-utilities/0.log" Feb 03 13:56:22 crc kubenswrapper[4820]: I0203 13:56:22.708264 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-qt59j_aa46ef09-da2f-4b32-8091-4d745eff0174/extract-content/0.log" Feb 03 13:56:22 crc kubenswrapper[4820]: I0203 13:56:22.928146 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-qr29p_5bd7dcaf-6cbd-4e0a-a9f4-9e3cc9a2a738/marketplace-operator/0.log" Feb 03 13:56:23 crc kubenswrapper[4820]: I0203 13:56:23.213269 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-6kzpp_d2036cb3-d406-4eea-8eac-3fda178af56a/extract-utilities/0.log" Feb 03 13:56:23 crc kubenswrapper[4820]: I0203 13:56:23.249129 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-wngqc_4bd3b782-6780-4d50-9e3c-391f1930b50a/registry-server/0.log" Feb 03 13:56:23 crc kubenswrapper[4820]: I0203 13:56:23.473871 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-6kzpp_d2036cb3-d406-4eea-8eac-3fda178af56a/extract-content/0.log" Feb 03 13:56:23 crc kubenswrapper[4820]: I0203 13:56:23.519578 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-6kzpp_d2036cb3-d406-4eea-8eac-3fda178af56a/extract-utilities/0.log" Feb 03 13:56:23 crc kubenswrapper[4820]: I0203 13:56:23.552014 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-6kzpp_d2036cb3-d406-4eea-8eac-3fda178af56a/extract-content/0.log" Feb 03 13:56:23 crc kubenswrapper[4820]: I0203 13:56:23.824837 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-6kzpp_d2036cb3-d406-4eea-8eac-3fda178af56a/extract-utilities/0.log" Feb 03 13:56:23 crc kubenswrapper[4820]: I0203 13:56:23.868748 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-6kzpp_d2036cb3-d406-4eea-8eac-3fda178af56a/extract-content/0.log" Feb 03 13:56:24 crc kubenswrapper[4820]: I0203 13:56:24.132244 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-btszd_977ea1bf-f3ac-40c3-8061-bbf78da368c1/extract-utilities/0.log" Feb 03 13:56:24 crc kubenswrapper[4820]: I0203 13:56:24.242370 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-6kzpp_d2036cb3-d406-4eea-8eac-3fda178af56a/registry-server/0.log" Feb 03 13:56:24 crc kubenswrapper[4820]: I0203 13:56:24.245922 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-qt59j_aa46ef09-da2f-4b32-8091-4d745eff0174/registry-server/0.log" Feb 03 13:56:24 crc kubenswrapper[4820]: I0203 13:56:24.277311 4820 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-operators-btszd_977ea1bf-f3ac-40c3-8061-bbf78da368c1/extract-content/0.log" Feb 03 13:56:24 crc kubenswrapper[4820]: I0203 13:56:24.327886 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-btszd_977ea1bf-f3ac-40c3-8061-bbf78da368c1/extract-utilities/0.log" Feb 03 13:56:24 crc kubenswrapper[4820]: I0203 13:56:24.361189 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-btszd_977ea1bf-f3ac-40c3-8061-bbf78da368c1/extract-content/0.log" Feb 03 13:56:24 crc kubenswrapper[4820]: I0203 13:56:24.561023 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-btszd_977ea1bf-f3ac-40c3-8061-bbf78da368c1/extract-content/0.log" Feb 03 13:56:24 crc kubenswrapper[4820]: I0203 13:56:24.563900 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-btszd_977ea1bf-f3ac-40c3-8061-bbf78da368c1/extract-utilities/0.log" Feb 03 13:56:25 crc kubenswrapper[4820]: I0203 13:56:25.015679 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-btszd_977ea1bf-f3ac-40c3-8061-bbf78da368c1/registry-server/0.log" Feb 03 13:56:34 crc kubenswrapper[4820]: I0203 13:56:34.144167 4820 scope.go:117] "RemoveContainer" containerID="00fe136b4d1378c44d37871bea8a6e5ad65e410eca507a4e17eba65954b38a9e" Feb 03 13:56:34 crc kubenswrapper[4820]: E0203 13:56:34.144939 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qj7xr_openshift-machine-config-operator(2c02def6-29f2-448e-80ec-0c8ee290f053)\"" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" Feb 03 13:56:37 crc kubenswrapper[4820]: I0203 13:56:37.162171 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-9jzgf_c1ad6c2d-5ab9-4904-9426-00ebf486a90d/prometheus-operator/0.log" Feb 03 13:56:37 crc kubenswrapper[4820]: I0203 13:56:37.178957 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-64dd7f56f9-p4b6m_67c9fe0e-5cc6-469b-90a0-11adfac994cc/prometheus-operator-admission-webhook/0.log" Feb 03 13:56:37 crc kubenswrapper[4820]: I0203 13:56:37.197229 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-64dd7f56f9-s8llv_3202dd82-6cc2-478c-9eb1-7810a23ce4bb/prometheus-operator-admission-webhook/0.log" Feb 03 13:56:37 crc kubenswrapper[4820]: I0203 13:56:37.401037 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-gx6fv_4f0df377-6a2b-4270-974f-3d178cdc47d9/perses-operator/0.log" Feb 03 13:56:37 crc kubenswrapper[4820]: I0203 13:56:37.417389 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-lshn6_c22a4473-b3ac-4b33-9a20-320b76c330ab/operator/0.log" Feb 03 13:56:48 crc kubenswrapper[4820]: I0203 13:56:48.147030 4820 scope.go:117] "RemoveContainer" containerID="00fe136b4d1378c44d37871bea8a6e5ad65e410eca507a4e17eba65954b38a9e" Feb 03 13:56:48 crc kubenswrapper[4820]: E0203 13:56:48.147917 4820 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qj7xr_openshift-machine-config-operator(2c02def6-29f2-448e-80ec-0c8ee290f053)\"" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" Feb 03 13:57:00 crc kubenswrapper[4820]: I0203 13:57:00.143260 4820 scope.go:117] "RemoveContainer" containerID="00fe136b4d1378c44d37871bea8a6e5ad65e410eca507a4e17eba65954b38a9e" Feb 03 13:57:00 crc kubenswrapper[4820]: E0203 13:57:00.144022 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qj7xr_openshift-machine-config-operator(2c02def6-29f2-448e-80ec-0c8ee290f053)\"" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" Feb 03 13:57:12 crc kubenswrapper[4820]: I0203 13:57:12.144132 4820 scope.go:117] "RemoveContainer" containerID="00fe136b4d1378c44d37871bea8a6e5ad65e410eca507a4e17eba65954b38a9e" Feb 03 13:57:12 crc kubenswrapper[4820]: E0203 13:57:12.145605 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qj7xr_openshift-machine-config-operator(2c02def6-29f2-448e-80ec-0c8ee290f053)\"" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" Feb 03 13:57:25 crc kubenswrapper[4820]: I0203 13:57:25.142559 4820 scope.go:117] "RemoveContainer" containerID="00fe136b4d1378c44d37871bea8a6e5ad65e410eca507a4e17eba65954b38a9e" Feb 03 13:57:25 crc kubenswrapper[4820]: E0203 13:57:25.143628 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qj7xr_openshift-machine-config-operator(2c02def6-29f2-448e-80ec-0c8ee290f053)\"" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" Feb 03 13:57:36 crc kubenswrapper[4820]: I0203 13:57:36.143585 4820 scope.go:117] "RemoveContainer" containerID="00fe136b4d1378c44d37871bea8a6e5ad65e410eca507a4e17eba65954b38a9e" Feb 03 13:57:36 crc kubenswrapper[4820]: E0203 13:57:36.144428 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qj7xr_openshift-machine-config-operator(2c02def6-29f2-448e-80ec-0c8ee290f053)\"" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" Feb 03 13:57:50 crc kubenswrapper[4820]: I0203 13:57:50.143106 4820 scope.go:117] "RemoveContainer" containerID="00fe136b4d1378c44d37871bea8a6e5ad65e410eca507a4e17eba65954b38a9e" Feb 03 13:57:50 crc kubenswrapper[4820]: E0203 13:57:50.144001 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-qj7xr_openshift-machine-config-operator(2c02def6-29f2-448e-80ec-0c8ee290f053)\"" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" Feb 03 13:58:03 crc kubenswrapper[4820]: I0203 13:58:03.150315 4820 scope.go:117] "RemoveContainer" containerID="00fe136b4d1378c44d37871bea8a6e5ad65e410eca507a4e17eba65954b38a9e" Feb 03 13:58:03 crc kubenswrapper[4820]: I0203 13:58:03.835248 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" event={"ID":"2c02def6-29f2-448e-80ec-0c8ee290f053","Type":"ContainerStarted","Data":"70b78a0088511719d3b095cbe99ce0352bc07c00f13e90bb5b6511410838e13f"} Feb 03 13:58:48 crc kubenswrapper[4820]: I0203 13:58:48.205050 4820 scope.go:117] "RemoveContainer" containerID="e09604d34b209087d3fe0514d90a91f331d4bfeb0c16f58ada6c3aa67602c553" Feb 03 13:59:00 crc kubenswrapper[4820]: I0203 13:59:00.782082 4820 generic.go:334] "Generic (PLEG): container finished" podID="677c2b79-9984-4d3d-9aab-c3e3ff13315c" containerID="76d44b5a441b5e6ad1372b85b4d76e535308015895ca9c0db156f39b6e498902" exitCode=0 Feb 03 13:59:00 crc kubenswrapper[4820]: I0203 13:59:00.782705 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vgdnv/must-gather-s56gm" event={"ID":"677c2b79-9984-4d3d-9aab-c3e3ff13315c","Type":"ContainerDied","Data":"76d44b5a441b5e6ad1372b85b4d76e535308015895ca9c0db156f39b6e498902"} Feb 03 13:59:00 crc kubenswrapper[4820]: I0203 13:59:00.783570 4820 scope.go:117] "RemoveContainer" containerID="76d44b5a441b5e6ad1372b85b4d76e535308015895ca9c0db156f39b6e498902" Feb 03 13:59:00 crc kubenswrapper[4820]: I0203 13:59:00.884529 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-vgdnv_must-gather-s56gm_677c2b79-9984-4d3d-9aab-c3e3ff13315c/gather/0.log" Feb 03 13:59:01 crc kubenswrapper[4820]: I0203 13:59:01.956461 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-7sbdm"] Feb 03 13:59:01 crc kubenswrapper[4820]: E0203 13:59:01.957513 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df272ff2-4518-42db-b8ed-387750dc77e1" containerName="registry-server" Feb 03 13:59:01 crc kubenswrapper[4820]: I0203 13:59:01.957540 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="df272ff2-4518-42db-b8ed-387750dc77e1" containerName="registry-server" Feb 03 13:59:01 crc kubenswrapper[4820]: E0203 13:59:01.957562 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df272ff2-4518-42db-b8ed-387750dc77e1" containerName="extract-content" Feb 03 13:59:01 crc kubenswrapper[4820]: I0203 13:59:01.957570 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="df272ff2-4518-42db-b8ed-387750dc77e1" containerName="extract-content" Feb 03 13:59:01 crc kubenswrapper[4820]: E0203 13:59:01.957598 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df272ff2-4518-42db-b8ed-387750dc77e1" containerName="extract-utilities" Feb 03 13:59:01 crc kubenswrapper[4820]: I0203 13:59:01.957625 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="df272ff2-4518-42db-b8ed-387750dc77e1" containerName="extract-utilities" Feb 03 13:59:01 crc kubenswrapper[4820]: I0203 13:59:01.957917 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="df272ff2-4518-42db-b8ed-387750dc77e1" containerName="registry-server" Feb 03 13:59:01 crc kubenswrapper[4820]: I0203 13:59:01.959978 4820 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7sbdm" Feb 03 13:59:01 crc kubenswrapper[4820]: I0203 13:59:01.988617 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7sbdm"] Feb 03 13:59:02 crc kubenswrapper[4820]: I0203 13:59:02.106204 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d1c6b184-684a-4840-8dfb-d73d2c24728e-utilities\") pod \"certified-operators-7sbdm\" (UID: \"d1c6b184-684a-4840-8dfb-d73d2c24728e\") " pod="openshift-marketplace/certified-operators-7sbdm" Feb 03 13:59:02 crc kubenswrapper[4820]: I0203 13:59:02.106436 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d1c6b184-684a-4840-8dfb-d73d2c24728e-catalog-content\") pod \"certified-operators-7sbdm\" (UID: \"d1c6b184-684a-4840-8dfb-d73d2c24728e\") " pod="openshift-marketplace/certified-operators-7sbdm" Feb 03 13:59:02 crc kubenswrapper[4820]: I0203 13:59:02.106615 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-th8gg\" (UniqueName: \"kubernetes.io/projected/d1c6b184-684a-4840-8dfb-d73d2c24728e-kube-api-access-th8gg\") pod \"certified-operators-7sbdm\" (UID: \"d1c6b184-684a-4840-8dfb-d73d2c24728e\") " pod="openshift-marketplace/certified-operators-7sbdm" Feb 03 13:59:02 crc kubenswrapper[4820]: I0203 13:59:02.207545 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d1c6b184-684a-4840-8dfb-d73d2c24728e-utilities\") pod \"certified-operators-7sbdm\" (UID: \"d1c6b184-684a-4840-8dfb-d73d2c24728e\") " pod="openshift-marketplace/certified-operators-7sbdm" Feb 03 13:59:02 crc kubenswrapper[4820]: I0203 13:59:02.207634 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d1c6b184-684a-4840-8dfb-d73d2c24728e-catalog-content\") pod \"certified-operators-7sbdm\" (UID: \"d1c6b184-684a-4840-8dfb-d73d2c24728e\") " pod="openshift-marketplace/certified-operators-7sbdm" Feb 03 13:59:02 crc kubenswrapper[4820]: I0203 13:59:02.207698 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-th8gg\" (UniqueName: \"kubernetes.io/projected/d1c6b184-684a-4840-8dfb-d73d2c24728e-kube-api-access-th8gg\") pod \"certified-operators-7sbdm\" (UID: \"d1c6b184-684a-4840-8dfb-d73d2c24728e\") " pod="openshift-marketplace/certified-operators-7sbdm" Feb 03 13:59:02 crc kubenswrapper[4820]: I0203 13:59:02.208369 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d1c6b184-684a-4840-8dfb-d73d2c24728e-utilities\") pod \"certified-operators-7sbdm\" (UID: \"d1c6b184-684a-4840-8dfb-d73d2c24728e\") " pod="openshift-marketplace/certified-operators-7sbdm" Feb 03 13:59:02 crc kubenswrapper[4820]: I0203 13:59:02.208436 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d1c6b184-684a-4840-8dfb-d73d2c24728e-catalog-content\") pod \"certified-operators-7sbdm\" (UID: \"d1c6b184-684a-4840-8dfb-d73d2c24728e\") " pod="openshift-marketplace/certified-operators-7sbdm" Feb 03 13:59:02 crc kubenswrapper[4820]: I0203 13:59:02.234736 4820 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-th8gg\" (UniqueName: \"kubernetes.io/projected/d1c6b184-684a-4840-8dfb-d73d2c24728e-kube-api-access-th8gg\") pod \"certified-operators-7sbdm\" (UID: \"d1c6b184-684a-4840-8dfb-d73d2c24728e\") " pod="openshift-marketplace/certified-operators-7sbdm" Feb 03 13:59:02 crc kubenswrapper[4820]: I0203 13:59:02.283651 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7sbdm" Feb 03 13:59:03 crc kubenswrapper[4820]: I0203 13:59:03.099746 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7sbdm"] Feb 03 13:59:04 crc kubenswrapper[4820]: I0203 13:59:04.067639 4820 generic.go:334] "Generic (PLEG): container finished" podID="d1c6b184-684a-4840-8dfb-d73d2c24728e" containerID="e887e4e21be29b401db2f444b3719a56b89538066f928b7541f1695799616953" exitCode=0 Feb 03 13:59:04 crc kubenswrapper[4820]: I0203 13:59:04.067732 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7sbdm" event={"ID":"d1c6b184-684a-4840-8dfb-d73d2c24728e","Type":"ContainerDied","Data":"e887e4e21be29b401db2f444b3719a56b89538066f928b7541f1695799616953"} Feb 03 13:59:04 crc kubenswrapper[4820]: I0203 13:59:04.068222 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7sbdm" event={"ID":"d1c6b184-684a-4840-8dfb-d73d2c24728e","Type":"ContainerStarted","Data":"fe6c3dfb0686d18b479884f6540572772569a3663cd2cda30d66b335605cff80"} Feb 03 13:59:04 crc kubenswrapper[4820]: I0203 13:59:04.073098 4820 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 03 13:59:05 crc kubenswrapper[4820]: I0203 13:59:05.083313 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7sbdm" event={"ID":"d1c6b184-684a-4840-8dfb-d73d2c24728e","Type":"ContainerStarted","Data":"03209fa9830f05ef4b84dc451ec3aa8d3ed3337d0be285ebc452c18a31f08143"} Feb 03 13:59:09 crc kubenswrapper[4820]: I0203 13:59:09.200353 4820 generic.go:334] "Generic (PLEG): container finished" podID="d1c6b184-684a-4840-8dfb-d73d2c24728e" containerID="03209fa9830f05ef4b84dc451ec3aa8d3ed3337d0be285ebc452c18a31f08143" exitCode=0 Feb 03 13:59:09 crc kubenswrapper[4820]: I0203 13:59:09.200421 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7sbdm" event={"ID":"d1c6b184-684a-4840-8dfb-d73d2c24728e","Type":"ContainerDied","Data":"03209fa9830f05ef4b84dc451ec3aa8d3ed3337d0be285ebc452c18a31f08143"} Feb 03 13:59:11 crc kubenswrapper[4820]: I0203 13:59:11.227283 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7sbdm" event={"ID":"d1c6b184-684a-4840-8dfb-d73d2c24728e","Type":"ContainerStarted","Data":"fb3264d688e006008c7f54517e7d21b0369c6d01918780a23bbe7fd101951bb3"} Feb 03 13:59:11 crc kubenswrapper[4820]: I0203 13:59:11.264452 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-7sbdm" podStartSLOduration=3.5865528429999998 podStartE2EDuration="10.264410015s" podCreationTimestamp="2026-02-03 13:59:01 +0000 UTC" firstStartedPulling="2026-02-03 13:59:04.072509638 +0000 UTC m=+6861.595585502" lastFinishedPulling="2026-02-03 13:59:10.75036681 +0000 UTC m=+6868.273442674" observedRunningTime="2026-02-03 13:59:11.252487722 +0000 UTC m=+6868.775563586" 
watchObservedRunningTime="2026-02-03 13:59:11.264410015 +0000 UTC m=+6868.787485879" Feb 03 13:59:12 crc kubenswrapper[4820]: I0203 13:59:12.284056 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-7sbdm" Feb 03 13:59:12 crc kubenswrapper[4820]: I0203 13:59:12.285251 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-7sbdm" Feb 03 13:59:13 crc kubenswrapper[4820]: I0203 13:59:13.368619 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-7sbdm" podUID="d1c6b184-684a-4840-8dfb-d73d2c24728e" containerName="registry-server" probeResult="failure" output=< Feb 03 13:59:13 crc kubenswrapper[4820]: timeout: failed to connect service ":50051" within 1s Feb 03 13:59:13 crc kubenswrapper[4820]: > Feb 03 13:59:16 crc kubenswrapper[4820]: I0203 13:59:16.140352 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-vgdnv/must-gather-s56gm"] Feb 03 13:59:16 crc kubenswrapper[4820]: I0203 13:59:16.141003 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-vgdnv/must-gather-s56gm" podUID="677c2b79-9984-4d3d-9aab-c3e3ff13315c" containerName="copy" containerID="cri-o://70c59f88be143430e508a1485dcb979f5d71d43facd6b79068076742020866c6" gracePeriod=2 Feb 03 13:59:16 crc kubenswrapper[4820]: I0203 13:59:16.153157 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-vgdnv/must-gather-s56gm"] Feb 03 13:59:17 crc kubenswrapper[4820]: I0203 13:59:17.302413 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-vgdnv_must-gather-s56gm_677c2b79-9984-4d3d-9aab-c3e3ff13315c/copy/0.log" Feb 03 13:59:17 crc kubenswrapper[4820]: I0203 13:59:17.303599 4820 generic.go:334] "Generic (PLEG): container finished" podID="677c2b79-9984-4d3d-9aab-c3e3ff13315c" containerID="70c59f88be143430e508a1485dcb979f5d71d43facd6b79068076742020866c6" exitCode=143 Feb 03 13:59:18 crc kubenswrapper[4820]: I0203 13:59:18.431475 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-vgdnv_must-gather-s56gm_677c2b79-9984-4d3d-9aab-c3e3ff13315c/copy/0.log" Feb 03 13:59:18 crc kubenswrapper[4820]: I0203 13:59:18.433130 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-vgdnv/must-gather-s56gm" Feb 03 13:59:18 crc kubenswrapper[4820]: I0203 13:59:18.448119 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/677c2b79-9984-4d3d-9aab-c3e3ff13315c-must-gather-output\") pod \"677c2b79-9984-4d3d-9aab-c3e3ff13315c\" (UID: \"677c2b79-9984-4d3d-9aab-c3e3ff13315c\") " Feb 03 13:59:18 crc kubenswrapper[4820]: I0203 13:59:18.448210 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c4kbt\" (UniqueName: \"kubernetes.io/projected/677c2b79-9984-4d3d-9aab-c3e3ff13315c-kube-api-access-c4kbt\") pod \"677c2b79-9984-4d3d-9aab-c3e3ff13315c\" (UID: \"677c2b79-9984-4d3d-9aab-c3e3ff13315c\") " Feb 03 13:59:18 crc kubenswrapper[4820]: I0203 13:59:18.458198 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/677c2b79-9984-4d3d-9aab-c3e3ff13315c-kube-api-access-c4kbt" (OuterVolumeSpecName: "kube-api-access-c4kbt") pod "677c2b79-9984-4d3d-9aab-c3e3ff13315c" (UID: "677c2b79-9984-4d3d-9aab-c3e3ff13315c"). InnerVolumeSpecName "kube-api-access-c4kbt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:59:18 crc kubenswrapper[4820]: I0203 13:59:18.550277 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c4kbt\" (UniqueName: \"kubernetes.io/projected/677c2b79-9984-4d3d-9aab-c3e3ff13315c-kube-api-access-c4kbt\") on node \"crc\" DevicePath \"\"" Feb 03 13:59:18 crc kubenswrapper[4820]: I0203 13:59:18.641250 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/677c2b79-9984-4d3d-9aab-c3e3ff13315c-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "677c2b79-9984-4d3d-9aab-c3e3ff13315c" (UID: "677c2b79-9984-4d3d-9aab-c3e3ff13315c"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 13:59:18 crc kubenswrapper[4820]: I0203 13:59:18.652497 4820 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/677c2b79-9984-4d3d-9aab-c3e3ff13315c-must-gather-output\") on node \"crc\" DevicePath \"\"" Feb 03 13:59:19 crc kubenswrapper[4820]: I0203 13:59:19.156077 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="677c2b79-9984-4d3d-9aab-c3e3ff13315c" path="/var/lib/kubelet/pods/677c2b79-9984-4d3d-9aab-c3e3ff13315c/volumes" Feb 03 13:59:19 crc kubenswrapper[4820]: I0203 13:59:19.329560 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-vgdnv_must-gather-s56gm_677c2b79-9984-4d3d-9aab-c3e3ff13315c/copy/0.log" Feb 03 13:59:19 crc kubenswrapper[4820]: I0203 13:59:19.330048 4820 scope.go:117] "RemoveContainer" containerID="70c59f88be143430e508a1485dcb979f5d71d43facd6b79068076742020866c6" Feb 03 13:59:19 crc kubenswrapper[4820]: I0203 13:59:19.330234 4820 util.go:48] "No ready sandbox for pod can be found. 
Feb 03 13:59:19 crc kubenswrapper[4820]: I0203 13:59:19.329560 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-vgdnv_must-gather-s56gm_677c2b79-9984-4d3d-9aab-c3e3ff13315c/copy/0.log"
Feb 03 13:59:19 crc kubenswrapper[4820]: I0203 13:59:19.330048 4820 scope.go:117] "RemoveContainer" containerID="70c59f88be143430e508a1485dcb979f5d71d43facd6b79068076742020866c6"
Feb 03 13:59:19 crc kubenswrapper[4820]: I0203 13:59:19.330234 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-vgdnv/must-gather-s56gm"
Feb 03 13:59:19 crc kubenswrapper[4820]: I0203 13:59:19.361149 4820 scope.go:117] "RemoveContainer" containerID="76d44b5a441b5e6ad1372b85b4d76e535308015895ca9c0db156f39b6e498902"
Feb 03 13:59:22 crc kubenswrapper[4820]: I0203 13:59:22.332162 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-7sbdm"
Feb 03 13:59:22 crc kubenswrapper[4820]: I0203 13:59:22.386625 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-7sbdm"
Feb 03 13:59:22 crc kubenswrapper[4820]: I0203 13:59:22.573744 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7sbdm"]
Feb 03 13:59:23 crc kubenswrapper[4820]: I0203 13:59:23.373335 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-7sbdm" podUID="d1c6b184-684a-4840-8dfb-d73d2c24728e" containerName="registry-server" containerID="cri-o://fb3264d688e006008c7f54517e7d21b0369c6d01918780a23bbe7fd101951bb3" gracePeriod=2
Feb 03 13:59:24 crc kubenswrapper[4820]: I0203 13:59:24.406575 4820 generic.go:334] "Generic (PLEG): container finished" podID="d1c6b184-684a-4840-8dfb-d73d2c24728e" containerID="fb3264d688e006008c7f54517e7d21b0369c6d01918780a23bbe7fd101951bb3" exitCode=0
Feb 03 13:59:24 crc kubenswrapper[4820]: I0203 13:59:24.406881 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7sbdm" event={"ID":"d1c6b184-684a-4840-8dfb-d73d2c24728e","Type":"ContainerDied","Data":"fb3264d688e006008c7f54517e7d21b0369c6d01918780a23bbe7fd101951bb3"}
Feb 03 13:59:24 crc kubenswrapper[4820]: I0203 13:59:24.607547 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7sbdm"
Feb 03 13:59:24 crc kubenswrapper[4820]: I0203 13:59:24.776143 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-th8gg\" (UniqueName: \"kubernetes.io/projected/d1c6b184-684a-4840-8dfb-d73d2c24728e-kube-api-access-th8gg\") pod \"d1c6b184-684a-4840-8dfb-d73d2c24728e\" (UID: \"d1c6b184-684a-4840-8dfb-d73d2c24728e\") "
Feb 03 13:59:24 crc kubenswrapper[4820]: I0203 13:59:24.776314 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d1c6b184-684a-4840-8dfb-d73d2c24728e-catalog-content\") pod \"d1c6b184-684a-4840-8dfb-d73d2c24728e\" (UID: \"d1c6b184-684a-4840-8dfb-d73d2c24728e\") "
Feb 03 13:59:24 crc kubenswrapper[4820]: I0203 13:59:24.785208 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d1c6b184-684a-4840-8dfb-d73d2c24728e-utilities\") pod \"d1c6b184-684a-4840-8dfb-d73d2c24728e\" (UID: \"d1c6b184-684a-4840-8dfb-d73d2c24728e\") "
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 13:59:24 crc kubenswrapper[4820]: I0203 13:59:24.786716 4820 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d1c6b184-684a-4840-8dfb-d73d2c24728e-utilities\") on node \"crc\" DevicePath \"\"" Feb 03 13:59:24 crc kubenswrapper[4820]: I0203 13:59:24.790045 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d1c6b184-684a-4840-8dfb-d73d2c24728e-kube-api-access-th8gg" (OuterVolumeSpecName: "kube-api-access-th8gg") pod "d1c6b184-684a-4840-8dfb-d73d2c24728e" (UID: "d1c6b184-684a-4840-8dfb-d73d2c24728e"). InnerVolumeSpecName "kube-api-access-th8gg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:59:24 crc kubenswrapper[4820]: I0203 13:59:24.830623 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d1c6b184-684a-4840-8dfb-d73d2c24728e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d1c6b184-684a-4840-8dfb-d73d2c24728e" (UID: "d1c6b184-684a-4840-8dfb-d73d2c24728e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 13:59:24 crc kubenswrapper[4820]: I0203 13:59:24.889083 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-th8gg\" (UniqueName: \"kubernetes.io/projected/d1c6b184-684a-4840-8dfb-d73d2c24728e-kube-api-access-th8gg\") on node \"crc\" DevicePath \"\"" Feb 03 13:59:24 crc kubenswrapper[4820]: I0203 13:59:24.889125 4820 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d1c6b184-684a-4840-8dfb-d73d2c24728e-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 03 13:59:25 crc kubenswrapper[4820]: I0203 13:59:25.419530 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7sbdm" event={"ID":"d1c6b184-684a-4840-8dfb-d73d2c24728e","Type":"ContainerDied","Data":"fe6c3dfb0686d18b479884f6540572772569a3663cd2cda30d66b335605cff80"} Feb 03 13:59:25 crc kubenswrapper[4820]: I0203 13:59:25.419593 4820 scope.go:117] "RemoveContainer" containerID="fb3264d688e006008c7f54517e7d21b0369c6d01918780a23bbe7fd101951bb3" Feb 03 13:59:25 crc kubenswrapper[4820]: I0203 13:59:25.419743 4820 util.go:48] "No ready sandbox for pod can be found. 
Feb 03 13:59:25 crc kubenswrapper[4820]: I0203 13:59:25.419530 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7sbdm" event={"ID":"d1c6b184-684a-4840-8dfb-d73d2c24728e","Type":"ContainerDied","Data":"fe6c3dfb0686d18b479884f6540572772569a3663cd2cda30d66b335605cff80"}
Feb 03 13:59:25 crc kubenswrapper[4820]: I0203 13:59:25.419593 4820 scope.go:117] "RemoveContainer" containerID="fb3264d688e006008c7f54517e7d21b0369c6d01918780a23bbe7fd101951bb3"
Feb 03 13:59:25 crc kubenswrapper[4820]: I0203 13:59:25.419743 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7sbdm"
Feb 03 13:59:25 crc kubenswrapper[4820]: I0203 13:59:25.440926 4820 scope.go:117] "RemoveContainer" containerID="03209fa9830f05ef4b84dc451ec3aa8d3ed3337d0be285ebc452c18a31f08143"
Feb 03 13:59:25 crc kubenswrapper[4820]: I0203 13:59:25.448103 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7sbdm"]
Feb 03 13:59:25 crc kubenswrapper[4820]: I0203 13:59:25.463368 4820 scope.go:117] "RemoveContainer" containerID="e887e4e21be29b401db2f444b3719a56b89538066f928b7541f1695799616953"
Feb 03 13:59:25 crc kubenswrapper[4820]: I0203 13:59:25.464203 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-7sbdm"]
Feb 03 13:59:27 crc kubenswrapper[4820]: I0203 13:59:27.158358 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d1c6b184-684a-4840-8dfb-d73d2c24728e" path="/var/lib/kubelet/pods/d1c6b184-684a-4840-8dfb-d73d2c24728e/volumes"
Feb 03 14:00:00 crc kubenswrapper[4820]: I0203 14:00:00.205928 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29502120-5699b"]
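The pod name collect-profiles-29502120-5699b decodes cleanly: the CronJob controller derives each Job's name from its scheduled time expressed in minutes since the Unix epoch, and the trailing -5699b is the pod's random suffix. A quick check of the numeric suffixes appearing in this log:

```go
package main

import (
	"fmt"
	"time"
)

// CronJob-created Jobs are named <cronjob>-<scheduled time in minutes
// since the Unix epoch>; decode the suffixes seen in this log.
func main() {
	for _, m := range []int64{29502120, 29502121, 29502075} {
		fmt.Printf("%d => %s\n", m, time.Unix(m*60, 0).UTC())
	}
	// 29502120 => 2026-02-03 14:00:00 +0000 UTC (this collect-profiles run)
	// 29502121 => 2026-02-03 14:01:00 +0000 UTC (keystone-cron, below)
	// 29502075 => 2026-02-03 13:15:00 +0000 UTC (the older run pruned at 14:00:06)
}
```

29502120 minutes is exactly 2026-02-03 14:00:00 UTC, matching the SyncLoop ADD timestamp above; keystone-cron-29502121 fires one minute later, and the older collect-profiles-29502075 Job is deleted a little further down, presumably under the CronJob's job-history limit.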
containerName="copy" Feb 03 14:00:00 crc kubenswrapper[4820]: I0203 14:00:00.207480 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="677c2b79-9984-4d3d-9aab-c3e3ff13315c" containerName="gather" Feb 03 14:00:00 crc kubenswrapper[4820]: I0203 14:00:00.208333 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29502120-5699b" Feb 03 14:00:00 crc kubenswrapper[4820]: I0203 14:00:00.212751 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 03 14:00:00 crc kubenswrapper[4820]: I0203 14:00:00.213043 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 03 14:00:00 crc kubenswrapper[4820]: I0203 14:00:00.220355 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29502120-5699b"] Feb 03 14:00:00 crc kubenswrapper[4820]: I0203 14:00:00.360475 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxpdn\" (UniqueName: \"kubernetes.io/projected/87699c63-54e7-46d8-bcba-cc3d7214e424-kube-api-access-dxpdn\") pod \"collect-profiles-29502120-5699b\" (UID: \"87699c63-54e7-46d8-bcba-cc3d7214e424\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29502120-5699b" Feb 03 14:00:00 crc kubenswrapper[4820]: I0203 14:00:00.360582 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87699c63-54e7-46d8-bcba-cc3d7214e424-config-volume\") pod \"collect-profiles-29502120-5699b\" (UID: \"87699c63-54e7-46d8-bcba-cc3d7214e424\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29502120-5699b" Feb 03 14:00:00 crc kubenswrapper[4820]: I0203 14:00:00.360692 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/87699c63-54e7-46d8-bcba-cc3d7214e424-secret-volume\") pod \"collect-profiles-29502120-5699b\" (UID: \"87699c63-54e7-46d8-bcba-cc3d7214e424\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29502120-5699b" Feb 03 14:00:00 crc kubenswrapper[4820]: I0203 14:00:00.463141 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dxpdn\" (UniqueName: \"kubernetes.io/projected/87699c63-54e7-46d8-bcba-cc3d7214e424-kube-api-access-dxpdn\") pod \"collect-profiles-29502120-5699b\" (UID: \"87699c63-54e7-46d8-bcba-cc3d7214e424\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29502120-5699b" Feb 03 14:00:00 crc kubenswrapper[4820]: I0203 14:00:00.463277 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87699c63-54e7-46d8-bcba-cc3d7214e424-config-volume\") pod \"collect-profiles-29502120-5699b\" (UID: \"87699c63-54e7-46d8-bcba-cc3d7214e424\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29502120-5699b" Feb 03 14:00:00 crc kubenswrapper[4820]: I0203 14:00:00.463405 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/87699c63-54e7-46d8-bcba-cc3d7214e424-secret-volume\") pod \"collect-profiles-29502120-5699b\" (UID: \"87699c63-54e7-46d8-bcba-cc3d7214e424\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29502120-5699b" Feb 03 14:00:00 crc kubenswrapper[4820]: I0203 14:00:00.464301 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87699c63-54e7-46d8-bcba-cc3d7214e424-config-volume\") pod \"collect-profiles-29502120-5699b\" (UID: \"87699c63-54e7-46d8-bcba-cc3d7214e424\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29502120-5699b" Feb 03 14:00:00 crc kubenswrapper[4820]: I0203 14:00:00.475097 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/87699c63-54e7-46d8-bcba-cc3d7214e424-secret-volume\") pod \"collect-profiles-29502120-5699b\" (UID: \"87699c63-54e7-46d8-bcba-cc3d7214e424\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29502120-5699b" Feb 03 14:00:00 crc kubenswrapper[4820]: I0203 14:00:00.485149 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dxpdn\" (UniqueName: \"kubernetes.io/projected/87699c63-54e7-46d8-bcba-cc3d7214e424-kube-api-access-dxpdn\") pod \"collect-profiles-29502120-5699b\" (UID: \"87699c63-54e7-46d8-bcba-cc3d7214e424\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29502120-5699b" Feb 03 14:00:00 crc kubenswrapper[4820]: I0203 14:00:00.565232 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29502120-5699b" Feb 03 14:00:01 crc kubenswrapper[4820]: I0203 14:00:01.075023 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29502120-5699b"] Feb 03 14:00:01 crc kubenswrapper[4820]: I0203 14:00:01.959693 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29502120-5699b" event={"ID":"87699c63-54e7-46d8-bcba-cc3d7214e424","Type":"ContainerStarted","Data":"e6996c02dfbc9beaf3e18b7354aeb8dde56914216e5dde66286b2bd07e0f1859"} Feb 03 14:00:01 crc kubenswrapper[4820]: I0203 14:00:01.960042 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29502120-5699b" event={"ID":"87699c63-54e7-46d8-bcba-cc3d7214e424","Type":"ContainerStarted","Data":"a1f5729b9aac6cb6de602a17a346cb776ec36775e2c0a1d624e52697b74d9979"} Feb 03 14:00:01 crc kubenswrapper[4820]: I0203 14:00:01.983549 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29502120-5699b" podStartSLOduration=1.9834909120000002 podStartE2EDuration="1.983490912s" podCreationTimestamp="2026-02-03 14:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 14:00:01.9785795 +0000 UTC m=+6919.501655364" watchObservedRunningTime="2026-02-03 14:00:01.983490912 +0000 UTC m=+6919.506566796" Feb 03 14:00:03 crc kubenswrapper[4820]: I0203 14:00:03.987427 4820 generic.go:334] "Generic (PLEG): container finished" podID="87699c63-54e7-46d8-bcba-cc3d7214e424" containerID="e6996c02dfbc9beaf3e18b7354aeb8dde56914216e5dde66286b2bd07e0f1859" exitCode=0 Feb 03 14:00:03 crc kubenswrapper[4820]: I0203 14:00:03.987762 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29502120-5699b" 
event={"ID":"87699c63-54e7-46d8-bcba-cc3d7214e424","Type":"ContainerDied","Data":"e6996c02dfbc9beaf3e18b7354aeb8dde56914216e5dde66286b2bd07e0f1859"} Feb 03 14:00:05 crc kubenswrapper[4820]: I0203 14:00:05.494391 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29502120-5699b" Feb 03 14:00:05 crc kubenswrapper[4820]: I0203 14:00:05.643653 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87699c63-54e7-46d8-bcba-cc3d7214e424-config-volume\") pod \"87699c63-54e7-46d8-bcba-cc3d7214e424\" (UID: \"87699c63-54e7-46d8-bcba-cc3d7214e424\") " Feb 03 14:00:05 crc kubenswrapper[4820]: I0203 14:00:05.643770 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dxpdn\" (UniqueName: \"kubernetes.io/projected/87699c63-54e7-46d8-bcba-cc3d7214e424-kube-api-access-dxpdn\") pod \"87699c63-54e7-46d8-bcba-cc3d7214e424\" (UID: \"87699c63-54e7-46d8-bcba-cc3d7214e424\") " Feb 03 14:00:05 crc kubenswrapper[4820]: I0203 14:00:05.644095 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/87699c63-54e7-46d8-bcba-cc3d7214e424-secret-volume\") pod \"87699c63-54e7-46d8-bcba-cc3d7214e424\" (UID: \"87699c63-54e7-46d8-bcba-cc3d7214e424\") " Feb 03 14:00:05 crc kubenswrapper[4820]: I0203 14:00:05.644867 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87699c63-54e7-46d8-bcba-cc3d7214e424-config-volume" (OuterVolumeSpecName: "config-volume") pod "87699c63-54e7-46d8-bcba-cc3d7214e424" (UID: "87699c63-54e7-46d8-bcba-cc3d7214e424"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 14:00:05 crc kubenswrapper[4820]: I0203 14:00:05.651872 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87699c63-54e7-46d8-bcba-cc3d7214e424-kube-api-access-dxpdn" (OuterVolumeSpecName: "kube-api-access-dxpdn") pod "87699c63-54e7-46d8-bcba-cc3d7214e424" (UID: "87699c63-54e7-46d8-bcba-cc3d7214e424"). InnerVolumeSpecName "kube-api-access-dxpdn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 14:00:05 crc kubenswrapper[4820]: I0203 14:00:05.652221 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87699c63-54e7-46d8-bcba-cc3d7214e424-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "87699c63-54e7-46d8-bcba-cc3d7214e424" (UID: "87699c63-54e7-46d8-bcba-cc3d7214e424"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 14:00:05 crc kubenswrapper[4820]: I0203 14:00:05.746918 4820 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/87699c63-54e7-46d8-bcba-cc3d7214e424-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 03 14:00:05 crc kubenswrapper[4820]: I0203 14:00:05.746955 4820 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87699c63-54e7-46d8-bcba-cc3d7214e424-config-volume\") on node \"crc\" DevicePath \"\"" Feb 03 14:00:05 crc kubenswrapper[4820]: I0203 14:00:05.746967 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dxpdn\" (UniqueName: \"kubernetes.io/projected/87699c63-54e7-46d8-bcba-cc3d7214e424-kube-api-access-dxpdn\") on node \"crc\" DevicePath \"\"" Feb 03 14:00:06 crc kubenswrapper[4820]: I0203 14:00:06.007783 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29502120-5699b" event={"ID":"87699c63-54e7-46d8-bcba-cc3d7214e424","Type":"ContainerDied","Data":"a1f5729b9aac6cb6de602a17a346cb776ec36775e2c0a1d624e52697b74d9979"} Feb 03 14:00:06 crc kubenswrapper[4820]: I0203 14:00:06.007839 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a1f5729b9aac6cb6de602a17a346cb776ec36775e2c0a1d624e52697b74d9979" Feb 03 14:00:06 crc kubenswrapper[4820]: I0203 14:00:06.007915 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29502120-5699b" Feb 03 14:00:06 crc kubenswrapper[4820]: I0203 14:00:06.089174 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29502075-wj466"] Feb 03 14:00:06 crc kubenswrapper[4820]: I0203 14:00:06.100816 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29502075-wj466"] Feb 03 14:00:07 crc kubenswrapper[4820]: I0203 14:00:07.155960 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="32f686e0-eb63-47b2-8fc5-2acad2c32dab" path="/var/lib/kubelet/pods/32f686e0-eb63-47b2-8fc5-2acad2c32dab/volumes" Feb 03 14:00:31 crc kubenswrapper[4820]: I0203 14:00:31.365306 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qj7xr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 14:00:31 crc kubenswrapper[4820]: I0203 14:00:31.367148 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 14:00:48 crc kubenswrapper[4820]: I0203 14:00:48.314198 4820 scope.go:117] "RemoveContainer" containerID="9ac7d22648e146e47553ea0717456fe6c676c9787f3f0540c88c96a9d3cde8bd" Feb 03 14:01:00 crc kubenswrapper[4820]: I0203 14:01:00.163857 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29502121-4kkxd"] Feb 03 14:01:00 crc kubenswrapper[4820]: E0203 14:01:00.165484 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87699c63-54e7-46d8-bcba-cc3d7214e424" 
containerName="collect-profiles" Feb 03 14:01:00 crc kubenswrapper[4820]: I0203 14:01:00.165502 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="87699c63-54e7-46d8-bcba-cc3d7214e424" containerName="collect-profiles" Feb 03 14:01:00 crc kubenswrapper[4820]: I0203 14:01:00.165731 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="87699c63-54e7-46d8-bcba-cc3d7214e424" containerName="collect-profiles" Feb 03 14:01:00 crc kubenswrapper[4820]: I0203 14:01:00.166627 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29502121-4kkxd" Feb 03 14:01:00 crc kubenswrapper[4820]: I0203 14:01:00.176679 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29502121-4kkxd"] Feb 03 14:01:00 crc kubenswrapper[4820]: I0203 14:01:00.327401 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9003840-3db4-41ce-87c1-ff67e5b3ea5f-config-data\") pod \"keystone-cron-29502121-4kkxd\" (UID: \"f9003840-3db4-41ce-87c1-ff67e5b3ea5f\") " pod="openstack/keystone-cron-29502121-4kkxd" Feb 03 14:01:00 crc kubenswrapper[4820]: I0203 14:01:00.327772 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hlkt5\" (UniqueName: \"kubernetes.io/projected/f9003840-3db4-41ce-87c1-ff67e5b3ea5f-kube-api-access-hlkt5\") pod \"keystone-cron-29502121-4kkxd\" (UID: \"f9003840-3db4-41ce-87c1-ff67e5b3ea5f\") " pod="openstack/keystone-cron-29502121-4kkxd" Feb 03 14:01:00 crc kubenswrapper[4820]: I0203 14:01:00.328095 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9003840-3db4-41ce-87c1-ff67e5b3ea5f-combined-ca-bundle\") pod \"keystone-cron-29502121-4kkxd\" (UID: \"f9003840-3db4-41ce-87c1-ff67e5b3ea5f\") " pod="openstack/keystone-cron-29502121-4kkxd" Feb 03 14:01:00 crc kubenswrapper[4820]: I0203 14:01:00.328329 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f9003840-3db4-41ce-87c1-ff67e5b3ea5f-fernet-keys\") pod \"keystone-cron-29502121-4kkxd\" (UID: \"f9003840-3db4-41ce-87c1-ff67e5b3ea5f\") " pod="openstack/keystone-cron-29502121-4kkxd" Feb 03 14:01:00 crc kubenswrapper[4820]: I0203 14:01:00.431959 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9003840-3db4-41ce-87c1-ff67e5b3ea5f-config-data\") pod \"keystone-cron-29502121-4kkxd\" (UID: \"f9003840-3db4-41ce-87c1-ff67e5b3ea5f\") " pod="openstack/keystone-cron-29502121-4kkxd" Feb 03 14:01:00 crc kubenswrapper[4820]: I0203 14:01:00.432180 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hlkt5\" (UniqueName: \"kubernetes.io/projected/f9003840-3db4-41ce-87c1-ff67e5b3ea5f-kube-api-access-hlkt5\") pod \"keystone-cron-29502121-4kkxd\" (UID: \"f9003840-3db4-41ce-87c1-ff67e5b3ea5f\") " pod="openstack/keystone-cron-29502121-4kkxd" Feb 03 14:01:00 crc kubenswrapper[4820]: I0203 14:01:00.432280 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9003840-3db4-41ce-87c1-ff67e5b3ea5f-combined-ca-bundle\") pod \"keystone-cron-29502121-4kkxd\" (UID: \"f9003840-3db4-41ce-87c1-ff67e5b3ea5f\") " 
pod="openstack/keystone-cron-29502121-4kkxd" Feb 03 14:01:00 crc kubenswrapper[4820]: I0203 14:01:00.432396 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f9003840-3db4-41ce-87c1-ff67e5b3ea5f-fernet-keys\") pod \"keystone-cron-29502121-4kkxd\" (UID: \"f9003840-3db4-41ce-87c1-ff67e5b3ea5f\") " pod="openstack/keystone-cron-29502121-4kkxd" Feb 03 14:01:00 crc kubenswrapper[4820]: I0203 14:01:00.438292 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f9003840-3db4-41ce-87c1-ff67e5b3ea5f-fernet-keys\") pod \"keystone-cron-29502121-4kkxd\" (UID: \"f9003840-3db4-41ce-87c1-ff67e5b3ea5f\") " pod="openstack/keystone-cron-29502121-4kkxd" Feb 03 14:01:00 crc kubenswrapper[4820]: I0203 14:01:00.438940 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9003840-3db4-41ce-87c1-ff67e5b3ea5f-combined-ca-bundle\") pod \"keystone-cron-29502121-4kkxd\" (UID: \"f9003840-3db4-41ce-87c1-ff67e5b3ea5f\") " pod="openstack/keystone-cron-29502121-4kkxd" Feb 03 14:01:00 crc kubenswrapper[4820]: I0203 14:01:00.448330 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9003840-3db4-41ce-87c1-ff67e5b3ea5f-config-data\") pod \"keystone-cron-29502121-4kkxd\" (UID: \"f9003840-3db4-41ce-87c1-ff67e5b3ea5f\") " pod="openstack/keystone-cron-29502121-4kkxd" Feb 03 14:01:00 crc kubenswrapper[4820]: I0203 14:01:00.449025 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hlkt5\" (UniqueName: \"kubernetes.io/projected/f9003840-3db4-41ce-87c1-ff67e5b3ea5f-kube-api-access-hlkt5\") pod \"keystone-cron-29502121-4kkxd\" (UID: \"f9003840-3db4-41ce-87c1-ff67e5b3ea5f\") " pod="openstack/keystone-cron-29502121-4kkxd" Feb 03 14:01:00 crc kubenswrapper[4820]: I0203 14:01:00.491088 4820 util.go:30] "No sandbox for pod can be found. 
Feb 03 14:01:00 crc kubenswrapper[4820]: I0203 14:01:00.491088 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29502121-4kkxd"
Feb 03 14:01:00 crc kubenswrapper[4820]: I0203 14:01:00.980095 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29502121-4kkxd"]
Feb 03 14:01:00 crc kubenswrapper[4820]: W0203 14:01:00.983022 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf9003840_3db4_41ce_87c1_ff67e5b3ea5f.slice/crio-2f92129d080ab3db4881ee8646d93fb5a3b6be37302e2d5fdc92bb6cee1bc50b WatchSource:0}: Error finding container 2f92129d080ab3db4881ee8646d93fb5a3b6be37302e2d5fdc92bb6cee1bc50b: Status 404 returned error can't find the container with id 2f92129d080ab3db4881ee8646d93fb5a3b6be37302e2d5fdc92bb6cee1bc50b
Feb 03 14:01:01 crc kubenswrapper[4820]: I0203 14:01:01.034716 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29502121-4kkxd" event={"ID":"f9003840-3db4-41ce-87c1-ff67e5b3ea5f","Type":"ContainerStarted","Data":"2f92129d080ab3db4881ee8646d93fb5a3b6be37302e2d5fdc92bb6cee1bc50b"}
Feb 03 14:01:01 crc kubenswrapper[4820]: I0203 14:01:01.366033 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qj7xr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 03 14:01:01 crc kubenswrapper[4820]: I0203 14:01:01.366116 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 03 14:01:02 crc kubenswrapper[4820]: I0203 14:01:02.049179 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29502121-4kkxd" event={"ID":"f9003840-3db4-41ce-87c1-ff67e5b3ea5f","Type":"ContainerStarted","Data":"d935ae02de6821b4b1d3678b9d4e6df46302bc1e4787750eacbbd75e1c8fb2a0"}
Feb 03 14:01:02 crc kubenswrapper[4820]: I0203 14:01:02.077131 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29502121-4kkxd" podStartSLOduration=2.077103301 podStartE2EDuration="2.077103301s" podCreationTimestamp="2026-02-03 14:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 14:01:02.070919943 +0000 UTC m=+6979.593995817" watchObservedRunningTime="2026-02-03 14:01:02.077103301 +0000 UTC m=+6979.600179165"
Feb 03 14:01:08 crc kubenswrapper[4820]: I0203 14:01:08.110678 4820 generic.go:334] "Generic (PLEG): container finished" podID="f9003840-3db4-41ce-87c1-ff67e5b3ea5f" containerID="d935ae02de6821b4b1d3678b9d4e6df46302bc1e4787750eacbbd75e1c8fb2a0" exitCode=0
Feb 03 14:01:08 crc kubenswrapper[4820]: I0203 14:01:08.110743 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29502121-4kkxd" event={"ID":"f9003840-3db4-41ce-87c1-ff67e5b3ea5f","Type":"ContainerDied","Data":"d935ae02de6821b4b1d3678b9d4e6df46302bc1e4787750eacbbd75e1c8fb2a0"}
Feb 03 14:01:09 crc kubenswrapper[4820]: I0203 14:01:09.520963 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29502121-4kkxd"
Feb 03 14:01:09 crc kubenswrapper[4820]: I0203 14:01:09.769426 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9003840-3db4-41ce-87c1-ff67e5b3ea5f-config-data\") pod \"f9003840-3db4-41ce-87c1-ff67e5b3ea5f\" (UID: \"f9003840-3db4-41ce-87c1-ff67e5b3ea5f\") "
Feb 03 14:01:09 crc kubenswrapper[4820]: I0203 14:01:09.769632 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9003840-3db4-41ce-87c1-ff67e5b3ea5f-combined-ca-bundle\") pod \"f9003840-3db4-41ce-87c1-ff67e5b3ea5f\" (UID: \"f9003840-3db4-41ce-87c1-ff67e5b3ea5f\") "
Feb 03 14:01:09 crc kubenswrapper[4820]: I0203 14:01:09.769738 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hlkt5\" (UniqueName: \"kubernetes.io/projected/f9003840-3db4-41ce-87c1-ff67e5b3ea5f-kube-api-access-hlkt5\") pod \"f9003840-3db4-41ce-87c1-ff67e5b3ea5f\" (UID: \"f9003840-3db4-41ce-87c1-ff67e5b3ea5f\") "
Feb 03 14:01:09 crc kubenswrapper[4820]: I0203 14:01:09.769797 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f9003840-3db4-41ce-87c1-ff67e5b3ea5f-fernet-keys\") pod \"f9003840-3db4-41ce-87c1-ff67e5b3ea5f\" (UID: \"f9003840-3db4-41ce-87c1-ff67e5b3ea5f\") "
Feb 03 14:01:09 crc kubenswrapper[4820]: I0203 14:01:09.780690 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9003840-3db4-41ce-87c1-ff67e5b3ea5f-kube-api-access-hlkt5" (OuterVolumeSpecName: "kube-api-access-hlkt5") pod "f9003840-3db4-41ce-87c1-ff67e5b3ea5f" (UID: "f9003840-3db4-41ce-87c1-ff67e5b3ea5f"). InnerVolumeSpecName "kube-api-access-hlkt5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 14:01:09 crc kubenswrapper[4820]: I0203 14:01:09.798400 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9003840-3db4-41ce-87c1-ff67e5b3ea5f-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "f9003840-3db4-41ce-87c1-ff67e5b3ea5f" (UID: "f9003840-3db4-41ce-87c1-ff67e5b3ea5f"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 14:01:09 crc kubenswrapper[4820]: I0203 14:01:09.805115 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9003840-3db4-41ce-87c1-ff67e5b3ea5f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f9003840-3db4-41ce-87c1-ff67e5b3ea5f" (UID: "f9003840-3db4-41ce-87c1-ff67e5b3ea5f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 14:01:09 crc kubenswrapper[4820]: I0203 14:01:09.872775 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9003840-3db4-41ce-87c1-ff67e5b3ea5f-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 14:01:09 crc kubenswrapper[4820]: I0203 14:01:09.872818 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9003840-3db4-41ce-87c1-ff67e5b3ea5f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 14:01:09 crc kubenswrapper[4820]: I0203 14:01:09.872831 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hlkt5\" (UniqueName: \"kubernetes.io/projected/f9003840-3db4-41ce-87c1-ff67e5b3ea5f-kube-api-access-hlkt5\") on node \"crc\" DevicePath \"\"" Feb 03 14:01:09 crc kubenswrapper[4820]: I0203 14:01:09.872842 4820 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f9003840-3db4-41ce-87c1-ff67e5b3ea5f-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 03 14:01:10 crc kubenswrapper[4820]: I0203 14:01:10.134278 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29502121-4kkxd" event={"ID":"f9003840-3db4-41ce-87c1-ff67e5b3ea5f","Type":"ContainerDied","Data":"2f92129d080ab3db4881ee8646d93fb5a3b6be37302e2d5fdc92bb6cee1bc50b"} Feb 03 14:01:10 crc kubenswrapper[4820]: I0203 14:01:10.134333 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2f92129d080ab3db4881ee8646d93fb5a3b6be37302e2d5fdc92bb6cee1bc50b" Feb 03 14:01:10 crc kubenswrapper[4820]: I0203 14:01:10.134404 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29502121-4kkxd" Feb 03 14:01:10 crc kubenswrapper[4820]: I0203 14:01:10.620196 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-p96hz"] Feb 03 14:01:10 crc kubenswrapper[4820]: E0203 14:01:10.621199 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9003840-3db4-41ce-87c1-ff67e5b3ea5f" containerName="keystone-cron" Feb 03 14:01:10 crc kubenswrapper[4820]: I0203 14:01:10.621235 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9003840-3db4-41ce-87c1-ff67e5b3ea5f" containerName="keystone-cron" Feb 03 14:01:10 crc kubenswrapper[4820]: I0203 14:01:10.621598 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="f9003840-3db4-41ce-87c1-ff67e5b3ea5f" containerName="keystone-cron" Feb 03 14:01:10 crc kubenswrapper[4820]: I0203 14:01:10.624290 4820 util.go:30] "No sandbox for pod can be found. 
Feb 03 14:01:10 crc kubenswrapper[4820]: I0203 14:01:10.624290 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-p96hz"
Feb 03 14:01:10 crc kubenswrapper[4820]: I0203 14:01:10.635992 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-p96hz"]
Feb 03 14:01:10 crc kubenswrapper[4820]: I0203 14:01:10.694339 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6btnq\" (UniqueName: \"kubernetes.io/projected/5e1ae3c0-0d73-4609-a45e-2c30064fba54-kube-api-access-6btnq\") pod \"community-operators-p96hz\" (UID: \"5e1ae3c0-0d73-4609-a45e-2c30064fba54\") " pod="openshift-marketplace/community-operators-p96hz"
Feb 03 14:01:10 crc kubenswrapper[4820]: I0203 14:01:10.694401 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e1ae3c0-0d73-4609-a45e-2c30064fba54-utilities\") pod \"community-operators-p96hz\" (UID: \"5e1ae3c0-0d73-4609-a45e-2c30064fba54\") " pod="openshift-marketplace/community-operators-p96hz"
Feb 03 14:01:10 crc kubenswrapper[4820]: I0203 14:01:10.694486 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e1ae3c0-0d73-4609-a45e-2c30064fba54-catalog-content\") pod \"community-operators-p96hz\" (UID: \"5e1ae3c0-0d73-4609-a45e-2c30064fba54\") " pod="openshift-marketplace/community-operators-p96hz"
Feb 03 14:01:10 crc kubenswrapper[4820]: I0203 14:01:10.796583 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6btnq\" (UniqueName: \"kubernetes.io/projected/5e1ae3c0-0d73-4609-a45e-2c30064fba54-kube-api-access-6btnq\") pod \"community-operators-p96hz\" (UID: \"5e1ae3c0-0d73-4609-a45e-2c30064fba54\") " pod="openshift-marketplace/community-operators-p96hz"
Feb 03 14:01:10 crc kubenswrapper[4820]: I0203 14:01:10.796692 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e1ae3c0-0d73-4609-a45e-2c30064fba54-utilities\") pod \"community-operators-p96hz\" (UID: \"5e1ae3c0-0d73-4609-a45e-2c30064fba54\") " pod="openshift-marketplace/community-operators-p96hz"
Feb 03 14:01:10 crc kubenswrapper[4820]: I0203 14:01:10.796865 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e1ae3c0-0d73-4609-a45e-2c30064fba54-catalog-content\") pod \"community-operators-p96hz\" (UID: \"5e1ae3c0-0d73-4609-a45e-2c30064fba54\") " pod="openshift-marketplace/community-operators-p96hz"
Feb 03 14:01:10 crc kubenswrapper[4820]: I0203 14:01:10.797399 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e1ae3c0-0d73-4609-a45e-2c30064fba54-utilities\") pod \"community-operators-p96hz\" (UID: \"5e1ae3c0-0d73-4609-a45e-2c30064fba54\") " pod="openshift-marketplace/community-operators-p96hz"
Feb 03 14:01:10 crc kubenswrapper[4820]: I0203 14:01:10.797442 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e1ae3c0-0d73-4609-a45e-2c30064fba54-catalog-content\") pod \"community-operators-p96hz\" (UID: \"5e1ae3c0-0d73-4609-a45e-2c30064fba54\") " pod="openshift-marketplace/community-operators-p96hz"
"MountVolume.SetUp succeeded for volume \"kube-api-access-6btnq\" (UniqueName: \"kubernetes.io/projected/5e1ae3c0-0d73-4609-a45e-2c30064fba54-kube-api-access-6btnq\") pod \"community-operators-p96hz\" (UID: \"5e1ae3c0-0d73-4609-a45e-2c30064fba54\") " pod="openshift-marketplace/community-operators-p96hz" Feb 03 14:01:10 crc kubenswrapper[4820]: I0203 14:01:10.948250 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-p96hz" Feb 03 14:01:11 crc kubenswrapper[4820]: I0203 14:01:11.524276 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-p96hz"] Feb 03 14:01:12 crc kubenswrapper[4820]: I0203 14:01:12.170070 4820 generic.go:334] "Generic (PLEG): container finished" podID="5e1ae3c0-0d73-4609-a45e-2c30064fba54" containerID="babc7e27542eee4e972548941bdbac685ecc8eba35e7ff8fceae6bfcc664a257" exitCode=0 Feb 03 14:01:12 crc kubenswrapper[4820]: I0203 14:01:12.170400 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p96hz" event={"ID":"5e1ae3c0-0d73-4609-a45e-2c30064fba54","Type":"ContainerDied","Data":"babc7e27542eee4e972548941bdbac685ecc8eba35e7ff8fceae6bfcc664a257"} Feb 03 14:01:12 crc kubenswrapper[4820]: I0203 14:01:12.170440 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p96hz" event={"ID":"5e1ae3c0-0d73-4609-a45e-2c30064fba54","Type":"ContainerStarted","Data":"cd64d95a6237899d98c020356af0dacd3042c65ce805b1ab7d8f309f353bf26b"} Feb 03 14:01:14 crc kubenswrapper[4820]: I0203 14:01:14.334641 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p96hz" event={"ID":"5e1ae3c0-0d73-4609-a45e-2c30064fba54","Type":"ContainerStarted","Data":"bc25e7656033d50d8f13a8c90fa67d605084134140afb20ace72287d5212e7c5"} Feb 03 14:01:16 crc kubenswrapper[4820]: I0203 14:01:16.358923 4820 generic.go:334] "Generic (PLEG): container finished" podID="5e1ae3c0-0d73-4609-a45e-2c30064fba54" containerID="bc25e7656033d50d8f13a8c90fa67d605084134140afb20ace72287d5212e7c5" exitCode=0 Feb 03 14:01:16 crc kubenswrapper[4820]: I0203 14:01:16.359016 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p96hz" event={"ID":"5e1ae3c0-0d73-4609-a45e-2c30064fba54","Type":"ContainerDied","Data":"bc25e7656033d50d8f13a8c90fa67d605084134140afb20ace72287d5212e7c5"} Feb 03 14:01:18 crc kubenswrapper[4820]: I0203 14:01:18.382488 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p96hz" event={"ID":"5e1ae3c0-0d73-4609-a45e-2c30064fba54","Type":"ContainerStarted","Data":"293c1762e54218d31eaed97de06586efadcd44c32360b25e48a37b8d66b8b732"} Feb 03 14:01:18 crc kubenswrapper[4820]: I0203 14:01:18.412036 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-p96hz" podStartSLOduration=3.091176782 podStartE2EDuration="8.411998519s" podCreationTimestamp="2026-02-03 14:01:10 +0000 UTC" firstStartedPulling="2026-02-03 14:01:12.173098721 +0000 UTC m=+6989.696174585" lastFinishedPulling="2026-02-03 14:01:17.493920458 +0000 UTC m=+6995.016996322" observedRunningTime="2026-02-03 14:01:18.405282738 +0000 UTC m=+6995.928358602" watchObservedRunningTime="2026-02-03 14:01:18.411998519 +0000 UTC m=+6995.935074383" Feb 03 14:01:20 crc kubenswrapper[4820]: I0203 14:01:20.948709 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-marketplace/community-operators-p96hz" Feb 03 14:01:20 crc kubenswrapper[4820]: I0203 14:01:20.949396 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-p96hz" Feb 03 14:01:21 crc kubenswrapper[4820]: I0203 14:01:21.017519 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-p96hz" Feb 03 14:01:30 crc kubenswrapper[4820]: I0203 14:01:30.998821 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-p96hz" Feb 03 14:01:31 crc kubenswrapper[4820]: I0203 14:01:31.365347 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qj7xr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 14:01:31 crc kubenswrapper[4820]: I0203 14:01:31.365444 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 14:01:31 crc kubenswrapper[4820]: I0203 14:01:31.365515 4820 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" Feb 03 14:01:31 crc kubenswrapper[4820]: I0203 14:01:31.366645 4820 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"70b78a0088511719d3b095cbe99ce0352bc07c00f13e90bb5b6511410838e13f"} pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 03 14:01:31 crc kubenswrapper[4820]: I0203 14:01:31.366728 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" containerName="machine-config-daemon" containerID="cri-o://70b78a0088511719d3b095cbe99ce0352bc07c00f13e90bb5b6511410838e13f" gracePeriod=600 Feb 03 14:01:31 crc kubenswrapper[4820]: I0203 14:01:31.537793 4820 generic.go:334] "Generic (PLEG): container finished" podID="2c02def6-29f2-448e-80ec-0c8ee290f053" containerID="70b78a0088511719d3b095cbe99ce0352bc07c00f13e90bb5b6511410838e13f" exitCode=0 Feb 03 14:01:31 crc kubenswrapper[4820]: I0203 14:01:31.537841 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" event={"ID":"2c02def6-29f2-448e-80ec-0c8ee290f053","Type":"ContainerDied","Data":"70b78a0088511719d3b095cbe99ce0352bc07c00f13e90bb5b6511410838e13f"} Feb 03 14:01:31 crc kubenswrapper[4820]: I0203 14:01:31.537878 4820 scope.go:117] "RemoveContainer" containerID="00fe136b4d1378c44d37871bea8a6e5ad65e410eca507a4e17eba65954b38a9e" Feb 03 14:01:32 crc kubenswrapper[4820]: I0203 14:01:32.545078 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-p96hz"] Feb 03 14:01:32 crc kubenswrapper[4820]: I0203 14:01:32.545857 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-p96hz" 
podUID="5e1ae3c0-0d73-4609-a45e-2c30064fba54" containerName="registry-server" containerID="cri-o://293c1762e54218d31eaed97de06586efadcd44c32360b25e48a37b8d66b8b732" gracePeriod=2 Feb 03 14:01:32 crc kubenswrapper[4820]: I0203 14:01:32.551459 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" event={"ID":"2c02def6-29f2-448e-80ec-0c8ee290f053","Type":"ContainerStarted","Data":"377e6f1f3825047a08f1fe4beaa2e33cdace715b0af724f69a47d1207ee7f8d5"} Feb 03 14:01:33 crc kubenswrapper[4820]: I0203 14:01:33.159388 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-p96hz" Feb 03 14:01:33 crc kubenswrapper[4820]: I0203 14:01:33.265480 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e1ae3c0-0d73-4609-a45e-2c30064fba54-catalog-content\") pod \"5e1ae3c0-0d73-4609-a45e-2c30064fba54\" (UID: \"5e1ae3c0-0d73-4609-a45e-2c30064fba54\") " Feb 03 14:01:33 crc kubenswrapper[4820]: I0203 14:01:33.265882 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e1ae3c0-0d73-4609-a45e-2c30064fba54-utilities\") pod \"5e1ae3c0-0d73-4609-a45e-2c30064fba54\" (UID: \"5e1ae3c0-0d73-4609-a45e-2c30064fba54\") " Feb 03 14:01:33 crc kubenswrapper[4820]: I0203 14:01:33.266043 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6btnq\" (UniqueName: \"kubernetes.io/projected/5e1ae3c0-0d73-4609-a45e-2c30064fba54-kube-api-access-6btnq\") pod \"5e1ae3c0-0d73-4609-a45e-2c30064fba54\" (UID: \"5e1ae3c0-0d73-4609-a45e-2c30064fba54\") " Feb 03 14:01:33 crc kubenswrapper[4820]: I0203 14:01:33.267125 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5e1ae3c0-0d73-4609-a45e-2c30064fba54-utilities" (OuterVolumeSpecName: "utilities") pod "5e1ae3c0-0d73-4609-a45e-2c30064fba54" (UID: "5e1ae3c0-0d73-4609-a45e-2c30064fba54"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 14:01:33 crc kubenswrapper[4820]: I0203 14:01:33.274552 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e1ae3c0-0d73-4609-a45e-2c30064fba54-kube-api-access-6btnq" (OuterVolumeSpecName: "kube-api-access-6btnq") pod "5e1ae3c0-0d73-4609-a45e-2c30064fba54" (UID: "5e1ae3c0-0d73-4609-a45e-2c30064fba54"). InnerVolumeSpecName "kube-api-access-6btnq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 14:01:33 crc kubenswrapper[4820]: I0203 14:01:33.344471 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5e1ae3c0-0d73-4609-a45e-2c30064fba54-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5e1ae3c0-0d73-4609-a45e-2c30064fba54" (UID: "5e1ae3c0-0d73-4609-a45e-2c30064fba54"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 14:01:33 crc kubenswrapper[4820]: I0203 14:01:33.368985 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6btnq\" (UniqueName: \"kubernetes.io/projected/5e1ae3c0-0d73-4609-a45e-2c30064fba54-kube-api-access-6btnq\") on node \"crc\" DevicePath \"\"" Feb 03 14:01:33 crc kubenswrapper[4820]: I0203 14:01:33.369030 4820 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e1ae3c0-0d73-4609-a45e-2c30064fba54-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 03 14:01:33 crc kubenswrapper[4820]: I0203 14:01:33.369045 4820 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e1ae3c0-0d73-4609-a45e-2c30064fba54-utilities\") on node \"crc\" DevicePath \"\"" Feb 03 14:01:33 crc kubenswrapper[4820]: I0203 14:01:33.564721 4820 generic.go:334] "Generic (PLEG): container finished" podID="5e1ae3c0-0d73-4609-a45e-2c30064fba54" containerID="293c1762e54218d31eaed97de06586efadcd44c32360b25e48a37b8d66b8b732" exitCode=0 Feb 03 14:01:33 crc kubenswrapper[4820]: I0203 14:01:33.565094 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-p96hz" Feb 03 14:01:33 crc kubenswrapper[4820]: I0203 14:01:33.565909 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p96hz" event={"ID":"5e1ae3c0-0d73-4609-a45e-2c30064fba54","Type":"ContainerDied","Data":"293c1762e54218d31eaed97de06586efadcd44c32360b25e48a37b8d66b8b732"} Feb 03 14:01:33 crc kubenswrapper[4820]: I0203 14:01:33.565947 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p96hz" event={"ID":"5e1ae3c0-0d73-4609-a45e-2c30064fba54","Type":"ContainerDied","Data":"cd64d95a6237899d98c020356af0dacd3042c65ce805b1ab7d8f309f353bf26b"} Feb 03 14:01:33 crc kubenswrapper[4820]: I0203 14:01:33.565965 4820 scope.go:117] "RemoveContainer" containerID="293c1762e54218d31eaed97de06586efadcd44c32360b25e48a37b8d66b8b732" Feb 03 14:01:33 crc kubenswrapper[4820]: I0203 14:01:33.594635 4820 scope.go:117] "RemoveContainer" containerID="bc25e7656033d50d8f13a8c90fa67d605084134140afb20ace72287d5212e7c5" Feb 03 14:01:33 crc kubenswrapper[4820]: I0203 14:01:33.609574 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-p96hz"] Feb 03 14:01:33 crc kubenswrapper[4820]: I0203 14:01:33.616645 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-p96hz"] Feb 03 14:01:33 crc kubenswrapper[4820]: I0203 14:01:33.621549 4820 scope.go:117] "RemoveContainer" containerID="babc7e27542eee4e972548941bdbac685ecc8eba35e7ff8fceae6bfcc664a257" Feb 03 14:01:33 crc kubenswrapper[4820]: I0203 14:01:33.678087 4820 scope.go:117] "RemoveContainer" containerID="293c1762e54218d31eaed97de06586efadcd44c32360b25e48a37b8d66b8b732" Feb 03 14:01:33 crc kubenswrapper[4820]: E0203 14:01:33.678733 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"293c1762e54218d31eaed97de06586efadcd44c32360b25e48a37b8d66b8b732\": container with ID starting with 293c1762e54218d31eaed97de06586efadcd44c32360b25e48a37b8d66b8b732 not found: ID does not exist" containerID="293c1762e54218d31eaed97de06586efadcd44c32360b25e48a37b8d66b8b732" Feb 03 14:01:33 crc kubenswrapper[4820]: I0203 14:01:33.678804 
Feb 03 14:01:33 crc kubenswrapper[4820]: I0203 14:01:33.678804 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"293c1762e54218d31eaed97de06586efadcd44c32360b25e48a37b8d66b8b732"} err="failed to get container status \"293c1762e54218d31eaed97de06586efadcd44c32360b25e48a37b8d66b8b732\": rpc error: code = NotFound desc = could not find container \"293c1762e54218d31eaed97de06586efadcd44c32360b25e48a37b8d66b8b732\": container with ID starting with 293c1762e54218d31eaed97de06586efadcd44c32360b25e48a37b8d66b8b732 not found: ID does not exist"
Feb 03 14:01:33 crc kubenswrapper[4820]: I0203 14:01:33.678840 4820 scope.go:117] "RemoveContainer" containerID="bc25e7656033d50d8f13a8c90fa67d605084134140afb20ace72287d5212e7c5"
Feb 03 14:01:33 crc kubenswrapper[4820]: E0203 14:01:33.679685 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bc25e7656033d50d8f13a8c90fa67d605084134140afb20ace72287d5212e7c5\": container with ID starting with bc25e7656033d50d8f13a8c90fa67d605084134140afb20ace72287d5212e7c5 not found: ID does not exist" containerID="bc25e7656033d50d8f13a8c90fa67d605084134140afb20ace72287d5212e7c5"
Feb 03 14:01:33 crc kubenswrapper[4820]: I0203 14:01:33.679717 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bc25e7656033d50d8f13a8c90fa67d605084134140afb20ace72287d5212e7c5"} err="failed to get container status \"bc25e7656033d50d8f13a8c90fa67d605084134140afb20ace72287d5212e7c5\": rpc error: code = NotFound desc = could not find container \"bc25e7656033d50d8f13a8c90fa67d605084134140afb20ace72287d5212e7c5\": container with ID starting with bc25e7656033d50d8f13a8c90fa67d605084134140afb20ace72287d5212e7c5 not found: ID does not exist"
Feb 03 14:01:33 crc kubenswrapper[4820]: I0203 14:01:33.679739 4820 scope.go:117] "RemoveContainer" containerID="babc7e27542eee4e972548941bdbac685ecc8eba35e7ff8fceae6bfcc664a257"
Feb 03 14:01:33 crc kubenswrapper[4820]: E0203 14:01:33.680194 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"babc7e27542eee4e972548941bdbac685ecc8eba35e7ff8fceae6bfcc664a257\": container with ID starting with babc7e27542eee4e972548941bdbac685ecc8eba35e7ff8fceae6bfcc664a257 not found: ID does not exist" containerID="babc7e27542eee4e972548941bdbac685ecc8eba35e7ff8fceae6bfcc664a257"
Feb 03 14:01:33 crc kubenswrapper[4820]: I0203 14:01:33.680281 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"babc7e27542eee4e972548941bdbac685ecc8eba35e7ff8fceae6bfcc664a257"} err="failed to get container status \"babc7e27542eee4e972548941bdbac685ecc8eba35e7ff8fceae6bfcc664a257\": rpc error: code = NotFound desc = could not find container \"babc7e27542eee4e972548941bdbac685ecc8eba35e7ff8fceae6bfcc664a257\": container with ID starting with babc7e27542eee4e972548941bdbac685ecc8eba35e7ff8fceae6bfcc664a257 not found: ID does not exist"
Feb 03 14:01:35 crc kubenswrapper[4820]: I0203 14:01:35.158460 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5e1ae3c0-0d73-4609-a45e-2c30064fba54" path="/var/lib/kubelet/pods/5e1ae3c0-0d73-4609-a45e-2c30064fba54/volumes"
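The NotFound errors above are the benign half of container cleanup: by the time the kubelet re-issues RemoveContainer for the three community-operators containers, CRI-O has already removed them, so ContainerStatus fails with NotFound, the deletion is treated as already done, and the pod's volumes directory is reaped regardless. A sketch of the usual idempotency check on the gRPC status code, assuming the standard grpc-go status package:

```go
package main

import (
	"errors"
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// Treat NotFound from the runtime as "already removed" instead of a
// failure, mirroring how the deletions above end up being harmless.
func alreadyGone(err error) bool {
	if err == nil {
		return false
	}
	if s, ok := status.FromError(err); ok {
		return s.Code() == codes.NotFound
	}
	return false
}

func main() {
	err := status.Error(codes.NotFound, "could not find container")
	fmt.Println(alreadyGone(err))                     // true
	fmt.Println(alreadyGone(errors.New("transport"))) // false
}
```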
Feb 03 14:03:31 crc kubenswrapper[4820]: I0203 14:03:31.366272 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qj7xr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 03 14:03:31 crc kubenswrapper[4820]: I0203 14:03:31.367060 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qj7xr" podUID="2c02def6-29f2-448e-80ec-0c8ee290f053" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"